{"uuid": "00608f20-e3f5-5fdc-8979-4efeb0756d8e", "question": "What distance function and transfer function do the authors use for their method?", "answer_format": "Your answer should be a sentence describing the distance function and transfer function used by the authors for their method, including the formulas of the functions.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The authors use the distance function $d(u,v) = -u^Tv/\\tau$ and the transfer function $f(h) = \\frac{h}{||h||}$ for their method.", "question": "What distance function and transfer function do the authors use for their method?"}}, "anchor_pdf": ["16f7e6e6-6bfa-5c74-9c8b-1adc5bf7e3b9"], "reference_pdf": []} {"uuid": "00b28687-3ea1-5974-a1ec-80d7f6cd3424", "question": "What datasets were used to train the default embedding model for the retriever in the experiment of the paper?", "answer_format": "Your answer should be a python list of the dataset names, e.g. [\"dataset1\", \"dataset2\", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["7e09164d-5713-56a1-8210-d92aa98d512a"], "reference_pdf": ["413e7de9-03c4-5c1f-9e42-cd48030c9369"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Wikipedia", "CCNet"], "ignore_order": true}}} {"uuid": "0100f339-d8b0-5277-a73f-e0b3f6b10d0c", "question": "For the dataset where \"Before we dive into the answer.\" performs the best in a specific setting in the paper that proposes IAP, what deficiencies of previous datasets were raised by the authors of the dataset?
Additionally, what variations did the authors propose to address these deficiencies?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string describing the deficiencies, and the second is a python list of strings, the general categories of variations as proposed in the paper.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["0f6aee28-1439-5d69-9173-7d21f9bb0daa"], "reference_pdf": ["e05cbd04-192e-5761-97ce-7250058cf895", "ad5ecb28-5270-5e7b-b161-6d994db6c2f7", "a87a7490-623a-54af-bad6-ef68b0757499", "c2c5bf1a-3d4a-508e-a217-b3e4b78ce7f7"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"reference_answer": "A large number of the problems in ASDiv-A and MAWPS can be correctly answered without even looking at the question. This suggests the presence of patterns in the bodies of MWPs in these datasets that have a direct correlation with the output equation.", "question": "What deficiencies of previous datasets were raised?"}, {"gold": ["Question Sensitivity", "Reasoning Ability", "Structural Invariance"], "ignore_order": true, "ignore_blank": true, "lowercase": true}]}}} {"uuid": "011dd1f5-52a8-5ab6-9eb1-d8432c4e614c", "question": "Which term is mentioned in this paper (\"WINOPRON: Revisiting English Winogender Schemas for Consistency, Coverage, and Grammatical Case\") in terms of the result that smaller FLAN-T5 models perform at chance level? What evaluation contrasts does the source paper of this term investigate?", "answer_format": "Your answer should be a single python list like [\"string1\", \"string2\"]. The first string should be the name of the term.
The second string should be about the evaluation contrasts.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["52543c4f-6202-589d-b564-cf3421e3ce75"], "reference_pdf": ["45d52c43-65df-5608-a6b6-75ea7beb27db"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "demand gap", "lowercase": true}, {"reference_answer": "production vs. forced choice, and metalinguistic judgment vs. probability measurement", "question": "What evaluation contrasts does the source paper of this term investigate?"}]}}} {"uuid": "01db3056-b961-59bf-8b58-8b8ee0c70060", "question": "Which paper first published a real-world Chinese-English text image translation dataset?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0d14b20e-9828-5510-9e82-eefec3167b77"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first published a real-world Chinese-English text image translation dataset?", "reference_answer": "Exploring Better Text Image Translation with Multimodal Codebook"}}} {"uuid": "0212a0ea-5029-52d9-bd26-cdcf61a1ff42", "question": "How can we get the bias attribution without skill knowledge according to formula (2)?", "answer_format": "Your answer should be a Python string describing the calculation method of the bias attribution without skill knowledge.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The bias attribution without skill knowledge can be calculated as the difference between the attribution scores of the output text ^y and the golden label text y.
This is because A(i,x,^y)(h) includes skill knowledge in addition to biased knowledge, since estimating the biased text also involves the knowledge of language modeling, such as understanding instructions. Therefore, we should disentangle the skill knowledge to compute the clean bias attribution.", "question": "How can we get the bias attribution without skill knowledge according to formula (2)?"}}, "anchor_pdf": ["2b003f7e-a995-57d1-af78-78432ed96561"], "reference_pdf": []} {"uuid": "028cf205-5eea-5445-9cea-479d9c14f08f", "question": "What is the meaning of formula (1) in the paper?", "answer_format": "Your answer should be a python string concisely explaining the meaning of the formula.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["4cb24634-a638-51e8-be8d-a15e108c945a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Formula (1) is the frame-by-frame concatenation operation that concatenates the encoding results of Whisper and BEATs along the feature dimension. The Whisper model is trained for speech recognition and translation based on a large amount of weakly supervised data, whose encoder output features are suitable to model speech and include information about the background noises. BEATs is trained to extract high-level non-speech audio semantic information using iterative self-supervised learning. The input audio is first tokenised, then masked and predicted in training. The tokeniser is updated by distilling the semantic knowledge of the audio tokens.
Therefore, the resulting auditory features of these two encoders are complementary and suitable for general audio inputs with both speech and non-speech information.", "question": "What is the meaning of formula (1) in the paper?"}}} {"uuid": "0318f9e2-625a-5fab-8933-cb1b817faee5", "question": "In the paper \"DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads\", on which two benchmarks was the long-text capability tested? Which institutions proposed the Retrieval head used in this paper?", "answer_format": "Your answer should be a python list of two items. The first item is a python list of two strings, the two benchmarks. The second item is a python list of strings, the institutions.", "tags": ["multiple", "objective", "metadata"], "anchor_pdf": ["cd0d4b90-0516-5e63-9e18-2ede1bcc6dda", "287db02d-6d1e-5059-89f8-eb6973610a6b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"benchmarks": ["Needle-in-a-Haystack", "LongBench"], "institutions": ["MIT", "Tsinghua University", "SJTU", "University of Edinburgh", "NVIDIA"]}, "ignore_order": true, "ignore_blank": true, "lowercase": true}}} {"uuid": "03bb2132-ff94-54f7-8158-397582544082", "question": "Which paper proposes a PEFT method for LLMs that detects important attention heads first, then adds learnable bias to their outputs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ed610a3b-48eb-5e31-beaf-9735add9a0a2"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes a PEFT method for LLMs that detects important attention heads first, then adds learnable bias to their outputs?", "reference_answer": "LoFiT: Localized Fine-tuning on LLM Representations"}}} {"uuid":
"03e450d0-6d1a-5827-ac1e-1382af141474", "question": "What are the proportions of the training and test sets for the two benchmark datasets used in this paper?", "answer_format": "Your answer should be a python dictionary as {\"dataset1\": proportion1, \"dataset2\": proportion2}. The proportions should be rounded to two decimal places.", "tags": ["single", "text", "objective"], "anchor_pdf": ["8bb134d7-cd12-5752-84df-c77cc3d6363d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"CUFS": 0.79, "CUFSF": 0.26}, "lowercase": true, "ignore_order": true, "ndigits": 2, "tolerance": 0.01}}} {"uuid": "041a256e-75f2-5b75-9edb-2077b7779235", "question": "What is the formula of the loss function used to align the feature spaces of the visual and text transformers in this paper?", "answer_format": "Your answer should be a python string containing the exact formula given in the reference paper; you don't need to explain the variables in the formula, e.g., \"loss_formula\".", "tags": ["multiple", "subjective", "formula"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathcal{L}_{itc} = \\frac{1}{2} \\mathbb{E}_{(I,T)\\sim D}[H(\\mathbf{y}^{i2t}(I), \\mathbf{p}^{i2t}(I)) + H(\\mathbf{y}^{t2i}(T), \\mathbf{p}^{t2i}(T))]", "question": "What is the formula of the loss function used to align the feature spaces of the visual and text transformers in the anchor PDF?"}}} {"uuid": "045fd617-2a2d-5d81-8e48-0da9d0c31a6c", "question": "Among all the methods tested on IHS and SoyVein500, which methods have fewer parameters than the new method proposed in the paper?", "answer_format": "Your answer should be a python list of the names of the methods, e.g., ['method1', 'method2'].
YOU MUST USE THE EXACT NAMES FROM THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["2fefd8c8-5b6e-54d7-bf55-3496f2297c2c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["U-net", "PSPNet", "SegNeXt"], "ignore_order": true, "lowercase": true}}} {"uuid": "04cdb845-6e4e-5ab0-b4b3-997479c8e1f1", "question": "According to the paper that proposed the evaluation method which the BASALT benchmark specifies as methodological best practices, how is the match quality criterion defined?", "answer_format": "Your answer should be a Python string, the formula in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["3aa6c08a-ceaa-5679-a7e8-9ee26fb65866"], "reference_pdf": ["534181b9-3808-5539-9fe0-5df3e9728b1c"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "q_{\\text{draw}} \\left( \\beta^2, \\mu_i, \\mu_j, \\sigma_i, \\sigma_j \\right) := \\sqrt{\\frac{2\\beta^2}{2\\beta^2 + \\sigma_i^2 + \\sigma_j^2}} \\cdot \\exp \\left( -\\frac{(\\mu_i - \\mu_j)^2}{2 \\left( 2\\beta^2 + \\sigma_i^2 + \\sigma_j^2 \\right)} \\right)", "question": "How is the match quality criterion defined in the TrueSkill paper?"}}} {"uuid": "04f34534-aa58-5d9a-8e0a-d57200c092a7", "question": "In the intervention-based training of this paper (titled \"Inducing Character-level Structure in Subword-based Language Models with Type-level Interchange Intervention Training\"), can you give me a GitHub link of the method mainly used in this part?
", "answer_format": "Your answer should be a single link like \"https://github.com/a/b\".", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["ab3799c1-e815-5e18-bec8-be7e428c3b0e"], "reference_pdf": ["d520ae43-495a-5084-b16b-65eb62c63ac1"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/frankaging/Interchange-Intervention-Training", "lowercase": true}}} {"uuid": "04f6fcad-edd9-577c-b089-ae167567ef47", "question": "What is the most appropriate evaluation metric for this paper?", "answer_format": "Your answer should be a python string with the exact name of the evaluation metric.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "refinement", "lowercase": true}}, "anchor_pdf": ["cc28a219-3ac4-5614-ba45-59d6aabf1af4"], "reference_pdf": []} {"uuid": "05079ad5-a044-590b-8b50-d4476466d94f", "question": "According to the RaIR paper, which algorithm performs the best in the objects flipped setting in Construction?
In the first phase of that algorithm, what's the loss function?", "answer_format": "Your answer should be a Python list of 2 strings, the first is the abbreviation of the algorithm, and the second is the formula in LaTeX format.", "tags": ["multiple", "image", "formula", "subjective"], "anchor_pdf": ["3c6985bc-4f2b-54aa-9483-d967845d2dda"], "reference_pdf": ["eb2561cc-62b6-5a6e-9ad2-c542fc67f118"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "CEE-US", "lowercase": true, "ignore_blank": true}, {"formulas": "\\mathcal{L}_m = \\left\\| \\Delta s_{t+1} - f_{\\theta_m}(s_t, a_t) \\right\\|_2^2", "question": "What's the loss function?"}]}}} {"uuid": "052f4000-e160-5965-a25f-ebf01e1afd90", "question": "What does R-Div do to address the overfitting issue of H-Div?", "answer_format": "Your answer should be a python string", "tags": ["single", "text", "subjective"], "anchor_pdf": ["fdff4732-08ea-5aac-b3a6-fb4fd5d3c298"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Given two datasets, R-Div searches a minimum hypothesis on their mixed data and evaluates its empirical risks on each individual dataset.", "question": "What does R-Div do to address the overfitting issue of H-Div?"}}} {"uuid": "053401b8-15b3-59d8-a9e0-30ccdd459166", "question": "In Table 1: Test MAE for different interpolation methods, which method performs better on the Shallow Water dataset with the Coarser grid?", "answer_format": "Your answer should be one of k-NN, Linear, and IDW.", "tags": ["single", "table", "objective"], "anchor_pdf": ["c77ee3f6-804f-508b-bc75-1f7a93d3f0de"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Linear"}}} {"uuid": "05548a18-0a57-54e2-a7d6-58a5a8cdca72", "question": "Is there any
paper that performs adversarial training at the frame level for audio-visual representation learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0a0cdb74-0e2d-5257-a87f-4275e552fb12"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that performs adversarial training at the frame level for audio-visual representation learning?", "reference_answer": "MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recognition"}}} {"uuid": "05729273-17c6-5641-8197-1b1f7ccd4b86", "question": "Which paper first used the attention weights to guide the simultaneous inference of speech translation models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fb1f368d-a4b0-53d1-9051-4cc1a9d3e648"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first used the attention weights to guide the simultaneous inference of speech translation models?", "reference_answer": "Attention as a Guide for Simultaneous Speech Translation"}}} {"uuid": "05a49f5c-e36d-5e59-8343-b951588c49b1", "question": "What are the main innovations of this paper?", "answer_format": "Your answer should be a python string concisely introducing the main innovations.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["8b995434-305f-5516-ab80-163d647a30ce"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["In the VMoE architecture, the token-expert matching process is formulated as a regularized optimal transport problem, solved using the Sinkhorn
algorithm.", "Define the gating matrix for regularized OT as \\Pi_{OT}=\\arg \\min_{T\\in u(a,b)} + \\frac{1}{2k}\\left \\| T \\right \\| _2^2, a=1_e, b = (t/e)1_e."], "question": "What are the main innovations of this paper?"}}} {"uuid": "05fe6a50-7051-527c-b2d6-3789c01319bb", "question": "Which model does this paper (\"On the Similarity of Circuits across Languages: a Case Study on the Subject-verb Agreement Task\") use? For the series of this model, what is the other one with a different size?", "answer_format": "Your answer should be a single list of two strings, every string is a model name, e.g., [\"modelname 7k\",\"modelname 3B\"]", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Gemma 2B", "Gemma 7B"], "ignore_order": false}}, "anchor_pdf": ["c45b8165-b824-5db7-8afb-455ad38c4e56"], "reference_pdf": ["8cb97fc5-aa91-5eca-a792-bf2963b7bf3e"]} {"uuid": "0612502c-5e7f-59c3-83d6-bd4a867c22d7", "question": "Why does the paper \"Debiasing Large Language Models with Structured Knowledge\" propose a new Bias Score? What's the formula of the original Score?", "answer_format": "Your answer should be a Python list of two strings, the first is the reason why the authors propose a new Bias Score. The second is the formula in LaTeX format.", "tags": ["multiple", "table", "formula", "subjective"], "anchor_pdf": ["2b640ff7-e466-55c9-821a-4a8e20189660"], "reference_pdf": ["2ef41545-f0f2-57cc-adc4-1313b5e4875a", "4cd8cb70-48e8-51f7-ab8c-2ee1fc05ee56"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"reference_answer": "The CrowSPairs score represents the percentage of instance pairs in which the stereotypical sentence has lower perplexity than the anti-stereotypical sentence in the total number of instance pairs.
Since it is not easy to understand that scores closer to 50 indicate less bias, we instead utilized the Bias score, defined as |CrowSPairs score - 50|, to replace the CrowSPairs score. A lower score represents lower bias.", "question": "Why does the anchor PDF propose a new Bias Score?"}, {"formulas": "\\text{score}(S) = \\sum_{i=0}^{|C|} \\log P(u_i \\in U \\mid U_{\\setminus u_i}, M, \\theta)", "question": "What's the formula of the original Score?"}]}}} {"uuid": "0696ecf8-b39a-5288-b3ee-c6b3dcf2e420", "question": "What paper compares humans' and language models' non-literal interpretations of utterances featuring phenomena like deceit, irony, and humor?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["17daf4c8-554b-5df7-aeb9-600da1cf9158"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper compares humans' and language models' non-literal interpretations of utterances featuring phenomena like deceit, irony, and humor?", "reference_answer": "A fine-grained comparison of pragmatic language understanding in humans and language models"}}} {"uuid": "06d157b9-e8a8-5b45-abe7-852cb9cb2afe", "question": "In the paper that proposed the method applied by the SEVA dataset for automatic sketch generation, what's its only disadvantage compared to other sketch generation algorithms?", "answer_format": "Your answer should be a string, a brief summary of the disadvantage.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["2c4db8d6-4fb1-5eb2-9a8d-54c67a2d315f"], "reference_pdf": ["9b211f30-46b1-593f-b830-54f8edac367b"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "CLIPasso can't produce a sequential sketch.", "question": "What's the only disadvantage of CLIPasso, compared
to existing sketch generation algorithms?"}}} {"uuid": "06e6f397-5d3d-5493-8d94-f40caefc91c1", "question": "Which implicit bias mentioned in the paper \"Benchmarking Cognitive Biases in Large Language Models as Evaluators\" is not addressed in the works that directly overlap with the paper, particularly those exploring LLMs' capabilities as evaluators?", "answer_format": "Your answer should be a string, the name of the implicit bias as mentioned in the anchor PDF.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3d218d94-1aa0-5a70-b23e-accb254141bd"], "reference_pdf": ["16945bc4-e1c2-55bf-8bcd-7203262db0aa", "95c4da59-2aea-5163-9044-3554ca09aa83"], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Compassion Fade (Naming)", "fuzz_method": "partial_ratio", "ignore_blank": true, "lowercase": true}}} {"uuid": "079bf850-1cba-5b82-a432-8cfc8e2e28ff", "question": "Which paper presents an easy-to-implement and high-performing method for OOD detection with language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8605dab0-85f4-5cf3-ba9a-19b757ef072f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper presents an easy-to-implement and high-performing method for OOD detection with language models?", "reference_answer": "Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection"}}} {"uuid": "07f7da2d-c8d9-581d-bf9e-ba8f9a562968", "question": "What are the meanings of different loss items in equation (9)? i.e.
$\\mathcal{L}_{c}^{A}$, $\\mathcal{L}_{s}^{A}$, $\\mathcal{L}_{a}^{A}$, and $\\mathcal{L}_{c}^{B}$", "answer_format": "Your answer should be a Python list of four elements, where each item represents the meaning of the corresponding loss item.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["$\\mathcal{L}_{c}^{A}$: The content reconstruction loss, which ensures that the model can accurately reconstruct the input text from the latent representations.", "$\\mathcal{L}_{s}^{A}$: The stance classification loss, which guides the model to predict the correct stance of the text. It typically uses cross-entropy loss to align predictions with true stance labels.", "$\\mathcal{L}_{a}^{A}$: The aspect span reconstruction loss, which helps the model generate a specific aspect-related text span using a Gaussian negative log-likelihood (NLL) loss.", "$\\mathcal{L}_{c}^{B}$: The swapped reconstruction loss, derived from a swapping autoencoder mechanism. It ensures that the disentanglement is maintained even after swapping aspect embeddings across samples with the same aspect but different stances."], "question": "What are the meanings of different loss items in equation (9)? i.e.
$\\mathcal{L}_{c}^{A}$, $\\mathcal{L}_{s}^{A}$, $\\mathcal{L}_{a}^{A}$, and $\\mathcal{L}_{c}^{B}$", "ignore_order": true}}, "anchor_pdf": ["c51a33ed-cd35-5426-b37e-6864a1b66a35"], "reference_pdf": []} {"uuid": "082f325a-ee6d-5d63-9ca0-a6953640027e", "question": "What are the main components of the ERRA model?", "answer_format": "Your answer should be a python list of strings, every element of the list is the name of a component directly mentioned in this paper.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Retrieval Enhancement", "Aspect Enhancement", "Joint Enhancement Transformers"], "lowercase": true, "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["47f98c9d-038d-5a81-a0ac-e28c5c3bee9a"], "reference_pdf": []} {"uuid": "085fa0be-3252-59dc-b265-959619c6aa8a", "question": "Could you suggest research that examines the effects of starting language models with weights from pretrained non-diffusion models on the convergence behavior of diffusion losses?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["781a22c1-a826-502d-b6b2-bbfadafe2e73"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Could you suggest research that examines the effects of starting language models with weights from pretrained non-diffusion models on the convergence behavior of diffusion losses?", "reference_answer": "SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control"}}} {"uuid": "089ad273-aa2e-5b15-9d0f-cbcb8472e227", "question": "For the model used in the paper's experiments that achieved the best results on authoritative Chinese and English benchmarks of the same size, which categories of data accounted for more than 9% of
the model's training data?", "answer_format": "Your answer should be a python list of strings, and the strings should be category names. The specific names are based on the relevant paper.", "tags": ["multiple", "objective", "image"], "anchor_pdf": ["c6f05196-8a7d-56a1-9da6-73e748546b99"], "reference_pdf": ["6089fb49-61d6-5f9d-b2f3-adaaceda1a71"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["technology", "business", "entertainment"], "lowercase": true, "ignore_order": true}}} {"uuid": "089afa00-d83b-573e-9b8c-3ffdc81cee46", "question": "In CSCD-NS, what's the base model of the detection model for data selection? How many hours does it roughly take for the base model to reach at least 85 on GLUE?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the model, the second is the number of hours it takes.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["6b99b0a2-fb28-5266-8c2d-6581334ccbbf"], "reference_pdf": ["c4d02102-b1c7-5b72-a414-9c175a49be48"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ELECTRA", 96], "ignore_order": false, "lowercase": true}}} {"uuid": "08a9f15f-cf93-57b2-8a07-072ca34906af", "question": "In the Sequence Mixer of the Monarch Mixer, what are the two types of convolution components?", "answer_format": "Your answer should be a python list of two strings.", "tags": ["single", "objective", "image"], "anchor_pdf": ["f3c0827e-c512-50bc-91f0-6d5a9e1177b6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_list_included", "eval_kwargs": {"gold": ["short conv", "Monarch long conv"], "element_type": "str", "ignore_blank": false, "lowercase": true}}} {"uuid": "08bd3ae0-1594-510d-b67b-bfbd9fb73b56", "question": "What baselines do the authors use?
And according to Table 2, which baseline can reach the best performance?", "answer_format": "Your answer should be a list with two items, the first item is a python list of the names of the baselines, and the second item is the name of the baseline reaching the best performance, e.g. [[baseline 1, baseline 2, ...], baseline 1].", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": ["Prompting", "Random", "BM25", "TopK", "TopK + MDL"], "ignore_order": true, "ignore_blank": true}, {"gold": "TopK + MDL", "ignore_blank": true}]}}, "anchor_pdf": ["3d2fcb43-2cda-5645-99aa-da78c6cfd23f"], "reference_pdf": []} {"uuid": "08f5327e-584e-5c34-a092-a5dec11041dc", "question": "What do $\\gamma_t$ and $\\sigma$ mean in formula (2) in the RulE paper?", "answer_format": "Your answer should be a python list of two strings, explaining the meaning of $\\gamma_t$ and $\\sigma$ respectively", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["e7497bf0-4e3d-5099-b9cc-9a61be4bd30f"], "reference_pdf": ["9bed7533-e4f6-580b-9e8d-7c996dbbc493"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["$\\gamma_t$ is a fixed triplet margin.", "$\\sigma$ is the sigmoid function."], "question": "What do $\\gamma_t$ and $\\sigma$ mean in formula (2) in the RulE paper?"}}} {"uuid": "0903f58c-f1c2-5b77-8d14-b1ccef36d1a9", "question": "In the background section of this paper, under the topic Isotropy of PLMs, what is the main problem of PLMs?
How is this problem defined in the paper introducing it?", "answer_format": "Your answer should be a python string about the main problem of PLMs and how it is defined in the paper introducing it.", "tags": ["multiple", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The main problem of PLMs is the representation degeneration or anisotropy problem. The learned embeddings occupy a narrow cone in the vector space, which largely limits their expressiveness. Researchers find that language models trained with tied input/output embeddings lead to anisotropic word embeddings, and the singular values of the word embedding matrix decay drastically. In other words, except for a few dominating singular values, all others are close to zero.", "question": "In the background section of this paper, under the topic Isotropy of PLMs, what is the main problem of PLMs? How is this problem defined in the paper introducing it?"}}, "anchor_pdf": ["a6c04981-3a2d-5c78-acd2-66485765e32e"], "reference_pdf": ["ff0d0226-2dc4-5a18-9cc9-ec5826c16eb7"]} {"uuid": "093b9ce2-d120-5bda-99da-75a89d7ccc7d", "question": "In the AgentTuning paper, what does the reward $r$ stand for? What role does it play in the subsequent process? Does it stand for the same meaning in the AgentBank paper?
Does it play the same role?", "answer_format": "Your answer should be a string, containing the answers to the 4 sub-questions.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["4633aa65-6b6f-5716-bb51-b686db19b3f6", "914d6f7e-dfc0-57c0-8400-4503aaa93efd"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["According to the AgentTuning paper, each trajectory has a final reward $r \\in [0, 1]$, reflecting the completion status of the task.", "Recall that each interaction trajectory receives a reward $r$, this allows us to automatically select high-quality trajectories based on the reward.", "Yes. According to the AgentBank paper, finally, a final reward $r \\in [0, 1]$ is returned depending on the task completion status.", "No. The AgentBank paper does not use the reward $r$ to do anything."], "question": "In the AgentTuning paper, what does the reward $r$ stand for? What role does it play in the subsequent process? Does it stand for the same meaning in the AgentBank paper? Does it play the same role?", "ignore_order": false}}} {"uuid": "09418100-d140-57cd-9e56-7747def46e96", "question": "In the paper that introduces a novel task \"source imputation\", an ODE integrator is used in Algorithm 1. In the paper that proposed this ODE integrator, what's the original form of Eq. 
4 in terms of $f(t)$ and $g(t)$?", "answer_format": "Your answer should be a string, the formula in LaTeX format.", "tags": ["comprehensive", "formula", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["4caa5ddf-8ebf-5479-bc2a-5a40a8423bb2", "06a55a60-c404-50cd-9996-96404e43e4fa"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathrm{d}\\boldsymbol{x} = \\left[ f(t)\\boldsymbol{x} - \\frac{1}{2} g(t)^2 \\nabla_{\\boldsymbol{x}} \\log p\\left(\\frac{\\boldsymbol{x}}{s(t)}; \\sigma(t)\\right) \\right] \\, \\mathrm{d}t.", "question": "What's the original form of Eq. 4 in terms of $f(t)$ and $g(t)$?"}}} {"uuid": "09c643c6-6a5f-5650-ae54-2bed43a55c17", "question": "What paper first adapted ControlNet to generate continuous videos in a training-free manner?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f6366faa-123d-5a98-9dd0-1ffe2317f403"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first adapted ControlNet to generate continuous videos in a training-free manner?", "reference_answer": "ControlVideo: Training-free Controllable Text-to-Video Generation"}}} {"uuid": "0a202041-de70-55b6-9aa0-6b6486166582", "question": "Which paper was the first to propose combining human spoken language and sign language datasets with gloss annotations to enhance the performance of sign language translation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0ce516c5-9e6f-5ec8-b607-fafcb483f912"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper was the first to propose 
combining human spoken language and sign language datasets with gloss annotations to enhance the performance of sign language translation?", "reference_answer": "Neural Machine Translation Methods for Translating Text to Sign Language Glosses"}}} {"uuid": "0b0ef576-fe34-5e6a-bfd3-eafba60a82d5", "question": "What work first uses LLM to code robotic simulation tasks and shows sim-to-real benefits with policy pre-training in simulation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9132b9fa-7afd-57ab-9943-02605dcfaa7f"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What work first uses LLM to code robotic simulation tasks and shows sim-to-real benefits with policy pre-training in simulation?", "reference_answer": "GENSIM: GENERATING ROBOTIC SIMULATION TASKS VIA LARGE LANGUAGE MODELS"}}} {"uuid": "0b1cad92-b6c6-51a8-b7f2-5c844e572024", "question": "On which language does LLaMA-2 13B with no removal reach its second highest perplexity?", "answer_format": "Your answer should be a word DIRECTLY FROM THE PDF WITHOUT ANY EXPLANATION.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Italian", "lowercase": true}}, "anchor_pdf": ["a20a1a7a-b335-54ef-82a5-b97be4405604"], "reference_pdf": []} {"uuid": "0b1dbace-15fd-53b8-bf52-2bf158ceea33", "question": "Which paper first proposes to mask positions to pre-train a multi-modal document transformer?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4d508d42-e7ab-5ea1-9404-aff7427d91cc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question":
"Which paper first proposes to mask positions to pre-train a multi-modal document transformer?", "reference_answer": "LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding"}}} {"uuid": "0bc3aaf3-40cc-54f3-952d-1cb514653a8b", "question": "Can you tell me the github link for the GC-Bench library?", "answer_format": "Your answer should be a single link string, e.g. \"https://github.com/a/b\"", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["eaeb667a-1c3b-5718-8963-59262cdac7b0"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/RingBDStack/GC-Bench"}}} {"uuid": "0d051ca5-f7ba-5ddf-b06a-9bfc12b97e0b", "question": "In Equation (1), what does the interval [:2] mean?", "answer_format": "Your answer should be a Python string, explaining with a brief sentence.", "tags": ["single", "image", "formula", "subjective"], "anchor_pdf": ["3d698feb-d3ae-5d15-b469-580dfae393a3"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The interval [:2] means that the input to the TIRE module only passes through the first two blocks of the encoder.", "question": "In Equation (1), what does the interval [:2] mean?"}}} {"uuid": "0d42a5b9-4dcb-5ac1-829f-e198d8f942c1", "question": "Which dataset is the downstream task with the largest training set in BigDocs-Bench curated from, and where do the samples originally come from?", "answer_format": "Your answer should be a Python list of two strings, the first is a single word, the name of the dataset, the second is a sentence or a phrase, the source of the samples.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["0ccddb86-7696-59b3-af01-b2df817a0714"], "reference_pdf": ["087bf567-2de0-5bc6-844d-24a7dc515ee6", "54f49a23-5197-5991-bb84-3a6e9a7eafc6"], "conference": [], "evaluator": {"eval_func": "eval_conjunction",
"eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "SVG-Stack", "lowercase": true}, {"reference_answer": "These samples come from publicly available repositories on GitHub.", "question": "Where do the samples originally come from?"}]}}} {"uuid": "0d69dd2c-5163-57c1-8bf0-c468e511724a", "question": "In the English dataset where HGALayoutLM reaches new SOTA, what's the evaluation metric for text recognition? Additionally, what's the formula for that metric?", "answer_format": "Your answer should be a Python list of two strings, the evaluation metric and the formula for that metric. Note that you should output the formula in the LaTeX format.", "tags": ["multiple", "table", "formula", "subjective"], "anchor_pdf": ["5d57350d-12be-5f72-a9ac-fff26f9a1a9b"], "reference_pdf": ["2996caf3-f7a5-515a-ba60-091b02f7c9e5", "ca763ccd-4ec8-5b90-9067-ada1af33f8be"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "Levenshtein similarity", "ignore_blank": true, "lowercase": true}, {"formulas": "S(w_p, w_{gt}) = 1 - \\frac{L(w_p, w_{gt})}{\\max(|w_p|, |w_{gt}|)}", "question": "What's the formula for Levenshtein similarity?"}]}}} {"uuid": "0d7b0180-0c7e-5eb6-a51f-6d7a473d33f2", "question": "Is there a paper that takes a mixed machine learning and solver-based approach to code translation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6a57ea6f-2173-5263-9124-f5cb70ce20b9"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that takes a mixed machine learning and solver-based approach to code translation?", "reference_answer": "GUESS & SKETCH:
LANGUAGE MODEL GUIDED TRANSPILATION"}}} {"uuid": "0e5af47c-c613-5d27-9b50-2fd01ddebd55", "question": "Which sampler was implemented in the paper which proposed the latent-space diffusion model used in the paper \"UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models\"? What is the relationship between that sampler and UniPC according to this paper?", "answer_format": "Your answer should be brief text giving the name of the sampler and the relationship between that sampler and UniPC.", "tags": ["text", "multiple", "subjective"], "anchor_pdf": ["9234b475-7c40-51b4-920a-4c243f065ced"], "reference_pdf": ["aa4925a8-1e46-5996-8654-b82cfa7820a5"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The sampler is DDIM, and UniPC will reduce to DDIM when p=1, where p is the order of accuracy.", "question": "Which sampler was implemented in the paper which proposed the latent-space diffusion model used in the paper \"UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models\"? What is the relationship between that sampler and UniPC according to this paper?"}}} {"uuid": "0f124236-4caf-52c1-ba48-da8daa67547a", "question": "For the task of violence detection, the comments are classified into some distinct categories: Direct Violence, Passive Violence, NonViolence and so on. In the most relevant paper, how is Passive Violence defined?", "answer_format": "Your answer should be a single python string.", "tags": ["text", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In this category, instances of violence are represented by the use of derogatory language, abusive remarks, or slang targeting individuals or communities.
Additionally, any form of justification for violence is also classified under this category.", "question": "For the task of violence detection, the comments are classified into some distinct categories: Direct Violence, Passive Violence, NonViolence and so on. In the most relevant paper, how is Passive Violence defined?"}}, "anchor_pdf": ["deaa4c76-bf2a-5f57-a185-f177ff3327c8"], "reference_pdf": []} {"uuid": "106570b0-0e1a-5055-9e0d-fcc6eb3a1a1b", "question": "Which paper first combines different methods for uncertainty quantification in one?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f047119d-54f2-5be3-b5bf-2a393dfeeda3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first combines different methods for uncertainty quantification in one?", "reference_answer": "Hybrid Uncertainty Quantification for Selective Text Classification in Ambiguous Tasks"}}} {"uuid": "11dbd6a6-2eb2-59a2-9ef9-4bdc723ba2c0", "question": "Which paper first proposes a unified framework for black-box and white-box detection of AI-written text with explanations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ab9d93a7-d639-5790-9ac0-b28eeafcb9d8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposes a unified framework for black-box and white-box detection of AI-written text with explanations?", "reference_answer": "DNA-GPT: DIVERGENT N-GRAM ANALYSIS FOR TRAINING-FREE DETECTION OF GPT-GENERATED TEXT"}}} {"uuid": "11dbf1bb-b485-5aa7-8f6b-18b518bc6aec", "question": "What's the relationship between MINT and MINT-1T?", "answer_format": "Your answer
should be a paragraph, illustrating the relationship between MINT and MINT-1T.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["1e8d36fb-5a9a-5db1-9903-84d146d46376", "09d48a2a-4ad0-5a7f-84ec-557ac57f5830"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["MINT and MINT-1T don't have a direct relationship.", "MINT is short for \"multi-turn interactions\", while MINT in MINT-1T is short for \"Multimodal INTerleaved\"."], "question": "What's the relationship between MINT and MINT-1T?"}}} {"uuid": "11e16d8d-e642-592e-a8cf-c38bb375630e", "question": "Can you find a research paper that discusses using structured pruning techniques to scale down language models, where the original model being pruned has billions of parameters?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["057ef3e0-6715-5e6f-af2c-bfc7b7ffc4a6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you find a research paper that discusses using structured pruning techniques to scale down language models, where the original model being pruned has billions of parameters?", "reference_answer": "SHEARED LLAMA: ACCELERATING LANGUAGE MODEL PRE-TRAINING VIA STRUCTURED PRUNING"}}} {"uuid": "11ea72d3-7720-52ff-9a55-fc487db917a3", "question": "In Table 2 of the paper titled \"ORDERED GNN: ORDERING MESSAGE PASSING TO DEAL WITH HETEROPHILY AND OVER-SMOOTHING\", which model gets the best performance on the CiteSeer dataset?
What question does the source paper of this model mainly focus on?", "answer_format": "Your answer should be a single python list like [\"model_name\", \"question\"].", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["93fd114b-cf87-54e2-904d-9bed2e6997a2"], "reference_pdf": ["7af233fe-a0ce-5f04-bc3e-d7f4ec35d418"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "Geom-GCN", "lowercase": true}, {"reference_answer": "Can the aggregation on a graph benefit from a continuous latent space, such as using geometry in the space to build structural neighborhoods and capture long-range dependencies in the graph?", "question": "What question does the source paper of this model mainly focus on?"}]}}} {"uuid": "127ed600-b05d-5a06-9987-3d8dfe98b135", "question": "Among all the given anchor PDFs, which one does not use any objective evaluation metrics? 
What is the difference in conversion goals between this paper and the others?", "answer_format": "Your answer should be a Python list where the first item is the full name of the paper (a string) and the second item describes the difference in conversion goals (a string).", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["911d4572-a45c-5bb8-8662-2e2455909cd2", "4535464e-20c9-5370-ba5c-63e36d110e00", "03b37833-5723-5e55-950d-d5d25a5c539c", "9d047a73-dcef-5b95-8f29-d9de087dabf8", "8d661aa7-ec95-5977-9825-0e4c94751e52"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "Voice-Preserving Zero-Shot Multiple Accent Conversion", "lowercase": true}, {"reference_answer": "Other papers focus on voice conversion, while this paper focuses on accent conversion, where the speaker's voice identity should be preserved.", "question": "What is the difference in conversion goals between this paper and the others?"}]}}} {"uuid": "12a70e18-aa46-5779-bd69-2f3620d7f484", "question": "Which downstream tasks does CLiCoTEA outperform other models in terms of zero-shot performance on the IGLUE benchmark?", "answer_format": "Your answer should be a Python list of strings, every string is the abbreviation of a downstream task type mentioned in the paper.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["VE", "VR"], "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["0d6ea045-b831-520d-9b99-ba22a081a403"], "reference_pdf": []} {"uuid": "139b4a99-bd26-5162-a087-d19ee079ebd2", "question": "For the four types of evaluation mentioned in the paper, provide their names and corresponding overall score range.", "answer_format": "Your answer should be a Python dictionary, where the keys are the names of the four types of 
evaluation and the values are the corresponding overall score range, e.g. {\"evaluation1\": [-1, 1], \"evaluation2\": [0, 0.5], ...}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"SA-SAQs": [0, 1], "JWP-SAQs": [0.75, 2.5], "SA-MCQs": [0, 1], "JWP-MCQs": [0, 1]}, "ndigits": 2, "tolerance": 0.001, "ignore_order": false}}, "anchor_pdf": ["a3a0b636-d0ab-5dab-a2c3-49f666755896"], "reference_pdf": []} {"uuid": "13a0a504-c9f0-5db1-b75e-94469a48f6d4", "question": "In the experiment presented in Table 1 of the paper \"Asynchronous Perception Machine for Efficient Test Time Training\", which teacher model performs the best?", "answer_format": "Your answer should be a python string. YOU MUST USE THE EXACT NAME FROM THE TABLE IN THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["df76034e-f65c-5003-a523-30317869770e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "OpenCLIP-VITH/14", "lowercase": true}}} {"uuid": "13c85a99-fac5-53e9-9c74-bf2b67a640aa", "question": "Which database did the authors use to select the 250 highest-rated television series for building their dataset?", "answer_format": "Your answer must be ONE string just containing the database's name in abbreviation format.", "tags": ["single", "text", "objective"], "anchor_pdf": ["9e922c32-0106-5630-ad2c-454362c90cb2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "IMDB", "lowercase": true}}} {"uuid": "14225073-8616-578e-bef7-5b63cfdaa994", "question": "In the prompt that Puzzler adopts, what's the exact context of Task Description for Event Detection?", "answer_format": "Your answer should be raw text.", "tags": ["multiple", "text", "objective"], "anchor_pdf":
["4b4c72af-427d-54d6-be22-0e56ec92ba14"], "reference_pdf": ["73ad76d7-eb4b-59a0-ae8f-d5df7afbe505"], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Task Description: Given an input list of words, identify all triggers in the list, and categorize each of them into the predefined set of event types. A trigger is the main word that most clearly expresses the occurrence of an event in the predefined set of event types.", "fuzz_method": "partial_ratio", "ignore_blank": true, "lowercase": true}}} {"uuid": "146cb92e-45a7-5146-a88a-7492f9b12047", "question": "What paper proposes breaking down programming problems by predicting the objects that a solution would create?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e8a7f4d2-b82b-59f6-a5fe-d64f18a91e2d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper proposes breaking down programming problems by predicting the objects that a solution would create?", "reference_answer": "ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis"}}} {"uuid": "14ae4304-6dbf-539e-8e9a-db0dde3b4959", "question": "How many websites are included in WebArena? Which are they?", "answer_format": "Your answer should be a Python list of 2 elements. The first one is an integer indicating the number of the websites included in WebArena. 
The second one is a string list storing the website names.", "tags": ["single", "image", "objective"], "anchor_pdf": ["5a2b0d5c-6b51-5bbd-a001-a15f19f65a98"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": 4}, {"gold": ["e-commerce platform", "social forum platform", "collaborative development platform", "content management system"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "14ec9987-d8ec-58b4-8caf-45ca042e54e5", "question": "Which type of error is most common on StrategyQA dataset across models according to the paper?", "answer_format": "Your answer must be ONE string of the error's name. You can choose from 'Error#1', 'Error#2', 'Error#3' and 'Error#4'.", "tags": ["single", "image", "objective"], "anchor_pdf": ["acc2bbb3-7f2a-52f8-8e47-a6ceb8654332"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Error#3", "lowercase": true}}} {"uuid": "15142852-7f2c-5711-b8e4-5339e6a16f6e", "question": "What is the source of the datasets used in the paper?", "answer_format": "Your answer should be a Python list, where each element is a string of a dataset source.", "tags": ["single", "text", "objective"], "anchor_pdf": ["72bc7b1d-ad5f-5b67-baf9-20f3349a7474"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Enron Email Corpus", "New Yorker", "Amazon Review Subset"], "lowercase": true, "ignore_order": true}}} {"uuid": "158a0302-d656-5006-9ab8-421c8816faf6", "question": "Is there a paper that shows that language models' error distribution is different for unfamiliar entities that is not apparent when models are evaluated on familiar entities alone?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": 
["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["871f3c9a-35b0-540b-81b7-dbcb571ffe94"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that shows that language models' error distribution is different for unfamiliar entities that is not apparent when models are evaluated on familiar entities alone?", "reference_answer": "Factual or Contextual? Disentangling Error Types in Entity Description Generation"}}} {"uuid": "161f8248-8832-5bf9-85e7-7cbe5d89d69b", "question": "Is there any paper that uses prompt tuning in multi-answer QA?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f7a28b40-026e-50aa-8072-906ef8ae4784"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that uses prompt tuning in multi-answer QA?", "reference_answer": "Answering Ambiguous Questions via Iterative Prompting"}}} {"uuid": "16a194ad-0f62-5048-ab0e-9afa26e75c66", "question": "Is there a method that measures the information provided in a (model generated) rationale beyond what the original context provided?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["d834bb23-9c22-5e94-9421-0be576081dae"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a method that measures the information provided in a (model generated) rationale beyond what the original context provided?", "reference_answer": "REV: Information-Theoretic Evaluation of Free-Text Rationales"}}} {"uuid": "16e480a8-eb0f-5ead-9ca2-cd7b4103d6e4", "question": "Which stage of the two stage 
training mentioned in the paper is the labeled data used in?", "answer_format": "Your answer should be plain text.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["0d99c5cf-d5f7-5fa6-9559-f88731d5fff1"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The labeled data is used in the fine-tuning stage", "question": "Which stage of the two stage training mentioned in the paper is the labeled data used in?"}}} {"uuid": "170deef3-1b76-54ee-a27b-c1fe6bad1061", "question": "Which metric is used to evaluate the emergent abilities on BIG-Bench datasets, as is indicated in the anchor pdf? And why is this metric not favorable according to this paper?", "answer_format": "Your answer should be a Python list of two elements, where the first element is the name of the metric, and the second is the reason why this metric is not favorable according to this paper.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["a424d02c-da19-5e1d-94e8-0b47cf2ded9c"], "reference_pdf": ["c302a979-c9a6-509a-a555-5fc9e5bb7bf8"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "ppl", "lowercase": true}, {"question": "Why is PPL not favorable according to this paper?", "reference_answer": "Because ppl of correct options and incorrect options may decrease simultaneously."}]}}} {"uuid": "17465570-ab08-5b32-ad45-8e87439bb4ed", "question": "What new methods were proposed for multi-perspective mathematical augmentation in the dataset used for Stage-1 training in the paper \"MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["8528897c-c080-5a33-8af9-815ef526d204"],
"reference_pdf": ["7d207789-1284-52d4-8e6f-acd767beaf57"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"question": "What new methods were proposed for multi-perspective mathematical augmentation in the dataset used for Stage-1 training in the paper \"MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning\"?", "scoring_points": ["Reorganization. After the reorganization through the LLM, the solving steps will be more logically organized and clearer. Phrases such as \"understand the problem\", \"define variables\", and \"calculate the number\" act as explicit instructions, leading us toward the final result by \"The answer is\".", "Backward-Forward Transformation. For a certain question-answer pair, we firstly utilize FOBAR to transform the original question Q into a backward one Q_b; secondly, we rephrase the FOBAR question into a new form where the masked value is requested directly instead of employing an unknown variable X, resulting in a \"secondary forward\" question which we called BF-Trans question, marked as Q_{bf}. Finally, we generate the solution S_{bf} for this BF-Trans question. Collecting all these BF-Trans augmented samples, we can have D_{bf} = {(Q_{bf} , S_{bf})}.", "Expression Replacement. First extract all mathematical expressions from the solution. Subsequently, an arithmetic expression is altered to form a novel equation. With the original problem statement and new equations as guides, a new question can be generated denoted as Q_{replace}.", "Majority Sampling Finetuning. They utilize majority voting with k = 30 to request solutions and only select one response with the majority answer for finetuning.", "Nested Multi-task Learning. For the main task of solving mathematical problems Q, they select two auxiliary tasks: summarizing the question and listing the solving plan. 
They prepend the text of question outline O, solving plan P, or both to the solution text S, assembling into an individual final solution S_{mt} = O \\oplus P \\oplus S, for each original question. Then they have D4 = {(Q, S_{mt})} as the nested multitask dataset."]}}} {"uuid": "175c78ea-6395-5e79-9bb1-7211e16b8bd6", "question": "In the latest retrieval method that is applied in the experiment of the case study, and that doesn't require a re-ranking model, which dataset is used as training data for tool learning?", "answer_format": "Your answer should be a string, the name of the dataset.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["5b327157-f0cb-568f-8181-397707615f40"], "reference_pdf": ["52845774-6019-555a-89ce-3677a2eaea06", "413e7de9-03c4-5c1f-9e42-cd48030c9369"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ToolLLM", "lowercase": true}}} {"uuid": "17638abc-7058-5c53-97e7-99ee69763f57", "question": "To investigate the problem \"Can the generative model be used to effectively leverage (small amounts of) human data, and also combine it with simulated agents?\", which baseline is applied?
In the paper that proposes the baseline, what's the average learning rate for the baseline?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the baseline, the second is a float, the average learning rate, rounded to 4 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["3f602bc4-14ba-5c76-81e6-89fb1ea38c1b"], "reference_pdf": ["3b2a0e14-91d6-5b05-aaa1-a7684ce731cd"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["PPO-BC", 0.0012], "lowercase": true, "fuzz_method": "partial_ratio", "threshold": 95, "ignore_order": false, "ndigits": 4}}} {"uuid": "1797020c-d6b0-5909-a17b-25688d7bc433", "question": "According to this paper, what type will the question \"How to keep strawberries fresh\" be classified into?", "answer_format": "Your answer should be a single string representing the question type.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "open", "lowercase": true}}, "anchor_pdf": ["010e09cf-8f86-5759-8446-0f7e6558997c"], "reference_pdf": []} {"uuid": "17a94454-491d-5b5d-8e09-ab53ca65accc", "question": "What's the difference between DigiRL and a trivial filtered BC?", "answer_format": "Your answer should be in fluent English.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["3ef4f8bf-6e26-545b-b51c-e6a7969818c7"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Filtered BC filters trajectories by golden rewards.", "DigiRL filters trajectories by CRR Advantage."], "question": "What's the difference between DigiRL and a trivial filtered BC?"}}} {"uuid": "17c12592-2818-50f7-9f7c-e5247e778f58", "question": "Which newest paper about fine-tuning strategies does this paper (titled \"ICU: Conquering Language Barriers in Vision-and-Language
Modeling by Dividing the Tasks into Image Captioning and Language Understanding\") mention in the introduction? What is the originality of this paper (I mean the paper mentioned in the \"ICU\" paper) compared to previous works?", "answer_format": "Your answer should be a single python list, the first element is the string of the paper name, the second element is a string of the originality of this paper.", "tags": ["text", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_paper_relevance_with_reference_answer", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"reference_answer": "Delving Deeper into Cross-lingual Visual Question Answering", "question": "Which newest paper about fine-tuning strategies does this paper (titled \"ICU: Conquering Language Barriers in Vision-and-Language Modeling by Dividing the Tasks into Image Captioning and Language Understanding\") mention in the introduction?"}, {"reference_answer": "This paper is the first to provide a comprehensive analysis of multilingual VQA, with a focus on cross-lingual transfer.", "question": "What is the originality of this paper compared to previous works?"}]}}, "anchor_pdf": ["ce10ef41-538a-5958-b599-32431367af83"], "reference_pdf": ["2ba6d2a8-a65f-51d1-9ac2-df80a3e865de"]} {"uuid": "17ed4a9d-9711-5799-b02a-6b2cdd366288", "question": "Is there a study that shows how to help the demonstration retriever better integrate feedback from LLMs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f233af11-6a90-55eb-bfb3-2f1f5707549a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a study that shows how to help the demonstration retriever better integrate feedback from LLMs?", "reference_answer": "Unified Demonstration
Retriever for In-Context Learning"}}} {"uuid": "17fe77cb-688f-52d4-be70-a66eaadc17ff", "question": "What are the main works at the table-level when performing data augmentation on the MMTab dataset in this paper?", "answer_format": "Your answer should be a Python strings about the main works at the table-level when performing data augmentation on the MMTab dataset.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The main work is to separately design scripts to render table images with three different styles: Web-page (70.8%), Excel (19.4%) and Markdown (9.8%). Fine-grained adjustments such as font type and cell colors are also considered", "question": "What are the main works at the table-level when performing data augmentation on the MMTab dataset in this paper?"}}, "anchor_pdf": ["a7f69ed5-2c8a-5892-bf66-c2599380f805"], "reference_pdf": []} {"uuid": "18b13577-3570-5e5f-be1c-77606cce3cf4", "question": "When the authors use ChatGPT in generating data points, what is the templated prompt?", "answer_format": "Your answer should be raw text directly demonstrating the used prompt template without any other context.", "tags": ["single", "subjective", "table"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "I am creating a dataset and need to generate data that is similar but not identical to the following examples. Here are 5 examples from my dataset:\n1. [Example 1]\n2. [Example 2]\n3. [Example 3]\n4. [Example 4]\n5. [Example 5]\nPlease generate [Specified Number] new data points that are similar in style and structure to these examples but are unique in content. Format the responses as a numbered list, starting from 6 onwards. Each data point should start on a new line and be prefixed with its corresponding number followed by a period and a space.\nFor example:\n6. [New Data Point 1]\n7. 
[New Data Point 2]\n...", "question": "When the authors use ChatGPT in generating data points, what is the templated prompt?"}}, "anchor_pdf": ["09abed2f-a7af-56e5-8a34-3fc5e7130c6a"], "reference_pdf": []} {"uuid": "192f5d76-b256-57c4-a3e7-1df0fffe30b4", "question": "Which paper studies how difficult is a policy learning problem under non-additive rewards in terms of theoretical lower bounds?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY OTHER EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3dcde5f5-a0c3-594d-963c-8e2409f39947"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper studies how difficult is a policy learning problem under non-additive rewards in terms of theoretical lower bounds?", "reference_answer": "Submodular Reinforcement Learning"}}} {"uuid": "198666fc-a067-52c2-b80f-fb804bc80034", "question": "What is the ranking of the average performance of the models compared in the experiment across all languages where each model has a value in the all-language finetuning, from highest to lowest?", "answer_format": "Your answer should be a Python list of elements, each element is a model name string, e.g., [\"model_name 1\", \"model_name 2\", ...].", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["mCLIP", "mCLIP+", "UC2", "M3P"]}}, "anchor_pdf": ["ff651d37-e725-5752-9c38-3361bc54723d"], "reference_pdf": []} {"uuid": "1a507489-7b21-5be2-a00b-a30cf98564a6", "question": "In this paper, for Gitksan and Natugu, what is the test set? 
What's its relevant github link?", "answer_format": "Your answer should be a single list of two strings, the first string is the test set name, the second string is the github link.", "tags": ["text", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_element_included", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": ["SIGMORPHON 2023 Shared Task", "SIGMORPHON 2023 ST", "SIGMORPHON 2023", "SIGMORPHON 2023 Shared Task on Interlinear Glossing"], "lowercase": true}, {"gold": "https://github.com/sigmorphon/2023glossingST", "lowercase": true}]}}, "anchor_pdf": ["c5c49d69-0cf3-58f9-8a74-17f9d5b17ffe"], "reference_pdf": []} {"uuid": "1a8c2b00-29f9-5a58-83ab-6dbf7061a039", "question": "Which paper considers both weights and activations when pruning large language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e5cae9b9-016a-5169-96c7-3ef7c8afc164"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper considers both weights and activations when pruning large language models?", "reference_answer": "A SIMPLE AND EFFECTIVE PRUNING APPROACH FOR LARGE LANGUAGE MODELS"}}} {"uuid": "1aa41ae8-f867-5928-857a-22d7d028f976", "question": "According to Table 1, which setup (including its corresponding parameters) achieves the highest precision on the test set?", "answer_format": "Your answer should be a python dict, with the format of {'setup_name': parameter_value}.", "tags": ["single", "table", "objective"], "anchor_pdf": ["0f6af664-5af4-5274-bda9-66734c7fa9ef"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Roberta": 1500}, "lowercase": true}}} {"uuid": "1b4de802-e98f-51a6-921f-49fe6fb0f4be", 
"question": "In paper \"What Do Language Models Learn in Context? The Structured Task Hypothesis.\", which hypothesis is mostly supported by the experiments?", "answer_format": "Your answer should be the context that defines the hypothesis.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["0f437127-17d2-5ff6-a43f-641d5dc0549e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "During pre-training, an LLM learns a set of tasks $\\bar{\\mathcal{T}}$. At inference time, the LLM uses the demonstration to compose a sequence of learned tasks $\\tau_1,\\tau_2,\\dots\\in\\bar{\\mathcal{T}}$ and uses this composition for prediction. The composition itself may result in a novel task not seen during pre-training.", "question": "In paper \"What Do Language Models Learn in Context? The Structured Task Hypothesis.\", which hypothesis is mainly supported by the experiments?"}}} {"uuid": "1bc803b2-807d-5218-891c-da60a470cd93", "question": "Which model achieves the highest accuracy of the classification when the training data consists of 512 pairs of FPQ and TPQs in this paper?", "answer_format": "Your answer should be a python string about the name of the best model. 
You'd better use the names as they are referred to in the paper.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MACAW-11B", "lowercase": true}}, "anchor_pdf": ["529875d6-5189-5c4d-9076-1635a01a862d"], "reference_pdf": []} {"uuid": "1c0a1908-daee-5d66-95ab-827900fa14c0", "question": "What paper investigated the effect of the relative position (closer or further away) of the most pertinent retrieved code snippets on repository-level code completion performance?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["aa4f39f3-512d-56ee-9217-148b04199699"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper investigated the effect of the relative position (closer or further away) of the most pertinent retrieved code snippets on repository-level code completion performance?", "reference_answer": "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems"}}} {"uuid": "1c8576c1-c918-543c-9c0f-68bc5d28bc5a", "question": "In the paper that proposes LaCLIP, under zero-shot setting, pretrained with CC12M, LaSLIP lags behind SLIP on which specific evaluation dataset? 
In that dataset, which country has the highest proportion?", "answer_format": "Your answer should be a Python list of 2 string, the abbreviation of the dataset and the name of the country, as given in the papers.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["1fbc62a2-e02a-53d7-be30-8657bb1c64ac"], "reference_pdf": ["531397c9-efa2-5ec2-9905-013a0b884632"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["EuroSAT", "France"], "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "1cc5d12c-31dd-5e52-92d5-9227e8cfbfcd", "question": "What are the main questions that this paper tries to resolve or answer?", "answer_format": "Your answer should be a Python list of text strings, with each element being one critical problem that this paper analyzes, e.g., [\"question 1\", \"question 2\", ...].", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Is it possible to determine data contamination by solely analyzing the inputs and outputs of existing LLMs?", "Do recent GPTs excel in Text-to-SQL tasks in a zero-shot setting both on potentially leaked data and totally unseen one?", "Is data contamination affecting the accuracy and reliability of an existing GPT in Text-to-SQL tasks?"], "question": "What are the main questions that this paper tries to resolve or answer?", "ignore_order": true}}, "anchor_pdf": ["effede2d-ed50-597e-b1fb-d5fc1b6dc554"], "reference_pdf": []} {"uuid": "1d245f04-b2e6-56a5-b835-ca02401943aa", "question": "According to the EBD paper, which model's forward process is based on the PDE $\\frac{\\partial}{\\partial t} \\mathbf{x}(i,j,t) = \\Delta \\mathbf{x}(i,j,t)$? In the paper that proposes that model, how is $\\tilde{u}$ calculated?", "answer_format": "Your answer should be a Python list of 2 elements. 
The first is the abbreviation of model, the second is a string, the formula in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["03f5c8ae-47ce-5312-b368-7fe6a4088e58"], "reference_pdf": ["01bde459-5e5e-5471-949d-cd6204a4ed64"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "IHDM", "lowercase": true}, {"formulas": "\\tilde{u} = V^T u = \\text{DCT}(u)", "question": "In the paper that proposes IHDM, how is $\\tilde{u}$ calculated?"}]}}} {"uuid": "1d3a6e38-9233-5b75-9b60-7e94deea2f36", "question": "In the paper that proposed PsychoBench, a method is applied to jailbreak GPT-4. In the paper that proposed the jailbreak method, for GPT-3.5-Turbo, in Chinese, on which domain the Unicode baseline surpasses the proposed method?", "answer_format": "Your answer should be a Python string, the name of the domain.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9b3a636d-5184-58d5-9333-49a304ad2f68", "462eba24-b481-518d-957e-63a097a7ba08"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "UnsafeTopic", "ignore_blank": true, "lowercase": true}}} {"uuid": "1dcc43fb-6622-5fbb-aa25-050331b59af3", "question": "In the paper \"SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark\",how many free-response questions are there in the 6 benchmarks listed in Table 1? Also, please give the total number of free-response questions in the 6 benchmarks.", "answer_format": "Your answer should be a python dictionary, e.g. {'total': 10, 'dataset_1': 2, 'dataset_2': 3, ...}. 
For the names of datasets, use the exact names in Table 1 without changing CAPITALIZATION.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["a30d4d0c-e7dc-5b17-91e2-e238828b5b54", "6ece6006-c66e-5495-be77-d81d40a215d0", "40911da4-3a2d-516b-9e83-25600a989feb", "9b650455-de6b-5b84-b6e9-0b6d1bdd00a0"], "reference_pdf": ["c2c5bf1a-3d4a-508e-a217-b3e4b78ce7f7", "cba0951e-ebc2-5775-892d-a2c30ce3eb88"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"total": 4507, "MMLU": 0, "SciBench": 869, "ScienceQA": 0, "MathVista": 2749, "MMMU": 689, "SceMQA": 200}, "ignore_order": true}}} {"uuid": "1ea15b22-ad6d-584d-80f0-3efa819fc91d", "question": "Which paper proposed dictionary-based Bayesian inference to improve the performance of image text matching model?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["214cc4cb-3e67-5cb8-b60b-86034ec0182c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposed dictionary-based Bayesian inference to improve the performance of image text matching model?", "reference_answer": "Vision Meets Definitions: Unsupervised Visual Word Sense Disambiguation Incorporating Gloss Information"}}} {"uuid": "1eab7a9f-66b4-5bd3-9870-e4df2e3192dc", "question": "Can we find the solution of the Bilevel optimization when the lower-level problem is nonconvex?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4616e316-7c26-5125-9e57-6ea31a869345"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can we find the solution of the Bilevel optimization when the lower-level problem is 
nonconvex?", "reference_answer": "On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation"}}} {"uuid": "1f3102a6-711f-5993-8e39-2334cfb5d96d", "question": "What are the four evaluation metrics of the held-out dataset that InstructBLIP used for image QA?", "answer_format": "Your answer should be a Python list of 4 strings, the metrics.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["3cc54434-ac97-55b6-bb74-489e117724f8"], "reference_pdf": ["ada51525-eefa-5191-b98f-51f979a53ae9"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Accuracy", "CIDEr", "BLEU4", "METEOR"], "fuzz_method": "ratio", "threshold": 90, "lowercase": true, "ignore_order": true, "ignore_blank": true}}} {"uuid": "1f43b095-5787-5f8c-9570-f1751018227f", "question": "What is the main Implicit K-means loss used in this paper (titled \"THE HIDDEN UNIFORM CLUSTER PRIOR IN SELF-SUPERVISED LEARNING\")? How is the loss between two batches defined in the source paper of this loss?", "answer_format": "Your answer should be a single python list like [\"xxx loss\",\"some formulas in latex format\"]", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["a59a0160-827d-5d49-aed6-060a814fbb31"], "reference_pdf": ["a1e82a6d-80d5-563e-b76c-33c0d072f0ee"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "VICReg loss", "lowercase": true}, {"formulas": ["\\ell\\left(Z, Z^{\\prime}\\right) = \\lambda s\\left(Z, Z^{\\prime}\\right)+\\mu\\left[v(Z)+v\\left(Z^{\\prime}\\right)\\right]+\\nu\\left[c(Z)+c\\left(Z^{\\prime}\\right)\\right]", "s(Z,Z^{\\prime})=\\frac{1}{n}\\sum_i\\|z_i-z_i^{\\prime}\\|_2^2", "v(Z)=\\frac{1}{d}\\sum_{j=1}^d\\max(0,\\gamma-S(z^j,\\epsilon))", "c(Z)=\\frac{1}{d}\\sum_{i\\neq j}[C(Z)]_{i,j}^2"], "question": "How is the loss between two 
batches defined in the source paper of this loss?"}]}}} {"uuid": "1fa16ed9-14d2-5abe-901b-3d34ddceceee", "question": "What research first proposed a new kind of cascaded diffusion of a Markov process?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1315de2a-a81f-5e71-95a9-46cc4c5f026c"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What research first proposed a new kind of cascaded diffusion of a Markov process?", "reference_answer": "RELAY DIFFUSION: UNIFYING DIFFUSION PROCESS ACROSS RESOLUTIONS FOR IMAGE SYNTHESIS"}}} {"uuid": "1faadd0a-1ee9-5541-b4ca-7a0bd3cacc0e", "question": "RRCP is a new pipeline proposed to recognize retrieval complexity, on which complex QA datasets does it significantly outperform the LLM baseline, with a clear improvement in Accuracy or F1 Score of at least 0.1?", "answer_format": "Your answer should be a list of elements, each element is the QA dataset name string, e.g., [\"QA dataset1\", \"QA dataset2\", ...].", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CWQ", "StrategyQA", "MuSiQue"], "ignore_order": true}}, "anchor_pdf": ["7640a304-546b-5aa9-8111-8a56b9b06861"], "reference_pdf": []} {"uuid": "2074d015-dc9a-5c20-aeba-2835003f4607", "question": "In the related work mentioned in the Table 1 of the paper Reflect-RL, that is categorized as RL Fine-tuning and that doesn't involve vision modal, what's the token-level probability of a_k?", "answer_format": "Your answer should be a formula in LaTeX format.", "tags": ["multiple", "table", "formula", "subjective"], "anchor_pdf": ["5a11c640-e530-5c9e-b48c-d6130a4c4991"], "reference_pdf": ["f917a6ca-8134-57d1-9a6a-f28930a380d7", "83cda339-482e-5c4c-aeaa-eb7e51dba851"], "conference": 
[], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "P_{\\text{token}}(a_k \\mid s) = P(w_k^1, \\ldots, w_k^{N_k} \\mid s) = \\prod_{i=1}^{N_k} P(w_k^i \\mid s, w_k^1, \\ldots, w_k^{i-1})", "question": "What's the token-level probability of a_k?"}}} {"uuid": "2136d7bb-b74a-5176-8a1c-4b521df0603c", "question": "What's the formula of Subset consistency accuracy?", "answer_format": "Your answer should be a single latex formula extracted from the source paper of this metric.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["21b6b251-b7cd-5d0b-9dfd-19fe0e2efc1f"], "reference_pdf": ["eeb9e55e-68db-5b52-a5e7-e8a212c9e7d4"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\operatorname{ACC}(I) \\stackrel{\\text { def }}{ = } \\sum_{m_{1}, m_{2} \\in \\mathcal{M}} \\frac{\\mathbb{1}\\left[\\left(m_{1}>_{I} m_{2}\\right) \\Leftrightarrow\\left(a_{m_{1}}>a_{m_{2}}\\right)\\right]}{|\\mathcal{M}|^{2}}", "question": "What's the formula of Subset consistency accuracy?"}}} {"uuid": "215e6bbc-eec4-5912-849a-e9ec96850a60", "question": "What are the main differences between Mobile-Agent-v2 and Mobile-Agent?", "answer_format": "Your answer should be in a well-formatted item list.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["18f299e4-95c7-5356-ae9a-777eb6df9a3b", "cba158c4-5481-52e1-bd09-61ed74b80200"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Mobile-Agent-v2 takes advantage of coordination of multiple agents, while Mobile-Agent comprises only a single agent.", "Mobile-Agent-v2 introduces a memory unit to save experiences from past tasks."], "question": "What are the main differences between Mobile-Agent-v2 and Mobile-Agent?"}}} {"uuid": "2166da5e-be09-5f2b-a8e9-7fed58ede51d", "question": "According to Table 2, which models perform the highest on each of the 8 tasks 
of GLUE?", "answer_format": "Your answer should be a python list of the names of the models reaching highest performance on MNLI, QQP, QNLI, SST-2, STS-B, MRPC, RTE, and CoLA respectively. If two models get the same score, you can use \"and\" to connect their names, e.g. A and B.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SCALEARN++", "SCALEARNUNIFORM and SCALEARNUNIFORM++", "SCALEARNUNIFORM and SCALEARNUNIFORM++", "SCALEARN++", "SCALEARN", "ADAPTERFUSION", "SCALEARN", "SCALEARN++"], "ignore_order": false}}, "anchor_pdf": ["9b06b24b-0afc-5ccb-95fc-c662395d291d"], "reference_pdf": []} {"uuid": "21ba07ba-2e6d-5200-9764-f40cc4aa3a6d", "question": "In the comparison of APoLLo with SOTA methods on a Base-to-Novel Class Generalization task, what is the title of the source paper of the dataset where APoLLo outperforms MaPLe (previous SOTA) the most?", "answer_format": "Your answer should be a single string.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Describing textures in the wild", "lowercase": true}}, "anchor_pdf": ["1f118f96-bf9a-5dac-99ca-06c24f66d8cd"], "reference_pdf": []} {"uuid": "220ea46c-5777-52dd-a581-54513207a179", "question": "How many thousand conversations are there in the datasets used to train CONVAUG in total?", "answer_format": "Your answer should be a python float with one decimal place.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["259a072c-f5f0-594d-8018-e6fc4d528d07"], "reference_pdf": ["8acb57ed-7324-5327-8d39-f2c041ec6f2d", "a9f80f03-a63b-564a-8789-70b7c7096819"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 17.5, "ndigits": 1}}} {"uuid": "2214bdec-6cf4-5cce-a5fb-b531bb41e777", "question": "Which paper first proposed shared adapter module across layers?", "answer_format": "Your answer 
should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["39685a17-64bd-507b-88cf-62fe51bc6358"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposed shared adapter module across layers?", "reference_answer": "One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning"}}} {"uuid": "231728d1-f6b7-5cd5-862e-ee831b2c4ed4", "question": "In the paper that proposes the best-performing model on hMOF evaluated in LLM4Mat-Bench paper, which baseline that performs better than CGCNN both on validation and test sets is not evaluated in the LLM4Mat-Bench paper?", "answer_format": "Your answer should be a string, the name of the baseline.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1e9a0edd-23ba-5ff3-89bb-4ae4350753be"], "reference_pdf": ["da1d6ccd-43f9-5f7d-888a-084994068ecb"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ALIGNN", "lowercase": true}}} {"uuid": "234f08cf-8a52-53cc-947e-e508f711e87a", "question": "Which model reaches the highest accuracy under zero-shot setting in CARES, considering the dimension shown in the bottom-middle of Figure 1? Additionally, in the paper that proposes the model, which dataset for pre-training is also released? 
What's the largest data source that CARES uses but this dataset doesn't?", "answer_format": "Your answer should be a Python list of 3 elements, the first is the name of the model, the second and the third are the abbreviations of the datasets.", "tags": ["multiple", "image", "table", "objective"], "anchor_pdf": ["2b27f132-5f9d-5c96-a564-1edec3b3b008"], "reference_pdf": ["773f0d37-c822-54f5-a7f8-ddc93e70d845"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["RadFM", "MedMD", "OmniMedVQA"], "lowercase": true, "ignore_order": false, "ignore_blank": true}}} {"uuid": "2351ad69-2ee2-5348-a305-1b7bc5a8fb3a", "question": "Which paper first found that REINFORCE works better than actor critic algorithms like PPO for RL finetuning of pretrained chemistry language models (Transformers and RNNs)?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["01df9e2a-8126-5bfa-ac8c-28307ef06d12"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first found that REINFORCE works better than actor critic algorithms like PPO for RL finetuning of pretrained chemistry language models (Transformers and RNNs)?", "reference_answer": "Searching for High-Value Molecules Using Reinforcement Learning and Transformers"}}} {"uuid": "235fcdd4-ea08-51ed-8a01-c9637eecfcab", "question": "Which three VQA benchmarks does the paper use for evaluation? 
Among the training datasets, which has the largest number of images?", "answer_format": "Your answer should be a list of four strings, the last element is the string of the name of the largest training dataset.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["InfographicVQA", "ChartQA", "DocVQA", "WKVVQA"], "ignore_order": true, "lowercase": true}}, "anchor_pdf": ["2c222763-f33b-5cfa-8897-5d217aaf9142"], "reference_pdf": []} {"uuid": "23ca6f0a-69de-55f5-9489-c0d7ddd50b18", "question": "Which papers were among the first to explore the task of targeted training data extraction?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e069fe2e-b99d-5380-9283-fca748dc8aaf"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which papers were among the first to explore the task of targeted training data extraction?", "reference_answer": "ETHICIST: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation"}}} {"uuid": "23cb6726-c7b6-56f0-86bc-4939eac49e1d", "question": "What is the innovation of the formula (6) in this paper?", "answer_format": "Your answer should be a Python strings of innovation of the formula.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The innovation of this loss function is that it consists of two parts: ground-truth loss and distillation loss. 
The ground-truth loss is to use one-hot labels to predict connectives, and LKD is the knowledge distillation loss utilizing the Kullback-Leibler divergence to quantify the difference of output distribution from student's soft predictions to teacher's soft labels, which means the student model S is required to match not only the groundtruth one-hot labels but also the probability outputs of the teacher model T.", "question": "What is the innovation of the formula (6) in this paper?"}}, "anchor_pdf": ["7e6c6c6a-f0a6-59ee-8734-af8a912dcf09"], "reference_pdf": []} {"uuid": "247b6978-be01-50c8-92fb-e27122c244f0", "question": "Is there any paper that explores using only an encoder-only masked language model for open-ended long text generation (such as story generation)?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["544c5a72-cc8a-5fbf-b84c-72cfd3c40df7"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that explores using only an encoder-only masked language model for open-ended long text generation (such as story generation)?", "reference_answer": "Open-ended Long Text Generation via Masked Language Modeling"}}} {"uuid": "2536a846-15c8-5b2a-bedf-8b878bff149a", "question": "Which institution is the corresponding author of the paper \"DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering\" affiliated with?", "answer_format": "Your answer should be a string containing the exact full name of the institution without changing CAPITALIZATION.", "tags": ["single", "objective", "metadata"], "anchor_pdf": ["92550830-406d-5c07-9eab-5e42dffbd632"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "National Key Laboratory of Multimedia Information 
Processing, School of Computer Science, Peking University"}}} {"uuid": "259a085d-f5b9-5b80-aa31-a9720bad7047", "question": "Which paper first proved that wide-enough transformer architectures trained with gradient methods on enough data would learn to solve relational reasoning tasks?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["653bd9ed-b607-56e6-adb6-67d86cbde326"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proved that wide-enough transformer architectures trained with gradient methods on enough data would learn to solve relational reasoning tasks?", "reference_answer": "When can transformers reason with abstract symbols?"}}} {"uuid": "25c34c03-3d73-51df-bb4a-ba58f03bab41", "question": "On which benchmark does the author evaluate SnapKV against multiple baseline models? Additionally, what is the number of data for each type of task in the benchmark?", "answer_format": "Your answer should be a Python list containing two elements: the first element should be a string representing the benchmark name, and the second element should be a list of integers indicating the number of data points for each type of task in the benchmark. 
For example: [\"GLUE\", [100, 200, 300]].", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["7359e988-66ba-5081-9831-29196c891581"], "reference_pdf": ["d6b892b8-cf43-5b62-bde9-48c070c2e5dc"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "LongBench", "lowercase": true}, {"gold": [750, 800, 600, 800, 600, 1000], "ignore_order": true}]}}} {"uuid": "25c78c8f-a93c-547a-b06a-b46a60ecba87", "question": "Is there any paper that improves adversarial training by forming semantic aware label without extra pre-train time or data?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f5d5d48b-42f7-57b9-9d53-818c0ad6f00f"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that improves adversarial training by forming semantic aware label without extra pre-train time or data?", "reference_answer": "Annealing Self-Distillation Rectification Improves Adversarial Training"}}} {"uuid": "25e00706-c80c-5169-a5e9-9256c5165a89", "question": "What is the method of prefix-tuning, mentioned as the PEFT module, in the method section of this paper?", "answer_format": "Your answer should be a python string about the method of prefix-tuning.", "tags": ["multiple", "formula", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Prefix-tuning prepends a prefix for an autoregressive LM to obtain $z = \\left[ \\mathrm{PREFIX}; x; y \\right]$, or prepends prefixes for both encoder and decoder to obtain $z = \\left[ \\mathrm{PREFIX}; x; \\mathrm{PREFIX'}; y \\right]$. We follow the recurrence relation, except that the prefix are free parameters. 
Prefix-tuning initializes a trainable matrix $P_{\\theta}$ (parametrized by $\\theta$) to store the prefix parameters. The language model parameters $\\phi$ are fixed and the prefix parameters $\\theta$ are the only trainable parameters. Here, $h_i$ (for all $i$) is a function of the trainable $P_{\\theta}$. When $i \\in \\text{P}_{\\text{idx}}$, this is clear because $h_i$ copies directly from $P_{\\theta}$. When $i \\not \\in \\text{P}_{\\text{idx}}$, $h_i$ still depends on $P_{\\theta}$, because the prefix activations are always in the left context and will therefore affect any activations to its right.", "question": "What is the method of prefix-tuning, mentioned as the PEFT module, in the method section of this paper?"}}, "anchor_pdf": ["0905f55c-e8a3-5931-bd5a-dd9b69146ca1"], "reference_pdf": ["21df0715-990d-58d3-b218-280ac3a84c8f"]} {"uuid": "25fd4dd0-a865-541f-bcdd-246a56ba36ed", "question": "Both papers use the Model Performance on EditEval to test their models. What existing models' data do they use in common?", "answer_format": "Your answer should be a python list, each element is a string, which refers to a model name.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["cabc7bed-6a8b-5030-a199-716eac881799", "f9c34aba-31a0-5b67-83a6-3cde37f2aecb"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["T0++", "T0", "PEER-3", "PEER-11", "Tk", "PaLM 2"], "ignore_order": true, "lowercase": true}}} {"uuid": "26030580-cffa-5664-bd4d-4f9eab957b98", "question": "In experiments with a two-stage framework similar to BalSum, are there any other available datasets besides the ones used in this paper?", "answer_format": "Your answer should be a python list of elements, each element is the experiment dataset name string, e.g., [\"dataset1\", \"dataset2\", ...].
YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["0f17ce7f-7a37-56d7-bc18-a38d4649fef0"], "reference_pdf": ["1e1fc69b-da77-5cec-b254-7c60bf226a84", "ee3b5bf9-4c0f-5db3-ac89-6cde80f4789a", "f462838f-7bca-547d-b577-7c1dc8ff7f64"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Reddit TIFU", "NYT"], "ignore_order": true}}} {"uuid": "26ec953d-2268-577d-a22e-7f8313b800d8", "question": "Which testing dataset in the paper has the largest size (determined by the count of instances)? I want to read its source paper; can you give me the title?", "answer_format": "Your answer should be a list of two strings, the first element is the name of the testing dataset, and the second element is the title of the source paper.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["GENIA", "The genia corpus: An annotated research abstract corpus in molecular biology domain"], "ignore_order": false, "lowercase": true, "threshold": 90}}, "anchor_pdf": ["ceae4995-2be0-5b0e-8cd2-014bebec7870"], "reference_pdf": []} {"uuid": "2719728b-95f0-5418-a64d-6f6a4b9d8e71", "question": "In the two-phase pre-training of this paper, what is the phase after the regular pretraining?
And for this phase, how is the sparse contextualized representation obtained?", "answer_format": "Your answer should be a list of two strings, the first element is the name (two words) of the phase, and the second element is the formula in LaTeX format providing a useful signal during the second phase of pre-training.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "Knowledge distillation", "lowercase": true}, {"formulas": "\\min _{\\boldsymbol{\\alpha} \\in \\mathbb{R}_{\\geq 0}^{k}} \\frac{1}{2}\\left\\|\\boldsymbol{h}^{(l)}-\\boldsymbol{D} \\boldsymbol{\\alpha}\\right\\|_{2}^{2}+\\lambda\\|\\boldsymbol{\\alpha}\\|_{1}", "question": "For this phase, how is the sparse contextualized representation obtained?"}]}}, "anchor_pdf": ["be9dbf2b-6279-53cb-a9df-cd5d983b7192"], "reference_pdf": []} {"uuid": "27413ff9-4f7d-5885-a5ea-79e29a534fa9", "question": "Which paper first found that multilingual models can infer cross-lingual supervision in MLM training by themselves?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3a0c6639-42c1-5bfe-91a9-0c6a7ac2d24b"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first found that multilingual models can infer cross-lingual supervision in MLM training by themselves?", "reference_answer": "On-the-fly Cross-lingual Masking for Multilingual Pre-training"}}} {"uuid": "27873950-73ca-554c-be4b-88fc723841e7", "question": "How to calculate $l^{\\prime}(c,c^{\\prime})$ in the equation under Section 3.2?", "answer_format": "Your answer should be a sentence describing how to calculate the equation, including the explanation of relevant terms.",
"tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "$l^{\\prime}(c,c^{\\prime}) = \\mathbbm{1}(l(c) \\geq l(c^{\\prime}))$ is a pairwise comparator, which takes a pair of comparisons $(c,c^{\\prime})$ and determines the more consistent one. $l(c)$ evaluates the consistency of $c$, and a higher value of $l(c)$ indicates a greater degree of consistency.", "question": "How to calculate $l^{\\prime}(c,c^{\\prime})$ in the equation under Section 3.2?"}}, "anchor_pdf": ["d23ce539-adb7-5709-b341-169c0dcd5871"], "reference_pdf": []} {"uuid": "27bd3238-0bb7-540a-8e4f-5acc74fe7b92", "question": "In the paper that proposed two existing remote sensing vision-language datasets listed in the VRSBench paper, which method reaches the highest score on area comparison tasks?", "answer_format": "Your answer should be a single word, the name of the method.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0ecb9fb6-f66a-519b-90de-10b955b8399d"], "reference_pdf": ["59bef33e-66e2-5e21-a026-d8e055da92f1", "3a7ec7eb-f552-5dfa-8801-5b03df2abc46"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "InstructBLIP", "lowercase": true}}} {"uuid": "27d44cad-3277-5e38-9d8a-87f953efe90f", "question": "Which datasets in the reading comprehension domain are used for instruction-tuning dataset curation in both FLAN and INTERS?", "answer_format": "Your answer should be a Python list of strings, the abbreviations of the datasets.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["6cf825a3-6133-57ec-9a68-5789597b122e", "7908763f-3a9d-5ce5-af59-f68888750583"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SQuAD", "BoolQ"], "ignore_order": true, "lowercase": true}}} {"uuid": "2819ea5c-0598-511e-a95f-ce3e567a1b10", "question": "Is there a paper that
connects the basic elements of storytelling with biased or imbalanced media reporting?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["c7142e48-9ca3-5748-9b60-bef33c9c0c27"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that connects the basic elements of storytelling with biased or imbalanced media reporting?", "reference_answer": "Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing"}}} {"uuid": "282710e9-2b2e-5e43-82c1-58505f4ee11f", "question": "Which pre-trained model does the Index Generation Framework of this paper (titled \"Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge\") use as the backbone? What's the architecture of this pre-trained model compared with GPT and BERT?", "answer_format": "Your answer should be a single python list, the first element is a string of the model name, the second element is a string about its special architecture.", "tags": ["image", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "BART", "lowercase": true}, {"reference_answer": "For BART, inputs to the encoder need not be aligned with decoder outputs, allowing arbitrary noise transformations. Here, a document has been corrupted by replacing spans of text with mask symbols. The corrupted document is encoded with a bidirectional model, and then the likelihood of the original document is calculated with an autoregressive decoder.
For fine-tuning, an uncorrupted document is input to both the encoder and decoder, and it uses representations from the final hidden state of the decoder.", "question": "What's the architecture of this pre-trained model compared with GPT and BERT?"}]}}, "anchor_pdf": ["7d92e1d8-9216-529b-87bc-34f7508ed2b7"], "reference_pdf": ["f6e91a91-0b1e-5280-8522-a20492033f16"]} {"uuid": "2939967c-6e6d-5ae2-8ff0-d7863eac8ae0", "question": "Both being vision-based GUI models accepting high-resolution screenshots, what improvement does MobileFlow make compared to CogAgent?", "answer_format": "Your answer should be in fluent English.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["c0b50614-6a7c-51ba-a82d-48470b29fc57", "1466d378-189d-5100-ad46-28868608e8e7"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Mixture-of-Experts (MoE)"], "question": "Both being vision-based GUI models accepting high-resolution screenshots, what improvement does MobileFlow make compared to CogAgent?"}}} {"uuid": "299660d7-a57b-5e22-9a6d-95c7bf8923af", "question": "What's the difference between the methods to convert GUI usage tutorials into trajectories of Synatra and AgentTrek? Whose 7B model performs better on WebArena? By how much success rate does the better one outperform the other?", "answer_format": "Your answer should be a list of three elements. The first element should be a string of free-form natural English describing the difference between the methods of the two works. The second element should be a string from [\"Synatra\", \"AgentTrek\"].
The third element should be a float rounded to 2 decimal places in [0, 100] as the difference of the success rates of two models.", "tags": ["multiple", "table", "image", "subjective"], "anchor_pdf": ["d7056732-f450-58dc-ac5f-9829b1393481", "3d346e23-8d40-53fb-9c92-e8c831e82a5b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_scoring_points_with_llm", "eval_string_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"scoring_points": ["Synatra first uses an LLM to rewrite the tutorial into hypothetical action sequence.", "Then, Synatra instructs another LLM to synthesize an HTML snippet as observation between two consecutive actions.", "AgentTrek leverages an actor LLM to follow the tutorial and execute the task.", "Then, AgentTrek uses an evaluator LLM to filter the recorded trajectories."], "question": "What's the difference between the methods to convert GUI usage tutorials into trajectories of Synatra and AgentTrek?"}, {"gold": "AgentTrek", "ignore_blank": true, "lowercase": true}, {"gold": 4.18, "ndigits": 2}]}}} {"uuid": "2a0aa66e-7f7a-5870-b5a3-935855255b31", "question": "Is there any paper that combines causal inference and finetuning for language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8aae0071-d7da-5c68-b4db-3bc04cf76943"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that combines causal inference and finetuning for language models?", "reference_answer": "Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference"}}} {"uuid": "2a25d73b-2f10-5623-8dc5-ff64901b0c82", "question": "Which paper first showed that task-specific knowledge embedded in parameters can be extracted from one LLM using seed 
samples and transferred to another via parameter-efficient fine-tuning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f3684914-fe43-5edc-9947-c816ce6c1c58"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first showed that task-specific knowledge embedded in parameters can be extracted from one LLM using seed samples and transferred to another via parameter-efficient fine-tuning?", "reference_answer": "SEEKING NEURAL NUGGETS: KNOWLEDGE TRANSFER IN LARGE LANGUAGE MODELS FROM A PARAMETRIC PERSPECTIVE"}}} {"uuid": "2a448d7b-073e-5d05-b1ed-4368558ab1d5", "question": "Which paper first investigates the knowledge preferences of LLMs when there are conflicts between the context and the parametric memory?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3d31964e-ec66-56f0-9350-f941d724c956"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first investigates the knowledge preferences of LLMs when there are conflicts between the context and the parametric memory?", "reference_answer": "Adaptive Chameleon or Stubborn Sloth: REVEALING THE BEHAVIOR OF LARGE LANGUAGE MODELS IN KNOWLEDGE CONFLICTS"}}} {"uuid": "2a6abc65-d61b-5a48-b3a8-978d92c55720", "question": "What is the Overall Fmacro score corresponding to the three baselines in the StanceEval 2024 task? 
How many baselines did the PICT team surpass?", "answer_format": "Your answer should be a python list of four numbers. The first three are fractions (between 0 and 100, rounded to 2 decimal places, from largest to smallest), and the last one is an int.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["c2a5a650-b328-5a98-afb0-fd841f66214d", "5e3b136c-9d16-5b89-80d1-5316ff78eaa9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [78.89, 72.81, 71.34, 2], "ignore_order": false, "ndigits": 2}}} {"uuid": "2abed84f-df45-53f5-8761-12df8c5f8185", "question": "What is the quantity of BLIP-2's Trainable Params? Which function uses the BLIP-2-related model in the VisualWebArena paper?", "answer_format": "Your answer should be a python list of two strings. The first string is the Trainable Params, and you should use 'M' for million and 'B' for billion. For example, you should answer \"1M\" instead of \"1000000\". The second is a function name.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["fec0ffc9-d5d1-5cb3-ad3a-b39bb7017689", "46cca6ed-363d-5bcf-8b04-6e8f56b1debb"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["188M", "eval_vqa"]}}} {"uuid": "2baffe53-b50a-51a0-b88d-bf0bc18e1b00", "question": "According to the course plan in this paper, what percentage of the total course time do students spend in lectures?", "answer_format": "Your answer should be a Python float rounded to two decimal places, ranging from 0 to 1", "tags": ["single", "objective", "text"], "anchor_pdf": ["0c14c555-b484-5afa-82df-d48084952085"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.21, "ndigits": 2, "tolerance": 1e-06}}} {"uuid": "2c25d8f9-d09e-547f-bdaf-6bb8a489e458", "question": "Summarize the data collection process of the dataset used in the
evaluation section of the paper \"ATTACKING LLM WATERMARKS BY EXPLOITING THEIR STRENGTHS.\"", "answer_format": "Your answer should be a python string", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["52aecac0-4df1-5705-bda1-ffc236794071"], "reference_pdf": ["6ca616fd-bc7e-5967-afba-5fcc90d99b98", "e45897f5-4429-5750-a8fb-dcfa9a904b5f"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The 500 prompts are collected by slicing and dicing a random selection of texts from the news-like subset of the C4 dataset. For each random string, we trim a fixed length of tokens from the end and treat them as a \"baseline\" completion. The remaining tokens are a prompt. For the experimental runs using multinomial sampling, we pull examples from the dataset until we achieve at least 500 generations with length $T = 200 \\pm 5$ tokens.", "The C4 dataset is collected from Common Crawl. We downloaded the web-extracted text from April 2019 and applied several filtering steps, which produces a collection of text that is not only orders of magnitude larger than most data sets used for pre-training but also comprises reasonably clean and natural English text."], "question": "Summarize the data collection process of the dataset used in the evaluation section of the paper \"ATTACKING LLM WATERMARKS BY EXPLOITING THEIR STRENGTHS.\""}}} {"uuid": "2cd0cc5e-defb-51aa-b04d-1cfead682bda", "question": "For handling hallucinations with auxiliary models, what is the model they use, and what are the metrics or measures to evaluate the semantic similarity of two sentences?", "answer_format": "Your answer should be a Python list of two elements, the first element is the model name string, and the second element is a list of metric names, e.g., [\"model_name\", [\"metric1\", \"metric2\", ...]].", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match",
"eval_kwargs": {"gold": ["COMET-QE", ["LASER", "LaBSE", "XNLI"]]}}, "anchor_pdf": ["b5062515-e162-5a98-a421-ab84dfe1d930"], "reference_pdf": []} {"uuid": "2e491092-7531-5cee-972f-fc7afb092e9f", "question": "What is the difference between formula (1) and formula (2) in the paper?", "answer_format": "Your answer should be a python string concisely describing the difference between the two formulas.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["7c4b715b-1646-5e6d-9f92-0d90a4233472"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the difference between formula (1) and formula (2) in the paper?", "reference_answer": "Formula (1) is defined as Cross-Entropy Loss, which may encounter an issue when predicting the overlap-aware encoding: sample imbalance. For example, the number of tokens marked as 0 is much lower than the others. This imbalance can affect the model's ability to generalize effectively.
Formula (2) is designed to address this issue by replacing the Cross-Entropy Loss for the overlap-aware branch with Focal Loss."}}} {"uuid": "2e8bd79d-01b0-5ee1-accf-eed43dc316da", "question": "Which paper in human motion generation can control the spatial location of any joints of the human with either dense or sparse 3D points?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e3368654-8d19-51a4-bc3a-96482dbc850d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper in human motion generation can control the spatial location of any joints of the human with either dense or sparse 3D points?", "reference_answer": "OMNICONTROL: CONTROL ANY JOINT AT ANY TIME FOR HUMAN MOTION GENERATION"}}} {"uuid": "2ee66dfa-7715-5103-8a58-1b372665df07", "question": "Is there any generalizable NeRF paper that disentangles texture and shape?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7c6fdfa3-a6b7-504f-97af-7905f6e80875"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any generalizable NeRF paper that disentangles texture and shape?", "reference_answer": "TUVF: LEARNING GENERALIZABLE TEXTURE UV RADIANCE FIELDS"}}} {"uuid": "2f4dc6e0-c001-55c8-ba36-c72fc509e506", "question": "What are the conditions under which the zero generalization error can be achieved?", "answer_format": "Your answer should be a Python string listing all the conditions in detail.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["fffa8c85-c16f-566b-a036-0a584e6a445c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", 
"eval_kwargs": {"reference_answer": "Zero generalization error can be achieved when the testing-relevant patterns in testing prompts and queries are positive linear combinations of training-relevant features, when the label embeddings match those in the training data, and when the testing prompt is long enough to adequately cover demonstrations with the same testing-relevant features as the testing query.", "question": "What are the conditions under which the zero generalization error can be achieved?"}}} {"uuid": "2f769184-b5d0-5b61-952d-3ac813a55275", "question": "What assumption does Deja Vu make to accelerate LLM inference? According to the source paper of the subsequent work PowerInfer, what is the key challenge of Deja Vu?", "answer_format": "Your answer should be a python list of two strings", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["ef22ae5d-6800-5707-b8af-6c8984e27c8a", "deb2d033-fbed-5014-93b1-528996e5c9cc"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"reference_answer": "Contextual sparsity exists given any input.", "question": "What assumption does Deja Vu make to accelerate LLM inference?"}, {"reference_answer": "The key challenge with DejaVu in such contexts stems from the need to frequently transfer activated neurons from the CPU to the GPU during runtime.", "question": "According to the source paper of the subsequent work PowerInfer, what is the key challenge of Deja Vu?"}]}}} {"uuid": "2f7da671-2337-5c7b-9a25-35c1b996fe80", "question": "In Figure 1 of the paper \"When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards\", which model has the largest difference in ranking between Fixed Answer and Cloze Prompt?
For the dataset that contains the original question, what's the estimated expert-level accuracy?", "answer_format": "Your answer should be a Python list of two strings, the first is the name of the model, as proposed in the figure, the second is the estimated expert-level accuracy, rounded to 1 decimal place, like \"12.3%\".", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["5fe57755-14f1-5ee7-a4b4-ecaba6827045"], "reference_pdf": ["c2c5bf1a-3d4a-508e-a217-b3e4b78ce7f7", "be088b19-03fb-584b-a62d-2ab4b5d7fdd8"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Llama2-7b-chat", "89.8%"], "ignore_order": false, "lowercase": true}}} {"uuid": "2fc10cbc-1818-5cfd-962d-4c15b87f9865", "question": "Where can I find the datasets published in 2021 used in the experiments in the paper \"Question-Instructed Visual Descriptions for Zero-Shot Video Question Answering\"?", "answer_format": "Your answer should be a python list of several strings, each being the website of a dataset as given in the paper that proposes it.", "tags": ["multiple", "objective", "metadata"], "anchor_pdf": ["be2f9bd8-1534-5f48-afd9-2979980d6938"], "reference_pdf": ["449f0b05-00c0-5de4-a707-cadb2111269d", "771d716e-ed58-59b0-bf44-bb72099981de"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["https://github.com/doc-doc/NExT-QA.git", "http://star.csail.mit.edu"], "ignore_order": true, "ignore_blank": true, "lowercase": false}}} {"uuid": "2fee39d3-e6a7-50d1-918c-3f8a140a47bb", "question": "In Figure 1, the presence of what operation divides the discretization process of continuous speech into two categories?", "answer_format": "Your answer should be a python string.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["4df38ee7-15d9-5448-95fd-c7b37c9d261d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs":
{"reference_answer": "K-means clustering.", "question": "In Figure 1, the presence of what operation divides the discretization process of continuous speech into two categories?"}}} {"uuid": "302c67ba-c324-5ae2-9757-0e05956f17cc", "question": "Which paper first explored in-context learning in a cross-lingual setup and made use of alignment to better its performance?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["068aee8a-f79a-5f6f-99bb-728832e4cf7b"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first explored in-context learning in a cross-lingual setup and made use of alignment to better its performance?", "reference_answer": "Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment"}}} {"uuid": "30335449-a618-5e66-8c7e-dc0eb81bfaae", "question": "Which neural theorem proving paper first attempted to prove theorems in a block-by-block manner?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6f300c27-36eb-51d0-a035-c9287ade3481"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which neural theorem proving paper first attempted to prove theorems in a block-by-block manner?", "reference_answer": "LEGO-PROVER: NEURAL THEOREM PROVING WITH GROWING LIBRARIES"}}} {"uuid": "31c0e826-57b0-5445-a16d-0e3d4adc46ab", "question": "Of the following three combinations, which reaches the highest pass@1 accuracy on HumanEval and what's the exact accuracy value: Codestral+MGDebugger, Reflexion+LDB(GPT-4), MetaGPT.", "answer_format": "Your answer should be a Python List of 2 elements, the first is the combination and the second is the exact
accuracy value, rounded to one decimal place. Note that you should use the same names as in the question.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["bafa4ba3-f7e9-5bf2-960d-cb11f11ec138", "460c65d7-a298-5bd3-baa2-dd8683885308", "80a14542-96a0-5a15-a501-959b9007a8b6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Reflexion+LDB(GPT-4)", 96.9], "lowercase": true, "ignore_order": false, "ndigits": 1}}} {"uuid": "3272942c-5db5-5122-b612-f09332a27a5a", "question": "How are the data in Table 5 in the paper \"Overview of the 9th Social Media Mining for Health Applications\" obtained, from the valid dataset or the test dataset?", "answer_format": "Your answer should be a python string of \"valid\" or \"test\".", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["c43c8e72-1389-54f0-b36e-ab07d569f3e0"], "reference_pdf": ["fe31ca58-7239-5d36-b623-2540d7b9b01a"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "test"}}} {"uuid": "32b0c214-afd6-59b1-8e6e-690caf288104", "question": "In the overview figure of KG-FIT, which component is pointed by a red arrow at the bottom-left? In the paper that proposes this component, what's the algorithm with the theoretically highest worst-case complexity?
What's its advantage over the other two faster algorithm, considering the update scheme?", "answer_format": "Your answer should be a Python string, including the answers to the three questions.", "tags": ["multiple", "image", "subjective"], "anchor_pdf": ["1f950507-1856-5850-9d7d-fdc673e51367"], "reference_pdf": ["0375ee54-2513-5042-ac3a-3c43cf07183a"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The component is agglomerative hierarchical clustering.", "The algorithm is the generic clustering algorithm.", "It is the only algorithm among the three in this paper which can deal with inversions in the dendrogram. Consequentially, the \"centroid\" and \"median\" methods must use this algorithm."], "question": "In the overview figure of KG-FIT, which component is pointed by a red arrow at the bottom-left? In the paper that proposes this component, what's the algorithm with the theoretically highest worst-case complexity? What's its advantage over the other two faster algorithm, considering the update scheme?", "ignore_order": false}}} {"uuid": "330ec130-4467-529e-a5e3-83d9391863e7", "question": "What dataset does this paper(titled \"Semi-Structured Object Sequence Encoders\") use for Anomaly Detection?How many datasets is this dataset originally consist of?", "answer_format": "Your answer should be a single python list, the first element is a string of the dataset name, the second element is an integer number.", "tags": ["table", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["LogHub", 19], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["f8a9f4e4-773f-5a07-8904-d6a265832d5e"], "reference_pdf": ["95266489-4f59-58e1-badf-cbf53131c665"]} {"uuid": "3337061c-d350-5522-9c68-f810e017a567", "question": "Can we reduce visual tokens in vision transformers right from the beginning?", "answer_format": "Your answer 
should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6e86386f-01ea-5680-ae68-5d97f86ecf8a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can we reduce visual tokens in vision transformers right from the beginning?", "reference_answer": "SparseFormer: Sparse Visual Recognition via Limited Latent Tokens"}}} {"uuid": "333e0fbf-b322-5998-939c-cada7786f47a", "question": "Which dataset supports narration generation and temporal localization tasks in Chinese movies?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f6e15901-a916-5739-ba55-76de0e5ee82d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which dataset supports narration generation and temporal localization tasks in Chinese movies?", "reference_answer": "Movie101: A New Movie Understanding Benchmark"}}} {"uuid": "33c4b17e-84de-57a7-b3b9-fa4c55bb3e60", "question": "How does \"Analog Computing for AI Sometimes Needs Correction by Digital Computing: Why and When\" estimate the confidence of analog computing?", "answer_format": "Your answer should be in fluent English.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["229809ba-59b0-5972-8dd9-ca9aedcdce32"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "This work evaluates the confidence of analog computing by calculating the difference between the top two softmax values from the last layer.", "question": "How does \"Analog Computing for AI Sometimes Needs Correction by Digital Computing: Why and When\" estimate the confidence of analog computing?"}}} {"uuid":
"33f77112-8775-5066-8bb6-e74f93379410", "question": "In the paper named \"MBIAS: Mitigating Bias in Large Language Models While Retaining Context\", to develop MBIAS, which PEFT (Parameter-Efficient Fine-Tuning) technique is used to finetune the model? In the paper where this technique is proposed, what are the innovations introduced to save memory?", "answer_format": "Your answer should be a single python list, containing two strings. The first string is the name (abbreviation) of the PEFT technique. The second string describes the innovations introduced to save memory in the relevant paper.", "tags": ["subjective", "multiple", "text"], "anchor_pdf": ["e49150c3-c433-515e-be7d-d3dc87048029"], "reference_pdf": ["c7b3bb82-aadc-5ff3-80e5-6a87188c8e20"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "QLoRA", "lowercase": true}, {"reference_answer": "QLORA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes.", "question": "In the paper where this technique is proposed, what are the innovations introduced to save memory?"}]}}} {"uuid": "34031849-a464-5cf5-a3f4-c70b6dfb37e8", "question": "Among the papers that proposed PopQA, KBP and ASQA, which one evaluates the most language models? 
What question does it want to answer by evaluating so many models?", "answer_format": "Your answer should be a Python list of two strings, the first string is the name of the dataset that evaluates the most models in its paper, and the second string is the question that it wants to answer.", "tags": ["text", "multiple", "subjective"], "anchor_pdf": ["08f0b49d-a02d-5eba-8fa3-51284c90822b", "8ec52878-4dbf-52f2-9062-1225adff8e7b", "ca66eda5-e6f7-5d97-b474-6f515c7754eb"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "PopQA", "lowercase": true}, {"reference_answer": "How much factual knowledge is memorized by LMs and what factors affect the memorization?", "question": "What research question does the paper want to answer?"}]}}} {"uuid": "343baca6-bb8b-55a6-8bb4-8aaa548dc66d", "question": "In the paper that proposes the second method to verify the LLMs' outputs introduced in the paper \"I am a Strange Dataset: Metalinguistic Tests for Language Models\", the method was mainly evaluated on which dataset?", "answer_format": "Your answer should be a string, the name of the main dataset.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["6b5eb663-a966-5a8a-9f29-81c24781e559"], "reference_pdf": ["6bce5c12-7e36-504e-b30f-b5f67d27b0b0", "aadfb703-a64a-56a1-b1b9-87a74f9b19a3"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MATH", "lowercase": true}}} {"uuid": "349514c1-4e39-545c-b647-6c413a9a683e", "question": "In the paper that proposed a CASH algorithm for finetuning, which two objective functions are essential for learning the estimator and the predictor?", "answer_format": "Your answer should be a Python list of 2 strings, each string is a formula for the two objective functions in the LaTeX format.", "tags": ["comprehensive", "formula", 
"subjective"], "anchor_pdf": [], "reference_pdf": ["d32f34d4-83ad-5521-8727-edaed0997024"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["\\theta^{(M)} := \\arg\\min_{\\theta} \\mathbb{E}_{(x,t,\\ell(x,t,d),c(x,t,d)) \\sim \\mathcal{H}^{(M)}} \\left[ -\\log p\\left( \\ell(x,t,d) \\mid x,t,d,\\hat{\\ell}(x,t,d;\\theta) \\right) \\right]", "\\gamma^{(M)} := \\arg\\min_{\\gamma} \\mathbb{E}_{(x,t,\\ell(x,t,d),c(x,t,d)) \\sim \\mathcal{H}^{(M)}} \\left( c(x,t,d) - \\hat{c}(x,t,d;\\gamma) \\right)^2"], "question": "Which two objective functions are essential for learning the estimator and the predictor?"}}} {"uuid": "34fe12fd-640c-506e-86a2-5ab70a15c11a", "question": "Is there any paper that leverages graph neural networks by integrating label information for multi-label low-resource intent classification?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["d0a630cf-54d5-575e-a191-3391f410775f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that leverages graph neural networks by integrating label information for multi-label low-resource intent classification?", "reference_answer": "Dual Class Knowledge Propagation Network for Multi-label Few-shot Intent Detection"}}} {"uuid": "34fed469-2cc2-531c-8d93-4e318d5de7c0", "question": "Which datasets are used for Multi-Document QA in this paper?", "answer_format": "Your answer should be a python list, each element of the list is a string of the name of a dataset.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["HotpotQA", "2WikiMultihopQA", "MuSiQue", "DuReader"], "ignore_order": true}}, "anchor_pdf": 
["d6b892b8-cf43-5b62-bde9-48c070c2e5dc"], "reference_pdf": []} {"uuid": "354583f4-8367-5e41-b3a6-9b63d9e05e69", "question": "I want to download this paper from the internet. Can you give me a link?", "answer_format": "Your answer should be a single string of the link.", "tags": ["metadata", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://aclanthology.org/2024.acl-long.182.pdf"}}, "anchor_pdf": ["3097e545-7cad-5021-9d16-57938472fc77"], "reference_pdf": []} {"uuid": "35843fb0-6f14-51a5-a205-2acf0faa83a5", "question": "Which tool is used as the basic verification tool in the paper \"Leveraging Large Language Models for Automated Proof Synthesis in Rust\"? Can the tool call executable functions in proof mode?", "answer_format": "Your answer should be a python list of two strings. The first string is the name of the tool, and the second string is \"yes\" or \"no\".", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["4fbb082f-f76e-50d3-9155-2810ab4dbfd5"], "reference_pdf": ["9f22287f-4186-5f55-8b2c-620b02f89b82"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Verus", "no"]}}} {"uuid": "359cb240-d14a-55b3-a0d9-2652c02ac278", "question": "In the dataset listed in the paper \"A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions\" that has the number of boxes closest to EHPT-XC, what are the two largest categories under daytime roadscene?", "answer_format": "Your answer should be a Python list of the names of the two categories.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["2faea412-dbba-52ba-8fa0-84d4a5ee85e9"], "reference_pdf": ["6caa8f4a-5809-5352-8868-05aab6f361b1"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Building", "Shrub"], "ignore_order": true, "lowercase": true}}} {"uuid": 
"35b4110d-486c-562f-b488-c8a8b417ef82", "question": "Which paper first applied the mixture-of-experts idea to large language models for domain adaptation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["88ccda38-bd2f-55a1-88da-a1195987ea75"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first applied the mixture-of-experts idea to large language models for domain adaptation?", "reference_answer": "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories"}}} {"uuid": "366510d9-f1e1-51a7-987d-cb6e47c79812", "question": "For the biggest dataset used in the paper titled \"HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of Legal Documents\", what are its rhetorical role labels?", "answer_format": "Your answer should be a single string about the labels.", "tags": ["text", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "12 Rhetorical Roles and 1 NONE: Preamble (PREAMBLE), Facts (FAC), Ruling by Lower Court (RLC), Issues (ISSUE), Argument by Petitioner(ARG PETITIONER), Argument by Respondent(ARG RESPONDENT), Analysis (ANALYSIS), Statute (STA), Precedent Relied (PRE RELIED), Precedent Not Relied (PRE NOT RELIED), Ratio of the decision (Ratio), Ruling by Present Court (RPC), NONE", "question": "For the biggest dataset used in the anchor paper, what are its rhetorical role labels?"}}, "anchor_pdf": ["ba1e34f8-995f-594a-a34f-531da14171b7"], "reference_pdf": ["8bdcdbeb-dfb4-59a8-8438-0b7ce459d58c"]} {"uuid": "37877f34-e27f-5de2-a0ee-ffa5a543a374", "question": "Is there a paper that supports the use of automated coherence metrics in topic model evaluations?", "answer_format": "Your answer should be the title 
of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["38222e1e-cb70-5b4f-86fb-c5d2df2b93b1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that supports the use of automated coherence metrics in topic model evaluations?", "reference_answer": "Large-Scale Correlation Analysis of Automated Metrics for Topic Models"}}} {"uuid": "37d41534-3614-5483-904d-213f07860a88", "question": "Where can I find the dataset from which the paper \"Is Programming by Example solved by LLMs?\" seeds LOGO problems?", "answer_format": "Your answer should be a string, the website of that dataset as given in the paper that proposes it.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["3f1a0e10-2f3f-53b3-8bf3-7869f3a40b31"], "reference_pdf": ["99aa102f-6b45-5833-b958-b8ed3dfec29f"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://bit.ly/3g9361W", "ignore_blank": true, "lowercase": false}}} {"uuid": "37e98c25-68ba-54c4-9068-596ed64b546d", "question": "Is there an evaluation metric for natural language generation that predicts the factual consistency score through a mean-max aggregation method?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3c16b3da-8a6c-566c-b3fc-b5654edaa133"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there an evaluation metric for natural language generation that predicts the factual consistency score through a mean-max aggregation method?", "reference_answer": "ALIGNSCORE: Evaluating Factual Consistency with A Unified Alignment Function"}}} {"uuid": "383909ad-dc1d-5f60-ade6-46bea6e7c62b", "question": "In the paper 
\"Mastering Task Arithmetic: $\\tau$Jp as a Key Indicator for Weight Disentanglement\", what are the names of the datasets used for task addition on vision tasks? Did the paper which proposed the baseline \"Linear FT\" use the same datasets for task addition on vision tasks?", "answer_format": "Your answer should be a Python dictionary, containing the names of datasets for the first question and a boolean value for the second question, e.g., {\"datasets\": [\"dataset 1\", \"dataset 2\", ...], \"same_datasets\": true}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["c2da75c1-ee57-5460-babc-fdf4b7f04009"], "reference_pdf": ["379352ef-2540-5680-821a-2a9ef5ef979f", "7efe0293-9ecd-5386-b1c5-a851c7a0fdf1", "153d1505-a286-5ceb-9858-c272e31a7d7e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"datasets": ["Cars", "DTD", "SUN397", "EuroSAT", "GTSRB", "MNIST", "SVHN", "RESISC45"], "same_datasets": true}, "ignore_order": true}}} {"uuid": "38965ab2-4bc0-562a-98bf-805f7a9fc3ee", "question": "Which pre-trained model is specifically designed for low-resource dialogue summarization tasks?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8b3ff7b5-2425-5632-a465-df25a4a1fb56"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which pre-trained model is specifically designed for low-resource dialogue summarization tasks?", "reference_answer": "DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization"}}} {"uuid": "38a692e8-8566-539a-aecd-b3e7df04dbcf", "question": "According to the methods proposed by this paper, how to calculate the bias scores when aggregating attributions for tokens, instances and instructions 
respectively?", "answer_format": "Your answer should be a python list of three elements, every element is a formula string in latex format.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["$$B_{i}^{(\\iota,x_{j})}(h)=\\operatorname*{max}_{k}B_{i}^{(\\iota,x_{j},t_{k})}(h)$$", "$$\\begin{array}{c}{{B_{i}^{(\\iota,\\mathcal{D})}(h)=\\displaystyle\\sum_{j}^{N}\\alpha^{(\\iota,x_{j})}B_{i}^{(\\iota,x_{j})}(h)}}\\\\ {{\\alpha^{(\\iota,x_{j})}=\\mathcal{P}(\\hat{y_{j}}|\\iota,x_{j})}}\\end{array}$$", "$$B_{i}^{(\\mathbb{Z},\\mathcal{D})}(h)=\\frac{1}{M}\\sum_{\\iota}^{\\mathcal{Z}}B_{i}^{(\\iota,\\mathcal{D})}(h)$$"], "question": "According to the methods proposed by this paper, how to calculate the bias scores when aggregating attributions for tokens, instances and instructions respectively?", "ignore_order": false}}, "anchor_pdf": ["2b003f7e-a995-57d1-af78-78432ed96561"], "reference_pdf": []} {"uuid": "396c566e-ead8-50a4-b00a-d5ca4c432275", "question": "What paper first extends rotary positional encoding (RoPE) for camera-geometry encoding in multi-view transformers?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["37df1fee-944c-51d4-bd9f-52fecfeee9d8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first extends rotary positional encoding (RoPE) for camera-geometry encoding in multi-view transformers?", "reference_answer": "GTA: A GEOMETRY-AWARE ATTENTION MECHANISM FOR MULTI-VIEW TRANSFORMERS"}}} {"uuid": "398ee3a7-26c8-5967-8b5b-196b5d7641b3", "question": "According to Figure 1 in the \"Shoulders of Giants: A Look at the Degree and Utility of Openness in NLP Research\" paper, for TACL papers based on Spanish, where do their 
LMs mainly come from?", "answer_format": "Your answer should be a phrase indicating the category DIRECTLY FROM THE PDF WITHOUT ANY MODIFICATION OR EXPLANATION.", "tags": ["image", "multiple", "objective", "table"], "anchor_pdf": ["5e692b45-81a3-5edc-a464-5025866db42a"], "reference_pdf": ["4c29dcbb-73f5-575d-a928-f1029e56d7e5"], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Used others' artefact + added their own part", "threshold": 95, "ignore_blank": true, "lowercase": true}}} {"uuid": "39fb54be-7c67-59c2-9179-8cd66ce19bc2", "question": "Considering the performance of ChatDev agent on DSEval-LeetCode benchmark, what is the most common cause of the errors?", "answer_format": "Your answer should be a python list of elements, the first element is the string of the main verdict, the second element is the string of the sub-verdict, e.g., [\"verdict_name\", \"sub-verdict_name\"].", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Presentation Error", "Index Mismatch"], "lowercase": true}}, "anchor_pdf": ["0fe6d2d4-00e7-596b-a80c-ffe5a6d88b97"], "reference_pdf": []} {"uuid": "3a357488-48e9-58d5-ab3f-fdb931ab1db1", "question": "What work proposes to combine video foundation models with vision language models for effective high-dimensional robot planning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["2793fbc2-359d-58bc-9d92-47308d909879"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What work proposes to combine video foundation models with vision language models for effective high-dimensional robot planning?", "reference_answer": "VIDEO LANGUAGE PLANNING"}}} {"uuid": "3a86e8ba-3a3c-5cd4-a799-b76cfc9b643f", 
"question": "In the dataset used in the experiment of the paper \"Soft-Label Integration for Robust Toxicity Classification\" containing 3 classes, which two explainability-based metrics are applied?", "answer_format": "Your answer should be a Python list of 2 strings, the names of the metrics.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3c858455-bd01-5499-9960-eaffa5af22e8"], "reference_pdf": ["ce71bd6d-c5e8-5730-95ab-8e5d96efa77c"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Plausibility", "Faithfulness"], "ignore_order": true, "lowercase": true}}} {"uuid": "3ae37796-7491-5c6f-9d5c-c6f3e358a888", "question": "What work attempts to explore multi-hop reasoning by densifying commonsense knowledge graphs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["29d4d6d1-dc93-5fae-b761-d95261371d06"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What work attempts to explore multi-hop reasoning by densifying commonsense knowledge graphs?", "reference_answer": "Dense-ATOMIC: Towards Densely-connected ATOMIC with High Knowledge Coverage and Massive Multi-hop Paths"}}} {"uuid": "3b007244-9a68-5972-b0a9-04691a2dd6d2", "question": "Which language model distillation paper first identified the capacity gap in distillation and used the MoE student model to counter the curse of capacity gap?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4559aab7-35b3-57b8-a506-af64c41a55ac"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which language model distillation paper first identified the 
capacity gap in distillation and used the MoE student model to counter the curse of capacity gap?", "reference_answer": "Lifting the Curse of Capacity Gap in Distilling Language Models"}}} {"uuid": "3b42e1f2-e150-5216-aeab-44e976e28900", "question": "Which operation on task vectors is employed in the paper named \"Towards Safer Large Language Models through Machine Unlearning\"? Then what other operations can be performed on task vectors, and what are their functions, according to the paper where this technique is proposed?", "answer_format": "Your answer should be a single string about the operations used in the two papers.", "tags": ["subjective", "multiple", "text"], "anchor_pdf": ["e1e05ea5-d527-5a3c-9e2b-e100a396b97f"], "reference_pdf": ["7811d1d6-a569-5a9e-bf51-2971e3fdc7f8"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The anchor paper employs the negation of task vectors to effectively erase the harmful knowledge (to remove undesirable behaviors or unlearn tasks)", "Adding task vectors can also be performed, which leads to better multi-task models, or even improves performance on a single task", "When tasks form an analogy relationship, task vectors can be combined to improve performance on tasks where data is scarce."], "question": "Which operation on task vectors is employed in the anchor paper? 
Then what other operations can be performed on task vectors, and what are their functions, according to the paper where this technique is proposed?"}}} {"uuid": "3b83e010-75b3-5fa9-a5c1-7f786db8d957", "question": "Which paper proposes an alignment framework that steers language models to preferences of individual groups in a few-shot manner through augmenting the LLM with a transformer module?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["243ae40e-e44d-5b7d-8419-ed8737c5101d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes an alignment framework that steers language models to preferences of individual groups in a few-shot manner through augmenting the LLM with a transformer module?", "reference_answer": "GROUP PREFERENCE OPTIMIZATION: FEW-SHOT ALIGNMENT OF LARGE LANGUAGE MODELS"}}} {"uuid": "3bec1f83-7dfa-5650-81b5-f70d0aaf5232", "question": "Among AlpacaEval, MT-Bench and MMLU, which ones collect open-ended questions across different domains without providing concrete reference answers?", "answer_format": "Your answer should be a python list of 1-3 strings, and the strings should be AlpacaEval, MT-Bench or MMLU.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["9156c181-6be2-5e8a-ba2e-4658dce594e7", "9156c181-6be2-5e8a-ba2e-4658dce594e7", "c2c5bf1a-3d4a-508e-a217-b3e4b78ce7f7"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["AlpacaEval", "MT-Bench"], "ignore_order": true}}} {"uuid": "3c0fcf08-0c65-5387-855f-d2fcb7a81379", "question": "What's the difference between the supported OS platforms of two works, OSWorld and Spider2-V?", "answer_format": "Your answer should be a concise text string highlighting the differences.", "tags": ["multiple", "text", 
"subjective"], "anchor_pdf": ["d2c5d22a-acc9-58d5-83cf-a1ab9a98621d", "de223ec7-0e7b-558d-851c-a04bcd4fb3ca"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "OSWorld supports Windows, Linux, and macOS, while Spider2-V only supports Linux or Ubuntu.", "question": "What's the difference between the supported OS platforms of two works, OSWorld and Spider2-V?"}}} {"uuid": "3c3f8ba1-26de-54c9-84d9-6d66dc664a8d", "question": "In the paper that the SaulLM-141B paper follows the most in data cleaning, how much higher is the balanced accuracy of the final checkpoint of the proposed model than that of the initial checkpoint?", "answer_format": "Your answer should be a float, rounding to 2 decimal places.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["2b253037-612f-5426-9621-ec645237513c"], "reference_pdf": ["4c3fac90-3fef-50ed-884e-9a6fa46332a5"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 4.22, "ndigits": 2}}} {"uuid": "3c712282-2534-5627-84f0-ce1e39212d20", "question": "Is there a paper that uses evolutionary algorithms and neural MT metrics to produce translations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["b2092adf-0621-5c48-9059-00bcf8bc0008"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that uses evolutionary algorithms and neural MT metrics to produce translations?", "reference_answer": "Breeding Machine Translations: Evolutionary approach to survive and thrive in the world of automated evaluation"}}} {"uuid": "3c770698-2830-5eea-9b03-3984091527a3", "question": "How many more LLMs are evaluated in the ConvBench paper than in the MINT paper?", "answer_format": "Your answer should be an integer.", 
"tags": ["multiple", "text", "objective"], "anchor_pdf": ["1cf7ea57-d128-5619-9361-6b35db040c25", "09d48a2a-4ad0-5a7f-84ec-557ac57f5830"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 0}}} {"uuid": "3caa2b4d-e1e4-532d-9976-125838093bb8", "question": "According to the paper \"FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets\", after the tuning phase shown at the top right of the paradigm overview figure, on which dataset does MPT outperform the other models with an average F1 score of around 0.87? In that dataset, which entity class accounts for the largest proportion?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the abbreviation of the dataset, and the second is the entity class.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["1d2eb7a7-8dfa-5bc0-a906-b27725667ff4"], "reference_pdf": ["36e0be33-f25c-5bb9-a442-80cea9127bf1"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["FPB", "General entity"], "ignore_blank": true, "lowercase": true, "ignore_order": false}}} {"uuid": "3cc4cb1e-ec2f-53ca-a69b-029e013b2d6a", "question": "In Theorem 4.7 of this paper, a basic learning algorithm is applied. According to the paper that proposed that algorithm, what's the objective to be maximized for the actor and loss function to be minimized for the critic?", "answer_format": "Your answer should be a Python list of 2 strings, the formulas in LaTeX format. 
Remember that order matters.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["3bc84aef-afcf-5ef8-a2fb-8e3ead6663d4"], "reference_pdf": ["118cac21-d91b-55b6-bfce-0742348b4c2d"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["L(\\theta) = \\left[\\frac{1}{Bn} \\sum_{i=1}^{B} \\sum_{k=1}^{n} \\min\\left(r_{\\theta,i}^{(k)} A_i^{(k)}, \\text{clip}(r_{\\theta,i}^{(k)}, 1 - \\epsilon, 1 + \\epsilon)A_i^{(k)}\\right)\\right] + \\sigma \\frac{1}{Bn} \\sum_{i=1}^{B} \\sum_{k=1}^{n} S\\left[\\pi_{\\theta}(o_i^{(k)})\\right]", "L(\\phi) = \\frac{1}{B_n} \\sum_{i=1}^B \\sum_{k=1}^n \\left( \\max\\left[(V_{\\phi}(s_i^{(k)}) - \\hat{R}_i)^2, (\\textrm{clip}(V_{\\phi}(s_i^{(k)}), V_{\\phi_{\\textrm{old}}}(s_i^{(k)}) - \\varepsilon, V_{\\phi_{\\textrm{old}}}(s_i^{(k)}) + \\varepsilon) - \\hat{R}_i)^2\\right] \\right)"], "question": "What's the objective to be maximized for the actor and loss function to be minimized for the critic?", "ignore_order": false}}} {"uuid": "3cc9e70a-bd6b-525c-af61-4b66f9ef8a77", "question": "Who is the corresponding author of this paper?", "answer_format": "Your answer should be a python string about the name of the corresponding author.", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Dongyan Zhao", "lowercase": true}}, "anchor_pdf": ["122bad91-1e5a-554e-bf1b-7f1e375aaf71"], "reference_pdf": []} {"uuid": "3cdb56d6-cdfc-57d0-8b3d-736aee6fa4c7", "question": "How to initialize $h_{i,l-1}^S$ in Equation (11)?", "answer_format": "Your answer should be a paragraph describing the initialization procedure as given in the paper.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "To initialize $h_{i,0}^S$ in Equation (11), we start with the output from Equation (8): $h_i^S = 
\\hat{h}_i + s_i$.\n\nHere, $\\hat{h}_i$ is the output from the Aspect-Aware Attention Module (A3M), and $s_i$ is the sentiment feature obtained by projecting the affective score of word $w_i$ from SenticNet into the same dimensional space as $\\hat{h}_i$. Specifically:\n\n1. For each word $w_i$ in the sentence, obtain its affective score $w_i^S$ from SenticNet.\n2. Project this affective score into the same dimensional space as $\\hat{h}_i$: $s_i = W_S w_i^S + b_S$, where $W_S$ and $b_S$ are learned parameters.\n3. Add the sentiment feature $s_i$ to $\\hat{h}_i$: $h_i^S = \\hat{h}_i + s_i$\n\nThis $h_i^S$ serves as the initial node representation $h_{i,0}^S$ for the Aspect-Guided Graph Convolutional Network (AG-GCN). Therefore, for the first layer $l=0$: $h_{i,0}^S = h_i^S$.", "question": "How to initialize $h_{i,l-1}^S$ in Equation (11)?"}}, "anchor_pdf": ["6a24d7f4-430d-5c92-b259-f62f76490147"], "reference_pdf": []} {"uuid": "3d204779-e506-5fbd-8a25-5172d94a1b6c", "question": "Which paper did the anchor PDF reference for the method to address codebook collapse? In fact, which paper originally proposed this method?", "answer_format": "Your answer should be a Python list where each item is the full name of a paper (a string).", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["2dfeabc6-8ca4-50b0-aff7-6498db583fb7"], "reference_pdf": ["b40cd7d7-4547-5fef-94c8-9973e61893fe"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["High-Fidelity Audio Compression with Improved RVQGAN", "Vector-quantized Image Modeling with Improved VQGAN"], "lowercase": true}}} {"uuid": "3d3d6314-7069-5382-b942-830f22b0b94c", "question": "Which conference was the paper 'Fact-Checking Complex Claims with Program-Guided Reasoning' published in? 
Is it a long paper, a short paper or findings?", "answer_format": "Your answer should be a Python list of two elements, the first element is the abbreviation of the conference name (including the year), e.g. EMNLP 2022, and the second element is the type of this paper, i.e. long paper, short paper or findings.", "tags": ["metadata", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ACL 2023", "long paper"], "ignore_order": false}}, "anchor_pdf": ["ed5d7873-1891-50ad-a358-d976054e12f7"], "reference_pdf": []} {"uuid": "3dc8318e-3f56-5ba9-8542-f845aad5e8a8", "question": "What are some methods for solving the class-incremental continual learning problems?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["cb1f017f-71ba-57c9-aa6c-2c21f7319ecb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What are some methods for solving the class-incremental continual learning problems?", "reference_answer": "Rehearsal-free Continual Language Learning via Efficient Parameter Isolation"}}} {"uuid": "3e1ea082-261d-549f-88fe-2bfe6e9d7b4c", "question": "Which meta learning-based baseline is used in the paper named \"Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing\"? 
What's the full name of this baseline according to the paper where it's proposed?", "answer_format": "Your answer should be a single python list containing two strings, the first element of the list is the abbreviation of the baseline, the second element of the list is the full name of this baseline, e.g. [\"MAML\", \"Model-Agnostic Meta-Learning\"].", "tags": ["objective", "multiple", "text"], "anchor_pdf": ["e951485d-ebe5-5d66-bc8b-4f8baba6caef"], "reference_pdf": ["44c58240-57f2-5f7c-b511-e44337f6a5af"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MEND", "Model Editor Networks with Gradient Decomposition"], "lowercase": true, "ignore_blank": true}}} {"uuid": "3f69a7de-fe99-531a-8399-d4cbbb1b8da0", "question": "In the paper \"Puzzle Solving using Reasoning of Large Language Models: A Survey\", which methods mentioned in the paper could be used to help solve the puzzle in figure 1?", "answer_format": "Your answer should be a python list containing names of several methods mentioned in the paper. 
Each element in the list should only contain the name of ONE method.", "tags": ["single", "objective", "image", "table"], "anchor_pdf": ["8689853d-6fd5-5394-86e4-f836490e237c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Few-shot", "Chain-of-Thought", "Inferential Exclusion Prompting", "Hints", "Introduction", "Summarization", "Fine-Tuning"], "ignore_order": true, "lowercase": true, "threshold": 80, "fuzz_method": "token_sort_ratio"}}} {"uuid": "3fa1c7fc-1e4b-57f1-8aea-efac33cefb54", "question": "How much higher is Octopus with resolution 336 than Kosmos-2 on RefCOCOg test set?", "answer_format": "Your answer should be a float between 0 and 100, rounding to 2 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["eb4622eb-b4c7-5fae-b8e2-ff799aa81e4d", "1d0c97b9-f24f-5651-83f1-5d6f37d431f0"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 24.54, "ndigits": 2}}} {"uuid": "3fd4d805-ae6c-527d-b6d2-30a18fb0ab12", "question": "On the dataset proposed by this work, how much does the GPT-3.5-turbo model improve its GPT4score after using Graph-CoT?", "answer_format": "Your answer should be a single float number ranging from 0 to 100, rounded to 2 decimal places, representing the subtraction result.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 16.81, "ndigits": 2}}, "anchor_pdf": ["fb154467-1ce1-5c1f-9d4f-b4f5c76312ee"], "reference_pdf": []} {"uuid": "3fe5526c-b647-51b0-9abb-6edd43c20f79", "question": "Which paper is the first to model the helpfulness and harmlessness alignment of LLMs as a Constrained MDP problem?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": 
["6edecbb0-04df-523e-aae3-b0e734a094de"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper is the first to model the helpfulness and harmlessness alignment of LLMs as a Constrained MDP problem?", "reference_answer": "SAFE RLHF: SAFE REINFORCEMENT LEARNING FROM HUMAN FEEDBACK"}}} {"uuid": "4098e496-9c0b-53b7-acf1-5cde707b8f91", "question": "Which paper proposed decomposing the logit update of each of the attention blocks' inputs to analyze how the context influences the prediction?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0d32db79-6a2f-5bb1-8f49-713b2b749d4a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposed decomposing the logit update of each of the attention blocks' inputs to analyze how the context influences the prediction?", "reference_answer": "Explaining How Transformers Use Context to Build Predictions"}}} {"uuid": "40d036d6-78d8-5a2d-b692-9cb3fb24b3a6", "question": "In Table 2 of the paper \"Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning\", which method performs better between TAALM and Rho-1? 
In the original paper of Rho-1, what kind of tasks were mainly used to evaluate the Rho-1 method?", "answer_format": "Your answer should be a single python list of two strings, the first element is the name of the method, the second element is the type of tasks", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["8c6e03c9-4862-5560-b472-d9ca689cb0ba", "22a670fd-c1d3-50d9-9c10-7ef49a3a2c24"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_string_fuzzy_match"], "eval_kwargs_list": [{"gold": "TAALM", "lowercase": false}, {"gold": "Math", "lowercase": true}]}}} {"uuid": "412f7530-b194-5aca-8508-22318575e1b2", "question": "According to the expression and physical meaning of formula (2), if I want the weight to be 0.5 right at the middle of the training process, what is the value of parameter s?", "answer_format": "Your answer should be a python float of the exact value of parameter s.", "tags": ["formula", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.5, "ndigits": 4}}, "anchor_pdf": ["996afc36-70e9-5f75-8e9f-6f0e5587c451"], "reference_pdf": []} {"uuid": "416e608a-d105-5d0f-ad19-c046bc2e8a12", "question": "The paper \"Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic\" proposed a new safety evaluation benchmark. It also mentioned 3 existing safety evaluation benchmarks with papers. 
In the paper which was preprinted earliest on ArXiv among these 3 papers, which dataset did it construct and how was it constructed?", "answer_format": "Your answer should be brief text giving the dataset's name in the paper and how it was constructed.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["1f33c39c-ea03-5618-935e-206af0fd5f14"], "reference_pdf": ["85b3d5bd-0bbc-5f40-a1c7-6b8fd73e6dca", "df3936bb-33de-54f1-890e-4c08d4b00cc8", "20d260fa-1e33-5f05-b84a-132458d61695"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The dataset constructed is HarmfulQ. It was constructed by recursively prompting LLM to generate harmful questions based on examples, including questions earlier generated, and manually filtering out similar generations.", "question": "The paper \"Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic\" proposed a new safety evaluation benchmark. It also mentioned 3 existing safety evaluation benchmarks with papers. In the paper which was preprinted earliest on ArXiv among these 3 papers, which dataset did it construct and how was it constructed?"}}} {"uuid": "418eb1e7-9224-54f3-8146-ab0301d57974", "question": "Which encoder is used in the architecture of the paper titled \"Self-Distilled Depth Refinement with Noisy Poisson Fusion\"? 
In the source paper of this encoder, how many models are proposed with it as a series?", "answer_format": "Your answer should be a single python list of two elements, the first is a string of the encoder name (abbreviation), the second is an integer. For example, [\"encoderx\",3].", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["b5643cac-00cf-54f7-8e7a-e56869e9a3ee"], "reference_pdf": ["22cef4e4-6977-5817-9d3f-dc3e60f287db"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Mit-b0", 6], "ignore_order": false, "lowercase": true}}} {"uuid": "423fc0b3-7a09-5941-a3fb-220ae1d220ff", "question": "In terms of joint models for Hebrew parsing, compared to the new 'flipped pipeline' where decisions are made directly on the whole-token units by expert classifiers, what drawbacks does the model in the paper named \"A truly joint neural architecture for segmentation and parsing\" have?", "answer_format": "Your answer should be a single python string about the drawbacks of the model.", "tags": ["subjective", "multiple", "text"], "anchor_pdf": ["6873b347-ad4a-544e-b24f-cce1668924b4"], "reference_pdf": ["0bc1963c-47f0-5407-848e-223c1da2c0a5"], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The model relies on an external lexicon which dictates the range of linguistic realizations for each word in the language.
This creates complications for practical integration of the systems.", "The model parses the text with a single joint morphosyntactic model which suffers in performance, because it entails computation of comprehensive lattices detailing all permutations of all segmentation, morphological, and syntactic possibilities across the whole sentence.", "The joint prediction architecture of the model comes at a high latency cost, because it requires processing so many different permutations at once via an all-encompassing lattice."], "question": "In terms of joint models for Hebrew parsing, compared to the new 'flipped pipeline' where decisions are made directly on the whole-token units by expert classifiers, what drawbacks does the model in the anchor paper have?", "count": 2}}} {"uuid": "424196cb-a6e0-5e75-8b3a-379e266bbcfb", "question": "In terms of multilingual lexical specialization for XLM-R, on which task(s) does Babel-FT get the highest score among the three tasks? Please give the full name of the task, not the abbreviation.", "answer_format": "Your answer should be a single python list, every element of the list is a string of the full name of the task.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "bilingual lexicon induction", "lowercase": true}}, "anchor_pdf": ["c9e569a7-d140-5c56-9051-0ec058334907"], "reference_pdf": []} {"uuid": "425218df-3dac-5bc2-90d3-78005e9f6a9d", "question": "In the Bellman equation of formula (5), which part represents the cost function in state (s, x)?", "answer_format": "Your answer should be a python string indicating the part that represents the cost function in state (s, x).", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["fde0ddbc-f68d-509a-9bd6-6c0e45a1725b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In the Bellman equation of formula 
(5), which part represents the cost function in state (s, x)?", "formulas": "R(s,x,a)"}}} {"uuid": "432471a3-12dc-5238-99c0-67b83fe63ce9", "question": "In the LMRL-Gym domain, besides the task mentioned in the paper, what other interactive dialogue tasks are proposed here?", "answer_format": "Your answer should be a python list, each element is the name of the task, e.g., [\"task1\", \"task2\", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION AND INCLUDE THE FULL NAMES OF THE TASKS.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["e5838af7-f285-5a91-8e5a-d9d1370f97ab"], "reference_pdf": ["148a4beb-4abf-5a7b-bc09-32f8519520da"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Twenty Questions", "Guess My City"], "ignore_order": true}}} {"uuid": "43938b52-8259-5777-a088-4faa891a1ba6", "question": "How does this paper (titled \"FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score\") formulate the objective of OOD detection? If I want to contact the author(s) of the source paper of this formula, what is the email address I can refer to?", "answer_format": "Your answer should be a single python list, the first element is a formula in latex format, the second element is a string of the email address. Note that there might be multiple possible email addresses, you can choose any one of them.", "tags": ["formula", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_complex_math_formula_with_llm", "eval_element_included"], "eval_kwargs_list": [{"formulas": "\\mathcal{H}_{0}: \\boldsymbol{x} \\sim \\mathcal{P}_{\\text{out}} \\quad \\text{v.s.}
\\quad \\mathcal{H}_{1}: \\boldsymbol{x} \\sim \\mathcal{P}_{\\text{in}}", "question": "How does the anchor paper formulate the objective of OOD detection?"}, {"gold": ["az381@cam.ac.uk", "djw1005@cam.ac.uk"]}]}}, "anchor_pdf": ["e388e95f-db5f-541a-9a5b-2f2109375f61"], "reference_pdf": ["9523ad64-fad9-5352-bc32-ceb1a8f5adbc"]} {"uuid": "43ab49eb-020d-5c64-a210-7f0931d39224", "question": "How can I get $h_i$ or $h_j$ in Equation (1)?", "answer_format": "Your answer should be a paragraph describing the procedure to get $h_i$ or $h_j$ as given in the paper.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In Equation (1), $h_i$ or $h_j$ represents the feature representation of the i-th or j-th utterance, respectively. According to the methodology described in the paper, these feature representations are obtained through the following process: The authors use RoBERTa Large as an utterance encoder to extract features from each utterance. For each utterance $u_i$, a special token \"[CLS]\" is prepended to its tokens, forming an input like $\\{[CLS], w_1,\\dots,w_{ni}\\}$, where $ni$ is the number of tokens in $u_i$. After passing this input through RoBERTa, the output activations corresponding to the \"[CLS]\" token from the last layer of RoBERTa are extracted.
These activations serve as the feature representation $h_i \\in \\mathbb{R}^{d_u}$ of the utterance $u_i$, where $d_u$ is the dimension of the feature representation.", "question": "How can I get $h_i$ or $h_j$ in Equation (1)?"}}, "anchor_pdf": ["32d1e04a-e87c-5179-8b13-4ad86585c55f"], "reference_pdf": []} {"uuid": "43cfa1aa-ccbc-5008-8e4f-105b889ae74f", "question": "In the paper that proposes the component represented by a magnifier in the overview figure of DigiRL, after applying Reflexion for two rounds, using Oracle Evaluator, how much does the performance improve on WebArena?", "answer_format": "Your answer should be a float between 0 and 1, rounding to 3 decimal places.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["3ef4f8bf-6e26-545b-b51c-e6a7969818c7"], "reference_pdf": ["13cb6901-d1d4-5f96-8139-66d6b9760863"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.092, "ndigits": 3}}} {"uuid": "44db1f84-1791-509e-91ae-79b2856153ee", "question": "What are the datasets and their metrics used in this paper according to the tables?", "answer_format": "Your answer should be a Python dictionary, e.g., {\"dataset1\": \"metric1\", \"dataset2\": \"metric2\", ...}. 
YOU MUST USE THE EXACT TEXT AND FULL DATASET NAME FROM THE PAPER WITHOUT CHANGING CAPITALIZATION.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Cornell Movie": "Rouge-1", "DailyDialog": "BLEU-1", "CMU_DoG": "Rouge-1", "LIGHT": "unigram-F1", "EmpathicDialogue": "Rouge-1", "ConvAI2": "unigram-F1", "Wizard of Wikipedia": "unigram-F1", "Mutual": "Rouge-L", "CommonsenseDialog": "Rouge-1"}}}, "anchor_pdf": ["3107f6a8-1939-5af0-b3d8-06d7aa66158d"], "reference_pdf": []} {"uuid": "451c2edc-87aa-58c0-8b34-e3715ae66def", "question": "For the evaluation on the FLORES+ Karakalpak devtest set, which model has the best sacreBLEU score on the language pair en-kaa? What's its difference from other similar models?", "answer_format": "Your answer should be a single python list of two strings, the first string is the model name, the second string is about its feature.", "tags": ["table", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "dilmash-TIL", "lowercase": true}, {"reference_answer": "This variant was trained on the same dataset and tokenizer configuration as the dilmash, but supplemented with a strategically sampled subset (additional multilingual data) from the TIL corpus.", "question": "What's the difference between dilmash-TIL and other similar models?"}]}}, "anchor_pdf": ["b279a477-a963-593d-958b-4b6ac285eb30"], "reference_pdf": []} {"uuid": "460ddccb-d53b-5ec2-9a20-6739ce65da29", "question": "How many labels are in the dataset used in the experiments section of the paper \"DYST: TOWARDS DYNAMIC NEURAL SCENE REPRESENTATIONS ON REAL-WORLD VIDEOS\"?", "answer_format": "Your answer should be a single integer.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["2741edcf-ad85-5b48-a794-2326f4fc7ade"],
"reference_pdf": ["418d865f-6a7a-55ef-b6b8-dcb2f99939cb"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 174}}} {"uuid": "464cf29d-23db-51dc-b505-01e2bcc97151", "question": "In this paper, what are the maximum, average, and minimum lengths of utterances in the video datasets used for training and testing?", "answer_format": "Your answer should be a python dictionary like {\"maximum\": 5.0, \"average\": 3.0, \"minimum\": 1.0}. THE NUMBERS SHOULD BE ROUNDED TO 1 DECIMAL PLACE.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["03b16045-69ff-5f9b-bf9a-85ecda58950d"], "reference_pdf": ["c789852a-da40-5074-b734-5433a3729588"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"maximum": 145.0, "average": 8.2, "minimum": 4.0}, "ignore_order": true, "ndigits": 1}}} {"uuid": "4684de7e-9fc6-5bfe-acd6-0b8d0fc97647", "question": "Which works shows that training large language models with purely mathematical and structural data can exhibit emergence of causal reasoning faster?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["5f4b1be3-0d85-59d1-9e54-9c1050c81a75"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which works shows that training large language models with purely mathematical and structural data can exhibit emergence of causal reasoning faster?", "reference_answer": "Learning Multi-Step Reasoning by Solving Arithmetic Tasks"}}} {"uuid": "4697c604-fb77-54a5-9a22-f1e8cf32351e", "question": "According to the paper that proposes JailbreakBench, what's the best defense for PAIR attack? 
Additionally, what's the system prompt for the pre-trained language model safety filter?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the defense, the second is a string, the system prompt.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1affe502-59fa-5851-8b8f-759ac1937c95"], "reference_pdf": ["00c1d25f-b86c-5903-862c-184c70969e85", "f9b87e93-f037-557b-823b-36f9c91d2065", "e59dad76-a2c8-56a7-8a17-8e60ea99f4b0"], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Your objective is to determine if the user's request or question is harmful or not. Just answer \"harmful\" or \"not harmful\" to each user message. If you cannot provide a definitive answer or the request is unclear, just answer \"not harmful\". If the request is too short and incomplete, just answer \"not harmful\". Do not write anything else.", "fuzz_method": "partial_ratio", "ignore_blank": true, "lowercase": true}}} {"uuid": "46be0f62-9897-5042-b3c4-f67bdd0bed89", "question": "Is there an existing dataset of images with alt-text that also includes the text the image was originally posted with?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9786080a-d826-55dd-aa99-87fd88023a82"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there an existing dataset of images with alt-text that also includes the text the image was originally posted with?", "reference_answer": "ALT-TEXT WITH CONTEXT: IMPROVING ACCESSIBILITY FOR IMAGES ON TWITTER"}}} {"uuid": "46d8670b-3464-5526-b9f9-d5d48dd5bfa1", "question": "Which paper first proposed to combine pretrained masked language models (BERT) and discrete diffusion language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.",
"tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f9f5d54f-ddae-519c-a8d8-c1d7cf582fc3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposed to combine pretrained masked language models (BERT) and discrete diffusion language models?", "reference_answer": "DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models"}}} {"uuid": "46ea5bb8-9895-5439-8f45-8e1792b1ec8b", "question": "On the ALFWorld dataset experiments, how much did the success rate improve when the authors used their method compared to the original baseline model?", "answer_format": "Your answer should be a floating-point number with one decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["3ca4cb71-29ee-509e-abfb-cbd14fd93a8e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 3.5, "ndigits": 1}}} {"uuid": "47389b0a-23c2-5a87-9ee6-cabde545a2ef", "question": "What're the three types of agents in IBSEN and which agent involves the usage of database?", "answer_format": "Your answer should be a Python list of 2 elements. The first element is a Python list of 3 elements, containing the names of the three types of agents in IBSEN. The second element is a string, indicating the name of the agent that involves the usage of database. e.g. 
[[\"agent1\", \"agent2\", \"agent3\"], \"agent\"].", "tags": ["objective", "single", "text"], "anchor_pdf": ["00cd0971-99bd-5174-b566-861d6df264e0"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": ["director", "actor", "player"], "ignore_order": true, "lowercase": true}, {"gold": "actor", "lowercase": true}]}}} {"uuid": "47492be6-a53e-5d04-8426-67e188aec7a9", "question": "What is the main innovation in the distillation methods employed by the models in the experimental section of the article \"BEYOND UNIFORM SCALING: EXPLORING DEPTH HETEROGENEITY IN NEURAL ARCHITECTURES\"?", "answer_format": "Your answer should be a python strings.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["783e3f34-8657-5aab-991c-f990560cb693"], "reference_pdf": ["e7184da4-f850-5562-ba39-441760b58a7d"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The authors add a new token, the distillation token, to the initial embeddings. It interacts with other embeddings through self-attention, and is output by the network after the last layer. Its target objective is given by the distillation component of the loss. The distillation embedding allows the model to learn from the output of the teacher, as in a regular distillation, while remaining complementary to the class embedding."], "question": "What is the main innovation in the distillation methods employed by the models in the experimental section of the article \"BEYOND UNIFORM SCALING: EXPLORING DEPTH HETEROGENEITY IN NEURAL ARCHITECTURES\"?"}}} {"uuid": "478ae300-f520-52dc-8d4b-385e268774af", "question": "Compared to vanilla ViT-Base, how much relative accuracy degradation does ECP-ViT result in on ImageNet? 
How much relative latency saving does ECP-ViT obtain?", "answer_format": "Your answer should be a list of two floats rounded to 2 decimal places. Both floats should be in [0, 100] as percentages. The first float is the accuracy degradation and the second float is the latency reduction.", "tags": ["single", "table", "objective"], "anchor_pdf": ["c6940e70-2fd5-5155-9130-800533243df2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_float_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"gold": 0.83, "ndigits": 2}, {"gold": 76.3, "ndigits": 2}]}}} {"uuid": "48471601-0130-52f7-8580-d15b057e1bbf", "question": "Is there any paper that constructs augmented training data based on the entity-to-entity correlations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["15cd9918-35c8-5ebc-b4fc-856e2e05583c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that constructs augmented training data based on the entity-to-entity correlations?", "reference_answer": "PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks *"}}} {"uuid": "48dc6ebe-9dc2-5a7b-8f78-9030ab6ec5a1", "question": "What's the original form of the metric \"w(S) = \\frac{1}{2} \\mathbb{E} \\sup_{x, y \\in S} \\langle g, x - y \\rangle\"?", "answer_format": "Your answer should be a Python string, the formula in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["3d972caa-a02a-5eb5-b483-d0c721952aaf"], "reference_pdf": ["78937c61-6763-5f90-bd00-65492a914c93"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "w(K) := \\mathbb{E} \\sup_{u \\in K - K} \\langle g, u \\rangle", "question":
"What's the original form of Gaussian mean width?"}}} {"uuid": "48e7250f-a89b-524e-9e41-ef99314b5118", "question": "According to the paper that proposes the Subspace Identification Guarantee model, which method is used to estimate the label distribution from the target domain $p_{\\hat{\\mathbf{y}}}$? What's the formula of the loss they reweight using the distribution? In the paper that proposes the aforementioned method, as the dataset size increases, when does the proposed method first surpass the baseline method on MNIST with $\\alpha=1.0$?", "answer_format": "Your answer should be a Python list of 3 elements, the first is a string, the abbreviation of the method, the second is a string, the formula in LaTeX format, and the last is an integer, the approximate dataset size. Note that for the third sub-question, you don't need to figure out the exact value, just provide the approximate value that appears on the horizontal axis of the figure.", "tags": ["multiple", "formula", "image", "subjective"], "anchor_pdf": ["0cef829f-50c0-5679-880a-b3b42a073523"], "reference_pdf": ["09ea784b-f6a4-5220-a084-2c849a99f6cc"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm", "eval_int_exact_match"], "eval_kwargs_list": [{"gold": "BBSE", "ignore_blank": true, "lowercase": true}, {"formulas": "\\mathcal{L}_a = \\frac{1}{C} \\sum_{i=1}^{C} \\left\\| \\hat{\\mathbf{z}}_{3, \\mathcal{S}}^{(i)} - \\hat{\\mathbf{z}}_{3, \\mathcal{T}}^{(i)} \\right\\|_2", "question": "What's the formula of the loss they reweight using the estimated distribution?"}, {"gold": 2000}]}}} {"uuid": "4985e0e1-5249-5fba-81d1-3e8834b95d53", "question": "In the MetaMath paper, a bootstrapping method A is utilized in Example 3.4.
In the paper that proposes the method A, which baseline method surpasses the proposed method A in AQuA under some specific setting?", "answer_format": "Your answer should be a string, the name of the baseline method.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["2f6e4153-ca22-584a-9a3a-c1580c920fe7", "b2a76a80-27db-5f3b-a5b4-b89e81dbc46e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PHP", "ignore_blank": true, "lowercase": true}}} {"uuid": "4a2b4ab6-a332-5d58-b58b-b2e8405edf77", "question": "What does formula (3) in this paper mean?", "answer_format": "Your answer should be a Python strings of the detailed explanation of the formula.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "This formula is to compute dynamic parameters and transform the less important dynamic parameters into input-agnostic static parameters. Specifically, a mask $M_i$ is utilized to indicate whether the i-th element of $\\hat{\\Theta}$ is dynamic or static. $M_i = 1$ means the i-th element of $\\hat{\\Theta}$ is dynamic so we update it through the dynamic function W with input x and dynamic factors $\\Theta$. 
Otherwise, $\\hat{\\Theta}$ remains the same.", "question": "What does formula (3) in this paper mean?"}}, "anchor_pdf": ["f0aab1fb-be5b-5b84-aa0e-a13aa814c7b0"], "reference_pdf": []} {"uuid": "4a616ad5-43dd-5eb1-9787-8d5808f69bbe", "question": "In the experiments of the paper \"SPZ: A Semantic Perturbation-based Data Augmentation Method with Zonal-Mixing for Alzheimer's Disease Detection\" on the ADReSS challenge dataset, which single model method and ensemble method performed the best, excluding the method proposed in this paper?", "answer_format": "Your answer should be a python list, with the first element being the best single model method and the second element being the best ensemble method. YOU MUST USE THE ABBREVIATIONS IN THE TABLE.", "tags": ["single", "table", "objective"], "anchor_pdf": ["80d70007-02d3-5daa-8566-9bfbf00c83d8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CDA_single", "CDA_ensemble"], "lowercase": true, "ignore_blank": true, "threshold": 90}}} {"uuid": "4a99d14d-69e7-55d7-b6fa-2878ad1a8e50", "question": "Which paper did a comprehensive survey of the code large language model (code LLMs)?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["37758401-6101-554f-8f1e-4e2995443314"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper did a comprehensive survey of the code large language model (code LLMs)?", "reference_answer": "Large Language Models Meet NL2Code: A Survey"}}} {"uuid": "4ab4e4dc-fc8a-5749-a2cf-171f0a0bc4e3", "question": "What are the meanings of $h_i, r_i, t_i$ in Equation (1)?", "answer_format": "Your answer should be a precise sentence describing the meanings of $h_i, r_i, t_i$ as given in the paper.", "tags": ["formula", "single", 
"subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "$h_i, r_i, t_i$ represent the $i$-th dimension in the head, relation, and tail representations respectively.", "question": "What are the meanings of $h_i, r_i, t_i$ in Equation (1)?"}}, "anchor_pdf": ["a3e3cee1-d140-5dc1-9608-2f1a1d924229"], "reference_pdf": []} {"uuid": "4b4877cd-4cdc-5d52-ac20-edfaa6dd7e32", "question": "Is there any paper that leverages knowledge distillation of language models for textual out-of-distribution detection or anomaly detection?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8f0d3cc4-27a0-5f57-b40f-8cdc408b58c2"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that leverages knowledge distillation of language models for textual out-of-distribution detection or anomaly detection?", "reference_answer": "Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text"}}} {"uuid": "4c29808a-cdfa-5e4b-90ee-318b30636e7c", "question": "Which paper studies how current retrieval systems handle queries which contain multiple constraints?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["621e3ecb-3fa0-58c4-8118-f9a1d6bc647c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper studies how current retrieval systems handle queries which contain multiple constraints?", "reference_answer": "QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations"}}} {"uuid": "4c3b2423-fcfb-5a2b-9eea-44d28196587b", "question": "In the experiment of the paper that proposed
knowledge card, a model is used as the component denoted by a cube with a question mark in the overview figure. What're the training hyperparameters of this model according to the paper that proposed it?", "answer_format": "Your answer should be a paragraph, the training hyperparameters of the model.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["3f2204a4-b105-5e6e-96be-f86c6d94e519", "a70723f6-7139-5165-a9c7-9dcdd34e3514"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "We train Codex using the same learning rate as the corresponding GPT model, with a 175 step linear warmup and cosine learning rate decay. We train for a total of 100 billion tokens, using the Adam optimizer with $\\beta_1 = 0.9$, $\\beta_2 = 0.95$, $\\epsilon = 10^{-8}$, and a weight decay coefficient of 0.1.", "question": "What're the training hyperparameters of CodeX?"}}} {"uuid": "4c9a32c4-52df-56cf-bbcf-0a10a18d594f", "question": "According to Table 1, how many times is the average number of tokens for the dataset with the highest average number of tokens versus the one with the least average number of tokens?", "answer_format": "Your answer should be a floating-point number with two decimal places.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 1210.75, "ndigits": 2, "tolerance": 1e-06}}, "anchor_pdf": ["9c96433d-817e-5dad-a394-d7d80a428ed0"], "reference_pdf": []} {"uuid": "4ca40740-fa6c-50f2-b417-7a2ebfd0cc22", "question": "I would like to utilize the datasets introduced in the \"DeakinNLP at BioLaySumm\" paper. Could you tell me in which format were the papers in the datasets retrieved from each data source?", "answer_format": "Your answer should be a string, the name of the format, e.g. 
JSON, HTML, MARKDOWN.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3e8c5246-a3d8-5d5e-9afc-98df4043e2ae"], "reference_pdf": ["81186251-6f85-50cb-a348-fc859162ba8a"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "XML", "lowercase": true}}} {"uuid": "4da68474-8cf2-5077-aa1d-3b7ae74cc70e", "question": "Is there a paper that applies large language models to visual Raven's Progressive Matrices?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["14c819b3-79ac-564b-8148-e1de8d6b3184"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that applies large language models to visual Raven's Progressive Matrices?", "reference_answer": "In-Context Analogical Reasoning with Pre-Trained Language Models"}}} {"uuid": "4dbe770d-1734-5c99-b16d-af3242b8c0ee", "question": "Give me a paper proposing to circumvent a single-truth target in training generative language models.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["424a01bc-3eaa-5dca-b901-c477d446eccc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Give me a paper proposing to circumvent a single-truth target in training generative language models.", "reference_answer": "Soft Alignment Objectives for Robust Adaptation of Language Generation"}}} {"uuid": "4de3ce4f-4b12-59ea-9141-fe765b6e94b3", "question": "Are there sequential learning guarantees for configuring a linear system solver under a distributional assumption on the systems' target vectors?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": 
["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["30847dbc-1e4d-56af-93ea-b37decac9814"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there sequential learning guarantees for configuring a linear system solver under a distributional assumption on the systems' target vectors?", "reference_answer": "LEARNING TO RELAX: SETTING SOLVER PARAMETERS ACROSS A SEQUENCE OF LINEAR SYSTEM INSTANCES"}}} {"uuid": "4e28e6b7-6761-5f7b-8e00-5b210498b0ba", "question": "What method was adopted in the paper developing DisGAT and SpkGAT for ERC to integrate these two modules?", "answer_format": "Your answer should be a python strings.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["32d1e04a-e87c-5179-8b13-4ad86585c55f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "A mutual cross-attention was adopted. 
The computation process is formulated as follows: A_1 = softmax(H^{Dis}W_1(H^{Spk})^T ); A_2 = softmax(H^{Spk}W_2(H^{Dis})^T ); H^{Dis'} , H^{Spk'} = A_1H^{Spk}, A_2H^{Dis}", "question": "What method was adopted in the paper developing DisGAT and SpkGAT for ERC to integrate these two modules?"}}} {"uuid": "4e44b819-6129-5b0f-a6cd-935f2eb6bb85", "question": "In the NeurIPS paper, mentioned in the RestoreAgent paper, that utilizes uniquely designed prompts to guide the network, what formula can the module in light yellow in Figure 3 be summarized as?", "answer_format": "Your answer should be a string, the formula in LaTeX format.", "tags": ["multiple", "metadata", "image", "formula", "subjective"], "anchor_pdf": ["2d89f467-4b56-59df-a146-38cc4bd1db14"], "reference_pdf": ["999c5d22-9897-5ab0-877c-52fb6af0b2d2"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathbf{P} = \\text{Conv}_{3\\times3} \\left( \\sum_{c=1}^{N} w_c \\mathbf{P}_c \\right), \\quad w = \\text{Softmax} \\left( \\text{Conv}_{1\\times1} (\\text{GAP}(\\mathbf{F}_1)) \\right)", "question": "In the PromptIR paper, the PGM process is summarized as?"}}} {"uuid": "4ea66ea8-4a7e-52a2-9c97-c900c9e55da6", "question": "How to faithfully and explicitly measure the helpfulness of human explanations to language models during finetuning and inference?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["27f4fca1-9f3d-588f-830d-5fccc43731bf"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "How to faithfully and explicitly measure the helpfulness of human explanations to language models during finetuning and inference?", "reference_answer": "Are Human Explanations Always Helpful?
Towards Objective Evaluation of Human Natural Language Explanations"}}} {"uuid": "4f284188-a3d4-5a9a-a723-4f589f221cdd", "question": "Which paper systematically examined the input mismatch between training and sampling in diffusion models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["993ae275-7e09-5562-b5dc-1a3c40e6f7ac"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper systematically examined the input mismatch between training and sampling in diffusion models?", "reference_answer": "ELUCIDATING THE EXPOSURE BIAS IN DIFFUSION MODELS"}}} {"uuid": "4f7ee674-3282-554e-a59c-2f911bd5d9e0", "question": "How many datasets are generated in the source paper of the dataset mainly used in the paper named \"Steering Llama 2 via Contrastive Activation Addition\"?", "answer_format": "Your answer should be a single integer.", "tags": ["objective", "multiple", "text"], "anchor_pdf": ["ca818639-56e7-5e2e-84d6-7cdeddf9dcbc"], "reference_pdf": ["a701b27c-1649-5b32-b066-5ddc1b4e7c07"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 154}}} {"uuid": "4fe2e01e-83c6-5121-80fc-7c937e0d73ae", "question": "What paper first uses decoupled workers in distributed RL systems?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["c122f94a-3340-58d2-ba9a-f6e68d67ffd9"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first uses decoupled workers in distributed RL systems?", "reference_answer": "SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores"}}} {"uuid": "509aeeca-8801-5099-99e3-d896c499db43", "question":
"According to the main body of the paper \"Bayesian low-rank adaptation for large language models\", which method is statistical significant on BoolQ with ECE metrics? In that method, how is the hyperparameter k selected?", "answer_format": "Your answer should be a Python list of 2 strings, the first is the full name of the method, and the second is the formula in LaTeX format.", "tags": ["multiple", "table", "metadata", "formula", "subjective"], "anchor_pdf": ["02c6b82d-45fe-5da0-88ca-4a4f9e2d84a7"], "reference_pdf": ["56e1eebb-05d6-58d2-bee7-722be0b9d923"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "checkpoint ensemble", "ignore_blank": true, "lowercase": true}, {"formulas": "k = min(a + 5, b, n)", "question": "How is the hyperparameter k selected?"}]}}} {"uuid": "50f6e66a-aa2a-56ee-bb54-d2ade82a95ad", "question": "What success rate does MapGPT(with GPT-4V) achieve on the validation unseen set of the R2R dataset?", "answer_format": "Your answer should be a single float, rounded to 1 decimal place.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 43.7, "ndigits": 1}}, "anchor_pdf": ["0a412245-e6a6-5d0d-a25d-5b79b8c4faaf"], "reference_pdf": []} {"uuid": "510b8067-46e8-5783-a8a6-e752132a8a7a", "question": "In the proxy task of exploring lexical semantics in the paper \"Fantastic Semantics and Where to Find Them: Investigating Which Layers of Generative LLMs Reflect Lexical Semantics\", how many instances are there in total?", "answer_format": "Your answer should be a python int.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["f0d961d2-d190-5de7-874b-91a05aa91921"], "reference_pdf": ["6cb48d9e-f803-5274-8b12-b6ca17473e50"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 
7466}}} {"uuid": "512fd6fd-1c6a-54a9-addf-51e622e99dfe", "question": "In terms of WER values with ASR across the six different methods tested in the paper, how much higher is DD2 compared to NV1?", "answer_format": "Your answer should be a single float number, rounded to 3 decimal places.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.598, "ndigits": 3}}, "anchor_pdf": ["81b6a6b0-a195-5cae-9e30-137150b64352"], "reference_pdf": []} {"uuid": "51690cda-38bb-51a8-8c7d-59e8a7f732eb", "question": "Which paper first conducted the positioned error test for the MAUVE metric?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a6cc2aa6-a543-5fc5-b400-5c15b911a3eb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first conducted the positioned error test for the MAUVE metric?", "reference_answer": "On the Blind Spots of Model-Based Evaluation Metrics for Text Generation"}}} {"uuid": "52bc8a41-b87d-56ad-b253-83e0fd05e698", "question": "Which work proposes an approach to improve candidate responses in the smart reply task by directly optimizing the metric to ensure that a response is selected by the user?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9592cebb-a2fb-5094-97a8-fbff6bb9ceb4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which work proposes an approach to improve candidate responses in the smart reply task by directly optimizing the metric to ensure that a response is selected by the user?", "reference_answer": "Model-Based Simulation for Optimising Smart 
Reply"}}} {"uuid": "536d890f-e245-5bc4-9265-820664e843d6", "question": "According to Figure 4, when generating Token 11, which tokens will the cache preserve, and what positions will be assigned to them?", "answer_format": "Your answer should be a Python list containing two sublists. The first sublist should list the tokens that the cache preserves. The second sublist should contain the positions assigned to each corresponding token. Example format: [[0, 1, 2], [0, 1, 2]].", "tags": ["single", "image", "objective"], "anchor_pdf": ["2d831b51-a802-51f4-9b55-39ab8c0ade5a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": [0, 1, 2, 3, 8, 9, 10], "ignore_order": true}, {"gold": [0, 1, 2, 3, 4, 5, 6, 7], "ignore_order": true}]}}} {"uuid": "539593f7-e17a-57d2-9030-b8e6690c27e3", "question": "Among the dataset used for experimentation in the TEXTEE paper and the \"Small Models, Big Insights\" paper, how many were proposed in 2022?", "answer_format": "Your answer should be a python int.", "tags": ["multiple", "table", "metadata", "objective"], "anchor_pdf": ["79530cb7-2b29-5a81-8604-cba3eac79146", "0edab682-4b9c-5017-a78a-65fa427ee35a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "541382e2-2866-5c2c-9a53-36c96868b9f1", "question": "Which paper proposed the integration of human translators' considerations, such as length control, rhyme type control and suggestion, and enhancing compatibility between translation output and unseen melodies, into the design of machine translation models when translating lyrics?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": 
["51c6fbb8-e7fb-50ca-a5f5-7010d5abb349"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposed the integration of human translators' considerations, such as length control, rhyme type control and suggestion, and enhancing compatibility between translation output and unseen melodies, into the design of machine translation models when translating lyrics?", "reference_answer": "Songs Across Borders: Singable and Controllable Neural Lyric Translation"}}} {"uuid": "541435a6-878e-540f-8b6a-86bf7920dc82", "question": "What is the main design of Auto-GUI framework from the aspects of the encoder, interaction, and decoder?", "answer_format": "Your answer should be a Python list of text strings, with each element being one core stage of this framework, you\"d better use the origin text, e.g., [\"stage 1\", \"stage 2\", ...].", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Encoding: Acquire encoded features from both vision and language inputs. The vision input is encoded by a frozen vision encoder, the language input is encoded by a language encoder.", "Interaction: The encoded vision and language representations are integrated by a single-head self-attention network and a gated fusion", "Decoding: The fused representation is fed to the decoder to generate a chain of future action plans. 
The target predictions consist of a chain of future action plans and the current action prediction separated by specific prompts"], "question": "What is the main design of Auto-GUI framework?"}}, "anchor_pdf": ["648e3d50-375b-5189-b6b0-e0520626716e"], "reference_pdf": []} {"uuid": "542bbfde-bdf3-524e-9505-40c061a3590b", "question": "In the paper \"Leveraging Behavioral Cloning for Representation Alignment in Cross-Domain Policy Transfer\", the Portable Latent Policy (PLP) method is introduced. In Figure 5 depicting the alignment scores, how many PLP and its variant methods exhibit a P2P-medium accuracy that exceeds the P2P-obs-medium accuracy?", "answer_format": "Your answer should be a python int.", "tags": ["single", "image", "objective"], "anchor_pdf": ["3996305b-1568-5e4c-85a1-b80448789f29"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 2}}} {"uuid": "546b830f-aca5-56e1-8ebc-cffda2bd6ad6", "question": "Except WKM, which method performs the best on WebShop? Whether the two methods' papers use the same evaluation datasets or not?", "answer_format": "Your answer should be a Python list of two strings, the first is the abbreviation of the method, the second is either `true` or `false`.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0d7108da-de5c-5e4c-9865-7c4141672767"], "reference_pdf": ["722fbe98-4e3d-5d07-aea6-e4261418a8c8"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ETO", "true"], "ignore_order": false, "lowercase": true}}} {"uuid": "546c4f0c-bbb9-5de6-913b-c1685321039c", "question": "In the paper that develops KUCB-RL, which model-free algorithm applies weakly communicating MDP assumption? 
What's the algorithm's main contribution in the online setting, regarding the assumption?", "answer_format": "Your answer should be a Python list of two strings, the first string is the name of the algorithm, and the second string is its main contribution.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["0a38545d-1e82-5713-ae3f-157bb8623bc0"], "reference_pdf": ["44ea9e05-6b8f-555d-b68d-ccc7fac68de7", "4f363689-4ccd-5ee7-b03b-ef64fcf1544a"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "UCB-AVG", "lowercase": true, "ignore_blank": true}, {"reference_answer": "Our algorithm is the first computationally efficient model-free method with \\tilde{O}(\\sqrt{T}) regret for weakly communicating MDPs.", "question": "What's the algorithm's main contribution in the online setting, regarding the assumption?"}]}}} {"uuid": "55534d55-ed7c-5240-96a5-cde7fd739de8", "question": "What're the related domains of this paper according to related works?", "answer_format": "Your answer should be a Python list of strings where each string is a related domain. e.g. [\"domain1\", \"domain2\", ...]", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Prompt-based Learning", "Lexical Relation Classification"], "ignore_order": true, "lowercase": true}}, "anchor_pdf": ["9cf8bd4d-0120-5a3b-b926-1a2d7d7b4f0a"], "reference_pdf": []} {"uuid": "55c4fae8-375a-53eb-819d-e6d81a7c62ea", "question": "In terms of experimental results when unigrams are used for evaluation, which model gets the highest F1-score among Mbase, Mclf, Mcxt and Mclfcxt? 
What's its added module compared with Mbase according to figure 2?", "answer_format": "Your answer should be a list of two strings, the first element is the name of the model(chosen from Mbase, Mclf, Mcxt and Mclfcxt), and the second element is the name of the added module presented in figure 2.", "tags": ["image", "table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Mcxt", "Preceding Updates"], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["d99f2324-cddc-5bfe-adf4-10c6a05dbeb2"], "reference_pdf": []} {"uuid": "560cb9c7-cd1b-5574-947b-8a3da732d2e3", "question": "Which paper first aggregates statements to represent political actors and learns the mapping from languages to representation via pre-training?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4d52a357-e386-5adf-aba8-6fd17a3780e3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first aggregates statements to represent political actors and learns the mapping from languages to representation via pre-training?", "reference_answer": "UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language"}}} {"uuid": "561f7371-37d2-5940-9171-73472e33cded", "question": "Which core NLP problem is mentioned in the paper \"Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification\", and what is it usually structured as?", "answer_format": "Your answer should be a python list of two elements. The first one is a core NLP problem name and you should use abbreviation as given in the papers. 
The second one is a python string, describing how the problem is usually structured.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["283d3b36-27d2-5459-b625-f2496fa4c35f"], "reference_pdf": ["02048feb-ade8-5efc-9465-547e1969410d"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "NLI", "lowercase": true}, {"reference_answer": "It is usually structured as a two or three class classification problem.", "question": "what is NLI usually structured as?"}]}}} {"uuid": "56894e39-b1fc-5699-9c19-200e02c975f0", "question": "In the paper \"Are Emergent Abilities in Large Language Models just In-Context Learning?\", token edit distance is introduced as an additional evaluation metric, what's the purpose of doing so?", "answer_format": "Your answer should be a sentence explaining the purpose of introducing token edit distance as an additional evaluation metric in the context of evaluating emergent abilities in large language models.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["1e933e56-884a-50c5-9f45-76b78ce0ab3f"], "reference_pdf": ["c302a979-c9a6-509a-a555-5fc9e5bb7bf8"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper \"Are Emergent Abilities in Large Language Models just In-Context Learning?\" (anchor_pdf), token edit distance is introduced as an additional evaluation metric, what's the purpose of doing so?", "reference_answer": "To align with the findings in the paper \"Are Emergent Abilities of Large Language Models a Mirage?\" that improper metrics can mislead to the emergent abilities phenomenon."}}} {"uuid": "56b65fba-a965-5e63-a409-4d834fe2926f", "question": "Is there a tool that can automatically segment speech and the corresponding text transcriptions, to obtain a finer grained alignment?", "answer_format": "Your answer
should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["47c89276-1357-52da-a561-5fd320c0f72d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a tool that can automatically segment speech and the corresponding text transcriptions, to obtain a finer grained alignment?", "reference_answer": "CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation"}}} {"uuid": "56d50d2a-9ade-583d-a3e9-277363538066", "question": "Which paper shows assessment of training instabilities at different levels for language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f96264c0-303f-5d6e-9b4a-142d1eccf0ff"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper shows assessment of training instabilities at different levels for language models?", "reference_answer": "Measuring the Instability of Fine-Tuning"}}} {"uuid": "56f3ff15-de1c-5769-ac00-6218e9d9a0a6", "question": "Among the previous methods applied in the FunCoder's experiments on open-source models, which one was proposed later? 
Additionally, which datasets were applied in the evaluation of that method, but not in FunCoder?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the method, the second is a python list of strings, the names of the datasets.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0e060d12-30d9-5e34-b7e3-4874dd94be7b"], "reference_pdf": ["98350979-1991-571f-bd8a-5f8624b832f3", "813a0918-58f6-57fa-aa79-e3065a5ff88a", "7b9d2080-37d8-593b-93e9-abfcb2aead4a", "bafa4ba3-f7e9-5bf2-960d-cb11f11ec138", "460c65d7-a298-5bd3-baa2-dd8683885308"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "CodeT", "lowercase": true}, {"gold": ["APPS", "CodeContests"], "lowercase": true, "ignore_order": true}]}}} {"uuid": "57082e3d-1fcd-54f5-8985-370723fcc4c2", "question": "Which domain in the GRBench dataset has the largest number of questions?", "answer_format": "Your answer should be a single string representing the domain name.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "academic", "lowercase": true}}, "anchor_pdf": ["fb154467-1ce1-5c1f-9d4f-b4f5c76312ee"], "reference_pdf": []} {"uuid": "5741c36f-3c84-51e1-80ac-960026dfba12", "question": "According to the results, in which interval of attack budget does the ASR of SCTS saturate?
Note that the interval has been indicated directly in the text.", "answer_format": "Your answer should be a python list of two floats rounded to 2 decimal places, e.g. [0.35, 0.40]", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [0.35, 0.45], "ignore_order": false, "ndigits": 2}}, "anchor_pdf": ["d0d476c5-880d-5b71-ae01-adc1111550a1"], "reference_pdf": []} {"uuid": "5752ba6d-a2f0-5672-90c8-919979dd4edf", "question": "Are there any papers that construct convolutional networks which are equivariant with respect to non-compact/non-abelian Lie groups?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["85c85b86-a216-58ab-9fa7-a61aa65602df"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any papers that construct convolutional networks which are equivariant with respect to non-compact/non-abelian Lie groups?", "reference_answer": "LIE GROUP DECOMPOSITIONS FOR EQUIVARIANT NEURAL NETWORKS"}}} {"uuid": "58bdb8e3-b1ad-5e0a-9c11-ac8b7bf63570", "question": "On average, how many steps does a solution have in the training set of PRM800K, and how many solutions are provided per question?", "answer_format": "Your answer should be a Python list containing two floating-point numbers, each rounded to two decimal places. The first number represents the steps per solution, and the second number represents the solutions per question.
Example format: [1.23, 4.56].", "tags": ["single", "text", "objective"], "anchor_pdf": ["bb9a1e79-91a9-5854-a44f-288b212264d7"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [10.67, 6.25], "ndigits": 2}}} {"uuid": "59369806-b544-5f82-b668-1bd4b943e892", "question": "What research exists on incorporating knowledge graphs into language models to improve their complex question-answering capabilities?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["c393053b-cb4b-51a8-8333-3373eddcc39f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What research exists on incorporating knowledge graphs into language models to improve their complex question-answering capabilities?", "reference_answer": "Knowledge Graph-augmented Language Models for Complex Question Answering"}}} {"uuid": "5937898e-7f8e-5d38-9acc-c09060fbf7a5", "question": "In the paper that introduces TMID dataset, which pretrained model gets the best F1 score after fine-tuning on TMID dataset? 
And when pretraining this model, what percentage of tokens were used in the Self-Supervised Blank Infilling task?", "answer_format": "Your answer should be a single python list containing two strings, the first element of the list is the pretrained model's name, and the second element of the list is the percentage as a string, for example, '56%'.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["94217f41-ec91-59ed-879b-66911867c7e6"], "reference_pdf": ["1496affd-ee29-52de-8179-42e0418c899e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ChatGLM", "95%"], "lowercase": true, "ignore_order": true}}} {"uuid": "5960606a-4a02-5726-8048-bc2c52ad726b", "question": "Is there any paper that applies curriculum learning to various NLG tasks without depending on specific metrics?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["66d902db-a268-532d-b8d6-9f3b3a130bf5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that applies curriculum learning to various NLG tasks without depending on specific metrics?", "reference_answer": "In-sample Curriculum Learning by Sequence Completion for Natural Language Generation"}}} {"uuid": "5be96361-68a3-5b32-8d15-668f306d33e7", "question": "According to the tables about Zero-shot performance, what is the range of accuracy of OPT-125M in different task tests (considering the data tested in all papers)?", "answer_format": "Your answer should be a python list of two floats, rounding to one decimal place, e.g.
[12.1, 23.4].", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["b9eb94b0-2d7a-5101-a3b4-8d0ee70b77ca", "3db38bf7-3168-5855-958a-c2fa458d33d8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [25.2, 80.3], "ignore_order": false, "ndigits": 1}}} {"uuid": "5c49a736-420a-52b4-8188-ad80f375e948", "question": "From which subset of ExHVV was MemeMQACorpus chosen, and why? How many questions were selected? Also, provide the changes in each role-label for the chosen subset.", "answer_format": "Your answer should be a Python list of 4 elements. The first element is the subset's name. The second element is the reason why the author chose this subset. The third element is an integer, denoting the number of questions chosen. The fourth element is a Python dict, containing role-labels and their corresponding changes, where each change is calculated as the new count minus the old count. e.g. [\"answer1\", \"answer2\", 3, {\"role1\": -2, \"role2\": 3, ...}]", "tags": ["multiple", "subjective", "table"], "anchor_pdf": ["0b35eefa-16ce-5586-9a5b-5ce712108204", "0514e9c9-a396-56cc-be31-2045b166c85d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm", "eval_int_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "US Politics", "ignore_blank": true, "lowercase": true}, {"reference_answer": "This domain choice is based on diversity in the entity distribution across different roles compared to the other subset (on Covid-19) of ExHVV dataset.", "question": "Why was MemeMQACorpus chosen from the US Politics subset of ExHVV?"}, {"gold": 1880}, {"gold": {"hero": -89, "villain": -628, "victim": -241}}]}}} {"uuid": "5c4be3c8-e4ad-5154-83af-3e2ff896c210", "question": "How many words are in the train splits of the oldest dataset used by GeNE to 
evaluate language modeling?", "answer_format": "Your answer should be a Python integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["2d0a9f7f-6c7d-571d-90f4-8bafdbb97ce3"], "reference_pdf": ["d6b892b8-cf43-5b62-bde9-48c070c2e5dc", "866c3296-5bb8-5010-89e5-89a849f6dda9", "14a49a53-f223-549d-a025-d745f23f1adf"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 1973136207}}} {"uuid": "5c967488-f464-5ab5-aa13-d1dc6be7e4e2", "question": "Is there any paper that proposes a set of criteria to comprehensively evaluate generated conversations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["b4a13bdd-3ae1-5c76-b779-13b974184209"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that proposes a set of criteria to comprehensively evaluate generated conversations?", "reference_answer": "Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation"}}} {"uuid": "5c98eeb0-ae95-530c-85b4-6e7dc3c12ecf", "question": "For Llama2 on DialogSum, which newly proposed module contributes more to the improvement of performance? How do the authors further explain why that module works?", "answer_format": "Your answer should be a Python list of 2 elements, where the first element is the FULL NAME of the module that contributes more to the improvement of performance, and the second element is a string, explaining why that module works. e.g. 
[\"module\", \"explanation\"].", "tags": ["single", "subjective", "table"], "anchor_pdf": ["0a2c3d8b-dc16-570c-b354-11797aebe290"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "fusion generation", "lowercase": true}, {"reference_answer": "In our main experiments, we use dialogue as the condition for generating the final summary in the FG. To explore the effect of using such a condition, we run experiments without using it, where the results are reported in Table 10. It clearly shows that, compared with the models without using the entire dialogue, our approach is able to generate better summaries, which emphasizes the contribution of the entire dialogue, for the reason that it provides global or environmental information to guide FG in identifying useful content produced by the experts.", "question": "How do the authors further explain why Fusion Generation (FG) works?"}]}}} {"uuid": "5cae6dda-4a2d-52ec-b511-953f476c3600", "question": "Is there any paper that proposes a new multimodal video dataset on which image-level multimodal models do not work well?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9ee1103e-d8de-5f6a-99c6-a151a32ff190"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that proposes a new multimodal video dataset on which image-level multimodal models do not work well?", "reference_answer": "Revealing Single Frame Bias for Video-and-Language Learning"}}} {"uuid": "5df175e3-e99c-5eb3-8a5f-8133701c474b", "question": "According to the paper that enhances traditional input transformations by mixing the input image with images from other categories to create admixed images,
what are the related adversarial attacks?", "answer_format": "Your answer should be a Python list of abbreviations of the attacks.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3d007608-c976-5f84-9235-80e865a993ed"], "reference_pdf": ["ca05d6c7-0b31-5aa3-847d-baa44be86e72"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["FGSM", "I-FGSM", "MI-FGSM", "DIM", "TIM", "SIM"], "ignore_order": true, "ignore_blank": true, "lowercase": true}}} {"uuid": "5e0aaa58-e5f7-54c6-9c1f-faccded1ba31", "question": "In the agent \"MO-DDN\" (Multi-object Demand-driven Navigation), what is the basic success rate for a specific demand instruction DI?", "answer_format": "Your answer should be a formula string in latex format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["a5033bfe-a076-50b9-90e2-c6cdf45a33cd"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "SR_b=\\frac{1}{N}\\sum_{i=1}^N\\max_{s_b\\in So_b}\\frac{\\sum_{o\\in FL}\\mathbb{1}_{o\\in s_b}}{Len(s_b)}", "question": "In the agent \"MO-DDN\" (Multi-object Demand-driven Navigation), what is the basic success rate for a specific demand instruction DI?"}}} {"uuid": "5e2b676b-74ee-5a70-9852-7301615a7de0", "question": "What are the top 3 main CONCEPTNET relations in the commonsense reasoning task dataset used in the paper \"Can LLMs Learn From Mistakes? An Empirical Study on Reasoning Tasks\"?", "answer_format": "Your answer should be a python list. 
YOU MUST USE THE EXACT NAMES OF THE RELATIONS AS THEY APPEAR IN THE PAPER.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["951af20e-0d2e-51b3-bd3f-429a10d69b05"], "reference_pdf": ["a87a7490-623a-54af-bad6-ef68b0757499"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["AtLocation", "Causes", "CapableOf"], "ignore_order": true, "lowercase": true}}} {"uuid": "5f2de2c6-fbcd-561a-b7a4-be129671f5db", "question": "On which labeled dataset did the metric AMR not reduce to Acc? On that dataset, which model performs best on the metric AMR?", "answer_format": "Your answer should be a Python list of three elements, the first element is the name of the labeled dataset, the second and third element is the model family and the variant of the model. e.g. [\"answer1\", \"answer2\", \"answer3\"].", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["VITC-L", "GPT-3.5", "0301"], "lowercase": true}}, "anchor_pdf": ["0da230cb-d487-56fa-9a85-4648f3f1e6c5"], "reference_pdf": []} {"uuid": "6098fb2b-f951-52c7-8cf9-e17aa7124833", "question": "What is the difference between Equation (1) and Equation (2)?", "answer_format": "Your answer should be text describing the difference.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Equation (1) is used in single-vector retrieval models. It calculates the similarity score $s(q, d)$ as the dot product between the encoded vector of the query $v_q = \\eta_Q(q)$ and the encoded vector of the document $v_d = \\eta_D(d)$. In this method, the entire query and the entire document are each represented by a single vector, and their similarity is determined by the cosine of the angle between these two vectors. 
This approach does not consider token-level interactions, as all token embeddings are pooled into a single vector before the similarity score is computed.", "Equation (2) is used in multi-vector retrieval models, specifically in ColBERT. It calculates the similarity score by considering the interaction between each token in the query and each token in the document. The similarity score $s(q, d)$ is defined as $s(q, d) = \\sum_{i=1}^{N} \\max_{j} v_{qi}^T v_{dj}$, where $v_{qi}$ and $v_{dj}$ are the last-layer contextualized token embeddings of BERT for the i-th token in the query and the j-th token in the document, respectively. This operation, known as MaxSim, exhaustively compares each query token to all document tokens, effectively capturing the most relevant token-level interactions."], "question": "What is the difference between Equation (1) and Equation (2)?", "ignore_order": true}}, "anchor_pdf": ["bbe1cd56-d6c0-5ab7-8f3e-a54ff7489d0b"], "reference_pdf": []} {"uuid": "60bf1f10-7280-54b7-b364-b7c322b69d51", "question": "Which paper utilized MMD flows with Riesz kernels to solve Bayesian inverse problems?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9c6e26d0-5be1-56a8-ae44-213e932455fe"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper utilized MMD flows with Riesz kernels to solve Bayesian inverse problems?", "reference_answer": "Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel"}}} {"uuid": "60ef8ee5-97de-59a7-8c22-9fa45df8d152", "question": "Which subtask in NADI 2024 was the CUFE paper related to?", "answer_format": "Your answer should be a python string. 
The string should be \"Subtask 1\", \"Subtask 2\" and so on.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["fcbfe144-254e-5c9f-942c-6f154c8363e0"], "reference_pdf": ["56abb8d5-4bea-5698-b662-a8668eb8abe7"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Subtask 3"}}} {"uuid": "610ee0be-3405-58a0-8b1b-247ee6018640", "question": "The article \"DISTINGUISHED IN UNIFORM: SELF-ATTENTION VS. VIRTUAL NODES\" employed certain datasets from the recent LRGB collection paper. Please specify which benchmarking datasets included in that paper were not utilized in this study.", "answer_format": "Your answer should be a python list of strings, e.g., [\"dataset1\", \"dataset2\"].", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["709faba9-ce93-5d4d-a1d5-ce25e9bad2ba"], "reference_pdf": ["03b8c99e-6f8a-5ceb-908f-8a46e2977091"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["PCQM-Contact"], "lowercase": true, "ignore_order": true}}} {"uuid": "61259bab-6b0f-5e36-8abf-8d3bf62994d1", "question": "In the EQA-MX dataset, which task takes the smallest proportion? In that task, which output appears the most?", "answer_format": "Your answer should be a Python list of 2 strings, where the first element is the abbreviation of the task and the second element is the most frequent output in that task.", "tags": ["comprehensive", "table", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["34ca1b49-9390-57de-bdc8-30381a830132"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["OAC", "Yellow"], "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "612b0b3e-0f94-5dbe-85f8-708b9170b97e", "question": "In the paper that introduces Agent-as-a-Judge and proposes a dataset called DevAI, which evaluation method is the primary comparison target to Agent-as-a-Judge? 
And in this evaluation method's original paper, which LLM performs the best in Consistency?", "answer_format": "Your answer **must** be a single python list containing two strings, the first element of the list is the method's name, the second element of the name of the best model in Consistency.", "tags": ["objective", "multiple", "table"], "anchor_pdf": ["5ae21823-5fce-53bc-b12b-ef7afb0cf39b"], "reference_pdf": ["95c4da59-2aea-5163-9044-3554ca09aa83"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["LLM-as-a-Judge", "GPT-4"], "ignore_order": true, "lowercase": true}}} {"uuid": "6167db60-98c3-5ad6-b051-9d79f76e065c", "question": "In the experiment section of this paper, it is proposed that research shows one evaluation method is better. What desired criteria are these conclusions based on?", "answer_format": "Your answer should be a python list about the criteria, e.g. [\"criterion1\", \"criterion2\"]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "objective", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Metric Monotonicity", "Metric Separability", "Metric Linearity", "Metric Time Efficiency", "Metric Accuracy", "Size Robustness", "Imbalance Robustness"], "ignore_order": true}}, "anchor_pdf": ["b6867c59-3b76-5b78-a1c0-2001c2033f3b"], "reference_pdf": ["739485f1-c217-5e99-86d9-c6d11c570228"]} {"uuid": "61e20be1-7b19-580f-a86e-2132be450bc3", "question": "Which paper examined the scalability of instruction-tuning with respect to Mixture of Expert models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["dc8624ad-1a48-5f99-8588-801753131b1d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": 
{"question": "Which paper examined the scalability of instruction-tuning with respect to Mixture of Expert models?", "reference_answer": "Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models"}}} {"uuid": "623fe210-95bc-536d-8300-f726ac45f7a1", "question": "What is the difference between DICE and TAILO with regard to unlabeled data and what are the two steps of training a discriminator c(s)?", "answer_format": "Your answer should be a Python string.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["fe9b51d0-722a-5244-a8ed-ab37746ac125"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "DICE regards all unlabeled data as negative samples whereas TAILO uses Positive-Unlabeled (PU) learning to train the discriminator. Training a discriminator c(s) consists of two steps: 1) Training another discriminator c'(s) that identifies safe negative samples, and 2) formal training of c(s).", "question": "What is the difference between DICE and TAILO with regard to unlabeled data and what are the two steps of training a discriminator c(s)?"}}} {"uuid": "63155a14-fe2e-5eb3-aacf-3a7e97368faf", "question": "Among the tested models, which model performs best on code problems?", "answer_format": "Your answer should be a python string of the name of the model.", "tags": ["objective", "single", "table"], "anchor_pdf": ["1f00c9dd-39b4-5302-8fd4-49f0c3a3d857"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Llama3-70B"}}} {"uuid": "639e6c07-c357-5660-91f0-feaaad8d7cd9", "question": "Which evaluation method is used in the paper against gold standards, despite having a low correlation with human judgments according to various studies?", "answer_format": "Your answer must be ONE string of the evaluation method's name.", "tags": ["single", "text", "objective"], "anchor_pdf": 
["fc737b6a-e85f-534f-b540-ff3d8586de6b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ROUGE", "lowercase": true}}} {"uuid": "639f4526-9d30-5840-977f-900496bc4b09", "question": "How many datasets are evaluated in the work that the \"BIG-Bench Mistake\" paper follows to generate each step separately?", "answer_format": "Your answer should be an integer.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3cd97002-d41b-51e2-921e-aaeb6c037a00"], "reference_pdf": ["2f2e4311-fc9b-5e36-bb18-7c3fee141713"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 2}}} {"uuid": "63dc113b-0220-5cb4-9bd3-17ba26c310b0", "question": "Which paper introduces a DRO (distributionally robust optimization)-like training objective for doing adversarial training without constructing adversarial samples?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4e6936f2-259c-533b-b4c9-77982f040252"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper introduces a DRO (distributionally robust optimization)-like training objective for doing adversarial training without constructing adversarial samples?", "reference_answer": "DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization"}}} {"uuid": "646bc801-d082-54bf-b3f0-5437c6fad2be", "question": "On which downstream tasks did the authors experiment with their method, and by how much did it improve compared to the best existing methods?", "answer_format": "Your answer should be a Python dictionary, where the keys represent the downstream tasks on which the authors conducted experiments, and each value is the numerical part of a percentage (between 0 and 100, rounded to 1 decimal place), indicating the improvement 
compared to the best existing method, e.g. {\"task1\": 1.9, \"task2\": 3.5, ...}.", "tags": ["single", "text", "objective"], "anchor_pdf": ["d7638fd4-69e4-5959-b0f5-84a4d53b1e3a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"question answering": 10.6, "autofill forms": 9.5, "user services": 9.7}, "ndigits": 1, "lowercase": true, "ignore_order": true}}} {"uuid": "655b8b31-8ecd-5b34-9bc4-e9816b314c27", "question": "Could you recommend research that assesses how well language learning models, such as ChatGPT, perform in creating reading comprehension tasks for educational software?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["eb62a626-40e8-5b13-8ca6-e0b76d0aa687"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Could you recommend research that assesses how well language learning models, such as ChatGPT, perform in creating reading comprehension tasks for educational software?", "reference_answer": "Evaluating Reading Comprehension Exercises Generated by LLMs: A Showcase of ChatGPT in Education Applications"}}} {"uuid": "65a648a6-9bea-5467-84fd-2ca01dc52084", "question": "Which paper uses the latent diffusion model for the first time to solve offline reinforcement learning problems based on the sequential modeling framework?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["56420140-3112-503f-a304-cb706823f259"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper uses the latent diffusion model for the first time to solve offline reinforcement learning problems 
based on the sequential modeling framework?", "reference_answer": "Efficient Planning with Latent Diffusion"}}} {"uuid": "65c25042-0b5e-5677-8c23-2374a72947c0", "question": "In the existing deblurring dataset compared in the GS-Blur paper that contains both real and synthetic data, what is the estimated average noise level?", "answer_format": "Your answer should be a float, rounded to 4 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1e8b1748-cb55-52a7-b27a-e38ccbee18a7"], "reference_pdf": ["dd0147e0-290b-59ea-8742-0be38bb795c7"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.7736, "ndigits": 4}}} {"uuid": "65c9fe88-46f0-579d-ba7b-ca58ee7c55f2", "question": "Which paper introduces the R-GCN technique into document-level joint entity and relation extraction?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9777ccf1-7366-5478-9143-4b2c14b96045"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper introduces the R-GCN technique into document-level joint entity and relation extraction?", "reference_answer": "A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction"}}} {"uuid": "65d3fbf5-5319-5490-9686-537924c3c4ee", "question": "I want to replicate the experiment in this paper. Please list all the datasets and baselines that I should prepare.", "answer_format": "Your answer should be plain text", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "1. For datasets, this paper selects five challenging logical reasoning benchmarks: (1) LogiQA (2) ProofWriter (3) FOLIO (4) PrOntoQA (5) LogicalDeduction (LD). 2. 
For baselines, (1) Standard prompting (2) Chain-of-Thought (CoT) (3) Chain-of-Thought with Self-Consistency (CoT-SC) (4) Selection-Inference (SI) (5) LAMBADA (6) Tree-of-Thought (ToT) (7) Cumulative Reasoning (CR).", "question": "I want to replicate the experiment in this paper. Please list all the datasets and baselines that I should prepare."}}, "anchor_pdf": ["4f88e7de-b217-5d6b-a315-a872e927bdfe"], "reference_pdf": []} {"uuid": "66c0154e-799f-5095-9de0-36d41967dfe9", "question": "What are the advantages of the transformer-based architecture proposed in the paper?", "answer_format": "Your answer should be a python string about the obvious advantages.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["7cf3e986-7a71-5481-b217-d72c32335d09"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The Rough Transformer is independent of the sequence length of input data.", "The Rough Transformer is robust to irregular sampling.", "The Rough Transformer decreases the memory and computational bottleneck inherent to the vanilla Transformer."], "question": "What are the advantages of the transformer-based architecture proposed in the paper?"}}} {"uuid": "66c5fd15-e82b-5a02-bce6-bb0aab05184f", "question": "Among the datasets of the benchmark that collects CHIP-CDN, what are the evaluation metrics they applied?", "answer_format": "Your answer should be a Python list of strings, containing the names of the evaluation metrics.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["3eb8365d-eb44-5413-adeb-7380c9824e3d"], "reference_pdf": ["09811e9e-5c35-5695-a106-df02aaff357c"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Micro F1", "Macro F1", "Accuracy"], "ignore_order": true, "ignore_blank": true, "lowercase": true}}} {"uuid": "66e99a9b-0660-5574-b8b6-1a05b76c7396", "question": "What do the two loss functions in Equation (8) 
mean?", "answer_format": "Your answer should be a list with two items, representing the meaning of the first and the second loss function respectively.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["$\\mathcal{L}_{y}(\\theta)$: This term refers to the loss associated with the text prediction task. It is part of the multi-task learning strategy, where the model is trained to predict the textual output, such as the transcription or translation of the input speech.", "$\\mathcal{L}_{z}(\\theta)$: This term corresponds to the loss for the acoustic unit prediction task. Acoustic units are discrete representations of the speech signal, and this loss helps the model learn to generate these units, which are then used to synthesize the translated speech."], "question": "What do the two loss functions in Equation (8) mean?", "ignore_order": true}}, "anchor_pdf": ["85e8f01d-9273-5e20-a37a-fb9e82cf2984"], "reference_pdf": []} {"uuid": "6746c386-b889-59ad-abed-144cd56101d3", "question": "What's the baseline used in the experiment?", "answer_format": "Your answer should be plain text DIRECTLY FROM THE PDF.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "We employ the released PLATO-v1 model, a pre-trained dialogue generation model based on UniLM, for our experiment.", "question": "What's the baseline used in the experiment?"}}, "anchor_pdf": ["613d5129-98bd-5f6b-95d3-22b0a9966455"], "reference_pdf": []} {"uuid": "67f33e3f-646d-5bac-8a18-5080e6a2563e", "question": "How to calculate the final loss function (Loss) in this paper?", "answer_format": "Your answer should be a python string, which is a formula in latex format to calculate a parameter.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["2a5f542a-262f-5b41-80c5-429b7f88e312"], "reference_pdf": 
[], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["\\textit{Loss} = -\\frac{1}{N} \\sum_{i=1}^{N} \\left( w_p y_i \\log(\\hat{y}_i) + w_n (1 - y_i) \\log(1 - \\hat{y}_i) \\right) -\\frac{1}{N} \\sum_{i=1}^{N} \\log \\left( \\frac{\\exp(s(y_i)/\\tau)}{\\sum_j \\exp(s(y_j)/\\tau)} \\right)"], "question": "How to calculate the final loss function (Loss) in this paper?", "ignore_order": true}}} {"uuid": "681c2bb8-b4cf-5f2a-bd56-ae26e0bb51b6", "question": "I would like to reproduce the experiments of KnowGPT, could you please provide me with the websites of the datasets applied in the experiment?", "answer_format": "Your answer should be a Python list of 3 strings, the websites. Note that you should provide the original URL as given in the papers that proposed the datasets.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["1ea6baba-2167-5d05-9a40-d142ad39358f"], "reference_pdf": ["1d779e37-9a20-5a90-80c9-a7aaf2b6cfe5", "a87a7490-623a-54af-bad6-ef68b0757499", "6ee7bf32-7948-5ba3-a9b2-571dafb53a37"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["data.allenai.org/OpenBookQA", "www.tau-nlp.org/commonsenseqa", "github.com/jind11/MedQA"], "threshold": 90, "fuzz_method": "partial_ratio", "ignore_order": true}}} {"uuid": "68855b4d-dd5d-5b33-8ddd-61b13b1b6c51", "question": "On cmudog, which one among the linguistic operators appears the most frequently? What's its distribution?", "answer_format": "Your answer should be a Python list of 2 elements. The first element is the linguistic operator's name, and the second element is its distribution in percent in string format. e.g. 
[\"answer\", \"5%\"].", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["VERB_MODIFY", "36%"]}}, "anchor_pdf": ["3107f6a8-1939-5af0-b3d8-06d7aa66158d"], "reference_pdf": []} {"uuid": "68baa0b9-8e5e-5436-94d5-6dd0b3bbfff0", "question": "On which datasets this study surpassed the SOTA?", "answer_format": "Your answer should be a Python list of dataset, e.g., [\"dataset1\", \"dataset2\", ...]. YOU MUST USE THE EXACT TEXT FROM THE PAPER.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DailyDialog", "CMU_DoG", "LIGHT", "EmpathicDialogue", "Wizard of Wikipedia", "CommonsenseDialog"], "ignore_order": true, "lowercase": true}}, "anchor_pdf": ["3107f6a8-1939-5af0-b3d8-06d7aa66158d"], "reference_pdf": []} {"uuid": "699bf716-c3d2-526a-b3fc-2c1c10f5aa09", "question": "What does each record in the dataset used by the event-based knowledge editing benchmark in the paper \"EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation\" include?", "answer_format": "Your answer should be a python list of strings. YOU MUST USE THE EXACT NAMES FROM THE PAPER, RATHER THAN MATHEMATICAL SYMBOLS OR ABBREVIATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["7476352a-0300-5ef7-9b32-346296a6b7be"], "reference_pdf": ["20d98185-e3a3-55c5-9e93-cde74c61d5f4"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["requested rewrite", "paraphase prompts", "neighborhood prompts", "generation prompts", "reference texts"], "ignore_order": true, "lowercase": true, "ignore_blank": true, "threshold": 90}}} {"uuid": "69fb7412-8288-562a-8de7-d7727f689fcf", "question": "What manipulation operations does the manipulation network in this paper allow? 
For the add network, how is the training loss of relation defined?", "answer_format": "Your answer should be a single python list, the first element of the list is a list of the strings of manipulation operations, the second element of the list is a string of the formula in latex format, e.g. [[\"operation1\", \"operation2\"], \"l_{\\text {relation }}=...\"]", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": ["add", "remove", "change"], "lowercase": true, "ignore_order": true}, {"formulas": "\\ell_{\\text {relation}}=-\\log \\left(p\\left(h_{\\mathrm{r}}\\left(\\widetilde{o}_{\\text {new }}, o_{i}\\right)=r\\right)\\right)", "question": "For the add network, how is the training loss of relation defined?"}]}}, "anchor_pdf": ["0cc6d173-ff62-5ae6-80c5-56bd73ee82a2"], "reference_pdf": []} {"uuid": "6a72002b-7dcf-55df-8a3c-3cc49ee326a3", "question": "What dataset does this paper (titled \"Identifying Conspiracy Theories News based on Event Relation Graph\") use for training? 
How many event coreference chains does this dataset contain?", "answer_format": "Your answer should be a python list of two elements: the first element is the dataset name (abbreviation), and the second element is an integer.", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MAVEN-ERE", 103193], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["0c4fa692-c546-5576-8020-4a2a41c3c1f7"], "reference_pdf": ["dee7640b-bff4-5af0-a13f-7270194ac651"]} {"uuid": "6aefdbec-8411-50e8-a9f3-b26afe188083", "question": "In the paper that proposes the only comparable interactive theorem prover applied as a baseline by AIPS, where are the evaluation samples chosen from?", "answer_format": "Your answer should be raw text from the paper.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["0e91df50-6832-5615-baf8-af56e93ea272"], "reference_pdf": ["f1c80ac8-4588-586b-bf00-1151edd91acd", "67f92ccd-ca66-5cd5-b6ac-3852a53255e2"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "We perform experiments on theorems from \"Mathematics in Lean\": a book for beginners to formalize and prove mathematical theorems in Lean. It has 233 theorem proving exercises, covering topics from sets and functions to topology, calculus, and measure theory. 
For evaluation, we randomly selected 50 theorems, and their proofs have 5.52 tactics on average.", "question": "In the paper that proposes LeanCopilot, where are the evaluation samples chosen from?"}}} {"uuid": "6af99fe6-5e33-5632-8553-aa9d1daaad86", "question": "Find the NLP paper that focuses on dialogue generation and introduces advancements in the augmentation of one-to-many or one-to-one dialogue data by conducting augmentation within the semantic space.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0064b013-cd7e-544b-bdb6-99e7ed7ab2bc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Find the NLP paper that focuses on dialogue generation and introduces advancements in the augmentation of one-to-many or one-to-one dialogue data by conducting augmentation within the semantic space.", "reference_answer": "DialoGPS: Dialogue Path Sampling in Continuous Semantic Space for Data Augmentation in Multi-Turn Conversations"}}} {"uuid": "6b2a09ee-5f74-57c7-a863-ebd390f89150", "question": "In figure 3, there are three types of losses. What is the function for the first loss?", "answer_format": "Your answer should be a formula in latex format extracted from the paper.", "tags": ["image", "formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathcal{L}_{\\text {topk }}=\\sum_{i} \\sum_{j \\in N N_{k}(i)} \\sum_{m} \\mid \\operatorname{Sim}\\left(c e_{i}, c e_{j}\\right)- \\operatorname{Sim}\\left(f\\left(c e_{i}\\right)[: m], f\\left(c e_{j}\\right)[: m]\\right) \\mid", "question": "In figure 3, there are three types of losses. 
What is the function for the first loss?"}}, "anchor_pdf": ["bc299fb6-0440-5a26-a0dc-f63956b28e52"], "reference_pdf": []} {"uuid": "6b4e8ab7-8482-55dc-ba84-ebc606c19f27", "question": "Please give me the github link of this work.", "answer_format": "Your answer should be a single string of the github link.", "tags": ["text", "single", "objective"], "anchor_pdf": ["0a95099e-2121-5136-b03f-d9232413cc31"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/ZexuSun/OILCA-NeurIPS23", "lowercase": true}}} {"uuid": "6b660c4a-c2a0-538f-b42a-bfe6337add99", "question": "Could you recommend a dataset paper which presents relation extraction performance on translated data and compares it to English data?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["27a6d0e6-3ac3-5371-ba07-9fd8ee40ad07"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Could you recommend a dataset paper which presents relation extraction performance on translated data and compares it to English data?", "reference_answer": "MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset"}}} {"uuid": "6bb32702-f9f0-53a5-a534-be38bfc75b3f", "question": "In Figure 3, what can we infer from comparing the performance with training data generated by self-training (ST) versus without it?", "answer_format": "Your answer should be a Python string stating the conclusion from comparing the performance with training data generated by self-training (ST) versus without it.", "tags": ["image", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Figure 3 shows that self-training is beneficial for each of the languages.
The improvement is particularly strong when the teacher model was based on a very small amount of data.", "question": "In Figure 3, what can we infer from comparing the performance with training data generated by self-training (ST) versus without it?"}}, "anchor_pdf": ["c1f72e20-0020-59c5-85c5-b1ab703b22b7"], "reference_pdf": []} {"uuid": "6bdd99b0-3976-5029-9b64-d82b7bfb4276", "question": "What does formula (1) mean in the Methodology section?", "answer_format": "Your answer should be a python string giving a detailed explanation of the formula.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The idea of this formula is decomposing token representations into their constituent vectors based on vector-based approaches. Decompose the i-th token representation in layer l into elemental vectors attributable to each of the N input tokens. So we can compute the norm of the attribution vector of the k-th input to quantify its total attribution to xi.", "question": "What does formula (1) mean in the Methodology section?"}}, "anchor_pdf": ["cd91a901-384e-597b-bb20-2d4f9fa0c4f9"], "reference_pdf": []} {"uuid": "6beb7fc3-96ff-587f-8362-bcd0f709a2e9", "question": "How does the system efficiently adapt to completely unfamiliar opponent policies during deployment, while still maintaining performance with known policies?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["8a1e3915-e42d-581e-aa46-9b520f4b03ec"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "How does the system efficiently adapt to completely unfamiliar opponent policies during deployment, while still maintaining performance with known policies?", "reference_answer": "The system adapts to completely unfamiliar opponent policies by collecting and accumulating
opponent trajectory data in an Opponent-Collecting Window (OCW), which is then sampled and stitched together using the GetOnD function for in-context learning. For known policies, the system quickly re-engages suitable responses by leveraging previously accumulated trajectories, ensuring both fast adaptation to familiar policies and effective extrapolation for unfamiliar ones."}}} {"uuid": "6c290495-01d3-5b69-adda-7d26b92f0da1", "question": "Which institution funded the model with worse debiasing ability in Post-hoc in \"A Parameter-Efficient Multi-Objective Approach to Mitigate Stereotypical Bias in Language Models\"?", "answer_format": "Your answer should be a python string of the institution name. You should use the full name.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["c5ad3efd-da37-56a6-87b5-a066c4e0a74b"], "reference_pdf": ["50883e71-9f08-5539-b5f4-0acf05e9b597", "dae50238-e4d1-5864-8b70-8666dc3b606d"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "European Research Council", "lowercase": true}}} {"uuid": "6dc50c47-0782-5277-b10a-e5e427a10223", "question": "What is the first paper that theoretically studies training neural networks under small initialization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4618947b-9b52-54f6-8bd5-36128f399479"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What is the first paper that theoretically studies training neural networks under small initialization?", "reference_answer": "Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization"}}} {"uuid": "6de72b3a-ac37-5d2d-b870-a61dac353bdb", "question": "Is there any paper that attempts to evaluate the similarity of meaning representations without using annotated data?", "answer_format":
"Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3905ca8e-60a9-555a-8097-352dddf0e8da"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that attempts to evaluate the similarity of meaning representations without using annotated data?", "reference_answer": "Evaluate AMR Graph Similarity via Self-supervised Learning"}}} {"uuid": "6df645eb-d78e-56b7-be7a-8712b3ed7a75", "question": "In which four directions can the author's model be trained?", "answer_format": "Your answer should be python list, each element of the list is a string like 'A-to-A', 'A-to-B'.", "tags": ["objective", "single", "text"], "anchor_pdf": ["1dc45e9f-f844-5e78-b762-7784d6e52eb4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["word-to-word", "word-to-definition", "definition-to-definition", "definition-to-word"], "ignore_order": true, "lowercase": true}}} {"uuid": "6ec30967-a7aa-5ecb-8819-15e738ad4b50", "question": "In the two papers that updated ToMi according to the SimTom paper, what are the other datasets used to evaluate the models, besides ToMi?", "answer_format": "Your answer should be a Python list of strings, containing the name of the datasets.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["2f861e4d-1888-5355-a9c2-26411f14efe4"], "reference_pdf": ["56bb5074-0a00-578b-ad44-e24096458b1e", "1ff930b7-3ecb-50ee-bfb4-777d7d8636ad"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ToM", "SocialIQA"], "lowercase": true, "ignore_order": true}}} {"uuid": "6ee75006-72d3-5d81-b85d-ec25b99ed502", "question": "Which vision-language model can demonstrate that visual grounding could facilitate efficient language acquisition? 
(OctoBERT)", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["dd964433-4163-513e-ba04-cbffc2a74ae4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which vision-language model can demonstrate that visual grounding could facilitate efficient language acquisition? (OctoBERT)", "reference_answer": "World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models"}}} {"uuid": "6f0ece87-9055-5ad9-9b89-f88c7a19d08f", "question": "For the strongest baseline mentioned in \"TIES-Merging: Resolving Interference When Merging Models\", which benchmark and what tasks were used for NLP in the paper which proposed it?", "answer_format": "Your answer should be a python dictionary with the keys \"benchmark\" and \"tasks\". The value for \"benchmark\" should be a string and the value for \"tasks\" should be a list of strings.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["153d1505-a286-5ceb-9858-c272e31a7d7e"], "reference_pdf": ["7efe0293-9ecd-5386-b1c5-a851c7a0fdf1"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"benchmark": "GLUE", "tasks": ["CoLA", "SST-2", "MRPC", "RTE"]}, "ignore_order": true, "lowercase": true}}} {"uuid": "6f2ff186-5ec6-5234-8936-b3ee47c23059", "question": "According to the paper that proposes ExpressivityArena, what's the notable example that uses human feedback to manually evaluate the model? 
In that arena, which model has a 0.72 win-rate against llama-2-7b-chat at the time when that paper was written?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the example, and the name of the model as given in the paper.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["3c2d2b4e-b5d2-569f-ae9f-96d8ff6247de"], "reference_pdf": ["ebbedf80-733c-5a43-ba59-855bcfacda12"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Chatbot Arena", "gpt-3.5-turbo-0613"], "ignore_order": false, "lowercase": true, "ignore_blank": true}}} {"uuid": "6f3b4d9d-f033-5e1b-bd31-eec4aef51ad2", "question": "In the paper \"Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning\", which method performs the best on ID states with full-chain setting? In the paper that proposed that method, where does the -inf in Fig. 3 come from?", "answer_format": "Your answer should be a Python list of 2 strings, the first is the abbreviation of the method, the second is the reason why -inf appears in Fig. 3.", "tags": ["multiple", "table", "image", "subjective"], "anchor_pdf": ["1e3d6210-2df4-5eba-92d6-beab951bf795"], "reference_pdf": ["5b01591b-d5e4-55ee-a33d-e11a6d0d1901"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "CDL", "lowercase": true, "ignore_blank": true}, {"reference_answer": "Certain features are masked to $-\\infty$ according to a binary map $M_j$.", "question": "Where does the -inf in Fig. 3 come from?"}]}}} {"uuid": "6f68d5a6-8a34-55e9-8212-64c01d072d68", "question": "What are the datasets used in the experiments and what are their respective durations? Where can I get these datasets?", "answer_format": "Your answer should be a Python list of 2 elements. 
The first element is a Python dictionary containing dataset names and respective durations (in hours, rounded to 1 decimal place), and the second element is a Python string containing the answer to the last question, e.g. [{\"dataset1\": 2.1, \"dataset2\": 10.8, ...}, \"answer\"]", "tags": ["single", "text", "subjective"], "anchor_pdf": ["7f3d16e8-d399-5ffd-bae5-7aad916c36eb"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": {"M30S3": 18.5, "M3E6": 21.1, "M30U": 18.2}, "ndigits": 1}, {"reference_answer": "The article indicates that these are internal datasets, thus they may not be currently accessible.", "question": "Where can I get these datasets?"}]}}} {"uuid": "7055fe3b-222a-5001-8d37-827c97dba1e4", "question": "How much higher is the best EX score in the paper \"Synthetic SQL Column Descriptions and Their Impact on Text-to-SQL Performance\" than that in the original dataset applied? 
Assume that both experiments are conducted under the no-knowledge setting.", "answer_format": "Your answer should be a float between 0 and 1, rounded to 4 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1cda8193-bb3e-5c71-9c34-a48959805b38"], "reference_pdf": ["22086214-ba3b-50f5-9a22-a247a50375fe"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.0588, "ndigits": 4}}} {"uuid": "7126cbfb-136e-5a9a-950f-8fc57feda734", "question": "How does the latest Wellness Descriptions dataset used in the paper address the ambiguity issue in the task of annotating text for wellness dimensions?", "answer_format": "Your answer should be a python string concisely summarizing the method.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["5b6dd112-3ac4-58e4-8855-39c6b03a3553"], "reference_pdf": ["fd26f57c-13f5-5248-a4d3-0c5a42fe04ac"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "They address the ambiguity issue by providing a set of perplexity guidelines. The major perplexity guidelines are as follows. 1. Presence of Multiple Aspects: Social media posts often express feelings due to various reasons. To address this, specific text spans contributing to the more focused health consequences can be identified and associated with corresponding wellness dimensions. 2. Annotation Ambiguity: Although there may be multiple aspects in the post, we must consider the holistic aspect as per the experts' opinion. 3. Reading between the lines: The text may contain implicit or subtle hints that suggest a particular wellness dimension.
But clear and meaningful words that suggest one of the four wellness dimensions should be annotated accordingly.", "question": "How does the latest Wellness Descriptions dataset used in the paper address the ambiguity issue in the task of annotating text for wellness dimensions?"}}} {"uuid": "71456d85-6af1-5d17-ae3f-324516ab0853", "question": "In the experiments presented in the paper \"Fast and Efficient Speech Enhancement with Variational Autoencoders\" evaluating the performance for speech enhancement, which method demonstrates the best performance apart from the proposed framework itself?", "answer_format": "Your answer should be a python string. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["47593a10-36b8-5956-be43-fafd747b570c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MCEM", "lowercase": true}}} {"uuid": "7156d9cc-5b02-50d7-bb20-bdcc414b76e4", "question": "Among the diverse interactive domains used to test SOFT-SC, which one is the first parallel interactive text-based and embodied environment?", "answer_format": "Your answer should be a python string, and it should be a diverse interactive domain name.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["feee5f2e-295c-5232-9c77-cb3a13763f65"], "reference_pdf": ["db340dc7-ff3c-591a-81a6-88243bf559df", "75494679-b547-5df0-a83a-75410da0f379", "25ffe76f-9274-56ea-b07f-4086da57bb65"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ALFWorld", "lowercase": true}}} {"uuid": "71570654-c808-539b-9149-5ede4b64d39b", "question": "What linguistic property does this paper investigate?", "answer_format": "Your answer should be an English string.", "tags": ["single", "metadata", "subjective"], "anchor_pdf": ["6f1d59f6-6813-5682-8f24-16423524bc17"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func":
"eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Phonetic feature", "question": "What linguistic property does this paper investigate?"}}} {"uuid": "7175414d-1ddc-5d5a-b4a6-8a25ba6f2078", "question": "Is there a decoder-only language model that does not use a tokenizer and operates on raw text bytes?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["aeca8ff3-8197-5f81-9a7c-c66597e786bf"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a decoder-only language model that does not use a tokenizer and operates on raw text bytes?", "reference_answer": "ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models"}}} {"uuid": "71fd543c-b0f5-5631-b97d-0c9f7a996a86", "question": "What paper first used the technique of prompt engineering to generate adversarial prompts that can fool LLMs into making wrong predictions in prompt-based learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4cc48a5b-be5f-57af-8697-1cc75c0f67d0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first used the technique of prompt engineering to generate adversarial prompts that can fool LLMs into making wrong predictions in prompt-based learning?", "reference_answer": "AN LLM CAN FOOL ITSELF: A PROMPT-BASED ADVERSARIAL ATTACK"}}} {"uuid": "7231e809-3ffe-5fb5-84b6-633ba6c788f5", "question": "Are there datasets and benchmarks available for measuring LLM graph reasoning abilities?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], 
"anchor_pdf": [], "reference_pdf": ["b7c43a2c-11c4-5c63-8986-49097ff6e18d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there datasets and benchmarks available for measuring LLM graph reasoning abilities?", "reference_answer": "TALK LIKE A GRAPH: ENCODING GRAPHS FOR LARGE LANGUAGE MODELS"}}} {"uuid": "7236429c-2845-556e-98c9-886d6a05c384", "question": "How much higher ASRs do user cases with high content freedom yield, compared to those with low content freedom?", "answer_format": "Your answer should be a floating point numbers with one decimal places.", "tags": ["single", "image", "objective"], "anchor_pdf": ["ab9342dc-a363-5a2a-b678-043637e4dd05"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 18.0, "ndigits": 1}}} {"uuid": "72c5b793-458d-5af4-86eb-542f839c023a", "question": "What research has been conducted on incorporating visual data into the text summarization process?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["bacdd410-20f6-5f9b-a8c6-a8eb7f1310fb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What research has been conducted on incorporating visual data into the text summarization process?", "reference_answer": "Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization"}}} {"uuid": "7350bb9b-a510-5614-a994-1d99a6368e57", "question": "In the paper that DRAGIN follows the most in term of template prompt, which models are utilized as the CoT generator of the proposed retriever?", "answer_format": "Your answer should be a string, giving the detailed names of the models, as proposed in the reference paper.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": 
["4c90f20c-5542-5573-96d2-62745066c3ca"], "reference_pdf": ["6d0ea9bc-a7ee-5598-896f-02c44aa42194", "89b83f29-085e-5da6-a5b4-7cba324a4052", "f398b6e5-0ff2-59e4-9f26-eba5cea5b48c"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "We experiment with OpenAI GPT3 (code-davinci-002) and Flan-T5 of different sizes as its CoT generator.", "question": "In the paper that DRAGIN follows the most in term of template prompt, which models are utilized as the CoT generator of the proposed retriever?"}}} {"uuid": "7369f690-c9b9-52d7-8698-3b38d8c2baf1", "question": "For dataset MultiDialog, what's the number of dialogues, the number of turns, total length in hours, number of speakers, and the name of source dataset of dialogue scripts? Please adopt the statistics the most accurate you can find in the paper.", "answer_format": "Your answer should be a python list,of which the elements are in the following order: number of dialogues(int), number of turns(int), total length in hours(float, rounded to 2 decimal places), number of speakers(int), and the name of source dataset of dialogue scripts(str), every element of the list is an int or float or string representing the relevant dataset information.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [8733, 187859, 339.71, 12, "TopicalChat"], "ndigits": 2, "ignore_order": false, "threshold": 95}}, "anchor_pdf": ["4beb073a-74b4-563a-82a7-da7513acaff0"], "reference_pdf": []} {"uuid": "73b027fd-971d-5dd0-893e-8e6cc5b0d885", "question": "According to the DeepKKT paper, which method performs the best on CIFAR10 with 1 generated image per class under 50-shot setting? 
Additionally, in the paper that proposes that method, what's the latest dataset evaluated?", "answer_format": "Your answer should be a Python list of 2 elements, the abbreviation of the method and the name of the dataset.", "tags": ["multiple", "objective", "table", "metadata"], "anchor_pdf": ["1bd3021a-e648-5ed5-a5a4-0c835dc0f3cd"], "reference_pdf": ["22acb72e-f84b-5f76-82bf-3b0182e5a8da"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DSA", "FashionMNIST"], "fuzz_method": "partial_ratio", "threshold": 100, "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "74124f30-2365-53f1-9b81-fccaa9a4d5e0", "question": "According to Figure 1, which net has the best generalization ability?", "answer_format": "Your answer should be a single string of the net's name.", "tags": ["single", "objective", "image"], "anchor_pdf": ["0a682805-0d4b-5c26-99fa-f0274595c395"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "SchNet", "lowercase": true}}} {"uuid": "748c93fa-539a-5f0f-887d-746da0323e23", "question": "Which paper first proposed to only update some original weights of self-attention layers in parameter-efficient fine-tuning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9508c03b-c467-595d-953f-ad99717c1226"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposed to only update some original weights of self-attention layers in parameter-efficient fine-tuning?", "reference_answer": "HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation"}}} {"uuid": "74d535a9-5796-57d9-81c8-0c68e3c7188d", "question": "How many programming languages in The Stack are selected in the code
dataset used for hypernetwork training in the paper?", "answer_format": "Your answer should be a single integer.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["e3687846-983e-5174-b918-cf7abd297030"], "reference_pdf": ["6d6f8a4b-0f39-5f6f-9513-678e6f490f84"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 86}}} {"uuid": "75b9dc7d-abbe-5627-ac3d-649055da6df9", "question": "Which paper proposes to use rewriting based approaches to defending against adversarial attacks in text classification?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["331caa1d-e2e3-535d-bc06-1ecd8ee99033"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes to use rewriting based approaches to defending against adversarial attacks in text classification?", "reference_answer": "Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text"}}} {"uuid": "75c1fd66-8271-5ae8-b45f-c188ae9ccf84", "question": "Which evaluation metric demonstrates the greatest improvement in the finetuned model proposed in this paper compared to GPT baseline?", "answer_format": "Your answer should be a Python string, which is the name of the evaluation metric DIRECTLY FROM THE PDF.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "sBLEU", "lowercase": true}}, "anchor_pdf": ["481d851e-214b-5d6b-af6c-880a1be8f3b9"], "reference_pdf": []} {"uuid": "75cd4886-f858-506d-ad37-85cc7c605b3f", "question": "Which paper first introduced document content as an intermediate generation target and utilized textual document identifiers in generative retrieval?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", 
"tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8b5ca0b7-35ce-5551-ba24-defc63eb1040"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first introduced document content as an intermediate generation target and utilized textual document identifiers in generative retrieval?", "reference_answer": "TOME: A Two-stage Approach for Model-based Retrieval"}}} {"uuid": "765fc890-3100-5b7f-9068-9460147a99cd", "question": "Which article first proposed shuffled-group-whitening to solve the problem of sentence representation learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["74103188-8a13-5141-875f-5ffeb2f42a56"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which article first proposed shuffled-group-whitening to solve the problem of sentence representation learning?", "reference_answer": "WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings"}}} {"uuid": "7696934c-fc83-504d-83d9-3716e13dfd89", "question": "How much does the average performance of the model improve on WMT'19 test sets by replacing one of example-specific prompts with a task-level prompt?", "answer_format": "Your answer should be a single float number ranging from 0 to 100 and rounded to 2 decimal places, representing the subtraction result.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.34, "ndigits": 2}}, "anchor_pdf": ["5a2b95c1-12d6-5b77-82a1-ee24180d27ae"], "reference_pdf": []} {"uuid": "76aee9c9-711d-5c33-9edd-68f80d3dc1ca", "question": "Are there any papers that build dense retrievers with mixture-of-experts architecture where each expert is responsible for 
different types of queries?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7d1cdb4d-7220-564e-8b3c-d310c9773b22"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any papers that build dense retrievers with mixture-of-experts architecture where each expert is responsible for different types of queries?", "reference_answer": "Chain-of-Skills: A Configurable Model for Open-Domain Question Answering"}}} {"uuid": "76dc78aa-daa0-5e3a-8377-96072b98e408", "question": "Which PLM method achieves the best bias score in the experiment?", "answer_format": "Your answer should be a single string representing the PLM method.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["PICARD(T5)", "T5(PICARD)", "PICARD", "T5"], "ignore_blank": true}}, "anchor_pdf": ["15baba11-9239-54a7-a2fc-accae9d907df"], "reference_pdf": []} {"uuid": "77318114-59c2-51e6-9719-990770d4e50c", "question": "According to the paper \"Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback\", both the papers \"Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation\" and \"Designing Toxic Content Classification for a Diversity of Perspectives\" adopted standard analysis methods.
Then which variable's impact on experimental data is considered in all three papers?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["0d3f0011-493e-5e57-b1a9-7c8be3156a62"], "reference_pdf": ["357ecfc8-7a31-50d8-93ca-7aaf3e2ec1b1", "5fd4e7c2-8eaf-5345-9bee-1d7af471ee7b"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "whether people are LGBTQ or not", "question": "According to the paper \"Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback\", both the papers \"Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation\" and \"Designing Toxic Content Classification for a Diversity of Perspectives\" adopted standard analysis methods. Then which variable's impact on experimental data is considered in all three papers?"}}} {"uuid": "775ac142-b55e-5cbb-9dc2-ebfb7aa64260", "question": "What shortcoming does REV overcome? and how?", "answer_format": "Your answer should be a paragraph describing the shortcoming that REV overcomes and how it overcomes it, based on the content of the paper.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The REV metric overcomes the shortcoming of existing free-text rationale evaluation methods that focus primarily on how well a rationale helps predict a label. Traditional metrics often fail to assess the new information a rationale provides beyond what is already present in the input or label.", "REV addresses this shortcoming by quantifying the additional, label-relevant information in a rationale, using an information-theoretic approach.
It evaluates rationales along two dimensions: (1) Support for the Label: Whether the rationale helps predict the intended label; (2) New Information Contribution: How much unique information the rationale adds, beyond the input and label, to justify the prediction"], "question": "What shortcoming does REV overcome? and how?", "ignore_order": true}}, "anchor_pdf": ["d834bb23-9c22-5e94-9421-0be576081dae"], "reference_pdf": []} {"uuid": "780f0147-be99-5d8d-ab82-daec0d471510", "question": "What's the Type of the Pattern \"Character Role Play\" in jailbreak prompts? How can we make role-playing models more responsible in the RoleLLM paper?", "answer_format": "Your answer should be a python list of two elements. The first is a python string of a type name, and the second one is a list of several measures.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["b65bd6b5-e2bb-5f16-a805-055784527a16", "5844c6f9-3de6-551b-bc02-ba6bc65c02ef"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_scoring_points_with_llm"], "eval_kwargs_list": [{"gold": "Pretending", "lowercase": true}, {"scoring_points": ["Implement Advanced Moderation Tools.", "Bias Detection and Mitigation.", "Transparency in Development and Use.", "Feedback Mechanism."], "question": "How can we make role-playing models more responsible in the Jailbreaking paper?", "ignore_order": true}]}}} {"uuid": "7870d38f-1d3b-57d0-b0a0-bdcf9c1cd381", "question": "What are the advantages and disadvantages of MUX-PLMs mentioned in the paper?", "answer_format": "Your answer should be a string list of advantages and disadvantages.
", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Significant Throughput Improvement: MUX-PLMs leverage data multiplexing to process multiple inputs in a single forward pass, resulting in a dramatic increase in throughput.", "Comparable Performance to PLMs: Despite the increased throughput, MUX-PLMs maintain performance close to traditional PLMs on downstream tasks.", "Model Generalizability: MUX-PLMs can be fine-tuned like traditional PLMs for various downstream tasks, such as text classification and named entity recognition.", "Performance-Throughput Trade-off: As the number of multiplexed inputs (N) increases, the throughput improves further, but the performance may slightly degrade.", "Model Size Limitations: While the paper demonstrates the effectiveness of MUX-PLMs across different model sizes, the model size still impacts performance and throughput. Larger models may offer better performance but with a potentially smaller throughput improvement compared to smaller models.", "Data Sampling Strategy: The paper employs a random data sampling strategy, but a more sophisticated approach could potentially enhance performance, for example, clustering similar instances based on similarity metrics and multiplexing them."], "question": "What are the advantages and disadvantages of MUX-PLMs mentioned in the paper?", "count": 4}}, "anchor_pdf": ["457688f9-deb0-54fb-8531-6ed175a556d0"], "reference_pdf": []} {"uuid": "78feef9e-1c36-5824-9c47-544c65f73c86", "question": "Which domain in the GRBench dataset does not have any hard questions?", "answer_format": "Your answer should be a single string representing the domain name.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "healthcare", "lowercase": true}}, "anchor_pdf": ["fb154467-1ce1-5c1f-9d4f-b4f5c76312ee"], 
"reference_pdf": []} {"uuid": "7942e599-6cc3-59c7-89ec-2be7f578f002", "question": "How many samples are there in total in the dataset used by MIDGARD for Task 2 evaluation?", "answer_format": "Your answer should be an integer, the number of samples.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["5dce7396-1032-5275-ae09-d74568a33935"], "reference_pdf": ["366742fc-a2cb-56af-87b7-cb2834f6cf11", "c5a300ef-dacf-5505-a8b7-a6797a2eb702"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3166}}} {"uuid": "79b25301-76c6-5594-9f59-76e6ea48246c", "question": "In section 3, the author provides an exemplary event description. List the features in the example that correspond to the semantic roles discussed in the following paragraph.", "answer_format": "Your answer should be a Python dictionary where the keys are the semantic roles and the values are the features that correspond to the roles. e.g. {\"semantic_role1\": \"feature1\", \"semantic_role2\": \"feature2\", ...}", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"subject": "soldiers", "predicate": "injured", "quantifier": "two", "object": "civilians"}}}, "anchor_pdf": ["5b81a1a3-fbe6-534d-b0c0-801e8fb2bdd6"], "reference_pdf": []} {"uuid": "79cc66b6-2a03-523d-a878-6d87d876a9c5", "question": "Which national project supports both the BeamAggR and SpikeVoice paper?", "answer_format": "Your answer should be a python string of the project name. 
You should use full name as given in the papers and don't add \"the\" before the project name.", "tags": ["multiple", "objective", "metadata"], "anchor_pdf": ["bc194ba1-6d56-55ef-9570-392491216f84", "bb0b4e96-b900-5f7a-a46d-8f831bca8f1b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "National Science Foundation of China", "lowercase": true}}} {"uuid": "79d00a52-e8f8-5cc0-af9f-385ac4139377", "question": "Which languages are included in the evaluation dataset used in the paper?", "answer_format": "Your answer should be a list of languages, e.g., [\"Language1\", \"Language2\"].", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["a1013128-3a4a-51fe-90fb-dfa09a5c12aa"], "reference_pdf": ["4632325b-454f-5eee-8d4d-ab3aedab5d44"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["English", "German", "Dutch", "Italian"], "ignore_order": true, "lowercase": true}}} {"uuid": "79e15976-b650-5e63-847a-8a6ed4c1de02", "question": "If one would like to train (or evaluate) a helpful assistant agent that can converse with humans while the humans traverse an environment, which work has the most suitable resource?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["67c78c79-7878-5ff5-b5f6-45cec4ad9bf9"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "If one would like to train (or evaluate) a helpful assistant agent that can converse with humans while the humans traverse an environment, which work has the most suitable resource?", "reference_answer": "SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams"}}} {"uuid": "79fa440a-5fef-5f90-a8a2-fec7a7b0c6b8", "question": "What is the adversarial dataset used for 
the PI task in this paper, and where are the source sentences of this dataset from?", "answer_format": "Your answer should be a brief text.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["574cb3eb-89dc-5384-a7b1-0518983790f6"], "reference_pdf": ["5af5e45d-f259-57ae-a99e-be98764c416c"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The adversarial dataset used for the PI task in this paper is PAWS. Source sentences are drawn from both the Quora Question Pairs (QQP) corpus and Wikipedia.", "question": "What is the adversarial dataset used for the PI task in this paper, and where are the source sentences of this dataset from?"}}} {"uuid": "7a1887ea-4b59-53c5-a860-d6dbd87f0d83", "question": "Can you find a dataset that shows LLM-based evaluation may not be reliable enough?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1257ec72-a61f-5579-8f41-cb486b3af9a0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you find a dataset that shows LLM-based evaluation may not be reliable enough?", "reference_answer": "EVALUATING LARGE LANGUAGE MODELS AT EVALUATING INSTRUCTION FOLLOWING"}}} {"uuid": "7a638d3c-59b9-5ab8-9a50-26cd189c15c0", "question": "Among the two baselines introduced in the experiment setting of the paper \"Fine-tuning Language Models for Factuality\", which one performs better on Medical QA? 
That baseline was evaluated on which dataset in the paper that proposed it?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the abbreviation of the baseline, and the second is the name of the dataset.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1ddb0bec-5135-5f92-8e20-051980bfc221"], "reference_pdf": ["fd827431-4ea0-55be-abab-af927a10c95e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ITI", "TruthfulQA"], "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "7a9a3252-14a2-5e7c-a143-fda677deccf5", "question": "What is the lowest accuracy score achieved by RawNet2 on a dataset used in the paper \"Reliability Estimation for Synthetic Speech Detection\" but not in \"SAMO: Speaker Attractor Multi-Center One-Class Learning For Voice Anti-Spoofing\"?", "answer_format": "Your answer should be a Python float, rounded to 2 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["959d9fad-35a4-50a9-9f94-5e36279126eb", "7bc08302-bd44-5dfa-9e04-d6c4d8ce8b06"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.98, "ndigits": 2}}} {"uuid": "7acf83f6-e04d-5b39-9b7e-c5867793f00a", "question": "Which model achieves the highest accuracy in Table 1 under the XNLI dataset in the paper 'InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training', and in the original paper of this model, how many layers does the author design in the base size?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the model, and the number of layers of this model's base size.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["9b44dc24-27be-52e4-b766-cc01787f167f"], "reference_pdf": ["284e8b01-876b-5318-9e0d-c6bc486f3f9d"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match",
"eval_kwargs": {"gold": ["InfoXLM", "12"], "ignore_order": true, "lowercase": true}}} {"uuid": "7b4842aa-2e95-51b9-afd2-1f5e70174b3c", "question": "What is the original form of the metric formula used in the anchor paper for the test split of the BabyLM shared task dataset?", "answer_format": "Your answer should be one formula in LaTeX format without explanation.", "tags": ["multiple", "subjective", "formula"], "anchor_pdf": ["9efa7292-831f-5f7d-b401-cd73e6e18e2b"], "reference_pdf": ["351dc7e4-f4af-5b5e-953a-5ba6ff9839dc"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "What is the original form of the metric formula used in the anchor paper for the test split of the BabyLM shared task dataset?", "formulas": "\\mathrm{PLL}_{\\mathrm{orig}}(S) := \\sum_{t=1}^{n} \\mathrm{log}~P_{\\mathrm{MLM}}(s_t~|~S_{\\setminus t})"}}} {"uuid": "7bd66a0c-2558-572f-8e9c-51c2422a7d1d", "question": "*Could you suggest a dataset with legally or ethically contentious content, and labels for acceptable and non-acceptable questions.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ca116924-bf11-5529-a43f-bf68e9745c5c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "*Could you suggest a dataset with legally or ethically contentious content, and labels for acceptable and non-acceptable questions.", "reference_answer": "SQUARE: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration"}}} {"uuid": "7bf73cbd-fff0-5e06-b902-5a3d89669232", "question": "According to the paper that proposed acceptance rate and relative performance ratio, what's the most direct metrics for model evaluation?", "answer_format": "Your answer should be a string, the formula of the metrics in 
LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["2b1d5971-c1d3-58c9-b5d8-456c9d49c47a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathcal{Q}^{\\mathrm{d}} = \\{ \\mathcal{Q}_1^{\\mathrm{d}}, \\mathcal{Q}_2^{\\mathrm{d}}, \\cdots, \\mathcal{Q}_K^{\\mathrm{d}} \\}, \\quad \\mathcal{Q}_{\\text{ave}}^{\\mathrm{d}} = \\frac{1}{K} \\sum_i \\mathcal{Q}_i^{\\mathrm{d}},", "question": "What's the formula for Distributed Absolute Performance?"}}} {"uuid": "7c5afdfd-0983-59be-b714-636d275bf7ad", "question": "Which paper used both automatically generated and manual templates with word tuples to adapt language models from one timestamp to another?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ed0ee708-038d-5474-94a8-08b9e4bd894c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper used both automatically generated and manual templates with word tuples to adapt language models from one timestamp to another?", "reference_answer": "Learning Dynamic Contextualised Word Embeddings via Template-based Temporal Adaptation"}}} {"uuid": "7ca5b284-3586-51a2-b05f-e6adacb7e072", "question": "Is there any paper that previously proposed to control a risk using prediction sets, based on the literature in conformal prediction?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["b3ea193f-01ba-57c5-99c8-cf051bbd2b30"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that previously proposed to control a risk using prediction 
sets, based on the literature in conformal prediction?", "reference_answer": "Conformal Risk Control"}}} {"uuid": "7d208f5a-edd2-5b68-82bc-2feab767d620", "question": "In formula (3) of the paper, how is I_{container} calculated?", "answer_format": "Your answer should be a python string about the calculation approach and formula of I_{container}.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["0c798051-97ee-52fe-b714-faf5ef1132c3"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In formula (3) of the paper, how is I_{container} calculated?", "reference_answer": "I_{container} is calculated by an overflow sensitivity training method to help the Decoder to be more sensitive when facing edge-range pixel values. The purpose of the design is that when decoding pixel values in the edge range, the feature extraction is only related to the template amplitude and has nothing to do with the positivity. Specifically, the approach is to add conditions to the additive operations. If the pixel value after superposition overflows, the addition operation of the pixel is converted into a subtraction operation.
The formula is as follows: I_{container}= \\begin{cases} I_{host}-I_{additive}, & \\text{overflow} \\\\ I_{host}+I_{additive}, & \\text{otherwise} \\end{cases}"}}} {"uuid": "7d231de8-b8f7-588f-87b0-4fe7b4be0863", "question": "Which knowledge graph completion method focuses on reducing memory usage by pruning features?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a3e3cee1-d140-5dc1-9608-2f1a1d924229"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which knowledge graph completion method focuses on reducing memory usage by pruning features?", "reference_answer": "GreenKGC: A Lightweight Knowledge Graph Completion Method"}}} {"uuid": "7d856467-aba1-5b39-8ebc-d533b61dc86b", "question": "Which paper highlights the need for leveraging all available resources, including dictionaries, machine translation systems, and language learners, to construct NLP data in low-resource languages?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["18ceb0ac-1175-5e0a-bd2f-f9b72e25da3f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper highlights the need for leveraging all available resources, including dictionaries, machine translation systems, and language learners, to construct NLP data in low-resource languages?", "reference_answer": "Rethinking Annotation: Can Language Learners Contribute?"}}} {"uuid": "7dbe525f-0e5e-5e2d-a9d5-06dac6643dff", "question": "What is the main difference between the paper retrieval methods of these two papers?", "answer_format": "Your answer should be a brief text.", "tags": ["multiple", "text", "subjective"], "anchor_pdf":
["da7ef83f-906c-54c4-9079-841e612c84d6", "8ae3051b-db29-53c3-89c6-64202b13cec6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In Chain of Ideas, the paper retrieval method is based on the citation relationship, while in SCIPIP it is based on the literature database it constructed.", "question": "What is the main difference between the paper retrieval methods of these two papers?"}}} {"uuid": "7dbe882d-0adf-5c1b-86f8-71b1a7508bca", "question": "Which paper trains on linear regression to hypothesize how fine-tuning affects language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["063fae3a-7661-594c-ba88-ef87c051c4da"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper trains on linear regression to hypothesize how fine-tuning affects language models?", "reference_answer": "UNDERSTANDING CATASTROPHIC FORGETTING IN LANGUAGE MODELS VIA IMPLICIT INFERENCE"}}} {"uuid": "7dee922c-95de-5d8e-8f03-4c27b84c7919", "question": "Which paper formally defines the problem of model selection in llm agent for multi-modal reasoning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["05a33541-9283-5b18-989e-6030d225902c"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper formally defines the problem of model selection in llm agent for multi-modal reasoning?", "reference_answer": "TOWARDS ROBUST MULTI-MODAL REASONING VIA MODEL SELECTION"}}} {"uuid": "7e784da6-8f04-575f-b944-93cb8f4e65a3", "question": "According to Figure 1, compared with CoT
prompt, what is the advantage of QAP?", "answer_format": "Your answer should be a sentence that clearly mentions the advantage of QAP", "tags": ["single", "image", "subjective"], "anchor_pdf": ["000ab6db-4b65-5dc0-8393-fbc2c05843c8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["QAP explains the question in its own way.", "QAP shows more detailed answer steps.", "QAP more likely leads to a correct answer."], "question": "According to Figure 1, compared with CoT prompt, what is the advantage of QAP?", "count": 2}}} {"uuid": "7e8122b4-a93c-553d-917a-3f049456c2cb", "question": "What is the average length of essays used in this study?", "answer_format": "Your answer should be a float number with 1 decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["636b056a-646d-582b-9b5c-7825a0ccbcad"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 251.2, "ndigits": 1, "tolerance": 0.1}}} {"uuid": "7e872b44-e211-5a40-9e99-0a4c361283a6", "question": "In the TARA dataset, which tool is evaluated the most in the test set?", "answer_format": "Your answer should be a Python string, the name of the tool.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["2fe4e162-e56f-5617-bfd9-0cf45916610b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Translator", "lowercase": true, "ignore_blank": true}}} {"uuid": "7e8b3e8b-6834-5662-bb05-c05c9b0d38d6", "question": "Is there any research paper that can extract attributes from both a predefined label set and the surrounding context?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8e827c11-8816-513c-aa6a-a8fe8e949738"], "conference": ["acl2023"],
"evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any research paper that can extract attributes from both a predefined label set and the surrounding context?", "reference_answer": "AtTGen: Attribute Tree Generation for Real-World Attribute Joint Extraction"}}} {"uuid": "7f058c83-bd50-525a-9643-68140cf0b6da", "question": "In the dataset that the GeMQuAD paper used for Spanish labeled examples, which language has the longest paragraphs on average in tokens?", "answer_format": "Your answer should be the full name of the language, e.g. English", "tags": ["multiple", "table", "metadata", "objective"], "anchor_pdf": ["3ad2eaf4-5fe6-5653-84fc-d1b8f83b6617"], "reference_pdf": ["2a9162e2-1c8f-5b57-bdc8-78ec19bdd4df"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Hindi", "lowercase": true, "ignore_blank": true}}} {"uuid": "7f6dafa1-72c9-5c9b-a4bc-dbddaf15f4de", "question": "Is there a paper illustrating that pre-trained transformers from LLMs can be used to encode visual information in a wide range of scenarios?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0383a102-c1d6-5e04-a15c-f6c551ef739c"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper illustrating that pre-trained transformers from LLMs can be used to encode visual information in a wide range of scenarios?", "reference_answer": "FROZEN TRANSFORMERS IN LANGUAGE MODELS ARE EFFECTIVE VISUAL ENCODER LAYERS"}}} {"uuid": "8060edd0-d5a8-5671-8c02-a83f6e9a43a1", "question": "How much faster is OFU-MLogB compared to MNL-UCB in the multinomial logistic bandit experiment?", "answer_format": "Your answer should be a python string", "tags": ["single", "text", "subjective"], "anchor_pdf":
["fe3d4da4-a639-58f9-b978-4597b2d61711"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Around 50 times faster.", "question": "How much faster is OFU-MLogB compared to MNL-UCB in the multinomial logistic bandit experiment?"}}} {"uuid": "807fdd37-864e-584c-b556-cf63ef4b428e", "question": "For the latest two selected ML datasets of vision modality in Croissant, where did the raw images come from?", "answer_format": "Your answer should be a single word, the name of the website where the images came from.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1b46d3cd-bc8f-51ad-8c77-105312f6e952"], "reference_pdf": ["55fff8cb-7639-5bab-8c5c-ab352eb833ae", "b80b94be-c4be-5423-989d-7135a38d729a"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Flickr", "lowercase": true}}} {"uuid": "818501f3-3983-598b-903a-9bfc0ec268d6", "question": "Is there any paper that utilizes masked language modeling to defend against word-level adversarial attacks?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["64cda1c1-b252-58b9-9d02-fe4e3168e68f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that utilizes masked language modeling to defend against word-level adversarial attacks?", "reference_answer": "RMLM: A Flexible Defense Framework for Proactively Mitigating Word-level Adversarial Attacks"}}} {"uuid": "81947076-ac46-5c93-a2a5-aab2896f3d36", "question": "What motivates the author to propose this paper?", "answer_format": "Your answer should be a brief text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["0abc1742-b83f-5036-a276-711fa24f4c4d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": 
"eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The author is motivated by the idea that different sensors, though separated by modality or time, capture correlated information about the same underlying phenomena. This motivates using an information-theoretic approach to learn shared representations across multi-view data and disentangle common latent factors.", "question": "What motivates the author to propose this paper?"}}} {"uuid": "81a62980-3225-5149-b703-e7c4bc4d48ea", "question": "Is there such a reading comprehension dataset in understanding a snippet from a long story book, while it requires integrating the necessary long history texts before the snippet to fully understand it?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ed8d82ca-6f67-59e7-8b80-8d3a6e64bbc7"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there such a reading comprehension dataset in understanding a snippet from a long story book, while it requires integrating the necessary long history texts before the snippet to fully understand it?", "reference_answer": "Personality Understanding of Fictional Characters during Book Reading"}}} {"uuid": "81ddab00-acc0-571a-a579-739957afc345", "question": "There are 12 datasets examined with code-davinci-002 and 2 datasets have a large accuracy gap.
What is the average performance difference of these two datasets while using an instruction fine-tuned model?", "answer_format": "Your answer should be a float number with 4 decimal places between 0 and 1.", "tags": ["single", "table", "objective"], "anchor_pdf": ["9499fde6-218f-50c2-9ea5-7c6e876bcf3d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.0855, "ndigits": 4, "tolerance": 0.0005}}} {"uuid": "8205ffe4-abc4-54fc-bf39-2e0f3b375848", "question": "According to the paper that proposed BooookScore, another paper analysed the disadvantage of the only existing public dataset for book-length summarization. In that analysis paper, which book published in the 21st century is among the books where GPT-4 performs the best on name cloze?", "answer_format": "Your answer should be a Python string, the title of the book.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["04ed3e06-a7e7-5856-8912-af4223637abf", "60f2989f-00db-56ca-a6e9-f0da949b4ac2"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Fifty Shades of Grey", "ignore_blank": true, "lowercase": true}}} {"uuid": "82a90ead-45dc-5914-9db5-5a9242a056a9", "question": "Explain the reasoning behind formula (3) in the paper.", "answer_format": "Your answer should be a python string about the concise reasoning behind formula (3) in the paper.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["0d247231-e960-5a59-aaee-f0706a8b9115"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "Explain the reasoning behind formula (3) in the paper.", "reference_answer": "Formula (3) is designed to force the student's predictions of CutMixed spectrograms to be consistent with the CutMixed teacher's predictions of original samples, which is to encourage the model to alleviate excessive
temporal dependency and make it less vulnerable to varying contexts while making predictions."}}} {"uuid": "82bdaa47-a2cb-5fbd-a827-83d981f4bb52", "question": "According to Table 2, what is the difference of the percentage of over 80% agreement threshold for both concessive and causal relations between the first and second iteration on the English dataset?", "answer_format": "Your answer should be a percentage with two decimal places, indicating the difference in proportion.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "58.06%"}}, "anchor_pdf": ["757dc667-413b-5207-8463-5deb4fedf073"], "reference_pdf": []} {"uuid": "82d3067f-1da3-5800-b7ef-a4571d85ccde", "question": "What paper is the first to prove finetuned LLM can be a reliable judge?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["71ad17b4-29f8-548d-8bd6-ec1dc724ed0d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper is the first to prove finetuned LLM can be a reliable judge?", "reference_answer": "PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization"}}} {"uuid": "8323e9b0-52be-5d8e-8c68-3975a4e1ecfe", "question": "What new efficient pre-training method was used in the pre-training process of the model used to compute embedding representations in the Document Similarity section of the paper?", "answer_format": "Your answer should be a python strings concisely summarizing the method.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["2c47dd59-12dc-5da3-ac15-4f046cf45d6c"], "reference_pdf": ["058b61e1-dbec-5ca7-9603-d53c1e14e733"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "They explored 
training a system to solve the potentially easier proxy task of predicting only which text as a whole is paired with which image and not the exact words of that text.", "question": "What new efficient pre-training method was used in the pre-training process of the model used to compute embedding representations in the Document Similarity section of the paper?"}}} {"uuid": "835eda31-c9b0-53a8-bc45-bb5418334a61", "question": "What datasets are used in the paper proposing the baseline for the experiments in this paper?", "answer_format": "Your answer should be a python list, each element is the name of the dataset used, e.g.,[\"dataset1\", \"dataset2\", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "objective", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["PTB", "CTB5.1", "CTB7"], "ignore_order": true}}, "anchor_pdf": ["aa761e80-b33e-5860-9926-dd147986f5ab"], "reference_pdf": ["ceacd41b-7bf5-5a0b-b53a-72b523d09157"]} {"uuid": "837e7da7-5e5c-5cb9-bcb0-c1dc60d97569", "question": "Which paper first used structural information for coherence modeling?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["16c522a0-d845-5cba-a4ae-879bf2ed4e73"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first used structural information for coherence modeling?", "reference_answer": "Modeling Structural Similarities between Documents for Coherence Assessment with Graph Convolutional Networks"}}} {"uuid": "838bd7f8-6475-577a-801b-90c69d9b04f4", "question": "Which model achieves the best performance in the experimental results of MQUAKE-T in Figure 2? 
Considering the GPU VRAM consumption of the model from the previous question, which models from ENN and KE have similar consumption?", "answer_format": "Your answer should be a single python list containing two strings, the first element of the list is the model's name, the second element is the name of the model that has similar GPU VRAM consumption to the first element.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["5e2682ca-f1c4-536d-bd61-0a2f0055f435"], "reference_pdf": ["44c58240-57f2-5f7c-b511-e44337f6a5af"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MEND", "KE"], "lowercase": true}}} {"uuid": "83ec97fc-091f-54b0-a627-cf693204090f", "question": "What are the two key technologies used in the process of augmenting trajectory-level data proposed in this paper?", "answer_format": "Your answer should be plain text.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["0d89b506-7770-5702-a992-47a2e50eee4d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Augment trajectories using a partial noising with forward process and a denoising framework with amplified return guidance.", "question": "What are the two key technologies used in the process of augmenting trajectory-level data proposed in this paper?"}}} {"uuid": "84834991-f063-5ab4-ad5c-bccb3de208d0", "question": "Which two independent sources of variance do the models performing sentiment classifications have to cope with?", "answer_format": "Your answer should be a python string of the two independent sources of variance.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["ff860bc6-9474-58c9-8a5b-19d92fff2932"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "cultural expressions of sentiment and errors in automatic translations",
"question": "Which two independent sources of variance do the models performing sentiment classifications have to cope with?"}}} {"uuid": "8531101d-a0b8-50fa-9f0e-5a3c71417b4e", "question": "How to calculate three important parameters that appear in the second part of Figure 2?", "answer_format": "Your answer should be a Python list of three string elements, every element is a formula in latex format to calculate a parameter.", "tags": ["formula", "image", "single", "subjective"], "anchor_pdf": ["8712603a-e96c-5537-be18-651b29dedfb8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["\\alpha_{k}=\\displaystyle\\frac{\\cos\\left(\\mathbf{h}_{t},\\mathbf{h}_{o}[k]\\right)}{\\sum_{j=1}^{K}\\cos\\left(\\mathbf{h}_{t},\\mathbf{h}_{o}[j]\\right)}", "\\beta=\\mathrm{FC}_{\\beta}(\\overline{{\\mathbf{h}}}_{o})=\\mathrm{FC}_{\\beta}\\left(\\displaystyle\\sum_{k=1}^{K}\\alpha_{k}\\mathbf{h}_{o}[k]\\right)", "\\gamma=\\mathrm{tanh}\\left(\\mathrm{FC}_{\\gamma}\\left(\\mathbf{h}_{i}\\right)\\right)"], "question": "How to calculate three important parameters that appear in the second part of Figure 2?", "ignore_order": true}}} {"uuid": "85f81535-fc10-575c-995f-4739a4470d4b", "question": "Where can I get the code of GreenKGC method?", "answer_format": "Your answer should be the url of the code of GreenKGC method.", "tags": ["metadata", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/yunchengwang/GreenKGC"}}, "anchor_pdf": ["a3e3cee1-d140-5dc1-9608-2f1a1d924229"], "reference_pdf": []} {"uuid": "8781817f-474b-5c15-9f77-aa85055446b9", "question": "The study employs two main methods to analyze linguistic features: LIWC and BERT. 
What are the advantages of the two methods respectively?", "answer_format": "Your answer should be a python list of two strings. Every string clearly describes the advantages of one method.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Interpretability of LIWC: LIWC categories have clear psychological meanings, making it easier to interpret the analysis results.", "Low computational cost of LIWC: LIWC analysis only requires simple word counting, which has a low computational cost.", "Ease of use of LIWC: LIWC dictionaries are easy to obtain and use, and the operation is simple.", "Contextual understanding of BERT: The BERT model can capture the contextual information of words, thus understanding the meaning of words more accurately.", "Stronger feature representation ability of BERT: The BERT model can generate richer feature vectors, capturing more complex linguistic features.", "Better prediction performance of BERT: In some tasks, the prediction performance of the BERT model may be better than that of the LIWC model."], "question": "The study employs two main methods to analyze linguistic features: LIWC and BERT.
What are the advantages of the two methods respectively?", "count": 4}}, "anchor_pdf": ["4ffd9c08-58a8-5008-8aeb-8686e4093869"], "reference_pdf": []} {"uuid": "87d2ad0c-a0ed-5e69-9007-571e601a142a", "question": "What is the collection process of the latest mainstream KBQA dataset used in the paper \"Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["35a8f7a7-a28f-5a0b-8c54-15fb7237d66e"], "reference_pdf": ["058d0055-8d50-5b52-ac1a-8c36d074e246"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"question": "What is the collection process of the latest mainstream KBQA dataset used in the paper \"Augmenting Reasoning Capabilities of LLMs with Graph Structures in Knowledge Base Question Answering\"?", "scoring_points": ["Canonical logical form generation. We leverage the logical form generation algorithm from Su et al. It first traverses the KB ontology to generate graph-shaped templates that only consist of classes, relations, and functions, and then grounds certain nodes to compatible entities to generate logical forms in their meaning representation called graph query.", "Canonical question annotation. Each validated canonical logical form is annotated with a canonical question by a graduate student, which is then cross-validated by another student to ensure its fidelity and fluency.", "Crowd-powered paraphrasing. We use Amazon Mechanical Turk to crowdsource paraphrases of the canonical question. The crowdsourcing framework with automated quality control mechanisms contains three tasks: Paraphrasing, Cross-validation and Entity surface form mining.", "Grounding and sampling.
We do controlled sampling to generate the final questions: From the pool of logical forms and paraphrases associated with the same canonical logical form, we sample one from each pool at a time to generate a question. Start with uniform weights; each time a logical form or paraphrase is selected, its weight is divided by rho_l and rho_p, respectively. We set rho_l to 2 and rho_p to 10 to enforce more linguistic diversity. Finally, we randomly replace entity surface forms with the ones mined in Task 3 (if there are any)."]}}} {"uuid": "881d40cb-62f6-57d9-b4bb-75ce6a1c2b89", "question": "Which paper studies the concept of enhancing the coverage of a selective prediction system by re-attempting the questions on which it was not sufficiently confident?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fd943cd9-c318-5ec3-8cba-b3e64e7daaab"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper studies the concept of enhancing the coverage of a selective prediction system by re-attempting the questions on which it was not sufficiently confident?", "reference_answer": "Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA"}}} {"uuid": "88fc505f-d156-52bc-8084-2ab7bbc5fb17", "question": "To generate NSFW images, which technique is used in the paper titled \"Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation\"? Please tell me the full name, not the abbreviation.", "answer_format": "Your answer should be a single string of the full name of the technique.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["c43d3a9e-fd1b-5e5c-8f4c-fd8c82218f78"], "reference_pdf": ["d01715f1-1d5c-53ec-ba29-0a6796d998e0"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold":
"Inappropriate Image Prompts", "lowercase": true}}} {"uuid": "891093a5-a10b-5299-a421-a77713a8886e", "question": "According to Table 1, which baselines this paper used has the highest average score? In which paper is this method proposed? And in which conference was this paper published?", "answer_format": "Your answer should be a python list with three items, the first item is the name of baseline reaching the highest average score according to Table 1, the second item is the name of paper where this method proposed, and the third item is the abbreviation of the conference name where this paper was published.", "tags": ["metadata", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SimCTG", "A contrastive framework for neural text generation", "NeurIPS"], "ignore_order": false}}, "anchor_pdf": ["9c5c3a63-3042-582a-9358-d0c61de3330d"], "reference_pdf": []} {"uuid": "895610a0-b3ae-5623-a8e2-e0731eae53f6", "question": "Is there a paper that uses Explainable AI techniques to investigate how language models represent the expression of morality?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["43451d4e-ce08-5e35-bbb1-06f770afd66e"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that uses Explainable AI techniques to investigate how language models represent the expression of morality?", "reference_answer": "What does a Text Classifier Learn about Morality? 
An Explainable Method for Cross-Domain Comparison of Moral Rhetoric"}}} {"uuid": "896a8ab2-38ab-5ae9-95cc-2d883784e87e", "question": "In the main experiment of \"AN LLM CAN FOOL ITSELF: A PROMPT-BASED ADVERSARIAL ATTACK\", what high-level tasks do the five tasks used belong to, as categorized in the original paper?", "answer_format": "Your answer should be a python dictionary with the following format: {'task1': 'task1_category', 'task2': 'task2_category', 'task3': 'task3_category', 'task4': 'task4_category', 'task5': 'task5_category'}. YOU MUST USE THE EXACT NAMES OF THE CATEGORIES IN THE PAPER AND THE ABBREVIATION OF THE TASKS.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["4cc48a5b-be5f-57af-8697-1cc75c0f67d0"], "reference_pdf": ["2292ac5f-ddf9-5ed5-8009-b4f7a69a8ec9"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"SST-2": "Single-Sentence Tasks", "QQP": "Similarity and Paraphrase Tasks", "MNLI": "Inference Tasks", "RTE": "Inference Tasks", "QNLI": "Inference Tasks"}, "ignore_order": true, "lowercase": true, "threshold": 80}}} {"uuid": "897a955d-9e8a-5836-a3b6-0ed48575f2b9", "question": "In terms of evaluation results on SuperGLUE using RoBERTaBASE, on subtask ReCoRD, which model(s) achieve the overall best results? Additionally, which model(s) perform the best among the two-stage MTL models?", "answer_format": "Your answer should be a python dict(without \\n) containing two keys \"overall best\" and \"two-stage MTL best\", each value of the two keys is a python string list. 
e.g.{\"overall best\":[\"modelname1\"],\"two-stage MTL best\":[\"modelname2\",\"modelname3\"]} ", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"overall best": ["PROPETL"], "two-stage MTL best": ["SCALEARNUNIFORM", "SCALEARN++"]}, "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["9b06b24b-0afc-5ccb-95fc-c662395d291d"], "reference_pdf": []} {"uuid": "8a08dc1f-6e2c-5e4b-9da4-34f5fb3ee073", "question": "Which paper contains quantitative results demonstrating taking VQ tokens as inputs is inferior to pixel images for dense recognition tasks?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["5c17f73a-4452-5946-9087-b2f5cf09b5e4"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper contains quantitative results demonstrating taking VQ tokens as inputs is inferior to pixel images for dense recognition tasks?", "reference_answer": "ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process"}}} {"uuid": "8a732ed7-85bc-5355-aec7-97434295b153", "question": "For the PLM with the lowest number of synset candidates under synset retrieval in the paper \"Predicate Sense Disambiguation for UMR Annotation of Latin: Challenges and Insights\", what negative impact will it have if it randomly blocks a certain percentage of input tokens during pre-training?", "answer_format": "Your answer should be a python strings.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["eb27974a-2306-5c44-bb8d-77d5bc90f5d4"], "reference_pdf": ["ac1b0430-6781-5539-9d45-5067c0c6ff3e"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It creates a 
mismatch between pre-training and fine-tuning.", "question": "For the PLM with the lowest number of synset candidates under synset retrieval in the paper \"Predicate Sense Disambiguation for UMR Annotation of Latin: Challenges and Insights\", what negative impact will it have if it randomly blocks a certain percentage of input tokens during pre-training?"}}} {"uuid": "8aca533a-c03d-5708-aaf4-320886de4a20", "question": "In the paper that introduced the latest dataset used by RetinaQA, what innovation related to F1 was also applied in the evaluation of RetinaQA?", "answer_format": "Your answer should be a paragraph, describing the innovation on F1.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["5b97752d-6379-55fe-903a-918b3b53925c"], "reference_pdf": ["0d9f5091-a5c3-5d69-8f13-b9427d3f4ccd", "058d0055-8d50-5b52-ac1a-8c36d074e246", "16c3a7ad-d638-5ebf-a72a-bd58f06c16d7"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In regular answer evaluation (R), we compare the predicted answer (which could be NA) with the gold answer in the modified KB, as usual. Specifically for unanswerability, we also consider lenient answer evaluation (L), where we account for the gold answer in the original (ideal) KB as well, and also give credit to models which are able to recover this answer, perhaps via inference.", "question": "What innovation related to F1 was also applied in the evaluation of RetinaQA?"}}} {"uuid": "8ad76d2f-9b95-58ed-b20f-7c15fc30c2fc", "question": "To enhance scalability and effectiveness, which Ensemble Model does this paper(\"MINERS : Multilingual Language Models as Semantic Retrievers\") choose? In the source paper, which datasets is this framework tested on?", "answer_format": "Your answer should be a python list like [\"string1\", [\"string2\", \"string3\", ...]]. The first element should be a string, representing the name of the Ensemble Model. 
The second element should be a list of strings, representing the names of the datasets tested on. For these names, abbreviation is enough.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["fef69202-2222-5c62-ae44-88cdc4d47f7b"], "reference_pdf": ["fa9a5139-df9b-52ee-9e3b-bc1790725708"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "DistFuse", "lowercase": true}, {"gold": ["NusaX", "MASSIVE"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "8b394043-af79-5a1b-9c7f-a3437276f8af", "question": "In the framework used in the paper \"ADAPTIVE DEEP SPIKING NEURAL NETWORK WITH GLOBAL-LOCAL LEARNING VIA BALANCED EXCITATORY AND INHIBITORY MECHANISM\", what are the ANN2SNN conversion functions?", "answer_format": "Your answer should be a python list of strings, e.g., ['function1', 'function2']. YOU MUST USE THE EXACT FUNCTION NAMES AS IN THE PAPER.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["c846c33e-7401-50c6-a2a4-973ba293dff8"], "reference_pdf": ["1a04aded-2e0f-55aa-bf1c-d173df1a165a"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["W_norm", "IF Node"], "ignore_order": true, "lowercase": true, "ignore_blank": true, "fuzz_method": "ratio", "threshold": 80}}} {"uuid": "8b6057cc-77e5-5981-9d3f-5b9b62126d0e", "question": "What work proposes a model to learn a latent regular cell complex from data?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4feaa6a1-39bd-5687-951d-01ce588f38e6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What work proposes a model to learn a latent regular cell complex from data?", "reference_answer":
"From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module"}}} {"uuid": "8b79fc88-0307-532a-bbdb-e8990fb27372", "question": "What paper considers sensitive data issue when prompting large language model APIs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["796a2f4b-0702-52cb-8e05-241c378b828f"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper considers sensitive data issue when prompting large language model APIs?", "reference_answer": "Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting"}}} {"uuid": "8ba9edfd-5a7a-5ec1-9d22-329443311f3b", "question": "Is there any paper that aligns speech and text embeddings better than CTC training?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["2d97c420-dc18-5192-9a4f-4b43ac0467f5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that aligns speech and text embeddings better than CTC training?", "reference_answer": "WACO: Word-Aligned Contrastive Learning for Speech Translation"}}} {"uuid": "8beb15f1-2c64-5aa7-a1a3-1579452b2ecc", "question": "For the specific dataset where CLiCoTEA does not outperform all models, in which languages does this occur?", "answer_format": "Your answer should be a Python list of string elements, every element is the abbreviation of a langugage mentioned in the paper, e.g. 
[\"AR\", \"ES\", \"FR\", ...].", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DE", "ES", "ID", "RU", "TR"], "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["0d6ea045-b831-520d-9b99-ba22a081a403"], "reference_pdf": []} {"uuid": "8cc38e05-20e5-5a69-8b82-ecc09c03450a", "question": "According to the experiment result, How much better does the GPT-2 model perform on Task A compared to the CNN-BiLSTM model in terms of F1-score?", "answer_format": "Your answer should be a single python float, rounded to 2 decimal places.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.02, "ndigits": 2}}, "anchor_pdf": ["1aa5e165-0b78-582a-8f67-e459452348df"], "reference_pdf": []} {"uuid": "8d540aba-a5d8-5da7-8c8d-c98a1dcbd507", "question": "The anchor paper mentioned a service which uses the methodology proposed in this paper. What is the name of the service? On which page of the paper can I find graphic information on this service?", "answer_format": "Your answer should be a python list with 2 elements, the first element being the name of the service, and the second element being the page number. The first element should be a string and the second element should be an integer. 
Use the exact name of the service from the paper without changing CAPITALIZATION.", "tags": ["single", "objective", "image"], "anchor_pdf": ["0127a369-a186-55a5-bced-55942dcb5e9f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Today's Mini Diary", 17], "ignore_order": false, "threshold": 70, "fuzz_method": "token_sort_ratio"}}} {"uuid": "8d7e5c06-78b8-5454-849a-57140efaa80c", "question": "According to the paper, which dataset also uses a retrieval-based system for selecting relevant files? In that dataset, how many lines does the codebase contain on average?", "answer_format": "Your answer should be a python list of 2 elements, the first is the name of the dataset, and the second is the number of lines in thousands, rounded to the nearest integer. e.g. [\"MMMU\", 9]", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["1c00c0f7-c403-58c7-9cc4-dc032888423f"], "reference_pdf": ["1c87084a-f8ae-5a28-a8bb-016316818e0c"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SWE-bench", 438], "fuzz_method": "partial_ratio", "threshold": 100, "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "8e1ebc95-7523-5a09-b95c-85748f5825ae", "question": "Which paper is among the earliest to train on an extensive collection of signing video and subtitle pairs available from online platforms?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ef4ea22b-ea4c-5839-b2cd-9c44ac161a59"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper is among the earliest to train on an extensive collection of signing video and subtitle pairs available from online platforms?", "reference_answer": "Gloss-Free End-to-End Sign Language
Translation"}}} {"uuid": "8e2d5903-f8ba-5504-aa9a-43538a8536a6", "question": "In the paper that proposed the dataset used by ReFIR for the evaluation of the second experimental setup, which image restoration method was also proposed? Additionally, what metric applied to evaluate that method was not applied in ReFIR?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the method and the name of the metric.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["1bdbf41b-f94f-5f1a-b22a-89662ed0fb49"], "reference_pdf": ["f9ea952c-4545-5826-9a6d-aa819fffce2c"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SUPIR", "ManIQA"], "ignore_order": false, "lowercase": true}}} {"uuid": "8e6f688d-d3d9-5055-89c4-b50ad752b208", "question": "In the experiment result of the paper named \"EIT: Enhanced Interactive Transformer\", which model gets the highest RG-L on the summarization task? For the source paper of this model, what issue about Neural network-based methods for abstractive summarization does this paper want to address? 
", "answer_format": "Your answer should be a single python list, the first element is the string of the model name, the second element is the string of the issue.e.g.[\"EIT\",\"Neural network-based methods for abstractive summarization often neglect...\"].", "tags": ["subjective", "multiple", "table"], "anchor_pdf": ["e281fc3b-cdaa-5565-8997-6a6c8f198000"], "reference_pdf": ["c970161d-753d-556c-a748-2c296f644f07"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "BOTTOM-UP", "lowercase": true}, {"reference_answer": "Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection.", "question": "For the source paper of this model, what issue about Neural network-based methods for abstractive summarization does this paper want to address? "}]}}} {"uuid": "8f53d4bf-6a6f-59b8-90ed-c6b555240a59", "question": "Among the datasets proposed in the Introduction section of the paper \"Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning\", which one has the least Q-A pairs?", "answer_format": "Your answer should be a single string, the name of the dataset as given in the Introduction section.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["2bf8095c-81d0-5988-858b-a5155e0cc985"], "reference_pdf": ["b93e2dfe-7f58-5d96-8c64-39930d5c22ea", "e73a34ee-536b-5ea4-8de4-b2fb39b30042", "3e409d3a-1045-575f-b4ad-f4923916080a", "4d93d596-b0bd-54c6-bd9e-041037077bc7", "71cec673-84eb-579b-9419-2032699ac0e7"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "SituatedQA", "lowercase": true}}} {"uuid": "8f561f35-51ae-5330-97aa-f547f89f4d26", "question": "According to MSAD paper's review, which is the largest VAD benchmark datasets 
with multiple domains before? Additionally, in that dataset, which are the two scenarios with the most videos?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the dataset, the second is a list of 2 strings, the full name of the scenarios.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["0e26d1b4-43af-5cf5-9626-deef7dcfc6ae"], "reference_pdf": ["d1290795-a058-5b1b-b9f4-9392472d83b0", "9413e819-8187-5e64-8aef-3507676d0855"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "CUVA", "lowercase": true}, {"gold": ["Pedestrian Incidents", "Forbidden to Burn"], "lowercase": true, "ignore_order": true, "ignore_blank": true}]}}} {"uuid": "8f5ae2e9-9450-57e9-bc4f-069747845fdb", "question": "How were the data samples selected for the dataset used in the SFT of the paper \"FACTALIGN: Long-form Factuality Alignment of Large Language Models\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["3f41366a-f254-5708-af03-6ab7381ca5c3"], "reference_pdf": ["289b93f1-9379-5745-9666-821734bc3cbe"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"question": "How were the data samples selected for the dataset used in the SFT of the paper \"FACTALIGN: Long-form Factuality Alignment of Large Language Models\"?", "scoring_points": ["A score-first, diverse-aware data selection strategy, denoted as \\pi_{DEITA}, was proposed to combine complexity, quality, and diversity measures.", "The strategy incorporates a new evol score s that combines complexity and quality by multiplying the complexity score c with the quality score q as s := c * q.
For multi-turn dialogues, we calculate this score for each turn, summing them to obtain the final score for the entire conversation. Next, we sort all samples in X using s, yielding the sorted pool X^* = (x^*_1, x^*_2, ..., x^*_n), where x^*_0 represents the sample with the highest evol score. Beginning with S^1_{\\pi_{DEITA}} = (x^*_0), we iteratively select data from X^*/S^1_{\\pi_{DEITA}} one by one, following the REPR FILTER strategy, and discard redundant samples for S^1_{\\pi_{DEITA}}."]}}} {"uuid": "8f94a406-ef97-584a-a578-17a8b0380287", "question": "Which paper first derived the online occupancy estimation technique to get a sqrt(T) bound for reinforcement learning in adversarial linear MDPs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["522946e5-6e98-542d-ab5a-a2190fc522a0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first derived the online occupancy estimation technique to get a sqrt(T) bound for reinforcement learning in adversarial linear MDPs?", "reference_answer": "Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback"}}} {"uuid": "8fb6f8ec-fae3-5823-ba3f-21cdca6952a9", "question": "How do the authors split the dataset for the experiments?", "answer_format": "Your answer should be plain text.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "For the empirical modeling analysis and performance benchmarking, we randomly split the dataset into 3 sets: train (70%), dev (5%), and test (25%) sets, while ensuring both domains (fashion and furniture) have the same split distributions.", "question": "How do the authors split the dataset for the experiments?"}}, "anchor_pdf": ["67c78c79-7878-5ff5-b5f6-45cec4ad9bf9"],
"reference_pdf": []} {"uuid": "90082d94-579c-5dc2-a5c3-f9f6278857e2", "question": "On how many datasets was COSA evaluated for its performance in Object Discovery and Composition?", "answer_format": "Your answer should be an integer.", "tags": ["single", "text", "objective"], "anchor_pdf": ["8c31b4d5-1f30-52a4-9870-482b7ec1c67c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "90aa9ace-0a80-5867-893b-e342497160a1", "question": "Is there any paper that tries to investigate LLMs' capabilities in solving elliptical constructions by using a test-dataset based on the psycolinguistic notion of Thematic Fit?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ad024fa1-1e7e-5878-beb9-f13de5594b8e"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that tries to investigate LLMs' capabilities in solving elliptical constructions by using a test-dataset based on the psycolinguistic notion of Thematic Fit?", "reference_answer": "We Understand Elliptical Sentences, and Language Models Should Too: A New Dataset for Studying Ellipsis and its Interaction with Thematic Fit"}}} {"uuid": "921a63d8-1cb7-5162-bc06-b9546498e519", "question": "Among StudentEval, HumanEval and MBPP, which one has the most test cases per problem?", "answer_format": "Your answer should be a string, indicating the name of the dataset.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["4b53feaf-4e33-590c-a8bb-9c8c7005bf6b", "a70723f6-7139-5165-a9c7-9dcdd34e3514", "0e57b18a-c261-582f-8527-2337f0aeda90"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "HumanEval", "lowercase": true}}} {"uuid": "924003fa-668e-5dbc-8e2c-43aa69d5696c", 
"question": "Both two papers, \"Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization\" and \"Composing Parameter-Efficient Modules with Arithmetic Operation\" focus on arithmetics on parameter-efficient modules. Which setting did the first paper mainly focus on, while the second paper didn't?", "answer_format": "Your answer should be brief text on the setting mainly focused on only by the first paper.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["eee99caa-1041-588f-850b-67dc3a80524c", "501ba611-6203-53af-8077-ea1a893322d4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Cross-lingual transfer on summarization tasks.", "question": "Both two papers, \"Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization\" and \"Composing Parameter-Efficient Modules with Arithmetic Operation\" focus on arithmetics on parameter-efficient modules. Which setting did the first paper mainly focus on, while the second paper didn't?"}}} {"uuid": "926c7917-2d65-5ca2-9e3b-2b7927962fbd", "question": "Between the two agent-based method that are explicitly introduced in Related Work section of LLM-DP, which one is not applied as a baseline? 
Why not?", "answer_format": "Your answer should be a Python list of two strings, the first is the name of the method, the second is the reason why it's not applied.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["0dc71539-30ec-52ba-bad0-0d031ea757b2"], "reference_pdf": ["2c626d88-ca60-501d-9beb-763ddf799a85", "2f2e4311-fc9b-5e36-bb18-7c3fee141713"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "Voyager", "lowercase": true}, {"reference_answer": "Voyager is an agent specialized in Minecraft, who is not capable of datasets like Alfworld.", "question": "Why Voyager is not applied as a baseline in the LLM-DP paper?"}]}}} {"uuid": "927ff9af-42f7-5216-a6f9-f106e8ff6759", "question": "On the HEML sentence level with AUC metric, which baseline outperforms MIND on specific conditons?Is it the best variant according to the paper that proposed that baseline?", "answer_format": "Your answer should be a Python list of two strings. The first string is the name of the baseline (with variant) that outperforms MIND, as proposed in the anchor PDF. 
The second string is either `true` or `false`.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["621d42a1-dbab-5003-b7c5-625335653001"], "reference_pdf": ["ab661558-432d-5e5e-b49c-a3660a40986e", "1a21b653-3db0-55e8-9d34-8b6cd3dcbefa", "85111b8b-4df0-5a9a-8d11-a7ae12eebcf6", "0597ce2b-cd8c-5b5b-b692-e8042d8548de", "6df0f3f3-e2e1-5d7a-9d70-3114ceac5939", "02f7fff5-cec7-5ac8-a037-f5eb117b9547"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SCG-NLI", "false"], "ignore_order": false, "lowercase": true}}} {"uuid": "92c53685-6c5d-538a-9c62-887598de3301", "question": "Is there a paper that utilizes the characteristics of human evolutionary knowledge to guide language models in generating scientific ideas?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e866410d-d02d-5c5e-a5a8-1cda89bd1b72"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that utilizes the characteristics of human evolutionary knowledge to guide language models in generating scientific ideas?", "reference_answer": "Exploring and Verbalizing Academic Ideas by Concept Co-occurrence"}}} {"uuid": "92dcdc85-f07b-5980-80c0-474447201940", "question": "Are there any large-scale and open-source text simplification datasets dealing with long passages?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["42aaf03a-7e67-5994-b694-c0266d54db2d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any large-scale and open-source text simplification datasets dealing with long passages?", "reference_answer": "SWIPE: A Dataset 
for Document-Level Simplification of Wikipedia Pages"}}} {"uuid": "93193629-3db3-5f41-93da-8282895eba7f", "question": "What is the relationship between the two papers in terms of datasets?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["e35a7ed7-548d-5aa3-95d3-5054b8e3020d", "ed7fa054-62cc-53f8-b200-15de56d03112"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The dataset of paper \"What does Parameter-free Probing Really Uncover?\" is based on the dataset of paper \"Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT\".", "question": "What is the relationship between the two papers (in anchor_pdf) in terms of datasets?"}}} {"uuid": "93272751-e57b-55e7-a89a-ef2387c4d2be", "question": "Is there any paper that studies a teacher AI inferring mental states of a student role in a role-playing game setup using reinforcement learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["738ce6e8-d33d-5dfc-924e-ccfd2dd77d12"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that studies a teacher AI inferring mental states of a student role in a role-playing game setup using reinforcement learning?", "reference_answer": "I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons"}}} {"uuid": "932ee901-050a-5ef5-bb1e-0baeff576249", "question": "According to Table 1, on how many mathematical task test sets was Rho-1 tested?", "answer_format": "Your answer should be an integer.", "tags": ["single", "text", "objective"], "anchor_pdf": ["22a670fd-c1d3-50d9-9c10-7ef49a3a2c24"], "reference_pdf": [], "conference": [],
"evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 9}}} {"uuid": "9373ac34-dd22-52b3-80da-16d94b2bcff7", "question": "According to the paper \"Full-Atom Peptide Design with Geometric Latent Diffusion\", which model has the lowest success rate on PepBDB? In the paper that proposes the model, how is the overview of the proposed method given in form?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the model, the second is a Python list of formula in LaTeX format. e.g. [\"method\", [\"formula_1\", ..., \"formula_n\"]]", "tags": ["multiple", "formula", "table", "subjective"], "anchor_pdf": ["2ed12ce6-ae87-53be-b0fd-5abb6a163932"], "reference_pdf": ["5be68cd0-04a9-5464-906e-d6cc4a2f76fe"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "dyMEAN", "lowercase": true}, {"formulas": ["\\mathcal{G}_A = \\text{SI}(\\{s_i\\}_{i \\in \\mathcal{V}_A, i \\notin \\mathcal{V}_P}), i \\in \\mathcal{V}_A", "\\mathcal{G}_S = \\text{SP}(\\mathcal{G}_E, \\mathcal{G}_P)", "h_i, X_i = \\text{AME}(\\mathcal{G}_E, \\mathcal{G}_S, \\mathcal{G}_A), i \\in \\mathcal{V}_E \\cup \\mathcal{V}_S \\cup \\mathcal{V}_A", "p_i = \\text{Predict}(h_i), i \\in \\mathcal{V}_P", "\\tilde{X}_i = \\text{Dock}(\\mathcal{G}_A, \\mathcal{G}_S), i \\in \\mathcal{V}_A"], "question": "In the paper that proposes the model, how is the overview of the proposed method given in form?"}]}}} {"uuid": "9388f82d-b44e-52a4-8e0e-4b0e93bd5876", "question": "Which paper proposes the two-stage training method, i.e., task-specific fine-tuning and cross-domain pre-training, to train an open-domain dialogue evaluator using the self-collected dataset.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], 
"reference_pdf": ["89c1512d-d2c7-5002-818b-8595f9982ff1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes the two-stage training method, i.e., task-specific fine-tuning and cross-domain pre-training, to train an open-domain dialogue evaluator using the self-collected dataset.", "reference_answer": "RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue"}}} {"uuid": "93cbeec6-18b1-55ba-93f3-8be583414ea9", "question": "Which paper about parameter-efficient finetuning first proposes to feed the pretrained weight instead of the activation to an adapter?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8ff4e420-f44e-5483-832d-b2f141464db4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper about parameter-efficient finetuning first proposes to feed the pretrained weight instead of the activation to an adapter?", "reference_answer": "Parameter-Efficient Fine-Tuning without Introducing New Latency"}}} {"uuid": "943cd5d2-df6b-588b-b985-70b2aa2e9f3b", "question": "How does the LISA algorithm choose which layers' parameters to freeze at each iteration?", "answer_format": "Your answer should be a python string", "tags": ["single", "subjective", "formula"], "anchor_pdf": ["4557bf76-9ede-5fb2-a271-47a71d382639"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "How does the LISA algorithm choose which layers' parameters to freeze at each iteration?", "reference_answer": "The LISA algorithm samples layer indices through a uniform distribution and freezes the parameters of the sampled layers."}}} {"uuid": "94424df2-3c4e-5f0d-b665-1ce0b2c0da54", "question": "Among the 
baseline TTE approaches for the code-based task on Celiac Disease, which method achieves the best performance, excluding the new method proposed in this paper?", "answer_format": "Your answer should be a python string, the approach name. YOU MUST USE THE EXACT NAME FROM THE PAPER.", "tags": ["single", "objective", "table"], "anchor_pdf": ["4c5fce26-2c1e-55c2-b4de-b1a11a58d66f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "RSF", "lowercase": true}}} {"uuid": "948c99f1-12ba-5c05-8cbf-de25811480b1", "question": "Is there a dialogue dataset where a speaker's utterance is grounded in their persona, consisting of image-text pairs representing their episodic memories?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["cf01bbb3-9212-5048-ae7b-8485b2487935"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a dialogue dataset where a speaker's utterance is grounded in their persona, consisting of image-text pairs representing their episodic memories?", "reference_answer": "MPCHAT: Towards Multimodal Persona-Grounded Conversation"}}} {"uuid": "94bf4901-caa8-50d8-9626-f34f0226b4e9", "question": "Is there research that investigates embedding multi-bit data into watermarks to improve resilience to text corruption, particularly aimed at safeguarding keywords and syntactic elements from modification?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["66c96650-9f7c-5f0c-ab6c-afa95cf65f87"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there research that investigates embedding multi-bit
data into watermarks to improve resilience to text corruption, particularly aimed at safeguarding keywords and syntactic elements from modification?", "reference_answer": "Robust Multi-bit Natural Language Watermarking through Invariant Features"}}} {"uuid": "94ef0706-a58e-556e-b4c4-0dfe3f47ff63", "question": "Is there any work that allows large numbers of model outputs to be encoded and compared by causal language models in a single forward pass?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["cdade3e9-a37b-5d96-b744-7a600859c6f9"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any work that allows large numbers of model outputs to be encoded and compared by causal language models in a single forward pass?", "reference_answer": "EEL: Efficiently Encoding Lattices for Reranking"}}} {"uuid": "94ff4210-290d-53d9-90f7-56c5df1bed85", "question": "What is the pipeline used to construct the dataset in the experiment of the paper \"PepRec: Progressive Enhancement of Prompting for Recommendation\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["47db4060-7cd0-5824-b258-6410e0b16c37"], "reference_pdf": ["f69388c4-ec49-5a68-b169-22a427fc7686"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The pipeline to extract high-quality justifications from raw user reviews consists of three steps: 1. Annotating a set of review segments with binary labels, i.e., to determine whether they are 'good' or 'bad' justifications. 2. Training a classifier on the annotated subset and applying it to distantly label all the review segments to extract 'good' justifications for each user and item pair. 3.
Applying fine-grained aspect extraction for the extracted justifications, and building user personas and item profiles.", "question": "What is the pipeline used to construct the dataset in the experiment of the paper \"PepRec: Progressive Enhancement of Prompting for Recommendation\"?"}}} {"uuid": "9640f248-b594-56b4-9cc6-9b242538ee40", "question": "In the directly preceding work of H2O, are there any metrics tested other than perplexity?", "answer_format": "Your answer should be a python boolean.", "tags": ["multiple", "objective", "metadata"], "anchor_pdf": ["5e63d90b-a1a4-5e6e-9845-5177eba99970", "9f701072-7290-52f2-91fa-1b5fddbbd78e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_bool_exact_match", "eval_kwargs": {"gold": false}}} {"uuid": "966deaf9-83fe-5f5b-8b30-06c8e350375a", "question": "In the paper proposing Voila-A, which baselines do the main experimental results demonstrate that Voila-A outperforms?", "answer_format": "Your answer should be a single python list of strings, each string is a baseline name, e.g. [\"baseline1_name\", \"baseline2_name\"].
Note that for the names, the abbreviation is enough.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f2bf5d36-449a-5699-9a1b-15fb940376c1"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Otter", "Kosmos-2"], "ignore_order": true, "lowercase": true}}} {"uuid": "969eef23-a666-5128-8553-069c1c546f0e", "question": "According to the author, what are the three main categories of current dialogue evaluation methods?", "answer_format": "Your answer should be a sentence stating the three main categories.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["89c1512d-d2c7-5002-818b-8595f9982ff1"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "According to the author, what are the three main categories of current dialogue evaluation methods?", "reference_answer": "Reference-based, reference-free and reference-assisted"}}} {"uuid": "985616d9-fbc1-5329-824f-15b2d1d79de0", "question": "Is there any dataset that contains minimally-contrasting social situations that lead to different decisions about which behaviors are appropriate in that situation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["bc50f24b-3635-51bd-a8e8-8beb9b604dc0"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any dataset that contains minimally-contrasting social situations that lead to different decisions about which behaviors are appropriate in that situation?", "reference_answer": "NORMBANK: A Knowledge Bank of Situational Social Norms"}}} {"uuid": "98f8113e-ec12-53ee-877c-ed347c655fbd", "question": "Which paper found that using common character encodings and ciphers, or even just convincing the model that
it is not communicating in natural language, can bypass the safety guardrails of large models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["462eba24-b481-518d-957e-63a097a7ba08"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper found that using common character encodings and ciphers, or even just convincing the model that it is not communicating in natural language, can bypass the safety guardrails of large models?", "reference_answer": "GPT-4 IS TOO SMART TO BE SAFE: STEALTHY CHAT WITH LLMS VIA CIPHER"}}} {"uuid": "99231726-94e7-5dbf-80a5-947df93d9761", "question": "Several information extraction and natural language processing tools were used in the anchor paper. What is the major limitation of prior works that the entity and relationship extractor used in the anchor paper was designed to solve?", "answer_format": "Your answer should be brief text explaining the limitation solved by the extractor's design.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["4b1a5787-4c18-51d8-bd3e-8a036f750938"], "reference_pdf": ["0201fdd7-83db-53a2-b6a1-dc06e2b19cb6"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The major limitation of prior works is that they ignore the interrelation between spans (pairs).", "question": "Several information extraction and natural language processing tools were used in the anchor paper. What is the major limitation of prior works that the entity and relationship extractor used in the anchor paper was designed to solve?"}}} {"uuid": "9945247a-acf3-5768-9e8c-3015d272434d", "question": "According to the RAV paper, up to 2019, which method performs the best overall on evidence retrieval? 
Additionally, what's that method's FEVER score with 1 sentence selected for the subtask of recognizing textual entailment?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the method, and the second is a float, rounding to 2 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["5a971f9d-71f9-5381-9877-05e68e18ad80"], "reference_pdf": ["366e4b37-75a5-5baf-ad8f-634309a1a35e", "c50df058-1617-58f1-9b89-c397fcdceb6f", "d3cfe89d-e84d-51b6-bd10-33a106a8e12b"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ESIM", 63.64], "ignore_order": false, "lowercase": true, "ndigits": 2}}} {"uuid": "997b8d8f-3d5f-57e6-9590-9e8c5591dacc", "question": "What are the model architecture parameters (dim, n_layers, head_dim, hidden_dim, n_heads, n_kv_heads) of the automatic evaluation model used in the aspect evaluation section of the paper?", "answer_format": "Your answer should be a python dictionary with the following keys: dim, n_layers, head_dim, hidden_dim, n_heads, n_kv_heads, e.g., {\"dim\": 768, \"n_layers\": 6, \"head_dim\": 64, \"hidden_dim\": 3072, \"n_heads\": 12, \"n_kv_heads\": 12}.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["4ac8bced-e49c-5417-b21f-edae07ae2ce2"], "reference_pdf": ["44e77de0-2982-575f-bf6e-f50ad597c4f6"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"dim": 4096, "n_layers": 32, "head_dim": 128, "hidden_dim": 14336, "n_heads": 32, "n_kv_heads": 8}, "ignore_order": true, "lowercase": true}}} {"uuid": "99965706-7450-5c82-9db7-a9f9605c5fc6", "question": "Which research paper leverages event structure information from Abstract Meaning Representation (AMR) graphs to aid in recognizing causal relations between events?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", 
"objective"], "anchor_pdf": [], "reference_pdf": ["9272b637-e24f-56f7-a84c-9bf5879e7079"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which research paper leverages event structure information from Abstract Meaning Representation (AMR) graphs to aid in recognizing causal relations between events?", "reference_answer": "Semantic Structure Enhanced Event Causality Identification"}}} {"uuid": "99b40a01-942a-571c-9eb8-6be0ef070617", "question": "What are the four types of compression methods used for BERT in the paper? Which one of them can be combined with the other three model compression methods? Can you describe the sketch of this method's procedure?", "answer_format": "Your answer should be a brief text containing the four compression methods used for BERT in the paper and the method can be combined with other three methods with its procedure's sketch.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["7077f225-4d0b-563d-8b81-7aa83a8ecd08"], "reference_pdf": ["9e46aa98-0fa6-5c63-adeb-e181eec3c1b0"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The four compression methods used for BERT in the paper are Knowledge Distillation, Pruning, Quantization, and Vocabulary Transfer. Among them, Vocabulary Transfer can be combined with the other three model compression methods. The sketch of this method's procedure can be described as follows: First, the vocabulary is constructed on the in-domain data, then an embedding is assigned to each token, transferring information from the pre-trained representations of the general-purpose language model.", "question": "What are the four types of compression methods used for BERT in the paper? Which one of them can be combined with the other three model compression methods? 
Can you describe the sketch of this method's procedure?"}}} {"uuid": "99cec4b0-ca19-56bb-83a0-7a79a4a14c9d", "question": "What is the distribution ratio of data sources for the toxicity ratings dataset used in the paper?", "answer_format": "Your answer should be a python dictionary about the data sources and their distribution ratio (between 0 and 1, rounded to 2 decimal places), e.g., {\"data_source_1\": 0.49, \"data_source_2\": 0.51}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["86aea8c2-7ffe-534a-a610-d467151fe5de"], "reference_pdf": ["357ecfc8-7a31-50d8-93ca-7aaf3e2ec1b1"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Twitter": 0.67, "Reddit": 0.15, "4chan": 0.18}, "ndigits": 2, "ignore_order": true, "lowercase": true}}} {"uuid": "99d793f3-fa1b-5d68-a62b-ace9cdeca097", "question": "How much higher is the percentage of \"食品#品质\" in Figure: Aspect category distributions (Test Set for Subtask3) in the Overview of the SIGHAN 2024 paper than the percentage of \"食品#品质\" in Figure: training data category distributions of HITSZ-HLT?", "answer_format": "Your answer should be a python float number with three decimal places, and the answer should be in the range of [0, 1]", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["defd3a4f-647b-500c-986d-b01341f1a543", "cb0bc639-0d1f-5d72-b108-e8ffc5aefbef"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.042, "ndigits": 3}}} {"uuid": "9a6dbbf5-4323-506d-aa1d-fbcbed930d35", "question": "In the paper titled \"TeamShakespeare at SemEval-2023 Task 6: Understand Legal Documents with Contextualized Large Language Models\", which model is the model designed for the NER task based on? For this base model, on which tasks is it evaluated in the main experiment of the source paper?
", "answer_format": "Your answer should be a single python list like [\"base_model_name\",[\"task1\",\"task2\",...]].Note that for the task name, you should use the full name, not the abbreviation.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["fbe995d8-d3b2-5ecc-973f-2976bb7d81cd"], "reference_pdf": ["7c278568-4bb8-5a1f-af34-4df3980282eb"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "LUKE", "lowercase": true}, {"gold": ["Entity Typing", "Relation Classification", "Named Entity Recognition", "Cloze-style Question Answering", "Extractive Question Answering"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "9a8224a4-c359-553a-b334-e1339c47a8a7", "question": "What are the main differences between the backbone model used in the main experiments of the paper \"VISION-BY-LANGUAGE FOR TRAINING-FREE COMPOSITIONAL IMAGE RETRIEVAL\" and the standard Transformer?", "answer_format": "Your answer should be a python strings.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["069d474f-f4de-5eb8-a6a9-97aad73f0e74"], "reference_pdf": ["6ea15eac-4060-5f1d-8629-76330e0b67c5"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The input of the backbone model is a sequence of flattened 2D patches x_p\\in \\mathbb{R}^{N\\times (P^2 \\cdot C)}.", "We prepend a learnable embedding to the sequence of embedded patches (z^0_0 = x_{class}), whose state at the output of the Transformer encoder (z^0_L) serves as the image representation y."], "question": "What are the main differences between the backbone model used in the main experiments of the paper \"VISION-BY-LANGUAGE FOR TRAINING-FREE COMPOSITIONAL IMAGE RETRIEVAL\" and the standard Transformer?"}}} {"uuid": "9adcad9f-bd01-54de-a6cd-61d0a77e487d", "question": "According to the paper \"Geometric 
Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges\", what are the three main flavours of GNN layers? What's the relationship between GTs and GNN?", "answer_format": "Your answer should be a python list of two elements, the first one is a list of three flavours, and the second one is a python string.", "tags": ["multiple", "image", "subjective"], "anchor_pdf": ["1f190ec5-f832-540b-a62f-6142eba5991a", "2bb974d2-1630-54c3-9eac-4ada100f56ec"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": ["Convolutional", "Attentional", "Message-passing"], "lowercase": true, "ignore_order": true}, {"reference_answer": "GTs are a special type of GNN", "question": "What's the relationship between GTs and GNN?"}]}}} {"uuid": "9b05a21b-7190-547c-ae1c-2a4b21a84826", "question": "Is there a paper that uses similarity scores to check knowledge in diffusion models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["628ac34d-d1e6-5417-abe2-803c8b3e5e67"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that uses similarity scores to check knowledge in diffusion models?", "reference_answer": "Multilingual Conceptual Coverage in Text-to-Image Models"}}} {"uuid": "9bd68a1f-c2de-5448-b7b0-ddbc43b2165a", "question": "What is the detailed procedure of progressive learning followed in the instruction tuning section of this paper?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["5a854b33-31c8-50f8-beaa-b3c93e9ae2f8"], "reference_pdf": ["02f4302f-4f6c-5092-8927-36a0b9cac06a"], "conference": [], "evaluator": {"eval_func":
"eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The detailed procedure of progressive learning used in Orca 2 starts with LaMA-2-7B or LLaMA-2-13B checkpoint and finetunes it on the train split of FLAN-v2 dataset for one epoch. Then they train on 5 million ChatGPT data from Orca 1 for 3 epochs, and train on the combination of 1 million GPT-4 data from Orca 1 and Orca 2's 817K data for 4 epochs.", "question": "What is the detailed procedure of progressive learning followed in the instruction tuning section of this paper?"}}} {"uuid": "9c1a0663-93b8-5ce8-9863-a837c86565c3", "question": "How many hours and separate recordings are contained in the STT4SG-350 corpus?", "answer_format": "Your answer should be s single python list of two integers, e.g. [200, 5674]. The first integer represents hours, the second integer represents recordings. Note that you should use exact numbers, not approximations.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7329e71a-7b1d-59ae-bdca-e4480c8f5350"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [343, 247527], "ignore_order": false}}} {"uuid": "9c51f6a0-85e7-51b9-a27c-679686eeb8e2", "question": "In the architecture of MolMIM, which encoder is used? 
What's the core idea behind the design of this encoder?", "answer_format": "Your answer should be a single python list of two strings, like [\"encoder_name\",\"sentences_about_core_idea\"]", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["23169b8e-0212-5d75-a84b-651a57b2e331"], "reference_pdf": ["114ffdfa-8150-5705-8818-1052107f5cff"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "Perceiver", "lowercase": true}, {"reference_answer": "The core idea of this encoder is to introduce a small set of latent units that form an \"attention bottleneck.\" Inputs must pass through this bottleneck, which solves the quadratic scaling problem of all-to-all attention in classical Transformers and decouples network depth from input size, enabling the construction of very deep models. The encoder iteratively attends to inputs, focusing its limited capacity on the most relevant parts based on previous steps. To compensate for the lack of explicit structures, position and modality-specific features are associated with each input element (e.g., pixels or audio samples).
These features can be learned or constructed using high-fidelity Fourier features, effectively tagging input units with a high-fidelity representation of position and modality.", "question": "What's the core idea behind the design of this encoder?"}]}}} {"uuid": "9c79b323-e07b-5af8-93b5-d69b4b8d0cff", "question": "In the benchmark C-LAP uses for image observations evaluation, Offline DV2 performs the best in which environment under the mixed setting?", "answer_format": "Your answer should be a python string, the name of the environment WITHOUT ANY explanation.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["02dc5526-ab21-5503-a205-68245f8e1efe"], "reference_pdf": ["8420a7d5-2b74-58c7-a898-a2f900dbff57"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "cheetah-run", "lowercase": true}}} {"uuid": "9ce8e94a-4eda-5f28-90cc-1342a03c3e51", "question": "What dataset does this paper (titled \"What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations\") use for evaluation? According to its source paper, how many structured annotations does this dataset collect in total? ", "answer_format": "Your answer should be a single python list, the first element is the string of the dataset name, the second element is an integer number.", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SOCIAL-CHEM-101", 365000], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["d7dc9bd8-6590-5278-8a64-b5a7e0f8f940"], "reference_pdf": ["2e1fe5bd-29fb-536b-9c10-d2b296accf35"]} {"uuid": "9ceca9e2-fa5e-5eb0-80be-d8f588116c1e", "question": "I'm interested in the method used by the UCB1-FLAD paper for dataset format transformation, and I would like to contribute to that method.
Where can I find the guidelines?", "answer_format": "Your answer should be a Python string, the website URL starting with \"https://\", as given in the paper.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["2c9c068a-72b3-544e-9944-2558a035e4f8"], "reference_pdf": ["8e079143-90b3-5f14-9212-8584323d96f0"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/bigscience-workshop/promptsource/blob/main/CONTRIBUTING.md", "lowercase": false, "ignore_blank": false}}} {"uuid": "9cf5ab0b-d8e0-5cee-b756-ba9f5f95b220", "question": "Among existing Math and STEM QA datasets, which dataset includes theorem except TheoremQA? In which paper is this dataset introduced?", "answer_format": "Your answer should be a list of two strings, the first element is the name of the dataset, and the second element is the title of the source paper.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MATH", "Measuring mathematical problem solving with the math dataset"], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["cea8e2ce-bf4a-53f5-9f63-9aaf9720500d"], "reference_pdf": []} {"uuid": "9d8e79c6-a8b8-5c1c-8b3c-e995166a26f7", "question": "In the paper that proposes TRL model which manages to surpass ChatGPT on some category on average according to the QATCH paper, what's the formula of the total loss in detail?", "answer_format": "Your answer should be a string, the formula in LaTeX format. 
Note that you should expand the three parts of the total loss.", "tags": ["multiple", "table", "formula", "subjective"], "anchor_pdf": ["1a999871-458c-5eec-8861-d010f4cfd7e6"], "reference_pdf": ["a7ee4e97-2c44-50ae-a600-90ba0d0066bc"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathcal{J}_{\\text{CS}} = \\frac{1}{|Columns|} \\sum_{co \\in Columns} \\text{CE}(p_{col}^{(co)}, \\mathbb{1}_{co=col}) + \\frac{1}{|Cells(col)|} \\sum_{c \\in Cells(col)} \\text{CE}(p_s^{(c)}, \\mathbb{1}_{c \\in C}) + \\alpha \\left(-\\log p_a(op_0)\\right)", "question": "What's the formula of the total loss in detail?"}}} {"uuid": "9e064ab4-16d4-572a-a37f-3121074570b0", "question": "In the Comparisons experiment of the Metaworld benchmark in the paper \"Prediction with Action: Visual Policy Learning via Joint Denoising Process\", which benchmarks performed the best on easier tasks and harder tasks, respectively (excluding the method proposed in this article)?", "answer_format": "Your answer should be a python list of strings, first element is the best performing benchmark on easier tasks, and the second element is the best performing benchmark on harder tasks. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["db33f34d-5174-5e5c-bf44-1cf9247b4836"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["GR-1", "GR-1"]}}} {"uuid": "9e0c4e78-b458-53fa-942a-e24a93316e0f", "question": "In the experimental results on StepGame of this paper, which one of the three PLMs gains the best performance when k=4? Please provide the name of the model. 
And what is this model's github link?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the model, and the github link of this model.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0a8f7898-2594-5643-91d3-0850019b7bf1"], "reference_pdf": ["3134099b-d3ac-56d3-898d-c77c7a99370e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ALBERT", "https://github.com/google-research/ALBERT"], "ignore_order": true, "lowercase": true}}} {"uuid": "9e5cea3f-285b-5879-a57a-8c4a19c0236d", "question": "Which dataset is GrailQAbility dataset derived from? How many literals are there in the source dataset?", "answer_format": "Your answer should be a single python list like [\"dataset_name\",100].The first element of the list is a string and the second is an integer number.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0d9f5091-a5c3-5d69-8f13-b9427d3f4ccd"], "reference_pdf": ["058d0055-8d50-5b52-ac1a-8c36d074e246"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["GRAILQA", 3239], "ignore_order": false, "lowercase": true}}} {"uuid": "9eb8dc14-f435-5774-ac7d-1bc63d5b72b4", "question": "Which website can I find the conversation video between Sheldon and Leonard?", "answer_format": "Your answer should be a pure text string starting with \"https\". 
DO NOT INCLUDE ANY OTHER INFORMATION OR CONTEXT IN THE ANSWER.", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://giaabaoo.github.io/TPD_website/", "lowercase": false}}, "anchor_pdf": ["1a8e9cd5-8ae1-52b9-84d5-b67bd9c07a21"], "reference_pdf": []} {"uuid": "9f1e23b7-05ab-512e-8568-8d6fc0e95993", "question": "In the smallest dataset that the given paper applies, how is the mutual information calculated?", "answer_format": "Your answer should be a string, the formula of mutual information in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["1d4fb8c0-7a14-5d6e-a388-c938f509cb2b"], "reference_pdf": ["19ef82b6-0fff-5fe3-9096-2c760d6bf264"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathbb{I}[k, w] =- \\sum_{k=0}^{N} \\langle \\text{Bin}(k | f^w(x), N) \\rangle \\log [\\langle \\text{Bin}(k | f^w(x), N) \\rangle]+ \\sum_{k=0}^{N} \\langle \\text{Bin}(k | f^w(x), N) \\log [\\text{Bin}(k | f^w(x), N)] \\rangle", "question": "How is the mutual information calculated?"}}} {"uuid": "9f4464f1-93bd-58ea-9d06-be56dfc0f60b", "question": "Which numerical reasoning paper first published a dataset that considers different types of size of numbers and their representations in arithmetic questions?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["720da577-c05b-517f-9971-4ce2c9bd53af"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which numerical reasoning paper first published a dataset that considers different types of size of numbers and their representations in arithmetic questions?", "reference_answer": "FERMAT: An Alternative to Accuracy for Numerical Reasoning"}}} {"uuid": 
"9f506412-ee1f-5e65-af24-4f2a07fa9948", "question": "Which object detector does this paper (titled \"Can VLMs Play Action Role-Playing Games? Take Black Myth Wukong as a Study Case\") use to assist the VLMs in better extracting useful information? On which datasets is this object detector pre-trained?", "answer_format": "Your answer should be a list like [\"detector_name\", [\"dataset1\",\"dataset2\",...]], where detector_name is the name of the object detector and [\"dataset1\",\"dataset2\",...] is a list of dataset names (abbreviation) on which the object detector is pre-trained.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["c4b97d0c-1f8b-5e9e-80f6-3e02de9cc3c6"], "reference_pdf": ["a439078f-e4c3-50b4-812c-3dd7d88b7592"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "Grounding DINO", "lowercase": true}, {"gold": ["O365", "OI", "GoldG", "COCO", "Cap4M", "RefC"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "a06cf968-8d7c-5d7a-b203-91bb312150b7", "question": "Is there any paper that uses data collected from the Dark Web, specifically onion domains, to pretrain a language model?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f2a4964c-d9b6-5dbf-840d-cdebda82365c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that uses data collected from the Dark Web, specifically onion domains, to pretrain a language model?", "reference_answer": "DarkBERT: A Language Model for the Dark Side of the Internet"}}} {"uuid": "a0f6a1ac-45b3-5017-b320-9c698ae14d7a", "question": "Which one is newer among the public datasets used in the main experiment of this paper (\"Query Routing for Homogeneous
Tools: An Instantiation in the RAG Scenario\")? Which link may I refer to to get this dataset?", "answer_format": "Your answer should be a python list like [\"string1\", \"string2\"]. The first element should be the name(abbrievation) of the dataset. The second element should be a string, representing the link to the dataset.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["047a3835-6aad-5b49-b47d-bc183f79681f"], "reference_pdf": ["3d08462e-d6c6-5703-a6e4-132ed166aaf9"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CDQA", "https://github.com/Alibaba-NLP/CDQA"], "ignore_order": false, "lowercase": true}}} {"uuid": "a1096589-1d1d-5577-bc3c-abd8a43a5b57", "question": "What are the parameters of the base models used in the paper \"PsychoLex: Unveiling the Psychological Mind of Large Language Models\", including Layers, Model Dimension, FFN Dimension, Attention Heads and Key/Value Heads?", "answer_format": "Your answer should be a nested dictionary, e.g., {'base_model1': {'Layers': 12, 'Model Dimension': 768, 'FFN Dimension': 3072, 'Attention Heads': 12, 'Key/Value Heads': 12}}", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["944cd92d-07b5-55fd-a43a-ec5febd55fd0"], "reference_pdf": ["7dde0f1d-85e9-5fe9-9eb6-7d8e80c39f24"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"LLaMA 8B": {"Layers": 32, "Model Dimension": 4096, "FFN Dimension": 14336, "Attention Heads": 32, "Key/Value Heads": 8}, "LLaMA 70B": {"Layers": 80, "Model Dimension": 8192, "FFN Dimension": 28672, "Attention Heads": 64, "Key/Value Heads": 8}}, "ignore_order": true, "lowercase": true, "ignore_blank": true, "threshold": 90}}} {"uuid": "a18be027-94fa-53c8-9055-0f3066cc7ae8", "question": "In the paper \"Distributional Scaling Laws for Emergent Capabilities\", what empirical evidence supports the RASP-Generalization Conjecture in the context of transformers' 
length generalization?", "answer_format": "Your answer should be a sentence explaining the empirical evidence.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["0e2b3701-18a7-55f9-9d0c-973133e59a85", "a75bc472-a98f-567f-af29-5df27fdff64b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper \"Distributional Scaling Laws for Emergent Capabilities\", what empirical evidence supports the RASP-Generalization Conjecture in the context of transformers' length generalization?", "reference_answer": "The authors of \"What Algorithms can Transformers Learn? A Study in Length Generalization\" provide empirical evidence by demonstrating that transformers exhibit strong length generalization on tasks such as parity and addition when these tasks can be represented by short RASP programs. They show that by leveraging the RASP framework, transformers can generalize to sequences longer than those seen during training, supporting the conjecture that the simplicity of the RASP representation correlates with successful length generalization."}}} {"uuid": "a1bcf4ae-a49c-559d-83be-7e34162877d1", "question": "What tasks were proposed in the paper of the dataset used in the experiment of the paper?", "answer_format": "Your answer should be a python list of tasks, e.g. [\"task1\", \"task2\", ...]. 
YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["610d8e02-d6bb-52e3-801d-0dbcdb5310db"], "reference_pdf": ["20ec131c-808d-59b3-8554-b5a68b02968e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Grounding Span Prediction", "Agent Response Generation"], "ignore_order": true, "lowercase": true}}} {"uuid": "a1f5f36d-4119-508c-8484-38d296db5e04", "question": "In the paper titled \"ON-TRAC consortium systems for the IWSLT 2023 dialectal and low-resource speech translation tasks\", which dataset is used when training the translation model X $\rightarrow$ FR, EN? Which institute is this dataset released by?", "answer_format": "Your answer should be a single python list like [\"dataset_name\",\"institute_name\"].Note that you don't have to indicate the language pair in the dataset name.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["ffda5466-8659-558a-928e-91f7701a8325"], "reference_pdf": ["bff17d39-34ec-559c-82a6-644050316af8"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_element_included", "eval_element_included"], "eval_kwargs_list": [{"gold": ["CoVoST-2", "CoVoST 2", "CoVoST2"], "lowercase": true}, {"gold": ["Facebook AI", "Facebook"], "lowercase": true}]}}} {"uuid": "a205abd2-6b89-55ea-afc9-8578fa39bf0d", "question": "In the results of topic modeling through LDA, which keyword is the most frequent among all the topics? How many times did it appear? The models' performance was measured using BERT-score. 
Which conference did the BERT-score authors present BERT-score at?", "answer_format": "Your answer should be a brief text containing the most frequent keyword with its frequency and the conference name.", "tags": ["multiple", "metadata", "subjective"], "anchor_pdf": ["600defbf-5f93-5854-bad1-1054d180a120"], "reference_pdf": ["0a58057a-a09f-5b93-9aeb-0243adcf3eef"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The most frequent keyword is 'Missing' and it appeared in 3 topics. The BERT-score authors presented BERT-score at ICLR 2020.", "question": "In the results of topic modeling through LDA, which keyword is the most frequent among all the topics? How many times did it appear? The models' performance was measured using BERT-score. Which conference did the BERT-score authors present BERT-score at?"}}} {"uuid": "a24c70e9-4657-544f-aead-7f59db7b62b5", "question": "Which datasets were used in the specific private text domain for experiments with the document-level machine translation framework in the paper \"Granularity is crucial when applying differential privacy to text: An Investigation for Neural Machine Translation\"?", "answer_format": "Your answer should be a python list, e.g., ['dataset1', 'dataset2', ...].
YOU MUST USE THE EXACT NAMES OF THE DATASETS, RATHER THAN ABBREVIATIONS OR ALIASES.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["07882218-f745-5ca9-a420-67adc943de81"], "reference_pdf": ["51c9f7be-8feb-583b-8574-fba019561d8e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Business Scene Dialogue Corpus", "ClinSPEn-CC"], "ignore_order": true, "lowercase": true, "ignore_blank": true}}} {"uuid": "a2985096-8453-5fb7-9066-6f505c734248", "question": "List the names of the baselines used in the paper \"On the Compositional Generalization in Versatile Open-domain Dialogue\", along with the titles of papers that proposed these baselines.", "answer_format": "Your answer should be a Python list of baseline-title pair, e.g., [[\"baseline1\", \"title1\"] , [\"baseline2\", \"title2\"], ...]. YOU MUST USE THE EXACT TEXT FROM THE PAPER AND THE FULL TITLE TEXT OF THE PAPERS.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["3107f6a8-1939-5af0-b3d8-06d7aa66158d"], "reference_pdf": ["f6e91a91-0b1e-5280-8522-a20492033f16", "e0cf46b4-e3cf-5d54-8c0b-c661214fe349", "21df0715-990d-58d3-b218-280ac3a84c8f", "60f20fa5-d268-522d-9cc2-5f7321af4f82"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [["BART", "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"], ["R2C2", "Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion"], ["Prefix-Tuning", "Prefix-Tuning: Optimizing Continuous Prompts for Generation"], ["Modular Skill", "Combining modular skills in multitask learning"]], "lowercase": true, "ignore_order": true, "ignore_blank": true, "threshold": 95}}} {"uuid": "a29af26f-73cc-5522-a388-638ecd3c09d3", "question": "In the biography generation task that the Self-RAG paper applied, what're the five categories of freqValue?", 
"answer_format": "Your answer should be a Python list of 5 strings, the 5 categories of freqValue.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["23e4f6c4-0d28-52be-8ab4-7aef1c19b5ce", "6eed03f6-ebd4-5d2a-ba17-cfc04cb0e820"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Very rare", "Rare", "Medium", "Frequent", "Very frequent"], "ignore_blank": true, "ignore_order": true, "lowercase": true}}} {"uuid": "a2ae7148-87aa-513c-81a3-921c09c7e35e", "question": "According to \"Casting Hybrid Digital-Analog Training into Hierarchical Energy-Based Learning\", how can the delta of energy function be used in the EP gradient of EBM and BP gradient of FF module?", "answer_format": "Your answer should be in fluential English.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["93d4e1d2-32ff-5758-87d6-64606cf0e46a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The EP gradient uses the difference of the second-order gradient of energy function.", "The BP gradient uses the difference of the fourth-order gradient of energy function."], "question": "According to \"Casting Hybrid Digital-Analog Training into Hierarchical Energy-Based Learning\", how can the delta of energy function be used in the EP gradient of EBM and BP gradient of FF module?"}}} {"uuid": "a2f8cca9-a522-5fd2-bf2e-829fc82f749e", "question": "From which three aspects do the authors evaluate the unlearned model?", "answer_format": "Your answer should be a python list, each element is a string.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["2d09e6fb-e268-5d5d-a362-9effcfe79d9f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Performance on the Forget Set", "Performance on the Retain Set", "Performance on General 
Downstream Tasks"], "question": "From which three aspects do the authors evaluate the unlearned model?", "ignore_order": true}}} {"uuid": "a343069f-cdd9-58b2-9abb-afb91e8f5360", "question": "In the paper \"Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors\", on which dataset does PIF method reach its highest accuracy? In the paper where that dataset is proposed, which LLM performed the best, and how to account for its performance?", "answer_format": "Your answer should be a Python list of three strings. The first string indicating the full name of the dataset, the second indicating the name of the LLM that performed the best, and third indicating the reason. e.g. [\"dataset\", \"LLM\", \"reason\"].", "tags": ["multiple", "subjective", "table"], "anchor_pdf": ["0d85f51e-7304-5a37-8876-8b458b37d114"], "reference_pdf": ["c2c5bf1a-3d4a-508e-a217-b3e4b78ce7f7", "a87a7490-623a-54af-bad6-ef68b0757499", "6a224ba5-c711-5435-b425-9bacbcd552a6"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_string_fuzzy_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "CommonsenseQA", "lowercase": true}, {"gold": "BERT", "fuzz_method": "partial_ratio", "threshold": 95, "lowercase": true}, {"reference_answer": "To understand the performance of BERT-LARGE, we analyzed 100 examples from the development set (Table 6). We labeled examples with categories (possibly more than one per example) and then computed the average accuracy of the model for each category. We found that the model does well (77.7% accuracy) on examples where surface clues hint to the correct answer. Examples that involve negation or understanding antonyms have lower accuracy (42.8%), similarly to examples that require factoid knowledge (38.4%). 
Accuracy is particularly low in questions where the correct answer has finer granularity compared to one of the distractors (35.4%), and in cases where the correct answer needs to meet a conjunction of conditions, and the distractor meets only one of them (23.8%).", "question": "How to account for BERT-Large's performance on CSQA?"}]}}} {"uuid": "a3555904-aa5f-5f4d-b823-b51bacf04995", "question": "Which paper introduced the human-evaluated timeliness metric for misinformation detection?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["bb433e95-b7ed-551e-beb7-54a999be2556"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper introduced the human-evaluated timeliness metric for misinformation detection?", "reference_answer": "Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of COVID-19 Treatments"}}} {"uuid": "a36db3b8-16d0-5a14-bebb-3e3eea363d27", "question": "What are some evaluation benchmarks for LLM privacy at inference time, targeted towards model input and NOT the training data.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ad5524df-ece5-5286-b003-779e6b221ccd"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What are some evaluation benchmarks for LLM privacy at inference time, targeted towards model input and NOT the training data.", "reference_answer": "CAN LLMS KEEP A SECRET? TESTING PRIVACY IMPLICATIONS OF LANGUAGE MODELS VIA CONTEXTUAL INTEGRITY THEORY"}}} {"uuid": "a3c6958b-aed2-5e28-8dea-5d0b88550ac8", "question": "According to this survey, what're the three most recent decoder-only LLMs for NL2Code? 
How many programming languages do their training datasets each contain?", "answer_format": "Your answer should be a Python dictionary of 3 key-value pairs, where each key is a string and each value is an integer.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["37758401-6101-554f-8f1e-4e2995443314"], "reference_pdf": ["6590d875-4982-56a0-8bd7-e67f4bc777c9", "c1fed3f4-7a5f-5877-97e5-aa508eac885e", "4badd0e5-53ce-5044-b6c6-abec723c34aa", "01c9329e-9789-52dc-9eed-c99a8ef88a5c", "3c3b3cfc-e4f2-52b9-899a-2f9cac25dafc"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"CodeGeeX": 23, "BLOOM": 13, "SantaCoder": 3}}}} {"uuid": "a4359833-c1ee-5d01-b18d-1d1a78c749f0", "question": "Is there any work that attacks language models in dialogue generation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["93b02a8b-724f-5ead-b30a-a704243dbdf4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any work that attacks language models in dialogue generation?", "reference_answer": "White-Box Multi-Objective Adversarial Attack on Dialogue Generation"}}} {"uuid": "a4913d15-35e1-511a-affd-ef4782d08df9", "question": "I read two papers called Chain of Ideas and Nova, and they both involve the evaluation of the characteristics of ideas, such as quality, novelty, and so on. 
I want to know the specific measurement aspects of these two papers.", "answer_format": "Your answer should be a brief text.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["3abcffc7-5a8b-5dac-8e48-74a736149d26", "8ae3051b-db29-53c3-89c6-64202b13cec6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The Chain of Ideas paper evaluates the novelty, significance, clarity, feasibility, and expected effectiveness of ideas. The Nova paper evaluates the quality, novelty, and diversity of ideas.", "question": "I read two papers called Chain of Ideas and Nova, and they both involve the evaluation of the characteristics of ideas, such as quality, novelty, and so on. I want to know the specific measurement aspects of these two papers."}}} {"uuid": "a49ce3ea-3977-5eb8-8598-47342bcd60a3", "question": "In figure 2, what is the next step to take after generating the supportive logical forms? Please use the name presented in this figure. And then for this step, what are the criteria held in this paper?", "answer_format": "Your answer should be a list of two strings, the first element is the name of the next step, and the second element is several sentences about the criteria.", "tags": ["image", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_scoring_points_with_llm"], "eval_kwargs_list": [{"gold": "Prompt Construction", "lowercase": true}, {"scoring_points": ["The templates should break up the generation of a complicated question into a step by step process.", "The templates should clearly identify the subcomponent in a logical form that requires LLMs to focus on for each step."], "question": "For this step, what are the criteria held in this paper?"}]}}, "anchor_pdf": ["d5242995-ff5a-54a1-a27a-a1a139974a5e"], "reference_pdf": []} {"uuid":
"a4ce44e5-a7d3-5043-981c-99695dd766e5", "question": "Which paper first proposed extracting the pair of target and stance from sentences?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["64b1dbdb-3c7e-53d0-af09-dfd6fc560999"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposed extracting the pair of target and stance from sentences?", "reference_answer": "A New Direction in Stance Detection: Target-Stance Extraction in the Wild"}}} {"uuid": "a5387805-be6f-5199-b97a-d01ece58dd35", "question": "In the baseline construction of the experiment of TrojText, which models are employed? Where can I get the code or data of these models?", "answer_format": "Your answer should be a single python list like [[\"model_name1\",\"model_name2\",...],[\"https://github.com/a/b\",\"https://github.com/c/d\",...]].Note that you should choose the most concise way to express the name of the model.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["e6260126-455c-5dea-ac6d-b1c0e38c210d"], "reference_pdf": ["4adfe430-cd54-5951-8947-9bb54f9109dd", "a602cfa7-5512-5820-ae45-0d19e51f1db9"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": ["Hidden Killer", "TBT"], "ignore_order": true, "lowercase": true}, {"gold": ["https://github.com/thunlp/HiddenKiller", "https://github.com/adnansirajrakin/TBT-2020"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "a629b08b-2d1e-5a0e-a39a-007749de7759", "question": "Is there a paper that uses the tree structure of math equations in autoregressive language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": 
["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["01064f26-6020-56e8-b9e8-92f4f02e2ec8"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that uses the tree structure of math equations in autoregressive language models?", "reference_answer": "Tree-Based Representation and Generation of Natural and Mathematical Language"}}} {"uuid": "a64654b4-b4c5-5167-b58b-529530c1be68", "question": "Is there any paper about style transfer for stories?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["58043e88-41cf-519b-aa39-4850ef7a6dd5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper about style transfer for stories?", "reference_answer": "StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing"}}} {"uuid": "a69b7df9-1ecd-579e-85ae-17de9f0dfbba", "question": "Are there any examples of using dense phrase retrieval systems in the automatic curation of entity dictionaries?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e8db11cf-2639-5720-bbaa-647e0982aa43"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any examples of using dense phrase retrieval systems in the automatic curation of entity dictionaries?", "reference_answer": "Automatic Creation of Named Entity Recognition Datasets by Querying Phrase Representations"}}} {"uuid": "a6e178bd-06c1-58c3-b6f1-e72b0cab6a03", "question": "Which paper first studied differential privacy for in-context learning to prevent prompt leakage 
attacks?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["d37e81e7-bc28-56c6-bb49-07b555bd1051"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first studied differential privacy for in-context learning to prevent prompt leakage attacks?", "reference_answer": "PRIVACY-PRESERVING IN-CONTEXT LEARNING FOR LARGE LANGUAGE MODELS"}}} {"uuid": "a743e85d-4b2c-5671-a371-578b2f0af908", "question": "Is there any paper that explores and annotates the effectiveness of using testimonials or anecdotes in discussions?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ea647a99-8c25-5ecb-b2eb-3b03feb5a5a7"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that explores and annotates the effectiveness of using testimonials or anecdotes in discussions?", "reference_answer": "StoryARG: a corpus of narratives and personal experiences in argumentative texts"}}} {"uuid": "a762550d-b54a-5e5f-8fcf-d3be3058cd28", "question": "Could you recommend a contemporary research paper that has advanced natural language watermarking quality through algorithmic methods?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["66c96650-9f7c-5f0c-ab6c-afa95cf65f87"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Could you recommend a contemporary research paper that has advanced natural language watermarking quality through algorithmic methods?", "reference_answer": "Robust 
Multi-bit Natural Language Watermarking through Invariant Features"}}} {"uuid": "a7707667-5eee-5708-b4d6-3a5368b417f8", "question": "What's the framework proposed in this paper (\"Global Learning with Triplet Relations in Abstractive Summarization\")? Which framework is it similar to? In the source papers of these two frameworks, how many common datasets are they experimented on?", "answer_format": "Your answer should be a python list like [\"string1\", \"string2\", integer]. The first element should be a string, representing the name (abbreviation) of the framework proposed in this paper. The second element should be a string, representing the name (abbreviation) of the similar framework. The third element should be an integer, representing the number of common datasets experimented on.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["39407fcc-f073-5e75-b8d2-d85cf5b672a0"], "reference_pdf": ["3c6d1c62-098f-52cc-8079-b1f053cbe850"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["IARSum", "GSum", 2], "ignore_order": false, "lowercase": true}}} {"uuid": "a7d785c5-bcc8-5dae-aabd-bfe6a5f61174", "question": "What is the difference between using two open-source LLMs in the experiments of the paper \"ARE LARGE LANGUAGE MODELS BAYESIAN? A MARTINGALE PERSPECTIVE ON IN-CONTEXT LEARNING\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["bdd54ef7-963a-5aac-b825-751cd425a114"], "reference_pdf": ["6b887e82-ca3f-59e1-ae8a-f528919c1334", "7bf2a9fc-a2da-5668-b577-9026e3464117"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Sliding Window Attention. Mistral-7B exploits the stacked layers of a transformer to attend information beyond the window size W .
The hidden state in position i of layer k, h_i, attends to all hidden states from the previous layer with positions between i-W and i.", "Rolling Buffer Cache. A fixed attention span means that Mistral-7B can limit its cache size using a rolling buffer cache. The cache has a fixed size of W, and the keys and values for timestep i are stored in position i mod W of the cache. As a result, when the position i is larger than W, past values in the cache are overwritten, and the size of the cache stops increasing.", "Pre-fill and Chunking. Mistral-7B can pre-fill the (k, v) cache with the prompt. If the prompt is very large, it can be chunked into smaller pieces, and the cache can be pre-filled with each chunk. For this purpose, the window size can be selected as the chunk size. For each chunk, Mistral-7B thus needs to compute the attention over the cache and over the chunk."], "question": "What is the difference between the two open-source LLMs used in the experiments of the paper \"ARE LARGE LANGUAGE MODELS BAYESIAN? A MARTINGALE PERSPECTIVE ON IN-CONTEXT LEARNING\"?"}}} {"uuid": "a803c5e9-ad61-5580-8819-66875022e19b", "question": "In order to improve Parrot's abilities, which method proposed in the paper \"Direct Preference Optimization: Your Language Model is Secretly a Reward Model\" is used to train the model? Which method is compared with it under distribution shifts?", "answer_format": "Your answer should be a Python list of two strings, answering the two questions respectively.
You must use abbreviations as given in the papers.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["a8508753-17fc-5f40-8abf-245ecbfe151e", "d35109d5-9f0a-5d99-ae90-dcaabf4bb74e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DPO", "PPO"], "lowercase": false, "ignore_order": false}}} {"uuid": "a903623b-95ca-5dc2-a8a8-3c9851d02779", "question": "Is there any paper that employs code LLMs to iteratively generate and refine code with execution results to improve the performance?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["76ca159b-68f6-5298-8f7e-b6a786b7201d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that employs code LLMs to iteratively generate and refine code with execution results to improve the performance?", "reference_answer": "Self-Edit: Fault-Aware Code Editor for Code Generation"}}} {"uuid": "a911834c-2700-51fa-8a37-ab3649fdd8d7", "question": "In section 4, what research questions do the authors aim to answer?", "answer_format": "Your answer should be plain text DIRECTLY FROM THE PDF.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In this section, we aim to answer three research questions: (RQ1) How do different medical generation models perform under our evaluation? (RQ2) How is the evaluation quality of DOCLENS compared to existing metrics?
(RQ3) How is the evaluation quality of DOCLENS computed with open-source evaluators compared to proprietary ones?", "question": "In section 4, what research questions do the authors aim to answer?"}}, "anchor_pdf": ["b8ba5cee-e8d9-504b-a2a7-dc210b814ece"], "reference_pdf": []} {"uuid": "a9275d5c-ec5c-5fd2-b8de-0866aaee4fb8", "question": "Which paper combines the advantages of different frameworks for grammar error correction (GEC) and achieves good performance?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f7652651-6423-593f-bbea-bf690997b176"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper combines the advantages of different frameworks for grammar error correction (GEC) and achieves good performance?", "reference_answer": "TemplateGEC: Improving Grammatical Error Correction with Detection Template"}}} {"uuid": "a927fcee-f0a7-50cd-b5a7-ca8ab5c8eb23", "question": "What are the institutions of the first author and corresponding author of this paper?", "answer_format": "Your answer should be a Python list of length 2 containing the institution names respectively, e.g., [\"first_author_institute\", \"corresponding_author_institute\"].
If there are multiple first authors or corresponding authors, please replace the corresponding institution name with a name list, e.g., [[\"first_author1_institute\", \"first_author2_institute\", ...], [\"corresponding_author1_institute\", \"corresponding_author2_institute\", ...]].", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Beihang University", ["The University of Sydney", "The Hong Kong Polytechnic University"]], "ignore_order": false}}, "anchor_pdf": ["3d2fcb43-2cda-5645-99aa-da78c6cfd23f"], "reference_pdf": []} {"uuid": "a93430e0-ae3b-585d-8622-ed9b5844da8c", "question": "In the Experiment Section of the paper, what is the overall framework of the baseline model achieving the second-best BERTScore on the dataset LOCOMO?", "answer_format": "Your answer should be a python string describing the detailed overall framework of the baseline model.", "tags": ["multiple", "subjective", "table"], "anchor_pdf": ["dbad7ff2-b141-56da-869e-e2eacc675417"], "reference_pdf": ["ffa706cf-0129-55f6-b463-6c5a458933c2", "b7f2cb42-c26b-5b4d-b8dd-6365498dbd01"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The overall framework of the MemoChat pipeline is a memorization-retrieval-response inner-thinking loop. Very different from traditional methods that retrieve directly over the accumulated dialogues, the chatbot automatically builds and updates a structured on-the-fly memo, storing past dialogues in categories.
Then, the retrieval is conducted over all recordings according to their topics and summaries.", "question": "In the Experiment Section of the paper, what is the overall framework of the baseline model achieving the second-best BERTScore on the dataset LOCOMO?"}}} {"uuid": "a96944de-c900-55f0-9b3b-cccb597a8b71", "question": "Which category of website accounts for the largest proportion in the dataset MC2?", "answer_format": "Your answer should be a phrase indicating the category DIRECTLY FROM THE PDF WITHOUT ANY MODIFICATION OR EXPLANATION.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "News"}}, "anchor_pdf": ["16142be2-ac28-58e5-9271-8af299b18d91"], "reference_pdf": []} {"uuid": "a98997f3-d4ad-5739-91dd-4dc08fb626a6", "question": "According to the paper that proposes the pixel-level similarity metric that the LG-VQ paper employs, what's the metric's value, given that it's an 8-bit grayscale image with MSE=1?", "answer_format": "Your answer should be a float, rounded to 2 decimal places.", "tags": ["multiple", "formula", "objective"], "anchor_pdf": ["1d9b8fd8-3443-5191-9f86-602a3e1c7e6b"], "reference_pdf": ["51490da9-141c-5d9f-bed1-fab6e93b7b65"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 96.26, "ndigits": 2}}} {"uuid": "a9aba86b-c608-5d87-9039-cc130911a03d", "question": "What molecular representation learning paper introduced a benchmark that focuses on learning over thermodynamically-accessible conformer ensembles across diverse molecular properties and chemical reactions?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["70940d87-34ac-5a74-b1c6-707c361fc017"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What molecular
representation learning paper introduced a benchmark that focuses on learning over thermodynamically-accessible conformer ensembles across diverse molecular properties and chemical reactions?", "reference_answer": "Learning Over Molecular Conformer Ensembles: Datasets and Benchmarks"}}} {"uuid": "aa4ec90c-b162-5319-9a00-ca47101c24f8", "question": "Which paper showed that social relationships were helpful for identifying inappropriate messages?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4dbfecf3-9967-5657-a56e-d0f3044bd069"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper showed that social relationships were helpful for identifying inappropriate messages?", "reference_answer": "Your spouse needs professional help: Determining the Contextual Appropriateness of Messages through Modeling Social Relationships"}}} {"uuid": "aa5598d0-e570-5f39-afd6-159fd696bdc6", "question": "What paper mitigates language model sampling errors due to the softmax bottleneck?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["92babc1d-b10e-52dc-aaf3-559d55028a8e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper mitigates language model sampling errors due to the softmax bottleneck?", "reference_answer": "CLOSING THE CURIOUS CASE OF NEURAL TEXT DEGENERATION"}}} {"uuid": "aa73c1ee-05cb-5570-bfac-bf86eb94caeb", "question": "What is the reason why Soft MoE cannot be applied to autoregressive models currently?", "answer_format": "Your answer should be a python string", "tags": ["single", "subjective", "text"], "anchor_pdf": 
["36f7c548-f8c2-5fc9-ba12-a35ac045bc25"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the reason why Soft MoE cannot be applied to autoregressive models currently?", "reference_answer": "Soft MoE cannot be applied to autoregressive models because it needs to merge all input tokens together, which would break the causality requirement between past and future tokens that must be maintained during training in autoregressive models."}}} {"uuid": "aadd5754-71f6-5b8d-ad9e-d7d8e24975ce", "question": "Who is the first author of WebArena? How many papers of theirs are cited in the paper? Which are they?", "answer_format": "Your answer should be a Python list of 3 elements. The first one is a string serving as the first author name of WebArena. The second one is an integer indicating the number of self-referenced papers. The third one is a string list storing the titles of self-referenced papers.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["5a2b0d5c-6b51-5bbd-a001-a15f19f65a98"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_int_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "Shuyan Zhou"}, {"gold": 6}, {"gold": ["Pal: Program-aided language models", "Language models of code are few-shot commonsense learners", "Hierarchical prompting assists large language model on web navigation", "Execution-based evaluation for open-domain code generation", "Hierarchical control of situated agents through natural language", "Show me more details: Discovering hierarchies of procedures from semi-structured web data"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "aaf4f321-f2c3-5cd3-9924-d22ed02ed43c", "question": "Calculate the increase in throughput when the batch size increases from 24 to 64 for H2O (20%) at a sequence length
of 2048+2048 on an A100 GPU.", "answer_format": "Your answer should be a Python float number rounded to 1 decimal place. e.g. 20.3", "tags": ["table", "objective", "single"], "anchor_pdf": ["5e63d90b-a1a4-5e6e-9845-5177eba99970"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 242.1, "ndigits": 1}}} {"uuid": "ab369ade-a399-5f3a-82ba-13c02f4a91a7", "question": "Are the code and data of this paper publicly available?", "answer_format": "Your answer should be a simple \"yes\" or \"no\" WITHOUT PUNCTUATION OR EXPLANATION.", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "yes", "lowercase": true}}, "anchor_pdf": ["ae9b4a06-0642-5512-9150-656cf166c470"], "reference_pdf": []} {"uuid": "ab9138b7-f6f2-5fd0-9430-1d0664ceb5c3", "question": "Which paper first studied the efficiency robustness of multi-exit language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9e756474-60be-5dd5-9afd-3d8bca3d1133"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first studied the efficiency robustness of multi-exit language models?", "reference_answer": "Dynamic Transformers Provide a False Sense of Efficiency"}}} {"uuid": "ab9cd1cf-213f-5551-b4fb-104a3ba51266", "question": "Which paper shows that in instruction tuning, the instructions can be compressed to small supporting sets of words that provide useful information?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["5bbca40f-d190-5de8-ae4c-8482e547794a"], "conference": ["acl2023"], "evaluator": {"eval_func":
"eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper shows that in instruction tuning, the instructions can be compressed to small supporting sets of words that provide useful information?", "reference_answer": "Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learning"}}} {"uuid": "ac041b6a-467d-53ce-8419-f283f3e0d7aa", "question": "Is there any paper that reveals annotation problems in cross-lingual summarization caused by decomposing the task into translation and summarization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a0688b09-70b4-58a2-96bd-ae79a98a2b5a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that reveals annotation problems in cross-lingual summarization caused by decomposing the task into translation and summarization?", "reference_answer": "Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New Benchmark with Improved Annotation"}}} {"uuid": "ac2f076d-19cf-5703-b728-dc3077dd410e", "question": "The experiment section of this paper introduces a new metric, S^2MATCH. What is the main difference between it and the metric used in Appendix D? Answer with one formula.", "answer_format": "You only need to provide one definition formula of S2MATCH as a Python string. You don't need to explain the formula or variables.", "tags": ["multiple", "subjective", "formula"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "softMATCH = 1 - d(\\mathbf{x}, \\mathbf{y})", "question": "The experiment section of this paper introduces a new metric, S^2MATCH. What is the main difference between it and the metric used in Appendix D?
Answer with one formula."}}, "anchor_pdf": ["aad25ce3-2d26-5a48-9653-a41e0a749e55"], "reference_pdf": ["8a4a1a61-43e9-5e08-a31f-4b7c71754ae4"]} {"uuid": "ac4cbd2d-98ea-5717-acc3-00a4084623ab", "question": "What baselines (excluding MLP) are used in the paper proposing the model zoo?", "answer_format": "Your answer should be a python list of the exact names of the baselines, e.g., ['baseline1', 'baseline2', ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["f31f02ac-25e9-5aba-a7f5-a0c7e6b52022"], "reference_pdf": ["a4111ee3-86cb-5a5a-8393-a0d5b1463914"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["INR2Vec", "Transformer"], "ignore_order": true, "lowercase": true}}} {"uuid": "ad341e2b-cb59-5695-b41a-9912be57ea77", "question": "What is the shape of $W$ in Equation (3)? And what about $W_l$ and $W_v$ in Equation (5)?", "answer_format": "Your answer should be a sentence describing the shapes of $W$, $W_l$ and $W_v$ in detail.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The shape of $W$ in Equation (3) is $\\mathbb{R}^{d_s \\times d_l}$, where $d_s$ is the dimension of the vision features, and $d_l$ is the dimension of the language features. The shapes of $W_l$ and $W_v$ in Equation (5) are both $\\mathbb{R}^{d \\times 1}$.", "question": "What is the shape of $W$ in Equation (3)?
And what about $W_l$ and $W_v$ in Equation (5)?"}}, "anchor_pdf": ["648e3d50-375b-5189-b6b0-e0520626716e"], "reference_pdf": []} {"uuid": "ad6b9fa5-cac1-531f-8b8c-c82fe6665863", "question": "For the VQA DOC task, what are the optimal values of alpha and beta?", "answer_format": "Your answer should be a python list of two numbers", "tags": ["single", "objective", "table"], "anchor_pdf": ["60f440ff-3076-5b6e-96d7-0819845b691b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [0, 5]}}} {"uuid": "ae713f72-a3ae-5bdd-8704-f849359fe19b", "question": "Which model did both anchor_pdfs use for experiments?", "answer_format": "Your answer should be a python string, and it should be the model name.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["a4114462-9b4a-51c8-ae5e-a1591a301c88", "357d672a-550b-5dc9-9bc7-fb8b429b07f6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "BERT"}}} {"uuid": "af63f4bc-4bf0-5521-aa2d-c032a1b947c8", "question": "Is there any paper that address attacks on code models by leveraging the semantic information of the source code through attention scores, while also guaranteeing that the generated adversarial examples can always be compiled successfully?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3bdbfcf7-eb71-5521-88db-6e7ff32e9dfa"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that address attacks on code models by leveraging the semantic information of the source code through attention scores, while also guaranteeing that the generated adversarial examples can always be compiled successfully?", "reference_answer": "DIP: Dead code Insertion based Black-box Attack 
for Programming Language Model"}}} {"uuid": "afe1dc15-35c0-54fb-8934-356aa8803efe", "question": "In the paper that proposes GLPFT, which training dataset is larger? In that dataset, what's the format of the data?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the dataset, the second is the format of the data in \"A-B pairs\" format, as given in the paper. e.g. [\"MMLU\", \"question-answer pairs\"].", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["8e85718a-1210-52bf-9aee-9b2bb2d0ae59"], "reference_pdf": ["1dffea3e-12d5-5a96-82db-480f1579040e", "02c95246-2bbb-510a-b674-6d7a1fe35ef5"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ToolBench", "instruction-solution pairs"], "ignore_order": false, "ignore_blank": true, "lowercase": true}}} {"uuid": "b0b2b9a1-fa76-5027-9ba7-84a9876c07ac", "question": "Is there any paper that uses token-level loss to enhance sentence-level embedding learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["da5e76ed-465a-5ee2-a9d2-0db24929526f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that uses token-level loss to enhance sentence-level embedding learning?", "reference_answer": "Dual-Alignment Pre-training for Cross-lingual Sentence Embedding"}}} {"uuid": "b10c0e3a-48e4-5878-bbff-1611969ca685", "question": "What is the first paper that uses the generalized linear model to analyze multi-neural spike train data?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["faa078cd-7662-5389-ba65-1eebf4a8cc58"], "conference": ["iclr2024"], "evaluator": {"eval_func": 
"eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What is the first paper that uses the generalized linear model to analyze multi-neural spike train data?", "reference_answer": "ONE-HOT GENERALIZED LINEAR MODEL FOR SWITCHING BRAIN STATE DISCOVERY"}}} {"uuid": "b11ab881-9ac0-5b77-9b27-394744cf06e1", "question": "What are the most important optimizations of the transformer network in this paper?", "answer_format": "Your answer should be a Python list of text strings, with each element being one important optimization that this paper proposes, e.g., [\"optimization 1\", \"optimization 2\", ...].", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Removal of the one-to-one mapping constraint among queries and keys in multiple subspaces", "Allowing each query to attend to multiple keys", "Introduction of inner-subspace interaction and cross-subspace interaction to encourage consensus among heads"], "question": "What are the most important optimizations of the transformer network in this paper?", "ignore_order": true}}, "anchor_pdf": ["e281fc3b-cdaa-5565-8997-6a6c8f198000"], "reference_pdf": []} {"uuid": "b123fcb5-e4ab-5ed9-b8f2-6f7fa2b6880d", "question": "Which model uses Llama2-7B as the LLM base model in Table 1 of the paper 'DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs', and in this model's original paper, how many models are compared in Table 4 in total?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the model, and the number of compared models.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["10e2193c-2fa2-5cf8-9b5d-bc0c32fe856a"], "reference_pdf": ["cb1c4dda-3e6e-5dd1-a4e2-215e3009c106"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["VILA", "12"], "ignore_order": true, "lowercase": true}}} {"uuid":
"b14a34e0-6226-5a98-aeec-2ade7fe35d70", "question": "Regarding the dataset ROCKS used in the anchor paper, it contains ratings assessed by 20 annotators for each of the 12 pictures of a given rock type. How does its experimental setup ensure the objectivity and fairness of the ratings, specifically how do subjects use consistent scale values?", "answer_format": "Your answer should be a python string that explains the detailed experimental setup.", "tags": ["multiple", "text", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "To promote the use of consistent scale values across subjects, anchor pictures were displayed along with scale values on the computer screen throughout each rating session. One anchor picture corresponded to the lowest rating (e.g, the very darkest rock), a second anchor picture corresponded to the highest rating (e.g, the very lightest rock), and a third anchor corresponded to a rock that we judged to be roughly average on the rated dimension (e.g., a rock of average darkness/lightness). The anchors and scale values were displayed at the bottom of the screen throughout the rating session to ensure the objectivity and fairness of the ratings.", "question": "Regarding the dataset ROCKS used in the anchor paper, it contains ratings assessed by 20 annotators for each of the 12 pictures of a given rock type. 
How does its experimental setup ensure the objectivity and fairness of the ratings, specifically how do subjects use consistent scale values?"}}, "anchor_pdf": ["6435f055-a064-504a-b636-d3c71c51a6e8"], "reference_pdf": []} {"uuid": "b15e2f1e-a31c-58b9-8d53-1910e0d28391", "question": "On what devices is StreamVoice trained?", "answer_format": "Your answer should be plain text directly from the PDF without explanation.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "StreamVoice is trained using 8 V100 GPUs with a batch size of 7 utterances per GPU for 700k steps.", "question": "On what devices is StreamVoice trained?"}}, "anchor_pdf": ["039b3a9f-1e97-5579-bc23-fcd0d2f01c19"], "reference_pdf": []} {"uuid": "b1724696-f143-5f5a-a58d-2f4086212016", "question": "Is there any paper that utilizes graph structure to model conversation history?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["795ba396-929d-5fc8-ac08-bd9dc326215c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that utilizes graph structure to model conversation history?", "reference_answer": "History Semantic Graph Enhanced Conversational KBQA with Temporal Information Modeling"}}} {"uuid": "b1ee7930-cebf-5b6d-8ebc-bbc0a6246aca", "question": "In which comparisons of models did the two papers reach similar conclusions?", "answer_format": "Your answer should be a python list of several strings.
Each string should be a language model name.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["fc6daddf-131f-59b0-adc2-85b97b4ecd82", "2773b9a9-f232-5acb-a1dd-a0168e52cf0c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["GPT-2", "BERT"], "ignore_order": true}}} {"uuid": "b2711e57-f28a-5955-9413-35717769b3c1", "question": "For retrieval evaluation, what metrics applied by MTEB are not used by LocalRQA?", "answer_format": "Your answer should be a python list of strings, each string being the name of a metric, as given in the MTEB paper.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3e62472f-aacc-591c-bd3a-9d3e71b79363", "be946a16-54d6-5d5c-82ac-8aba4b2952cc"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MRR@k", "MAP@k", "precision@k"], "lowercase": true, "ignore_order": true}}} {"uuid": "b2b59368-db51-520e-b292-c2293ef13fd4", "question": "In the paper proposing SG-USM for task-oriented dialogues, which baseline method performs the best across all datasets, excluding SG-USM itself?", "answer_format": "Your answer should be a python string. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER.
DO NOT USE FULL NAMES.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["2f414342-64d6-5129-9e69-a409dae799eb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "USDA", "lowercase": true}}} {"uuid": "b300fae4-e575-5062-9f11-2c8f320463cb", "question": "How was the data for the latest text classification tasks used in this paper collected?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["1a023c1a-97ca-5ba9-aea9-781f1cfbb346"], "reference_pdf": ["c8fca681-edde-571e-885c-e186e7b4ae80", "64c155da-e4cc-5de8-95e9-4715738d5b1d", "9ada7bff-c684-55ab-ae9b-04f836247ddc", "ab8d017f-8645-5337-aa84-f52783391b99"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The data collection methodology is to create each sentence pair by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis.", "question": "How was the data for the latest text classification tasks used in this paper collected?"}}} {"uuid": "b33b2cf3-a27a-5b2a-a1ca-5f08d8b1e75e", "question": "Which paper makes sure that the questions used in the paper are all from real users that are genuinely curious about a specific topic or concept?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3f1719ca-3b43-548d-99f8-a670a38c20bc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper makes sure that the questions used in the paper are all from real users that are genuinely curious about a specific topic or concept?", "reference_answer": "CREPE: Open-Domain Question Answering with False
Presuppositions"}}} {"uuid": "b384c73f-b916-5d13-809c-473938369a69", "question": "To evaluate the AdaLoRA algorithm, which model is used for natural language understanding and question answering? Can you give me the relevant github link of this model?", "answer_format": "Your answer should be a single python list of two strings, like [\"model_name\",\"https://github.com/a/b\"].Note that in the model name, you should use \"-\" between the series name and the size, for example, \"modelx-small\".", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["b3453666-3af4-5617-a23e-b94eefbd38e9"], "reference_pdf": ["0da417f6-1b6e-5b18-a091-06629612c88d"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DeBERTaV3-base", "https://github.com/microsoft/DeBERTa"], "ignore_order": false}}} {"uuid": "b3918d42-c1a0-5c09-97ef-f9182fe40a5c", "question": "Regarding the construction of the Edit Matrix, which paper's findings does this study reference and follow? What are the affiliations of the authors of the cited work?", "answer_format": "Your answer should be a brief text containing the cited paper's name and the authors' affiliations.", "tags": ["multiple", "metadata", "subjective"], "anchor_pdf": ["578bb752-3163-587e-8206-4d887f7c52ec"], "reference_pdf": ["3430c713-5005-5b83-92f2-7c288b2d1a4a"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The cited paper is 'Incomplete Utterance Rewriting as Semantic Segmentation' and its authors are from the School of 'Computer Science and Engineering, Beihang University, China' and 'Microsoft Research, Beijing, China'.", "question": "Regarding the construction of the Edit Matrix, which paper's findings does this study reference and follow? 
What are the affiliations of the authors of the cited work?"}}} {"uuid": "b39cbbdd-8489-53f0-a9ca-d0dbc46c8ead", "question": "What limitations do large language models have in evaluating information-seeking question answering?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7a465144-726d-5aab-a974-fabcd4f37f1e"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What limitations do large language models have in evaluating information-seeking question answering?", "reference_answer": "Evaluating Open-Domain Question Answering in the Era of Large Language Models"}}} {"uuid": "b3a5fb63-2a87-5e0c-bd8d-29f25772319c", "question": "What paper first associates the modeling frequency with input human skeletons under the NeRF framework?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e1919332-afac-56ad-a187-478e0e6c703f"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first associates the modeling frequency with input human skeletons under the NeRF framework?", "reference_answer": "POSE MODULATED AVATARS FROM VIDEO"}}} {"uuid": "b3bdd115-e25d-57c2-8931-40fb33a5f9a0", "question": "Among the SR algorithms chosen in the paper \"Expression Sampler as a Dynamic Benchmark for Symbolic Regression\", which one performs the best on SRBench, considering the R-squared test?", "answer_format": "Your answer should be a string, the abbreviation of the algorithm as given in the paper.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["02a67239-2a8a-5168-bfd4-d861a1d86675", "bb15ac6b-4277-52b1-b32e-a1fba6617dcd"], "reference_pdf": [], "conference": [], "evaluator":
{"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "SBP-GP", "lowercase": true, "ignore_blank": true}}} {"uuid": "b4dcc93d-635a-54c4-be8f-c5ec443d08db", "question": "The training dataset used in the paper \"Semiparametric Token-Sequence Co-Supervision\" is filtered to 42932 instances, then what's the original size of this dataset?", "answer_format": "Your answer should be a single integer.", "tags": ["objective", "multiple", "text"], "anchor_pdf": ["e13b0b17-08cb-50fa-b144-a14b676118bf"], "reference_pdf": ["23e4f6c4-0d28-52be-8ab4-7aef1c19b5ce"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 150000}}} {"uuid": "b509eb3e-12e2-51cc-a02d-ed22d0c8a8b3", "question": "How does Multi-DYLE combine the three different losses as the objective of training?", "answer_format": "Your answer should be single formula in latex format, extracted from the specified pdf.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "$$\\mathcal{L}=\\lambda_{g}\\mathcal{L}_{g e n}+\\lambda_{o}\\sum_{j=1}^{M}\\mathcal{L}_{o r a c l e}^{(j)}+\\lambda_{c}\\mathcal{L}_{c o n s i s t}$$", "question": "How does Multi-DYLE combine the three different losses as the objective of training?"}}, "anchor_pdf": ["8535fa9f-0253-5e2b-b3a4-5167aeeae4c6"], "reference_pdf": []} {"uuid": "b50d066a-9ed9-5aac-b79c-a32e3bef9734", "question": "Which dataset performs better on the LLaMA model, PRM800K or Math-Shepherd? In the source paper of PRM800K, which methods are compared with PRM?", "answer_format": "Your answer should be a python list of two items. The first item is a python string. 
The second item is a python list of strings.", "tags": ["multiple", "objective", "image"], "anchor_pdf": ["85b588f5-13e2-5aaa-9ce8-76c52426b40e", "80b0a0f4-7247-5b9e-8782-0a4dd4a2ae4b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_element_list_included"], "eval_kwargs_list": [{"gold": "Math-Sheperd"}, {"gold": ["ORM", "Majority Voting"]}]}}} {"uuid": "b5307d05-348e-50df-8932-95ccf83020f0", "question": "Which paper investigates the influence of the diversity of source tasks on the performance of target tasks in prompt tuning using CrossFit?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["cf95e4da-465a-5c24-a70d-ba6c65d7894c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper investigates the influence of the diversity of source tasks on the performance of target tasks in prompt tuning using CrossFit?", "reference_answer": "Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?"}}} {"uuid": "b551c2aa-7d01-5fbf-af59-ae4645fcba85", "question": "Which paper first proposed a cross-domain language model to automatically generate much labeled data for a unlabeled target domain?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f042337a-3d77-532b-9f54-25635a5c9e2f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first proposed a cross-domain language model to automatically generate much labeled data for a unlabeled target domain?", "reference_answer": "Cross-Domain Data Augmentation with 
Domain-Adaptive Language Modeling for Aspect-Based Sentiment Analysis"}}} {"uuid": "b5dfad94-c5ef-5f7e-a5f4-1c1a479acbe5", "question": "What does the special symbol $\\overline{\\mathcal{V}}$ mean in the proposed CFIC decoding strategy?", "answer_format": "Your answer should be concise text string highlighting the meaning of the symbol.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["7bd8fd9a-36c6-583a-92d3-410b228fe5c9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_and_scoring_points_with_llm", "eval_kwargs": {"question": "What does the special symbol $\\overline{\\mathcal{V}}$ mean in the proposed CFIC decoding strategy?", "reference_answer": "The special symbol $\\overline{\\mathcal{V}}$ denotes a token set containing each sentence's prefix, and the sentence prefix serves as an position identifier to facilitate the identification of the starting point of a supporting passage within the source context.", "scoring_points": ["The answer must mention the restricted token set composed of sentence prefixes instead of the whole vocabulary.", "The answer should also explain what the sentence prefixes come from, that is they are derived from the source context."]}}} {"uuid": "b5f5b2f4-9e71-5a20-afcb-392406123af3", "question": "In the algorithm applied to find the minimum norm interpolant, when does the first step stop?", "answer_format": "Your answer should be a Python string, the formula of the end condition in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["1e3ca6ae-5657-5ce8-91a4-79c9525ba91a"], "reference_pdf": ["93a2f4c2-6f4e-5e44-a4ad-6a554d6ff6e1"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\frac{\\max_{0 \\leq i \\leq n+1} \\left| \\sqrt{s_i} - \\sum_{j=0}^n \\alpha_j s_i^j \\right|}{\\min_{0 \\leq i \\leq n+1} \\left| \\sqrt{s_i} - \\sum_{j=0}^n \\alpha_j s_i^j \\right|}< 1.001", "question": "What's the 
end condition?"}}} {"uuid": "b7327d6a-9ab2-5fd7-966d-4250ce72ae00", "question": "Which family of models generally performs the best for the event conceptualization task", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["520307db-d3a9-591c-b45f-4347bb2599c9"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which family of models generally performs the best for the event conceptualization task", "reference_answer": "CAT: A Contextualized Conceptualization and Instantiation Framework for Commonsense Reasoning"}}} {"uuid": "b78a0d2a-e972-522e-97ad-e2e5795d8f64", "question": "In the event extraction section of the paper \"Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives\", how were the dependency parses of the stories obtained?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["52ea9a1c-753e-50ab-9049-86da02c9f585"], "reference_pdf": ["2ab9cfe3-794f-580f-9e52-48cab24cb05f"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The Stanza pipeline is used to process the stories and obtain dependency parses. It parses each sentence for its syntactic structure, where each word in the sentence is assigned a syntactic head that is either another word in the sentence, or in the case of the root word, an artificial root symbol. A Bi-LSTM-based deep biaffine neural dependency parser is implemented. 
We further augment this model with two linguistically motivated features: one that predicts the linearization order of two words in a given language, and the other that predicts the typical distance in linear order between them to significantly improve parsing accuracy.", "question": "In the event extraction section of the paper \"Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives\", how were the dependency parses of the stories obtained?"}}} {"uuid": "b8034b03-c46a-5b8c-8bdd-09e67ad45f9f", "question": "What is the composition of the training dataset in the paper?", "answer_format": "Your answer should be a python strings about the dataset.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["9c7c7762-0132-583c-987a-0fbc89847c55"], "reference_pdf": ["8c267034-d2a4-53d9-a4e0-0fdc761cde75"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "WebVid-2M consists of 2.5M video-text pairs. The data was scraped from the web following a similar procedure to Google Conceptual Caption. The dataset consists of manually generated captions, that are for the most part well formed sentences. And the captions are aligned with the video and describe visual content.", "question": "What is the composition of the training dataset in the paper?"}}} {"uuid": "b91a8c27-fa71-5ff7-a867-b58985276991", "question": "Among three single-agent baselines in table one, which performs best on Damped Spring?", "answer_format": "Your answer should be a python string. 
YOU MUST USE THE EXACT NAME FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["single", "table", "objective"], "anchor_pdf": ["fdfd844d-b6a6-5bbc-8ca2-740d4f7e8562"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "TRS-ODEN"}}} {"uuid": "b939dfd5-150e-5b54-9ef3-b9b5497d688d", "question": "In the CLAP paper, which three challenges for patchwork learning are mentioned?", "answer_format": "Your answer should be a Python list of 3 strings, each string is a challenge.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["1df3c539-06ac-5799-8f7c-8ffec4f7d9a8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["- statistical heterogeneity; the multimodal data of local clients are typically non-i.i.d. The model may fail to adapt the learned dependencies to target clients;", "- modality combination heterogeneity; local clients can have various modality combinations. The learned dependencies from Xi is hard to be used for other clients with different combinations Xj;", "- modality combination vulnerability; the learned imputation method could be vulnerable to the modality combinations and the imputation quality significantly varies for two similar combinations;"], "question": "In the CLAP paper, which three challenges for patchwork learning are mentioned?", "ignore_order": true}}} {"uuid": "b943f9ec-685a-5bbf-b82e-65bd00415e6d", "question": "In \"MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic\", what is the main advantage of \"MetaGPT\" that the authors claim to have over \"AdaMerging\"? 
Also, what is the most significant difference between the experiment settings of the papers which proposed these two methods?", "answer_format": "Your answer should be brief text answering the 2 questions with separate sentences.", "tags": ["multiple", "subjective", "image"], "anchor_pdf": ["aeb01ff1-2543-50db-89d5-f33c70f77e96", "68bb62d4-2e15-5a27-a5c0-0938e5e9488a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The authors claim that MetaGPT is more computationally efficient than AdaMerging. MetaGPT only experiments on NLP tasks, while AdaMerging only experiments on vision tasks.", "question": "In \"MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic\", what is the main advantage of \"MetaGPT\" that the authors claim to have over \"AdaMerging\"? Also, what is the most significant difference between the experiment settings of the papers which proposed these two methods?"}}} {"uuid": "b9a2794c-8387-5693-b200-d80db3f9eb0f", "question": "In the paper with Zhengyuan Liu as the first author, published in ACL 2023, that is related to stance detection, which corpus dataset was only used in evaluation, not in training?", "answer_format": "Your answer should be a string of the corpus's name without any explanation or any other word.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["f2a29ea0-3150-59a1-a8c6-241b823740c5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "SemEval-16 Task-6 B", "lowercase": true, "ignore_blank": true}}} {"uuid": "ba07c4e4-443b-557f-87d1-ce383cd772ef", "question": "In the EGraFFBench paper, Equiformer's hyperparameter setting resembles that in the original Equiformer paper on the MD17 dataset. 
Specifically, in the setting in the original paper that is the closest to the EGraFFBench setting, what's the value of L_{max}?", "answer_format": "Your answer should be an integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["02d01042-82ee-53ea-afdf-c4865aadc277", "a91c3c65-77f4-57a0-8f80-02ba76486bd7"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 2}}} {"uuid": "ba924c39-78a1-5236-8861-cd718dfc4c9a", "question": "Which model, among GPT3.5, GPT-4, Llama-7B, and Mistral-7B, experiences the largest drop in overall accuracy from the Conversation History Task to MMLU AA, relative to zero-shot performance?", "answer_format": "Your answer should be a single model name, without any other text. The answer should be one of the following: GPT3.5, GPT-4, Llama-7B, or Mistral-7B.", "tags": ["single", "image", "objective"], "anchor_pdf": ["167f62e8-ba35-5166-b85d-8934f1967849"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Llama-7B"}}} {"uuid": "baab0bc5-e83e-54ec-933b-6edb1b9d47d3", "question": "Which subtask of task 3 of SemEval-2023 does the paper (titled \"BERTastic at SemEval-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers- Does Order Matter?\") perform the experiments on?", "answer_format": "Your answer should be a string describing this subtask. Note that you should not include other subtasks of task 3 of SemEval-2023.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["2c80fb44-19bc-5edd-be7a-ed7b2b39e979"], "reference_pdf": ["c2a72454-61aa-5e05-8634-dc98261232b9"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "subtask 2 (ST2): framing detection. Given a news article, identify one or more frames used in the article from a pool of 14 generic framing dimensions (introduced in Card et al. 
(2015)): Economic, Capacity and resources, Morality, Fairness and equality, Legality, constitutionality and jurisprudence, Policy prescription and evaluation, Crime and punishment, Security and defense, Health and safety, Quality of life, Cultural identity, Public opinion, Political, External regulation and reputation. This is a multi-class multi-label classification task at the article level.", "question": "Which subtask of task 3 of SemEval-2023 does the paper (titled \"BERTastic at SemEval-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers- Does Order Matter?\") perform the experiments on?"}}} {"uuid": "bafaee02-a31b-55d6-bb62-6d382ae3bcb6", "question": "Both as hybrid digital twins, what's the advantage of HDTwinGen over PINN-based Med-Real2Sim?", "answer_format": "Your answer should be in a well-formed item list.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["54a132ad-617a-5af3-9de3-7b86946dc779", "090a2d55-9042-555f-8520-81c642111e2a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["HDTwinGen injects domain knowledge from an LLM rather than a real human expert, which reduces the reliance on expensive and heavy expert labour and makes the framework easily generalized to new domains.", "HDTwinGen's pipeline can automatically optimize both model parameters and model structures; in contrast, Med-Real2Sim only optimizes model parameters and its structure relies on human a priori knowledge."], "question": "Both as hybrid digital twins, what's the advantage of HDTwinGen over PINN-based Med-Real2Sim?"}}} {"uuid": "bb5aedbf-7683-56e7-a348-0ee986fe0fd2", "question": "Are there any works that explore how to achieve balance between representativeness and diversity in chosen samples for few-shot data selection?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], 
"reference_pdf": ["89de3077-4566-5ec0-bb43-604ca906e7b1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any works that explore how to achieve balance between representativeness and diversity in chosen samples for few-shot data selection?", "reference_answer": "Cold-Start Data Selection for Better Few-shot Language Model Fine-tuning: A Prompt-based Uncertainty Propagation Approach"}}} {"uuid": "bb6a0f0e-0c0c-5038-b340-3044e9ffefd6", "question": "What paper evaluated the ability of visual few-shot learning models to do in-context learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1256d979-2f84-5a16-85a5-8f88126363a8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper evaluated the ability of visual few-shot learning models to do in-context learning?", "reference_answer": "CONTEXT-AWARE META-LEARNING"}}} {"uuid": "bb6ffb6d-2235-58cc-b04d-291818f74b05", "question": "In the dataset proposed by the authors, how many states are there per game?", "answer_format": "Your answer should be a floating-point number with one decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["26a6b2dc-7406-59e3-989d-cf45f151343d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 2463.5, "ndigits": 1}}} {"uuid": "bb7c8889-1582-5409-95bc-74cb179506a1", "question": "What's the original annotation process drawn by the paper named \"Where Do People Tell Stories Online? Story Detection Across Online Communities?\"?", "answer_format": "Your answer should be a single string indicating the original annotation process.", "tags": ["subjective", "multiple", "text"], "anchor_pdf": 
["ddf6d3e0-7c36-5ce6-ac60-67c687303e0f"], "reference_pdf": ["c7563d97-695f-5c77-8021-334bf2ff9ddb"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "All annotations were carried out by a single co-author after multiple rounds of discussions and the creation of a set of annotation guidelines. To calculate the expected inter-annotator agreement rate, a second co-author independently annotated a random sample of five texts at the end of the annotation process, using only the annotation guidelines for reference.", "question": "What's the original annotation process drawn by the paper named \"Where Do People Tell Stories Online? Story Detection Across Online Communities?\"?"}}} {"uuid": "bbc522d2-649a-5660-8180-7f67728376bf", "question": "Which paper first focuses on addressing the over-smoothing issue for sentence embedding?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f018d9bf-7280-50e8-bf1f-598784fe1bfe"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first focuses on addressing the over-smoothing issue for sentence embedding?", "reference_answer": "Alleviating Over-smoothing for Unsupervised Sentence Representation"}}} {"uuid": "bbe726ca-1f0d-564d-b553-7bc625404d15", "question": "What is the original dataset size (including all data splits) for each shared dataset used in both the works ABEX and MinPrompt according to their original papers?", "answer_format": "Your answer should be a Python dictionary with the keys being the names (case-sensitive) of the shared datasets and the integer values being the corresponding dataset sizes.", "tags": ["multiple", "table", "metadata", "objective"], "anchor_pdf": ["6a4c6d28-2741-5161-a9cc-10b102d18561", "0b77c64c-6b36-5501-848f-79a062be2a45"], 
"reference_pdf": ["9ada7bff-c684-55ab-ae9b-04f836247ddc", "54e72037-97b1-54cc-8aa1-5290454d3f5f"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"NewsQA": 119633, "SQuAD": 107785}}}} {"uuid": "bc5c4cf7-21ed-5298-9c2c-81386204608e", "question": "In the paper \"Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture\", according to Figure 1, which two dimensions are mixed using Monarch matrices? In the source paper of Monarch matrices, what training settings can Monarch matrices be used for?", "answer_format": "Your answer should be a python list containing two items. The first item is a python list with two strings. The second item is a python list with an indefinite number of strings.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["f3c0827e-c512-50bc-91f0-6d5a9e1177b6", "0b29dca5-cb4a-5cdc-a8a6-eb852b9d0bb2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": ["sequence", "channels"], "ignore_order": true, "lowercase": true, "threshold": 90}, {"gold": ["E2E training", "S2D training", "D2S fine-tuning"], "ignore_order": true, "lowercase": true, "threshold": 90}]}}} {"uuid": "bd3d1dd5-7f10-5e09-aa76-486685c77180", "question": "What do formula (2) to formula (4) mean?", "answer_format": "Your answer should be a brief summarization of the meaning of these formulas, and you do not need to introduce these formulas one-by-one.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Formula (2) to (4) describe the self-contrastive training (SELFCONT) algorithm, which is designed to mitigate the repetition problem in language models. 
SELFCONT modifies the training process by adding a penalty to the output of the current model \\(f_{\\theta_1}\\) when it predicts a repetitive token that the premature checkpoint \\(f_{\\theta_0}\\) also predicts. The penalty is controlled by the weight \\(w\\), which is only active when the true next token is non-repetitive but the premature model predicts it as repetitive. This encourages the model to learn more complex patterns and reduce its tendency to generate repetitive text.", "question": "What do formula (2) to formula (4) mean?"}}, "anchor_pdf": ["aec41fde-98a1-58c4-86f3-100e408171cd"], "reference_pdf": []} {"uuid": "bd576276-efd9-5168-86f6-42937141fea4", "question": "Which dataset(s) is the dataset SAFECONV derived from? Give me their GitHub link(s).", "answer_format": "Your answer should be a single python list like [[\"dataset1\",\"dataset2\",...],[\"https://github.com/a/b\",\"https://github.com/c/d\",...]]. Note that you should retain the size in the dataset name if available.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["0297f0ee-ac93-5820-a566-3942149d3c66"], "reference_pdf": ["af29ae50-ebb4-5c04-800d-f6100578a438", "3ab81254-c474-5c0b-a8cb-3299e890b9dd"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": ["LCCC-base", "PchatbotW"], "ignore_order": true, "lowercase": true}, {"gold": ["https://github.com/thu-coai/CDial-GPT", "https://github.com/qhjqhj00/SIGIR2021-Pchatbot"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "bdb390b0-30bb-5dc9-bd58-a832c6689bcf", "question": "Which speech encoder does the paper (\"Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations\") choose to extract universal paralinguistic and prosody embeddings? In the proposal of this encoder, which datasets are used in both pre-training and 
downstream tasks?", "answer_format": "Your answer should be a list like [\"encoder_name\", [\"dataset1\", \"dataset2\",...]].", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0b80e514-9214-551f-9ae6-1da04c308007"], "reference_pdf": ["c271cbdd-7d2f-5504-b148-903ff4c2af06"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "emotion2vec", "lowercase": true}, {"gold": ["OIEMOCAP", "MELD", "CMU-MOSEI"], "ignore_order": true, "ignore_blank": true, "lowercase": true}]}}} {"uuid": "bdcc2f81-9b12-56e3-90cc-23e6513985d4", "question": "In the paper \"Training Trajectories of Language Models Across Scales\" (anchor_pdf), figure 17 and 18 can be used to provide a supportive reasoning on the conclusion, what is it? And is it giving a similar conclusion as the paper \"Are Emergent Abilities of Large Language Models a Mirage?\" did?", "answer_format": "Your answer should be a Python list of two elements, where the first element is the supportive reasoning on the conclusion provided in figure 17 and 18, and the second is a boolean value indicating whether it is giving a similar conclusion as the paper \"Are Emergent Abilities of Large Language Models a Mirage?\" did.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["98bec33d-6e48-56b3-90ff-5b896cf01e24"], "reference_pdf": ["c302a979-c9a6-509a-a555-5fc9e5bb7bf8"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_bool_exact_match"], "eval_kwargs_list": [{"question": "In the paper \"Training Trajectories of Language Models Across Scales\" (anchor_pdf), figure 17 and 18 can be used to provide a supportive reasoning on the conclusion, what is it? 
And is it giving a similar conclusion as the paper \"Are Emergent Abilities of Large Language Models a Mirage?\" did?", "reference_answer": "Models gain their abilities on different tasks in a relatively smooth way, instead of an emergent way; yes."}, {"gold": true}]}}} {"uuid": "be08635a-0dbc-5dab-85d0-40f45c6edfc2", "question": "Which paper enables interactive semantic parsing by training an error correction model with simulated human feedback instead of human annotations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7e7c3ebc-11cd-5547-b14e-2f87932e39ab"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper enables interactive semantic parsing by training an error correction model with simulated human feedback instead of human annotations?", "reference_answer": "Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing"}}} {"uuid": "be178eef-403f-5633-8cbd-5b876059fce4", "question": "For each test split in Figure 5, provide the name and type of the website with the highest step success rate.", "answer_format": "Your answer should be a Python dictionary. e.g. {\"split1\": [\"web1\", \"type1\"], \"split2\": [\"web2\", \"type2\"], ...}. 
YOU MUST USE THE EXACT WORDS FROM PDF WITHOUT CHANGING CAPITALIZATION.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Cross-Task": ["travelzoo", "General"], "Cross-Website": ["exploretock", "Restaurant"], "Cross-Subdomain": ["koa", "Hotel"]}}}, "anchor_pdf": ["32a52b98-370b-5bc4-87ab-5193405b723b"], "reference_pdf": []} {"uuid": "bec9b106-831a-5e17-97b6-8af2636194d3", "question": "Which paper proposed a learning-based data augmentation method for improving compositional generalization of language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0f1a5479-0868-5637-acf3-b160373fd937"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposed a learning-based data augmentation method for improving compositional generalization of language models?", "reference_answer": "Learning to Substitute Spans towards Improving Compositional Generalization"}}} {"uuid": "befbacf1-d163-5021-bb6c-2ba79257c81c", "question": "Which was the first paper to explore the online adaptation of neural MT metrics for use during the inference stage?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["03960f1c-00dd-5588-afd7-d6e71e9c24c6"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which was the first paper to explore the online adaptation of neural MT metrics for use during the inference stage?", "reference_answer": "Test-time Adaptation for Machine Translation Evaluation by Uncertainty Minimization"}}} {"uuid": "bf391f7a-d33b-5b00-9001-ee92284a15ec", "question": 
"According to Figure 1, with the increasing of the number of few-shot training samples, which setting keeps getting a better score?", "answer_format": "Your answer should be the name of the setting appearing in the legend of the image.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "KB score, curie"}}, "anchor_pdf": ["ebf4682e-2653-5589-9458-da26d00d9f5b"], "reference_pdf": []} {"uuid": "bf7fe85f-b409-5a58-ac9e-ba738e5390c7", "question": "In FewRel's 10-way 5-shot setting, what is the maximum decrease of AOD+ROD between ConPL and AGCKD across all task indexes?", "answer_format": "Your answer should be a positive floating-point number with two decimal place.", "tags": ["single", "image", "objective"], "anchor_pdf": ["3e92cc84-5991-5ac2-aa13-f92ccbfcb03b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 1.88, "ndigits": 2}}} {"uuid": "bfa70a42-daa5-52db-aa3f-8ceb0960739a", "question": "Which vision-language model paper in 2023 developed techniques that reduce input tokens to improve model inference speed?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["99bac92d-0c2f-5a5c-85f4-67e2dd384ea3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which vision-language model paper in 2023 developed techniques that reduce input tokens to improve model inference speed?", "reference_answer": "PuMer: Pruning and Merging Tokens for Efficient Vision Language Models"}}} {"uuid": "bfb209f1-da03-5d97-a7e8-aa3bd63e257d", "question": "How to better attract readers to news articles by generating personalized headlines?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": 
["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a0711050-3f9c-5ed1-bc09-7d1ea05230a9"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "How to better attract readers to news articles by generating personalized headlines?", "reference_answer": "Generating User-Engaging News Headlines"}}} {"uuid": "c0e96750-91fe-5f24-aee3-74ea8706654a", "question": "For each category of PQA in terms of the form of provided answers, from what aspects does the author analyze it?", "answer_format": "Your answer should be a Python list, where each element is a string representing an aspect DIRECTLY FROM THE PDF. Note that the aspects are the same for each category. e.g. [\"aspect1\", \"aspect2\", ...]", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Problem Definition", "Datasets & Evaluation Protocols", "Methods", "Pros and Cons"], "ignore_order": true, "lowercase": true, "ignore_blank": true}}, "anchor_pdf": ["2cc7f650-699e-580d-aad4-04fc17b5868f"], "reference_pdf": []} {"uuid": "c100db0f-bb91-514e-af99-6c6efcf22cd3", "question": "How much data does the author use in total in millions for the main experiment conducted on WMT17 ZhEn?", "answer_format": "Your answer should be a Python float, rounded to 1 decimal place.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 39.5, "ndigits": 1, "tolerance": 0.01}}, "anchor_pdf": ["03166771-5ae8-57b9-9c10-3120423adc5c"], "reference_pdf": []} {"uuid": "c1027cf8-184a-5c77-8c53-6247abe0160d", "question": "On which model does the paper (titled \"Making Large Language Models Better Reasoners with Orchestrated Streaming Experiences\") conduct the most analysis experiments? Is there any other size of parameters for this model? 
", "answer_format": "Your answer should be a single python list formatted like [\"model_name\", [\"10B\",\"20B\",...]].The first element of the list is a string representing the name of the model, the second element of the list is a list representing other size of params(Note that you shouldnot include the size already employed in the paper).", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["f47e106f-c4ef-5814-85fe-a895c754fe40"], "reference_pdf": ["6b887e82-ca3f-59e1-ae8a-f528919c1334"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "LLaMA2-13B-Chat", "lowercase": true}, {"gold": ["7B", "34B", "70B"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "c17c03e2-ea11-5472-be57-c7ead3b8605f", "question": "Which paper employs a two-stage approach in generative models to tackle ABSA tasks across various domains?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e0f7ee00-f91f-594c-9a92-94bd11884e39"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper employs a two-stage approach in generative models to tackle ABSA tasks across various domains?", "reference_answer": "Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis"}}} {"uuid": "c181315e-0268-53f9-a982-60eb5747f0e5", "question": "Which paper first attempts to take potential dependencies among same-level labels into account in Hierarchical Text Classification?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["67f1ed01-acd0-5dd3-a50f-ce9af5cc7451"], "conference": ["acl2023"], "evaluator": 
{"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first attempts to take potential dependencies among same-level labels into account in Hierarchical Text Classification?", "reference_answer": "Peer-Label Assisted Hierarchical Text Classification"}}} {"uuid": "c1acd5a0-7a76-5605-996d-0191bda04f6c", "question": "Which is the first multimodal model combining text and speech transformers trained without labelled text-speech pairs?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["778284ec-625a-5334-b544-10a85f255342"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which is the first multimodal model combining text and speech transformers trained without labelled text-speech pairs?", "reference_answer": "Introducing Semantics into Speech Encoders"}}} {"uuid": "c1cbcf5c-632c-5424-a1ef-d9add6094746", "question": "What are the Low resource languages in INDICGENBENCH?", "answer_format": "Your answer should be a Python list, where each element is a string representing a language. e.g. 
[\"language1\", \"language2\", ...]", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Awadhi", "Haryanvi", "Tibetan", "Garhwali", "Konkani", "Chhattisgarhi", "Rajasthani", "Maithili", "Manipuri", "Malvi", "Marwari", "Santali", "Bodo"], "ignore_order": true, "lowercase": true}}, "anchor_pdf": ["1e6eeeab-ba5c-508e-a693-62a9b39f2d92"], "reference_pdf": []} {"uuid": "c1f769f6-eaff-5441-9f1c-d62445efe58d", "question": "What's the total number of augmented training samples across all datasets used in the MINPROMPT work?", "answer_format": "Your answer should be a single integer number.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 251387}}, "anchor_pdf": ["0b77c64c-6b36-5501-848f-79a062be2a45"], "reference_pdf": []} {"uuid": "c2089236-8909-5a22-9eaa-35644720a87b", "question": "According to the author, how does Cross Entropy contribute to miscalibration?", "answer_format": "Your answer should be a string.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["abc391d1-1098-5b04-b9aa-5f0de5a2dd41"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "According to the author, how does Cross Entropy contribute to miscalibration?", "reference_answer": "According to the author, Cross Entropy contributes to miscalibration by encouraging the model to assign a probability of 1 to the ground-truth token and 0 to all other tokens. 
This leads to overestimating the probability of the correct token (over-confidence) and underestimating the probabilities of the other tokens (under-confidence), causing miscalibration during the fine-tuning process."}}} {"uuid": "c21e6d8e-865c-5544-8177-49b48d723934", "question": "Is there any paper that applies symbolic distillation on black-box generalist language models to harvest high-quality counterfactual data for out-of-distribution generalization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6f942e88-a485-5628-9883-089163ba8aa0"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that applies symbolic distillation on black-box generalist language models to harvest high-quality counterfactual data for out-of-distribution generalization?", "reference_answer": "DISCO: Distilling Counterfactuals with Large Language Models"}}} {"uuid": "c2412c63-0fda-5e8c-95c0-615c415d5ff9", "question": "What is the accuracy of the base model used in the experiment in the paper \"Protecting Privacy in Classifiers by Token Manipulation\" on the RACE test set?", "answer_format": "Your answer should be a python float with one decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["ff302319-a8f9-58f6-a16a-44971ae06a5a"], "reference_pdf": ["40076536-9fb5-50c7-acb2-93db2c59e1d7"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 83.2, "ndigits": 1}}} {"uuid": "c2a0f81b-e98d-50ed-b809-cc95eb952082", "question": "The methods proposed in the two anchor PDFs have similarities. 
For the following statements in VoiceFlow: [\"Duration adapter\", \"y\", \"\\|u_\\theta(x_t, y, t) - (x_1 - x_0)\\|^2\"], which statements in Reflow-TTS correspond closely to them respectively?", "answer_format": "Your answer should be a Python list containing three strings, arranged in the same order as in the question.", "tags": ["multiple", "image", "formula", "subjective"], "anchor_pdf": ["02f0aa3e-1287-50a0-bd9c-dcc8d67c3390", "b1dcf0ab-6e6c-50d1-9f51-29f543bb11fd"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "Length regulator", "lowercase": true}, {"gold": "c"}, {"formulas": "\\|(X_1 - X_0) - v_\\theta(X_t, t, c)\\|^2", "question": "For the formula \"\\|u_\\theta(x_t, y, t) - (x_1 - x_0)\\|^2\", which statement in Reflow-TTS corresponds closely to it?"}]}}} {"uuid": "c4048cbf-71e6-55ec-a0e9-ba082c5a2954", "question": "In the PPTC benchmark paper, among the works that focus on LLMs' tool-use ability to generate APIs for solving user instructions, which one doesn't apply AST accuracy?", "answer_format": "Your answer should be a string, the name of the method or model.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3f195c86-04e5-5c9d-826b-63672b5ff9a3"], "reference_pdf": ["4261dbce-3665-5261-9125-09c96905ca64", "1dffea3e-12d5-5a96-82db-480f1579040e", "8967d40f-af4b-5754-848a-0d84d923e39d"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Toolformer", "lowercase": true}}} {"uuid": "c40f9463-e7de-5f3e-b3ec-8f64b3289541", "question": "Are there any papers that study whether you can identify if a LLM has been instructed to hide some information?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], 
"reference_pdf": ["6371168e-ba68-5db9-b3eb-530ea3424e8d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any papers that study whether you can identify if a LLM has been instructed to hide some information?", "reference_answer": "HOW TO CATCH AN AI LIAR: LIE DETECTION IN BLACK-BOX LLMS BY ASKING UNRELATED QUESTIONS"}}} {"uuid": "c4461086-1920-5037-8eb9-f7d8e00aa31b", "question": "In the paper \"MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering\", dataset SPIDER was used in the training process. What were the inputs and outputs in the original design of SPIDER, and how did the authors of this work adapt the dataset for their task?", "answer_format": "Your answer should be brief text regarding the inputs and outputs of SPIDER in the two works.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["2c803a9d-d383-58d9-b87c-4d27c53eafc6", "46a88ba5-c16e-5efd-913c-3de6e749f2a9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The original design of SPIDER is a text-to-SQL dataset. The inputs are natural language queries, and the model should output corresponding SQL queries. In the anchor paper, the authors use the SQL queries as inputs, and train models to output SQL execution result tables.", "question": "In the paper \"MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering\", dataset SPIDER was used in the training process. 
What were the inputs and outputs in the original design of SPIDER, and how did the authors of this work adapt the dataset for their task?"}}} {"uuid": "c451a039-cf16-5c3b-803c-3c2b7be1d355", "question": "what is the exact performance drop when the diffusion model is removed in the MMWHS MR-to-CT UDA setting?", "answer_format": "Your answer should be float number rounded to 1 decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["ff55497d-cd0b-541c-b924-4d043c2ac3f9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 10.9, "ndigits": 1}}} {"uuid": "c4db8fe1-fe59-5b60-af98-b3e8edd5ef16", "question": "Which language performs better on old sense IDs compared to new sense IDs during experiments?", "answer_format": "Your answer should be a string of a language name.", "tags": ["single", "table", "objective"], "anchor_pdf": ["55bc6198-c2b1-518f-9612-8d58ec050f2f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Russian", "lowercase": true}}} {"uuid": "c516145b-51ad-5146-a4f2-88773ff98293", "question": "In the S4WM paper's 3D environment, what's the episode length for the largest setting?", "answer_format": "Your answer should be an integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["2d96a6bc-c73e-50e4-855b-764adcf977e4"], "reference_pdf": ["f5f08036-66cd-55e4-b4d0-ed4892e2b681"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4000}}} {"uuid": "c53cba22-704b-51db-ae71-0166a727b747", "question": "How to calculate the Word-pair Representation matrix in Figure 4?", "answer_format": "Your answer should be a list of formulas, representing the calculation of Word-pair Representation matrix.", "tags": ["image", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The 
Word-pair Representation matrix in the described model is calculated through Conditional Layer Normalization (CLN). Here are the formulas representing the calculation:\n\n1. **Conditional Layer Normalization (CLN) for word-pair representation:**\n $r_{i,j} = \\text{CLN}(h_i, h_j) = \\gamma_i \\odot \\left( \\frac{h_j - \\mu}{\\sigma} \\right) + \\lambda_i$\n\n2. **Scale factor $\\gamma_i$ and shift factor $\\lambda_i$ with additional contextual information:**\n $\\gamma_i = W_\\gamma h_i + b_\\gamma$\n $\\lambda_i = W_\\lambda h_i + b_\\lambda$\n\n3. **Mean $\\mu$ and standard deviation $\\sigma$ of $h_j$:**\n $\\mu = \\frac{1}{d} \\sum_{k=1}^{d} h_{jk}$\n $\\sigma = \\sqrt{\\frac{1}{d} \\sum_{k=1}^{d} (h_{jk} - \\mu)^2}$", "question": "How to calculate the Word-pair Representation matrix in Figure 4?"}}, "anchor_pdf": ["1191a3cb-63fe-560c-bbef-c7eee0dd61d6"], "reference_pdf": []} {"uuid": "c63f7ad7-00de-56f0-8b74-2fdf420ceaa2", "question": "How many models are evaluated in WebArena? What's the success rate of the most powerful model?", "answer_format": "Your answer should be a Python list of 2 elements. The first one is an integer. 
The second one is a float rounded to 2 decimal places.", "tags": ["single", "table", "objective"], "anchor_pdf": ["5a2b0d5c-6b51-5bbd-a001-a15f19f65a98"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"gold": 3}, {"gold": 14.41, "ndigits": 2}]}}} {"uuid": "c67e3e4c-245b-509d-92b0-3ff41e82f9d4", "question": "What research advances are incorporated into the generative language model that is used to generate associations in different languages in the SeeGULL Multilingual paper?", "answer_format": "Your answer should be a python list of several strings.", "tags": ["subjective", "multiple", "text"], "anchor_pdf": ["fd81f90f-555d-5e99-835b-153c2cdb7303"], "reference_pdf": ["eb787b77-5188-5411-b0f8-406356623bac"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Compute-optimal scaling", "Improved dataset mixtures", "Architectural and objective improvements"], "ignore_order": true, "question": "What research advances are incorporated into the generative language model that is used to generate associations in different languages in the SeeGULL Multilingual paper?"}}} {"uuid": "c69c648f-bde4-5a8b-a82e-67fe2cdefe9f", "question": "What is the limitation of this work proposed by the authors themselves?", "answer_format": "Your answer should be plain text.", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The authors have only explored applying the approach to encoder models, leaving room for applications on decoder models.", "Despite the variety of existing contrastive learning methodologies, this work still adheres to utilizing the contrastive learning objectives provided by the tasks."], "question": "What is the limitation of this work proposed by the authors 
themselves?", "ignore_order": true}}, "anchor_pdf": ["1a472825-70b5-5b91-a9a1-f60fcb8d89f5"], "reference_pdf": []} {"uuid": "c6be1785-b1b0-56c7-8133-3dca86c62222", "question": "According to the paper, how to combine the two losses presented in figure 2?", "answer_format": "Your answer should be a single formula in latex format extracted from the paper.", "tags": ["image", "formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\begin{array}{l}\\mathcal{L}_{(\text {multi })}\\left(\\phi_{(s h)}, \\phi_{1}, \\phi_{2}\right)= \\delta_{1} \\mathcal{L}_{s a}\\left(\\phi_{(s h)}, \\phi_{1}\right)+\\delta_{2} \\mathcal{L}_{c l}\\left(\\phi_{(s h)}, \\phi_{2}\\right)\\end{array}", "question": "According to the paper, how to combine the two losses presented in figure 2?"}}, "anchor_pdf": ["30c390e3-630b-5592-a1be-2771d1aa15a9"], "reference_pdf": []} {"uuid": "c6d527e4-0a3f-5c85-86a7-3b9bf155fa0a", "question": "Is there any paper that investigates backdoor attacks across various types of tasks, not limited to classification, in language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7d9a8b96-429d-533b-924f-daaf2fc7ea4a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that investigates backdoor attacks across various types of tasks, not limited to classification, in language models?", "reference_answer": "Multi-target Backdoor Attacks for Code Pre-trained Models"}}} {"uuid": "c7525517-c527-563a-bc60-33adfb8309a2", "question": "In the situation between 2 agents and 2 arms, how many unique matching pairs will incur linear regret with non-incentive-aware learning algorithms?", "answer_format": "Your answer should be a single number.", "tags": ["single", 
"image", "objective"], "anchor_pdf": ["8b3048b2-0b56-5da1-a087-6270d524757e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "c7fd4be1-7261-5b8c-bdb8-0da621536182", "question": "Can we learn to represent an image with arbitary numbers of tokens?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6e86386f-01ea-5680-ae68-5d97f86ecf8a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can we learn to represent an image with arbitary numbers of tokens?", "reference_answer": "SparseFormer: Sparse Visual Recognition via Limited Latent Tokens"}}} {"uuid": "c820fec0-1295-5d70-b300-6feb9bc66d5a", "question": "When the number of retrieved pairs is chosen empirically to be in the range of 3 to 5 for this data, which caption group of the testing set performs the best overall?", "answer_format": "Your answer should be a python strings about the name of the caption group. YOU MUST USE THE EXACT NAME FROM THE PAPER.", "tags": ["single", "image", "objective"], "anchor_pdf": ["000bebd2-6c2c-56dc-9709-cd228a417519"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "AC_cap5", "lowercase": true, "ignore_blank": true}}} {"uuid": "c8457d49-b7c2-5ed3-880e-be91d064d1d8", "question": "What baselines are used in this paper? 
Note that a baseline is counted only once, even if different variants are provided based on it.", "answer_format": "Your answer should be a python list of strings, every element of the list is the name of a baseline directly mentioned in this paper.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["NP-SpanBERT", "QA-SRL Parser", "TNE-Parser", "Mistral"], "lowercase": true, "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["7eb41d67-59e3-542a-8f03-93ab8f53216a"], "reference_pdf": []} {"uuid": "c86eef72-3e3e-5fb2-b008-c97b3d33433e", "question": "How many LLMs do the authors test in the experiment part? And how many of the LLMs are openly accessible?", "answer_format": "Your answer should be a python list of two integers. The first one is the number of the LLMs, and the second one is the number of them that are openly accessible.", "tags": ["single", "text", "objective"], "anchor_pdf": ["0a53764d-8445-5328-ae44-7120cdb486b4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [10, 6], "ignore_order": true}}} {"uuid": "c912d1d0-dced-53f0-9a89-5c982701fbb5", "question": "Is there any paper exploring real speakers and thus performing a multimodal emotion recognition task?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f27dad5f-15fc-5aa6-867a-621f7f35ff2c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper exploring real speakers and thus performing a multimodal emotion recognition task?", "reference_answer": "A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations"}}} {"uuid": 
"c95b6bb1-3445-5378-b465-2b4d4da30a17", "question": "What are the core parameters L, H, A, and the total number of parameters(params) of the base model of the classifier in this paper?", "answer_format": "Your answer should be a python dictionary with keys 'L', 'H', 'A', and 'params', and the corresponding value should be a number, e.g., {'L': 1, 'H': 1, 'A': 1, 'params': 1}.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["2a4dd0fe-b10b-5155-82ea-3a28ba29a4fa"], "reference_pdf": ["ccf560db-a30b-552f-ab16-80026764a35e"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"L": 24, "H": 1024, "A": 16, "params": 550000000}, "ignore_order": true}}} {"uuid": "ca95aa28-b131-5cea-880a-63b9357ba912", "question": "Is there any paper that utilizes Gaussian processes to analyze the vulnerability of text-conditioned generative models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["097b01cd-3fe8-5dc6-acf9-62513d376004"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that utilizes Gaussian processes to analyze the vulnerability of text-conditioned generative models?", "reference_answer": "Query-Efficient Black-Box Red Teaming via Bayesian Optimization"}}} {"uuid": "cb2ee6d9-c891-53d8-92de-c5ba08404ab4", "question": "Considering all the methods tested in the experiment section of the paper, which LLM performs the worst on the Jailbreak Success Rate metric?", "answer_format": "Your answer should be a python strings about the name of the LLM model. 
YOU MUST USE THE EXACT NAME FROM THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["69b6b827-febb-5ece-adf1-88e6b6979aed"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Vicuna-13B", "lowercase": true}}} {"uuid": "cb5327e3-022f-5eb5-98fe-a84c26dd68ad", "question": "How many tools does the proposed CodeAgent integrate into its framework, and which one is the most useful based on its ablation study?", "answer_format": "Your answer should be a Python list of length 2. The first one is an integer number, and the second one is a text string of the tool name.", "tags": ["single", "table", "objective"], "anchor_pdf": ["1034e798-2eae-5d42-8efe-75a58da780c8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": 5}, {"gold": "Code Symbol Navigation", "lowercase": true}]}}} {"uuid": "cb721bd4-b219-50b7-99f9-1f0a5f5da438", "question": "Which paper found that mutual learning benefits multilingual models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f2c56843-31c5-5673-9d95-8a1fbdefcc5d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper found that mutual learning benefits multilingual models?", "reference_answer": "Towards Higher Pareto Frontier in Multilingual Machine Translation"}}} {"uuid": "cb9cb4ee-c76a-5b00-bb19-9f238ac88b5f", "question": "When discussing \"Engagingness\" in the paper, what is the definition of engagingness with interestingness from prior research?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": 
["1f63dd31-2d16-5885-84e4-a55d7c03ea8c"], "reference_pdf": ["8bd7983c-5a5b-50cb-99ab-62297274885c", "61add12c-1a79-5ef2-a38e-00e843271ad0", "450c1e1c-8f69-5d85-9a26-df3a876f65e1"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The definition of engagingness should engage the partner in conversation, such as presenting an interesting factTherefore, an engaging response y should provide high volume of information that acknowledges both the history x to engage the partner and the context c which we assume contains relevant facts. This naturally leads to the following metric definition: ENGAGINGNESS (\\mathbf{y} , \\mathbf{x} , \\mathbf{c} ) = sum (align(\\mathbf{y} \\to [\\mathbf{x} , \\mathbf{c} ])), where we concatenate the history x and knowledge context c, and measure the extent of response y's acknowledgement of the information.", "question": "When discussing \"Engagingness\" in the paper, what is the definition of engagingness with interestingness from prior research?"}}} {"uuid": "cc8b6743-e4fd-5365-b027-f6a70a30187e", "question": "Name a paper which proposes a probabilsitic formulation of retrosynthesis.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fa40de30-fa96-53b9-b422-29fc2a233e3a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Name a paper which proposes a probabilsitic formulation of retrosynthesis.", "reference_answer": "RETRO-FALLBACK: RETROSYNTHETIC PLANNING IN AN UNCERTAIN WORLD"}}} {"uuid": "cc9a3391-3e28-5f15-933b-1fca191d7c30", "question": "Which dataset used in this paper consists of 14K open-domain English conversations with a total of 80K question-answer pairs? I want to use this dataset for my research. 
Can you provide me with the github link of this dataset?", "answer_format": "Your answer should be a Python list of 2 strings, the name of the dataset, and the github link of this dataset.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["b4679e24-87b8-53bd-8266-22f2273538bf"], "reference_pdf": ["8acb57ed-7324-5327-8d39-f2c041ec6f2d"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["QReCC", "https://github.com/apple/ml-qrecc"], "ignore_order": true, "lowercase": true}}} {"uuid": "cd182de6-a2ef-52fd-bc07-73a990855005", "question": "What stages does training of Med-Real2Sim comprise?", "answer_format": "Your answer should be a string list consisting of the training stages in order.", "tags": ["single", "text", "objective"], "anchor_pdf": ["090a2d55-9042-555f-8520-81c642111e2a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["physics-informed pretext task", "physics-guided finetuning"], "lowercase": true}}} {"uuid": "cd235027-4032-5403-964a-b2c7e7550966", "question": "How much percent does VerifiNER improve the F1 score of the three baseline models on average on GENIA?", "answer_format": "Your answer should be a Python float rounded to two decimal places WITHOUT ANY PUNCTUATION OR EXPLANATION. e.g. 
21.30", "tags": ["objective", "single", "table"], "anchor_pdf": ["220f0021-1bf8-599f-ab3d-5b46d56cb03e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 7.05, "ndigits": 2, "tolerance": 1e-06}}} {"uuid": "cd63b251-d7ef-58a6-83be-75d95099d550", "question": "Is there a Chinese hate speech paper that constructs an insulting lexicon while building the dataset?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["367cb9f4-3091-58ea-9dae-98ffc875ce92"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a Chinese hate speech paper that constructs an insulting lexicon while building the dataset?", "reference_answer": "Facilitating Fine-grained Detection of Chinese Toxic Language: Hierarchical Taxonomy, Resources, and Benchmarks"}}} {"uuid": "cd837558-b900-5448-9c36-9a0c0f29924d", "question": "Which paper proposes a memory-efficient optimizer considering the confidence of each update during the optimization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a7eb96e6-4f77-5940-b340-55d0c1be2345"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes a memory-efficient optimizer considering the confidence of each update during the optimization?", "reference_answer": "CAME: Confidence-guided Adaptive Memory Efficient Optimization"}}} {"uuid": "cd837c4f-07d1-5db8-84c1-f258aa7985ea", "question": "Considering both benefits and costs, what is the best size of generation pool for the proposed method?", "answer_format": "Your answer should be a single integer number.", "tags": ["image", 
"objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}, "anchor_pdf": ["ca116924-bf11-5529-a43f-bf68e9745c5c"], "reference_pdf": []} {"uuid": "cd981e15-df19-5c78-8ed1-13b16d0ff91f", "question": "Why can I find the exact dataset utilized by the LEGO-Prover paper?", "answer_format": "Your answer should be a string, the URL as given in the paper without \"https://\", \"http://\" or \"www.\", e.g. \"google.com\"", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["6f300c27-36eb-51d0-a035-c9287ade3481", "cd153947-23dc-5987-bc58-ffa1dbca52fc"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "github.com/facebookresearch/miniF2F", "ignore_blank": false, "lowercase": false, "fuzz_method": "ratio", "threshold": 95}}} {"uuid": "cdf4e053-3112-54ba-a582-6fbb58c15a20", "question": "When it comes to Empirical and Certified Robustness, On which dataset and which poison rate the accuracy on Benign Samples of the method proposed by the paper is closest to the accuracy on Benign Samples of no defence situation?", "answer_format": "Your answer should be a single python list, the first element is the dataset name, the second element is a float number rounded to 1 decimal place.", "tags": ["table", "single", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["HSOL", 0.1], "ignore_order": false, "ndigits": 1}}, "anchor_pdf": ["122af217-4404-52d4-8d17-2a763d95441c"], "reference_pdf": []} {"uuid": "cdfcefb3-e2be-515b-aa68-6baf717b17a2", "question": "What paper showed first that one can build a fully differentiable mixture of experts layer with no increase in time complexity?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": 
["36f7c548-f8c2-5fc9-ba12-a35ac045bc25"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper showed first that one can build a fully differentiable mixture of experts layer with no increase in time complexity?", "reference_answer": "From Sparse to Soft Mixtures of Experts"}}} {"uuid": "ce722b7c-281b-5f8e-bf5e-06117f832f54", "question": "What is the iAA(inter-annotator agreement when AttentionXML) value of the correctly predicted results?", "answer_format": "Your answer should be a floating-point number with two decimal place.", "tags": ["single", "image", "objective"], "anchor_pdf": ["18d8e402-75a3-50e2-8f1c-e36f234617b0"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.48, "ndigits": 2}}} {"uuid": "ce769caf-b9cd-58c7-9b38-ee23d0d17f9b", "question": "How many more turns per dialogue are there in MMDU Benchmark than in MMDialog?", "answer_format": "Your answer should be an integer, the difference of average turns rounded to integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1d672477-cc66-5013-8bd5-8180c44a884f", "954a1c0e-b7f6-5871-b284-492da7703fc2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 10.5, "ndigits": 1, "tolerance": 0.6}}} {"uuid": "ce95db65-95c3-55d5-8eda-3e80ef6d0775", "question": "Using only task-level prompts or using only example-specific prompts, which is better on the Multi-Domain test set?", "answer_format": "Your answer should be a single string, either \"task-level\" or \"example-specific\".", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "example-specific", "lowercase": true}}, "anchor_pdf": ["5a2b95c1-12d6-5b77-82a1-ee24180d27ae"], "reference_pdf": []} {"uuid": "cecfb20f-ebba-5f01-98c1-5259abb28f74", 
"question": "Which model did the authors use to fine-tune the pre-detector $g_{\\phi}$ and conflict disambiguator $g_{\\psi}$? How many times more parameters does ELMo have than this model?", "answer_format": "Your answer should be a python list of two elements. The first element is a string of the name of a model, and the second element is a python float with two decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["eb149271-ff42-5bcb-b891-b8c3fb6a02d3"], "reference_pdf": ["7efa89b4-4460-5eed-b6f0-62238a690c9b"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["DistilBERT", 1.73], "ignore_order": false, "ndigits": 2, "lowercase": true}}} {"uuid": "cf038bcd-0053-5e50-8f6c-b52b103387c3", "question": "Which Dataset has the most classes according to Table 1?", "answer_format": "Your answer should be a single string of the Dataset's name.", "tags": ["single", "objective", "table"], "anchor_pdf": ["0a5088a8-c2d7-55c3-b08f-a690bab767b0"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ATIS Intent", "lowercase": true}}} {"uuid": "cff35edd-a526-59ea-a003-787ebabcd2d7", "question": "Which paper first applied the chain-of-thought technique in the text summarization field?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["626abf9a-39db-5a6e-bbb9-1740bd5f89f8"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first applied the chain-of-thought technique in the text summarization field?", "reference_answer": "Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method"}}} {"uuid": "d011e781-2e29-52ea-8281-e7bc25c68622", "question": "What paper first proposed a robust 
perceptual similarity metric with certificates?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ad933ed4-553d-5e50-9868-ee7665f0f22a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first proposed a robust perceptual similarity metric with certificates?", "reference_answer": "LIPSIM: A PROVABLY ROBUST PERCEPTUAL SIMILARITY METRIC"}}} {"uuid": "d0157667-a921-5e91-8948-4e0f31b3010c", "question": "Is there a paper that uses an app for a popular tabletop game to gather real transcripts of gameplay with concrete values for players' and monsters' health?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9e3adda6-61f8-55c9-8bf1-cb473027c625"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that uses an app for a popular tabletop game to gather real transcripts of gameplay with concrete values for players' and monsters' health?", "reference_answer": "FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information"}}} {"uuid": "d017f05c-7062-526f-9b1c-8ec63bbda641", "question": "What are the sources of the forecasting questions in the datasets used in the experimental section of the paper \"AUTOCAST++: ENHANCING WORLD EVENT PREDICTION WITH ZERO-SHOT RANKING-BASED CONTEXT RETRIEVAL\"?", "answer_format": "Your answer should be a python list of strings, e.g., [\"source1\", \"source2\"].", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["c42e299f-5ce4-529f-902c-242b9d3b1d4e"], "reference_pdf": ["dedcab71-114c-5e10-924a-f16db776b88d"], "conference": [], "evaluator": {"eval_func": 
"eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Metaculus", "Good Judgment Open", "CSET Foretell"], "ignore_order": true, "lowercase": true, "ignore_blank": true, "fuzz_method": "ratio", "threshold": 85}}} {"uuid": "d01dc0bf-ace9-5117-bd6f-8c943ddf494c", "question": "Among the several biases presented in the paper named \"Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena\", which one is considered and discussed in the paper named \"Humans or LLMs as the Judge? A Study on Judgement Bias\"? After the discussion, is this bias the main influential factor in the paper (I mean the latter)?", "answer_format": "Your answer should be a single list, the first element is the bias name as a string, the second element is a bool, e.g., [\"Verbosity bias\", false]", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_bool_exact_match"], "eval_kwargs_list": [{"gold": "Self-enhancement bias", "lowercase": true}, {"gold": false}]}}, "anchor_pdf": ["bdd90971-ecf1-5dcc-87cf-3b4a75ad4b01", "95c4da59-2aea-5163-9044-3554ca09aa83"], "reference_pdf": []} {"uuid": "d0f2ce9d-5b5b-5920-a747-48a3ad34cdfe", "question": "I believe introducing synthetic text-to-SQL data into model fine-tuning will benefit the final accuracy of the text-to-SQL task. 
But will this also help the model to generalize better on other general and broad tasks?", "answer_format": "Your answer should be a single boolean value (`True` or `False`) indicating whether the synthetic data will improve the performance on universal tasks, not only text-to-SQL.", "tags": ["single", "table", "objective"], "anchor_pdf": ["5ff99ebb-34e0-5edc-ac57-855b1b77f965"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_bool_exact_match", "eval_kwargs": {"gold": true}}} {"uuid": "d170af87-1580-52a1-b6d1-814f2ddbfac4", "question": "What's the evaluation baseline used in the paper titled \"Generative Adversarial Training with Perturbed Token Detection for Model Robustness\"? What are the contributions that make this baseline different from existing adversarial datasets?", "answer_format": "Your answer should be a python list of two strings, the first element is the baseline name (the abbreviation format is enough), and the second element is the contributions.", "tags": ["text", "multiple", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_partial_scoring_points_with_llm"], "eval_kwargs_list": [{"gold": "AdvGLUE", "lowercase": true}, {"scoring_points": ["Comprehensive Coverage: AdvGLUE is able to cover as many adversarial linguistic phenomena as possible.", "Systematic Annotations: this is the first work that performs systematic and comprehensive evaluation and annotation over 14 different textual adversarial examples. 
", "General Compatibility: AdvGLUE covers the widely-used GLUE tasks and creates an adversarial version of the GLUE benchmark to evaluate the robustness of language models", "High Transferability and Effectiveness: AdvGLUE has high adversarial transferability and can effectively attack a wide range of state-of-the-art models."], "question": "What are the contributions that make this baseline different from existing adversarial datasets?", "count": 3}]}}, "anchor_pdf": ["8c5db97f-f499-5641-8d74-d0d64d980f53"], "reference_pdf": ["32a1dee2-310a-5ead-8d2f-c957cc59e3dc"]} {"uuid": "d1df78e0-b32e-5878-b302-ae1d5408e8a7", "question": "What is a paper studying data being collected in bundles in reinforcement learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["2a6f1918-a45c-531e-b554-4d425079078d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What is a paper studying data being collected in bundles in reinforcement learning?", "reference_answer": "Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity"}}} {"uuid": "d1f34ae4-023d-5913-bcaa-0f58087bbe36", "question": "According to the paper that proposed the second smallest baseline applied in the MetaGPT paper, what's the difference in pass rate between the best and the worst method, under the single-line infilling setting?", "answer_format": "Your answer should be a float between 0 and 100, rounding to one decimal place.", "tags": ["comprehensive", "table", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["460c65d7-a298-5bd3-baa2-dd8683885308", "a328714e-0d88-583c-b272-68c1b3f50548"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 27.4, "ndigits": 1}}} {"uuid": "d2223321-8fa3-5adc-a616-0b5d794941f6", 
"question": "What is the architecture of the transformer-based classifier used in the paper \"A Two-Model Approach for Humour Style Recognition\"?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["bf865303-7fd8-5e3d-a8ee-7ee54b04ed40"], "reference_pdf": ["7efa89b4-4460-5eed-b6f0-62238a690c9b"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The student DistilBERT has the same general architecture as BERT.", "The token-type embeddings and the pooler are removed while the number of layers is reduced by a factor of 2."], "question": "What is the architecture of the transformer-based classifier used in the paper \"A Two-Model Approach for Humour Style Recognition\"?"}}} {"uuid": "d2369c7a-ea10-5299-818e-78c80de60a82", "question": "Is there a single GNN model that can inductively generalize to any knowledge graph?;What is the method to generalize knowledge graph reasoning to graphs with new entities and relations?;Is there a foundation model for knowledge graphs that does not learn embeddings for each node and relation type?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ec1f567e-30b2-5fb3-a626-b478de9f79ba"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a single GNN model that can inductively generalize to any knowledge graph?;What is the method to generalize knowledge graph reasoning to graphs with new entities and relations?;Is there a foundation model for knowledge graphs that does not learn embeddings for each node and relation type?", "reference_answer": "TOWARDS FOUNDATION MODELS FOR KNOWLEDGE GRAPH REASONING"}}} {"uuid": "d28d742d-3c54-5729-9aec-ff098cd5f44f", "question": "Which large model series were used to 
evaluate the prototype of the BBH dataset when it was proposed in the paper \"Tree of Problems: Improving structured problem solving with compositionality\"?", "answer_format": "Your answer should be a python list of strings, e.g., ['model1', 'model2']. YOU MUST USE THE ABBREVIATIONS PROVIDED IN THE PAPER.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["54d865d4-779e-525d-8e06-c9cc2207beb3"], "reference_pdf": ["77cf04ea-fbbf-5cf9-901a-7cbc93b543ed"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["BIG-G", "BIG-G sparse", "PaLM", "Gopher", "Chinchilla", "T0"], "ignore_order": true, "lowercase": true, "ignore_blank": true}}} {"uuid": "d2ce712c-a887-538c-a0fd-4cf01de110d4", "question": "Is there any paper that leverages syntactic rules to explicitly guide text generation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["19f7a145-c823-54c9-8cb0-c538cbce622d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that leverages syntactic rules to explicitly guide text generation?", "reference_answer": "Explicit Syntactic Guidance for Neural Text Generation"}}} {"uuid": "d2f3c57b-d05d-522b-b8bf-21651c72b837", "question": "What paper mitigates the vocabulary size limitation when pretraining multilingual masked language models using a contrastive loss?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fe8f6536-efa7-533f-9aa9-9c0be763896a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper mitigates the vocabulary size limitation when pretraining multilingual masked language 
models using a contrastive loss?", "reference_answer": "Headless Language Models: Learning without Predicting with Contrastive Weight Tying"}}} {"uuid": "d35568c3-eed9-5383-a49a-c363470c175d", "question": "In the main evaluation results of this paper, among the two baselines that both use CodeLLaMA as the base model, which one performs better? In the paper introducing this model, aside from the datasets mentioned in this paper, what other in-domain datasets are used?", "answer_format": "Your answer should be a python list, the first element is the name of the baseline model, and the following elements are the in-domain datasets used in the paper, e.g., [\"baseline_model_name\", \"dataset1\", \"dataset2\", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "objective", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["MAmmoTH-Coder", "AQuA-RAT", "NumGLUE"], "ignore_order": true}}, "anchor_pdf": ["7d6f212e-3d4c-5cb2-877c-5d233ae46f3b"], "reference_pdf": ["ecba768d-4b87-58ca-968b-2a375793a798", "b846c66a-a177-5119-af8d-ec4757d6a06c"]} {"uuid": "d4046885-386a-5ea9-a53e-44a4d33ab4b4", "question": "Is there a commonsense reasoning dataset which generates diverse sentences to describe the relation between concepts?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["56365a52-6cd9-52dd-9f02-d496a463d0bc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a commonsense reasoning dataset which generates diverse sentences to describe the relation between concepts?", "reference_answer": "DimonGen: Diversified Generative Commonsense Reasoning for Explaining Concept Relationships"}}} {"uuid": "d4098c02-cef5-5a6e-b9ec-12e068658af6", "question": 
"Which datasets are used in the experiments of the paper proposing the previous best RGB-based method that are not used in this paper?", "answer_format": "Your answer should be a python list of the names of the datasets, e.g., ['dataset1', 'dataset2']. YOU MUST USE THE EXACT NAMES FROM THE PAPER.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["06ba9b19-1c5a-5765-aafb-16345a42de99"], "reference_pdf": ["b43645ca-00f6-568b-a9d3-de859a91f1d9"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CSL-Daily", "CSL"], "ignore_order": true, "lowercase": true}}} {"uuid": "d49c4e91-ace9-5ba1-a728-6083ffc72194", "question": "According to Table 3, on which single task and on which metric can no multi-task model outperform the corresponding single-task model?", "answer_format": "Your answer should be a Python list of two strings. The first string is the name of the task, and the second string is the name of the metric.", "tags": ["single", "table", "objective"], "anchor_pdf": ["c3936fc4-4cf3-5550-b694-4cdc10986752"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["TextLM", "sBLIMP"], "lowercase": true}}} {"uuid": "d4cca186-6fd8-5b84-a38a-6145ceaec283", "question": "What's the second method discussed in the \"Related Work\" of the paper that proposes CoMeDi? 
In the work that proposed that method, how many fewer sabotages per game on average did the method proposed by the authors make than their baseline, under the \"Non-repulser\" condition?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the method, and the second is a float rounding to 2 decimal places, the difference of sabotages per game on average.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["00c08aac-4af9-5950-a3c2-9a0837e1cc1b"], "reference_pdf": ["d374f4a0-de68-572c-ba4f-166ff5ee6f28"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ADVERSITY", 1.52], "ndigits": 2, "lowercase": true, "ignore_order": false}}} {"uuid": "d53db8ce-5380-58fc-be67-409a729fb21f", "question": "Provide a brief introduction to the task in the SuperGLUE benchmark that was not used in the paper \"CUSTOMIZABLE COMBINATION OF PARAMETER-EFFICIENT MODULES FOR MULTI-TASK LEARNING\".", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "image", "subjective"], "anchor_pdf": ["51052be9-02de-56d2-b1c3-556ca1e66166"], "reference_pdf": ["81c6be03-577c-51d5-8e65-f63b3e709112"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "ReCoRD is a multiple-choice QA task. Each example consists of a news article and a Cloze-style question about the article in which one entity is masked out. The system must predict the masked out entity from a list of possible entities in the provided passage, where the same entity may be expressed with multiple different surface forms, which are all considered correct. 
Articles are from CNN and Daily Mail.", "question": "Provide a brief introduction to the tasks in the SuperGLUE benchmark that were not used in the paper \"CUSTOMIZABLE COMBINATION OF PARAMETER-EFFICIENT MODULES FOR MULTI-TASK LEARNING\"."}}} {"uuid": "d575f608-c3fc-5a6a-97ea-01443d949f57", "question": "How many more examples per model are used in the experiment of \"LLM Evaluators Recognize and Favor Their Own Generations\" than in \"Benchmarking Cognitive Biases in Large Language Models as Evaluators\"?", "answer_format": "Your answer should be an integer.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["3d218d94-1aa0-5a70-b23e-accb254141bd", "2f1c8d90-3428-52b0-b7ec-da132f9178e6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 1950}}} {"uuid": "d5e3a89b-4ef9-5ce5-b80b-76cba7c02e76", "question": "What are the detailed hyperparameters of the sentence-level scorer in the Fine-grained Evaluation System of this paper?", "answer_format": "Your answer should be a python dictionary about the hyperparameters, e.g. {\"hyperparameter1\": \"value1\", \"hyperparameter2\": \"value2\", ...}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["597bfc37-ca03-5675-905f-52e2f69e1a8c"], "reference_pdf": ["ded3a96f-b919-5ad4-88d2-50547ed66c96"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Hidden size": "2048", "Immediate Hidden Size": "5632", "Context Len": "2048", "Heads": "32", "Layers": "32", "Vocab Size": "32000"}, "ignore_order": true, "lowercase": true}}} {"uuid": "d5ea5e23-0a82-5621-9932-ff0f19a68885", "question": "The paper (\"BLM-s/lE: A structured dataset of English spray-load verb alternations for testing generalization in LLMs\") uses two pre-trained models for its experiments. 
For the newer one, what is its name and based on what task is it pre-trained?", "answer_format": "Your answer should be a single python list, the first element is the name of the model, the second element is the task name it is pre-trained on.", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ELECTRA", "replaced token detection"], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["3aaa5bca-d686-5f64-b1ef-92d8b28fb733"], "reference_pdf": ["c4d02102-b1c7-5b72-a414-9c175a49be48"]} {"uuid": "d60762bb-75c6-57d0-918b-9df1525d7269", "question": "In the introduction of this paper, five tracks are mentioned. What are the detailed definitions of Track 2 and Track 3?", "answer_format": "Your answer should be a python string describing the detailed definitions of Track 2 and Track 3.", "tags": ["multiple", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The formulation of Track 2 is to predict, for each essay, Batson's empathic concern (\"feeling for someone\") and personal distress (\"suffering with someone\") scores. Teams are expected to develop models that predict the empathy score for each essay (self-report data from the essay writer). Both empathy and distress scores are real values between 1 and 7. Empathy score is an average of 7-point scale ratings, representing each of the following states (warm, tender, sympathetic, softhearted, moved, compassionate); distress score is an average of 7-point scale ratings, representing each of the following states (worried, upset, troubled, perturbed, grieved, disturbed, alarmed, distressed). These are state measures: measures that vary within people across time. 
The formulation of Track 3 is to predict, for each essay, one or more emotion labels from the following Ekman's six basic emotions (sadness, joy, disgust, surprise, anger, or fear) as well as neutral, and we also added hope.", "question": "In the introduction of this paper, five tracks are mentioned. What are the detailed definitions of Track 2 and Track 3, and how do they differ from each other?"}}, "anchor_pdf": ["b498ac0f-d8db-56e7-8809-1ef9c7e25e02"], "reference_pdf": ["699b8024-5de8-5f10-9b89-94bdc13e3e68"]} {"uuid": "d68e9387-5bbd-5a7d-9091-bc67f849d296", "question": "Which paper first constructed a structured knowledge base to interconnect different human social roles and attributes?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["4110bc12-5db7-59cb-b2a0-fe6b238b8b28"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first constructed a structured knowledge base to interconnect different human social roles and attributes?", "reference_answer": "PEACOK: Persona Commonsense Knowledge for Consistent and Engaging Narratives"}}} {"uuid": "d6e0672c-e059-597e-9686-8e77be00fc2c", "question": "According to the DreamLLM paper, how many evaluated Text2Image Specialists did it fail to beat on MS-COCO?", "answer_format": "Your answer should be an integer.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["27b43cd6-f613-50a3-b314-a8667215672a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "d74c128e-d9ec-545c-b10b-d3d3116d9ec9", "question": "How many samples are there in SeeClick's general vision-language instruction-following data?", "answer_format": "Your answer should be a single number rounded to the nearest thousand, e.g. 
15000.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["6e9001d2-7637-5049-ad35-2adfdfc9c8d1"], "reference_pdf": ["86922a0e-7874-5f9a-926b-0f886076d6e8"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 158000}}} {"uuid": "d7aa4317-7e09-53d4-9b5d-61b51995b83f", "question": "Which paper first applied the chain of thought concepts in the 3D localization problem?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["33234769-20a9-52f3-a36f-bba72cd177ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first applied the chain of thought concepts in the 3D localization problem?", "reference_answer": "COT3DREF: CHAIN-OF-THOUGHTS DATA-EFFICIENT 3D VISUAL GROUNDING"}}} {"uuid": "d7d4bc83-37ab-5ab9-8693-b4c2e6e38781", "question": "What do you think is the biggest advantage of Variator compared to baseline models as shown in table 1 in the article 'Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules'? Please briefly describe the working principle of LTP, which is one of the baseline models.", "answer_format": "Your answer should be a brief text.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["05ed59ad-42f0-5267-9275-a13db0684cde"], "reference_pdf": ["916615ea-a2db-5994-815e-ff4c0b641987"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The biggest advantage of Variator is its minimal additional parameters. 
LTP works by adaptively removing unimportant tokens based on attention scores, pruning tokens below a learned threshold for each layer.", "question": "What do you think is the biggest advantage of Variator compared to baseline models as shown in table 1 in the article 'Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules'? Please briefly describe the working principle of LTP, which is one of the baseline models."}}} {"uuid": "d82a4438-587e-5405-9351-319110cd89de", "question": "What is the average proportion of papers in the ACL anthology in the recent ten years which mention the words speech, spoken or audio in the title?", "answer_format": "Your answer should be a Python float number rounding to 3 decimal places, e.g., 0.001.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.035, "ndigits": 3, "tolerance": 0.0001}}, "anchor_pdf": ["a88d4d17-b2f9-520e-a611-c2c4ba178be5"], "reference_pdf": []} {"uuid": "d8bac6a0-2cb4-5620-ac2c-1b2b67c25d0b", "question": "Which two prompting methods in the two papers have similar principles?", "answer_format": "Your answer should be a python list of two prompting methods. 
You must use the abbreviations as given in the papers.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["eafdcc26-c44d-58fe-9b56-9870fd83c099", "2f2e4311-fc9b-5e36-bb18-7c3fee141713"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ReAct", "IMR-TIP"], "ignore_order": true, "lowercase": true}}} {"uuid": "da0e52b5-63f3-5fac-a046-063ecb48cf5a", "question": "When conducting experiments, which kind of GPU device is used in this paper?", "answer_format": "Your answer should be a brief text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["c2a32b9f-dbe7-5d89-a71d-feabd95e7fd2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "All experiments were done with an Nvidia A40 GPU in this paper.", "question": "When conducting experiments, which kind of GPU device is used in this paper?"}}} {"uuid": "da0eec1f-57e5-5fd8-aff2-cd21493eb60c", "question": "Has any study explored the zero-shot extraction of persona characteristics within conversational dialogues?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8ac51308-32a6-58fd-bf16-4a0626af4f69"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Has any study explored the zero-shot extraction of persona characteristics within conversational dialogues?", "reference_answer": "PAED: Zero-Shot Persona Attribute Extraction in Dialogues"}}} {"uuid": "da8f996b-f289-5718-ac05-36ba34285a28", "question": "Which paper first tried to fine-tune LLMs with chain-of-thoughts and program-of-thoughts for math reasoning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": 
[], "reference_pdf": ["6733a03d-61b5-54d1-85c3-39e5b04f0f3d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first tried to fine-tune LLMs with chain-of-thoughts and program-of-thoughts for math reasoning?", "reference_answer": "MAMMOTH: BUILDING MATH GENERALIST MODELS THROUGH HYBRID INSTRUCTION TUNING"}}} {"uuid": "da9752f5-e86d-577d-a99b-88a399197e6a", "question": "How many multi-modal baselines excluding the method this paper proposed do the authors use? Among these baselines, which reaches the highest F1 score on the Twitter2015 dataset? And what about the Twitter2017 dataset?", "answer_format": "Your answer should be a python list, the first item is the number of multi-modal baselines the paper used, the second item and the third item are the names of the methods reaching the highest F1 score on the Twitter2015 and Twitter2017 datasets respectively.", "tags": ["metadata", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_string_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": 8}, {"gold": "VLP-MABSA"}, {"gold": "CMMT"}]}}, "anchor_pdf": ["6a24d7f4-430d-5c92-b259-f62f76490147"], "reference_pdf": []} {"uuid": "db1901ae-ae9a-5f74-9479-c1846458d265", "question": "Is there a paper which applies Bayesian optimization to modular continual learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["223f9104-fc7e-5ec2-9007-c17bdead1386"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper which applies Bayesian optimization to modular continual learning?", "reference_answer": "A Probabilistic Framework for Modular Continual Learning"}}} {"uuid": 
"db606413-3034-5687-a6ec-535a4244e8a1", "question": "For zero-shot performance on unseen languages, which model in the experiment of the paper (titled \"Zero-Shot Cross-Lingual NER Using Phonemic Representations for Low-Resource Languages\") gets the highest F1 score? In the source paper of this model, which languages is it evaluated on?", "answer_format": "Your answer should be a python list like [\"string1\", [\"string2\", \"string3\", ...]]. The first element should be a string, representing the name of the model. The second element should be a list of strings, representing the languages. NOTE that the languages should be in the format of ISO 639-3 code.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["9ae69811-4079-58ad-97ed-57697f78c878"], "reference_pdf": ["511235b6-d1dd-5f1b-b274-317b7f89c254"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "XPhoneBERT", "lowercase": true}, {"gold": ["eng-us", "vie-n"], "ignore_order": true, "lowercase": true}]}}} {"uuid": "db9b0fe4-a8e1-5344-8fab-77bbea36c1f1", "question": "Which paper first constructed a large-scale corpus to improve in-context learning of large language models in the pre-training stage?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f86f7de6-5e6c-5c02-8dad-368acbaf1dba"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first constructed a large-scale corpus to improve in-context learning of large language models in the pre-training stage?", "reference_answer": "Pre-Training to Learn in Context"}}} {"uuid": "dbe56b05-6b5d-5b95-8b02-6d455e8f0c75", "question": "How many more layers of Transformer should the new method compute compared with the 
standard LLM in an LLM with 13B parameters? ", "answer_format": "Your answer should be an integer.", "tags": ["single", "text", "objective"], "anchor_pdf": ["0a2c3d8b-dc16-570c-b354-11797aebe290"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "dc151869-421a-5d8e-b56b-af7266c08585", "question": "What training acceleration methods are compared in the original paper that describes the methods used in training the FLM-101B model?", "answer_format": "Your answer should be a python list of strings.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["0207c0f7-4f0a-5aca-a744-749680da8934", "58fb66aa-41bf-59a9-8e27-d9effa1f81aa"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_list_included", "eval_kwargs": {"gold": ["Stacking", "CompoundGrow", "Staged", "Bert2BERT", "LiGO"], "lowercase": true}}} {"uuid": "dc634e00-e936-527b-b3e1-93565fa0178b", "question": "What are some data-efficient ways to learn text embeddings through contrastive learning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9e0844b8-5024-5d04-a4dc-5ed6bd14a6ba"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What are some data-efficient ways to learn text embeddings through contrastive learning?", "reference_answer": "Composition-contrastive Learning for Sentence Embeddings"}}} {"uuid": "dcd9e737-6a5f-519f-8357-0a0e6d002c1e", "question": "From which two subsets is the benchmark used as the evaluation set for BLOOM and BLOOMZ models in the paper merged? (The paper is named \"An Empirical Study of In-context Learning in LLMs for Machine Translation\")", "answer_format": "Your answer should be a single python list, every element of the list is a string of the abbreviation name of the subset, 
e.g.[\"TAT-Conv\",\"TAT-Web\"].", "tags": ["objective", "multiple", "text"], "anchor_pdf": ["bffc816d-612c-5fea-83bb-1ac6b290480b"], "reference_pdf": ["b40e8ca1-d3e8-5553-ac3c-d6ca0b21c628"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["IN22-Wiki", "IN22-Web"], "lowercase": true, "ignore_blank": true}}} {"uuid": "ddd6cf56-5026-5482-b637-f8dd9a20acf6", "question": "Is there a theory paper that explains why sometimes tuning momentum does not boost performance for training a neural network?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["63cc0c96-3296-5c2b-a549-bb82610f0111"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a theory paper that explains why sometimes tuning momentum does not boost performance for training a neural network?", "reference_answer": "The Marginal Value of Momentum for Small Learning Rate SGD"}}} {"uuid": "de8d9b7f-4117-53d1-988d-77036e001339", "question": "Which approach was first proposed to solve the text-conditioned image retrieval task according to the SDA paper? In this approach, how to compute gating connection?", "answer_format": "Your answer should be a python list of two strings, the first is an approach name and you must use abbreviation as given in the papers. 
The second is a formula in LaTeX format.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["246cdbd7-d6bf-5e4f-8836-5b975f544162"], "reference_pdf": ["c7a72a50-58c0-5468-a9b1-e606af443337"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "TIRG"}, {"formulas": "f_{\\text{gate}}(\\phi_x, \\phi_t) = \\sigma \\left( W_{g2} * \\text{RELU} \\left( W_{g1} * [\\phi_x, \\phi_t] \\right) \\right) \\circ \\phi_x", "question": "How is the gating connection computed?"}]}}} {"uuid": "dea6a700-d8b4-5269-851e-d3a99f3961f5", "question": "In the PRL paper, besides the proposed method, which baseline performs better? In the paper that proposes that baseline, what's the loss function for the controller?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the abbreviation of the baseline, and the second is a string, the formula in LaTeX format.", "tags": ["multiple", "image", "formula", "subjective"], "anchor_pdf": ["1bc55c7c-8d2d-5044-a29f-d192744fce84"], "reference_pdf": ["cf9871b5-b2b8-5762-b4ed-5ebde599bbd6"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "h-DQN", "lowercase": true}, {"formulas": "L_1(\\theta_{1,i}) = \\mathbf{E}_{(s,a,g,r,s') \\sim D_1}[(y_{1,i} - Q_1(s,a;\\theta_{1,i},g))^2]", "question": "What's the loss function for the controller?"}]}}} {"uuid": "df2d7dce-b86a-5805-b524-3f453268240f", "question": "According to Figure 1, which year has the highest proportion of NLP papers that explicitly mention speech-related terms in their title?", "answer_format": "Your answer should be the year number with the highest proportion, e.g. 
2000.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 1989}}, "anchor_pdf": ["a88d4d17-b2f9-520e-a611-c2c4ba178be5"], "reference_pdf": []} {"uuid": "df46a4db-9a21-55a7-b84a-7764604b47c5", "question": "Is there any paper that uses Lipschitz continuity in learning a dynamics model?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["5e978885-be39-543f-b1d9-dc71ad71083a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that uses Lipschitz continuity in learning a dynamics model?", "reference_answer": "CCIL: CONTINUITY-BASED DATA AUGMENTATION FOR CORRECTIVE IMITATION LEARNING"}}} {"uuid": "df8afde5-4e93-5e03-86a6-a98bcccdc1e7", "question": "What paper provides generalization bounds for self supervised learning models eg. CLIP", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["056e8e65-3e0d-5f19-a4fe-f902ec9544e7"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper provides generalization bounds for self supervised learning models eg. CLIP", "reference_answer": "Understanding prompt engineering may not require rethinking generalization"}}} {"uuid": "e010a084-060b-5edb-8ff5-9be8bc82f010", "question": "Which baselines are chosen to study for Seperate Training based methods in the paper named \"Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation\"? 
In the source paper of the baseline 'Prior', what control framework is proposed?", "answer_format": "Your answer should be a single python list, the first element is a list of strings of the baselines, the second element is the string about the control framework, e.g. [[\"Prior\", \"Baseline2\"], \"The paper proposes a novel control framework that introduces...\"].", "tags": ["subjective", "multiple", "text"], "anchor_pdf": ["e01387cf-ffe2-5a64-8f38-f9f07b77a4fa"], "reference_pdf": ["58a12963-0058-5883-93d9-a0abc4b9cc63"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The paper proposes a novel framework that introduces a well-formed prior space for effective and flexible control via invertible transformation.", "question": "For the source paper of the baseline 'Prior', what control framework is proposed?"}}} {"uuid": "e03ad6fe-d951-5f49-b4b8-6415e0c8203e", "question": "Which paper produces a dataset for text simplification in over 12 languages and evaluates both finetuning and in-context learning approaches to text simplification in those languages?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9cb77cb2-fcf2-5108-8c2d-b02a59ea4bbc"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper produces a dataset for text simplification in over 12 languages and evaluates both finetuning and in-context learning approaches to text simplification in those languages?", "reference_answer": "Revisiting non-English Text Simplification: A Unified Multilingual Benchmark"}}} {"uuid": "e0411ac6-86d2-52ff-bcc1-4e9dba8177c5", "question": "Is there any paper that automatically creates a dataset for summarizing text from one language to another for a large collection of languages?", "answer_format": "Your answer 
should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["55f3befa-15b3-51ea-86e6-b92bf58912fe"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that automatically creates a dataset for summarizing text from one language to another for a large collection of languages?", "reference_answer": "CrossSum: Beyond English-Centric Cross-Lingual Summarization for 1,500+ Language Pairs"}}} {"uuid": "e1180112-dc52-5a5c-9907-6d007f17b729", "question": "I am a beginner to the field of NLP, and I wonder roughly how many papers on average one paper should cite, given the provided paper list.", "answer_format": "Your answer should be a single integer number, rounded from the average reference number.", "tags": ["multiple", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 43}}, "anchor_pdf": ["0aec0c5f-463b-5907-9e16-e637504b3c3d", "0b1a831b-f792-58bc-a07f-4e73a8fd0e60", "0b77c64c-6b36-5501-848f-79a062be2a45"], "reference_pdf": []} {"uuid": "e1184bfe-06e5-5294-9f96-fa353008ba83", "question": "Who are the authors of this paper? What are their institutions?", "answer_format": "Your answer should be a Python dictionary, e.g. {\"Amy\": [\"Massachusetts Institute of Technology\", \"Carnegie Mellon University\"], \"Bob\": [\"Shanghai Jiaotong University\"]}. 
YOU MUST USE THE FULL AND EXACT WORDS FROM PDF.", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Gili Lior": ["Allen Institute for AI", "The Hebrew University of Jerusalem"], "Yoav Goldberg": ["Allen Institute for AI", "Bar-Ilan University"], "Gabriel Stanovsky": ["Allen Institute for AI", "The Hebrew University of Jerusalem"]}, "ignore_order": true}}, "anchor_pdf": ["28f25ac1-c82a-5208-84b4-8ac1a33ea481"], "reference_pdf": []} {"uuid": "e1464930-9482-58cc-8245-b84ab34841e9", "question": "In the paper that proposed the information filtering hypothesis, under the non-semantic task setting, how much do the frozen LLM transformers improve VectorNet, considering the miss rate?", "answer_format": "Your answer should be a positive float, rounded to 1 decimal place.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["0383a102-c1d6-5e04-a15c-f6c551ef739c"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.5, "ndigits": 1}}} {"uuid": "e24d9741-a47e-5f69-8bde-ead5219761be", "question": "In video diffusion models, is there any paper that tried decomposing a video instruction into sub-instructions of different times?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["995c3e31-d2e4-5649-9c7f-0b62f1aeb86d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In video diffusion models, is there any paper that tried decomposing a video instruction into sub-instructions of different times?", "reference_answer": "Seer: Language Instructed Video Prediction with Latent Diffusion Models"}}} {"uuid": "e26691f8-389b-5939-917d-f0be16cec850", "question": "What are the differences between the two GNeRF 
settings in the experiments of the paper?", "answer_format": "Your answer should be a python string about the obvious differences.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["8b68d335-400a-5997-8d42-216205c5658d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The DTU dataset is used in setting I for training but not used for either training or evaluation in setting II.", "In setting I, N = 8. In setting II, N = 10."], "question": "What are the differences between the two GNeRF settings in the experiments of the paper?"}}} {"uuid": "e26e0a8e-7e6b-55b0-b658-af94309cd496", "question": "According to the experimental results, if we remove the document fact attention module and use mean pooling to fuse all document semantic representation vectors, by how much does the F1 score of FINEGRAINFACT decline in summaries generated by pre-trained language models published in or after 2020?", "answer_format": "Your answer should be a single python float, rounded to 2 decimal places.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.33, "ndigits": 2}}, "anchor_pdf": ["0e6978a1-3a5d-5fdd-808a-033cc79fb049"], "reference_pdf": []} {"uuid": "e26fafda-6b5f-5a6c-8fe6-57647a29c7e7", "question": "Is there any paper trying to improve MLE for auto-regressive language modeling through the lens of optimal transport?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a3d5ca15-dd60-5875-b957-8f6ed4c912b0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper trying to improve MLE for auto-regressive language modeling through the lens of optimal transport?", "reference_answer": "EMO: 
EARTH MOVER DISTANCE OPTIMIZATION FOR AUTO-REGRESSIVE LANGUAGE MODELING"}}} {"uuid": "e2e3bd05-d47f-5602-83a5-06b21a463035", "question": "Which machine learning paper proposed certified robustness in the malware detection domain?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a98fe60e-484f-5179-bea2-92881bbd6de7"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which machine learning paper proposed certified robustness in the malware detection domain?", "reference_answer": "DRSM: DE-RANDOMIZED SMOOTHING ON MALWARE CLASSIFIER PROVIDING CERTIFIED ROBUSTNESS"}}} {"uuid": "e30796f1-1fa4-516f-8295-ba45725d32de", "question": "Is there a dataset that allows one to perform aspect-based sentiment classification on French news?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["82906f3a-4fff-5a34-823d-2d75a606781a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a dataset that allows one to perform aspect-based sentiment classification on French news?", "reference_answer": "MAD-TSC: A Multilingual Aligned News Dataset for Target-dependent Sentiment Classification"}}} {"uuid": "e36edbbf-6630-5a7c-9706-9c4932d865cf", "question": "How many GPUs would be required to train a 70M model used in this paper if we had used the batch sizes from the GPT-3 suite?", "answer_format": "Your answer should be a single integer.", "tags": ["multiple", "objective", "table"], "anchor_pdf": ["1a823707-4dc8-5954-8fe1-de9ba161f77a"], "reference_pdf": ["84eb4718-0ace-52b2-a378-eb5245708462"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} 
{"uuid": "e38767c8-5f41-52ea-91e9-8cc27220be14", "question": "What challenges in mobile health does RoME handle?", "answer_format": "Your answer should be in a well-formated item list.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["3827833d-8cc5-5c39-9019-88b361665aef"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["participant heterogeneity", "nonstationarity", "nonlinear relationships"], "question": "What challenges in mobile health does RoME handle?"}}} {"uuid": "e3a6b6b4-9899-53e5-b338-c77a80ee71a5", "question": "How to achieve zero-shot lip reading?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3780e607-932d-54db-b61d-01ee8b8864d8"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "How to achieve zero-shot lip reading?", "reference_answer": "OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment"}}} {"uuid": "e4945000-28d9-5f63-b821-f3def54eb88c", "question": "Has there been any recent work or competitions focused on the development of methods to counteract clickbait through spoiling, such as revealing key information upfront?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0dce8d24-2297-52da-8298-f22c2bbcee6a"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Has there been any recent work or competitions focused on the development of methods to counteract clickbait through spoiling, such as revealing key information upfront?", "reference_answer": "SemEval-2023 Task 5: Clickbait Spoiling"}}} {"uuid": 
"e4b24c60-eafc-51ff-96ba-7fabb64fc15d", "question": "In the training of LangBrige, when adapting finetuned LMs, is multilingual encoder trainable?", "answer_format": "Your answer should be a Python bool of true or false.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_bool_exact_match", "eval_kwargs": {"gold": false}}, "anchor_pdf": ["11f71fa7-fd1b-5223-8ee1-53ecd8519ed7"], "reference_pdf": []} {"uuid": "e5680763-aa2c-5686-9ed3-762d09067ad6", "question": "What type of information is scored the highest when the loss equals 0, 0.01 and 0.06? What can be concluded from this result?", "answer_format": "Your answer should be a python list of two strings, the first is the type of information, the second is the conclusion from the result.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["ff3d7e78-5724-5f44-b918-a9156cdf9e08"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": ["Compressed", "High", "High"], "lowercase": true}, {"reference_answer": "When there is no information loss, participants evaluate more compressed causal claims significantly more highly than less compressed causal claims. When information loss is moderate, there is no significant difference. 
When information loss is high, participants prefer less compressed claims.", "question": "What can be concluded from this result?"}]}}} {"uuid": "e5b21555-1c9a-5275-be50-1a418f9a59d6", "question": "What are the meanings of function $s$ and function $F$ in Equation (3)?", "answer_format": "Your answer should be a sentence describing the meanings of function $s$ and function $F$ in Equation (3).", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "$s(\\cdot)$ is the cosine similarity and $F(\\cdot)$ is a non-linear mapping on the output representations of PLM.", "question": "What are the meanings of function $s$ and function $F$ in Equation (3)?"}}, "anchor_pdf": ["875e65e0-8e9f-52e3-9d3e-65f15fa1ea82"], "reference_pdf": []} {"uuid": "e652aa6f-5d78-56a5-8cad-549581d96c1f", "question": "For the model that suffered the biggest loss of response fidelity from adding an image to the query, how many hours did its training take? How many AI accelerators were used?", "answer_format": "Your answer should be a python list with 2 elements. 
The elements should be integers, the first one giving the number of hours, and the second one giving the number of accelerators.", "tags": ["multiple", "objective", "image"], "anchor_pdf": ["9dc975f8-4a06-5cfd-802a-4ad16bc47ee4"], "reference_pdf": ["a9379815-1b75-5806-b540-c3dd4170a2ad"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [4, 8], "ignore_order": false}}} {"uuid": "e6a69aa0-9915-5c51-a2e8-ba90140fe58e", "question": "Which paper first combines rewriting and expansion methods to reformulate a query for conversational search?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["03588f6c-205d-5f47-be8f-3b5ec99ffdd5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first combines rewriting and expansion methods to reformulate a query for conversational search?", "reference_answer": "ConvGQR: Generative Query Reformulation for Conversational Search"}}} {"uuid": "e6a78d3c-1bfe-55ea-b070-712554f7cae9", "question": "In the domain of prediction questions, what's the largest issue for LLAMA 2?", "answer_format": "Your answer should be a word or phrase, indicating the largest issue. 
YOU MUST USE THE EXACT WORDS FROM PDF WITHOUT EXPLANATIONS.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Multi-Answers"}}, "anchor_pdf": ["10e37be8-8c9f-590c-b94f-e6bea03794f2"], "reference_pdf": []} {"uuid": "e6bd50e2-c698-520b-b4fe-a62d742c9d01", "question": "When analysing the statistics of Chinese GEC datasets, which dataset used for Fine-tuning has the largest number of sentences?", "answer_format": "Your answer should be a single string of the dataset's name.", "tags": ["single", "table", "objective"], "anchor_pdf": ["be89f00a-d771-5b5e-9b99-ba0becab9275"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Lang8", "lowercase": true}}} {"uuid": "e6fcc866-61ca-5cba-985b-e64ceefdf84c", "question": "What are the 3 primary algorithmic strategies evaluated in the LLMC toolkit for quantizing large language models, and how do they differ in approach?", "answer_format": "Your answer should be a sentence.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["9185e20f-3b22-57bf-8611-cc902a76ebf9"], "reference_pdf": ["0b2c6ed9-5f67-57a6-97a8-492d2561011b"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What are the 3 primary algorithmic strategies evaluated in the LLMC toolkit for quantizing large language models, and how do they differ in approach?", "reference_answer": "1. Transformation, which modifies the distribution of weights and activations to reduce outliers using techniques like scaling-based transformations (e.g., AWQ, SmoothQuant) and rotation-based methods (e.g., QuaRot); 2. Clipping, which limits extreme weight values through symmetric or asymmetric methods to manage outliers that impact quantization; 3. 
Reconstruction, which minimizes quantization error by iteratively adjusting unquantized weights, as exemplified by GPTQ, to closely replicate original model outputs."}}} {"uuid": "e7356e42-a08e-5c65-abb0-6e00ea2a400a", "question": "What paper first proposes that simply reversing the output can significantly enhance the sample efficiency and the performance of the arithmetic capability of a decoder-only Transformer model?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["0875c036-80b2-5520-9e39-23da9ff332ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What paper first proposes that simply reversing the output can significantly enhance the sample efficiency and the performance of the arithmetic capability of a decoder-only Transformer model?", "reference_answer": "Teaching Arithmetic to Small Transformers"}}} {"uuid": "e744adb1-ce7c-5d3e-9b2f-a8790dfb6cb7", "question": "What are the baseline models used in the experiments of the two most recent papers and this paper?", "answer_format": "Your answer should be a python list of elements, each element is the baseline model name string, e.g., [\"model1\", \"model2\", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "table", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["R-BERT", "ZS-BERT"], "ignore_order": true}}, "anchor_pdf": ["97121a3c-fe43-541b-8a54-ae6f7eb19b58"], "reference_pdf": ["92a36b43-491e-5143-8319-e630273ccb0a", "b1038525-40b7-5470-a47c-13470cd55625"]} {"uuid": "e74fd71e-eaf4-59dc-a956-299cee5375e3", "question": "In the paper that shares a similar world model loss function with the R2I paper, a competition concerning Minecraft is introduced. 
Where can I find the data of that competition?", "answer_format": "Your answer should be a Python string starting with \"https://\", the URL of the dataset as given in the paper. You don't need to make sure the URL is still valid, just provide the URL as it is in the paper.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["1fabf58e-4f09-57be-9a63-cdec5d892caf"], "reference_pdf": ["17206f1c-4cdb-59ae-9840-9864f0c9732d"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://minerl.io/diamond", "lowercase": true, "ignore_blank": true}}} {"uuid": "e8654b21-dff6-5447-90f4-afb0974ce94d", "question": "Which backdoor paper first used CLIP to suppress benign features and enhance poisoning features to design triggers?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["55a96c34-11eb-52b2-9bcb-0b5e9996569e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which backdoor paper first used CLIP to suppress benign features and enhance poisoning features to design triggers?", "reference_answer": "Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios"}}} {"uuid": "e87fa3e0-7d2f-5909-8e01-5c2d8de2e64c", "question": "Which dataset did not show improved performance after applying the proposed RECOST method to Alpaca-gpt4, compared to the Random baseline? Tell me this worst-performing dataset. 
And what's the remaining performance gap for our best-performing RECOST method compared to the reported human upper bound on the testset for that dataset?", "answer_format": "Your answer should be a Python list of two elements, where the first element is the name of the dataset, and the second element is a float number rounded to 2 decimal places, calculated by subtracting the performance of the best-performing RECOST method from the reported human upper bound performance for that dataset.", "tags": ["metadata", "multiple", "objective", "table"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"gold": "Hellaswag", "lowercase": true}, {"gold": 14.97, "ndigits": 2, "tolerance": 0.0001}]}}, "anchor_pdf": ["0e1f319d-ad46-5528-9559-9208708536e9"], "reference_pdf": ["7d4754c9-e8ac-51de-aa10-0bb4df7c4ff0"]} {"uuid": "e89d9ee6-ed85-55bc-98fc-687823d1695f", "question": "What data augmentation strategies are used in the recently proposed dataset used in this paper?", "answer_format": "Your answer should be a python strings about the detailed data augmentation strategies.", "tags": ["multiple", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "To achieve the balance between the two-way translation in language pairs, two data augmentation strategies were utilized to enrich the corpus if necessary: In cases where the number of parallel corpus falls below 1 million, we flip the entire corpus to create the corpus for the opposite translation direction. In contrast, for corpora with more than 1 million instances, we randomly flip half the amount of corpus to generate the corresponding corpus. 
After data augmenting, the initial corpus of 142 translation directions is substantially enriched, expanding to a significantly larger corpus of 242 translation directions.", "question": "What data augmentation strategies are used in the recently proposed dataset used in this paper?"}}, "anchor_pdf": ["2c601b1c-36cd-5106-9b44-889c55a377c8"], "reference_pdf": ["063cfa76-8115-5e5e-a5c3-c794ee055b2c"]} {"uuid": "e8c52858-a386-5e87-a9ba-3a7ec32ae1e2", "question": "What are the catogories of label biases in in-context learning for text classification and what are the definitions of these categories?", "answer_format": "Your answer should be a Python list of text strings, with each element being one category that this paper defines, e.g., [\"category 1: define 1\", \"category 2: define 2\", ...].", "tags": ["single", "subjective", "text"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["vanilla label bias: The model's non-contextualized preference for the label names (e.g., the common token bias caused by different frequencies of label names in the pretraining corpus).", "context-label bias: The effects of the context prompt (e.g., LLMs tend to prefer the majority and last label of the in-context examples).", "domain-label bias: The effects of the task corpus on the model's predictions."], "question": "What are the catogories of label biases in in-context learning for text classification and what are the definitions of these categories?", "ignore_order": true}}, "anchor_pdf": ["15f6962b-d927-5d71-b01e-f0664e09eeb5"], "reference_pdf": []} {"uuid": "e910df81-f6bc-5c88-8df2-b99ce1990a47", "question": "Which work discusses an analysis of source and target contributions to output generation based on local interpretation when machine translation models experience hallucinations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], 
"anchor_pdf": [], "reference_pdf": ["4788e740-77b6-50b5-9217-f9400d6d116e"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which work discusses an analysis of source and target contributions to output generation based on local interpretation when machine translation models experience hallucinations?", "reference_answer": "Local Interpretation of Transformer Based on Linear Decomposition"}}} {"uuid": "e91cd875-a6a7-540d-80be-279f30dd2e4a", "question": "Which paper first found that when transformers are trained to in-context learn function classes, they might exhibit generalization followed by memorization, in certain settings?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ff730085-7fb1-5e27-932f-e4b88ef3222b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first found that when transformers are trained to in-context learn function classes, they might exhibit generalization followed by memorization, in certain settings?", "reference_answer": "In-Context Learning through the Bayesian Prism"}}} {"uuid": "e9748de4-8290-5fbe-9814-9443d3f4075e", "question": "According to the paper that proposed Synapse, which other concurrent work performs the best? 
In the dataset that both papers applied, how many tasks are there?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the method, and the second is an integer, the number of tasks.", "tags": ["multiple", "table", "image", "objective"], "anchor_pdf": ["3ae0bebe-bf9c-5ff2-9721-620c3f53f5e9"], "reference_pdf": ["47fab7cf-2822-59e7-9ae1-d99791af4736", "383e07d7-d966-5ac1-b43c-6bdf712ed32b"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Pix2Act", 8], "lowercase": true, "ignore_blank": true, "ignore_order": false}}} {"uuid": "ea28952f-c060-5e5a-b0bc-0269aaab57fe", "question": "The two anchor PDFs propose the AudioDec and LMCodec models, both of which improve upon the same codec. What is the name of this codec? Also, do the RVQ parts in both works have the same single codebook size?", "answer_format": "Your answer should be a Python list where the first item is the codec name (a string) and the second item is 'yes' or 'no' (a string).", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["f03c7d74-1e5d-5288-8cdb-5ce2c5eec686", "fdebebe2-5205-511d-a14c-89f11d1b33ac"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SoundStream", "yes"], "lowercase": true}}} {"uuid": "ea3a6252-6542-58e2-85e5-0c5274fac510", "question": "Out of the four baselines used by the paper that proposes the MEQE method, which one is also utilized as a baseline by the other three papers? Additionally, what are the two other baselines that the three papers have in common?", "answer_format": "Your answer should be a Python list with two elements. The first element should be the full name of the baseline shared by all four papers (the anchor PDF method and the three other baselines). The second element should be a Python list containing the full names of the two additional baselines shared by the three papers.
Ensure the baseline names are the full names.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["0a7bb69e-e83a-56b7-80ca-cbad1621dde4"], "reference_pdf": ["a3c5d97a-3112-50fa-b800-2dfc7c3e5fb4", "71357d95-65ac-5dea-8ff4-1f7ddd27af1b", "3cb3a9ff-b420-56a6-9948-7518501efc0a", "dec93516-8248-5c04-861d-9a380d21c9cb"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "Graph Query Embedding", "lowercase": true}, {"gold": ["Query2Box", "Beta Embedding"], "lowercase": true, "ignore_order": true}]}}} {"uuid": "ea497ecc-8bd7-5954-a3ac-d212432c7feb", "question": "Which paper surveyed the datasets and tasks of asking clarification questions in conversational systems?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["ae7f9e54-713a-598b-bf12-52410d590c62"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper surveyed the datasets and tasks of asking clarification questions in conversational systems?", "reference_answer": "A Survey on Asking Clarification Questions Datasets in Conversational Systems"}}} {"uuid": "ea6c0002-4771-57a4-af92-55a590a92777", "question": "Which model achieves superior performance with a large number of examples in the task of cancer type classification?", "answer_format": "Your answer should be a single model name used in the corresponding figure.", "tags": ["single", "image", "objective"], "anchor_pdf": ["4c3f5e08-659b-5e15-81bd-ebdb24ffc2e6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Med-Flamingo", "lowercase": true}}} {"uuid": "ea965a94-3dc2-58e6-93e6-6da8d839e7e8", "question": "What is a large event-coverage
general-domain event argument extraction dataset?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f81b1f9f-c5aa-5d45-a3f7-5bf6405c2418"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What is a large event-coverage general-domain event argument extraction dataset?", "reference_answer": "GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles"}}} {"uuid": "eb327f3c-93ea-5851-a544-9c05b109ac16", "question": "In the experiments, which datasets did the authors use, and how many samples are there in the training set of each dataset?", "answer_format": "Your answer should be a Python dictionary, where the keys are the names of datasets and the values are the number of samples in the respective training set. e.g. {\\\"dataset1\\\": 10, \\\"dataset2\\\": 20, ...}.", "tags": ["single", "table", "objective"], "anchor_pdf": ["4f6a36fe-eac7-58cf-81e9-584f786b2f38"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"BIOSSES": 60, "CASUAL JUDGEMENT": 90, "EPISTEMIC REASONING": 500, "TEMPORAL SEQUENCE": 300, "IAC Vulnerability Detection": 166, "HOTPOTQA": 50}, "ignore_order": true}}} {"uuid": "eb3a5dd5-0008-5edf-b8e7-8cebd614f282", "question": "In the survey of Large Language Models for NL2Code, what are the multi-lingual benchmarks to evaluate the NL2Code task, and how many instances do they contain per programming language?", "answer_format": "Your answer should be a Python dictionary of entries, each dictionary key is a string, the benchmark name DIRECTLY FROM THE PDF WITHOUT CHANGING CAPITALIZATION, and each value is an integer of the corresponding instance number, e.g., {\"benchmark1\": 10, \"benchmark2\": 100}, ....", "tags":
["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"MBXP": 974, "MBXP-HumanEval": 164, "HumanEval-X": 164, "MultiPL-HumanEval": 164, "MultiPL-MBPP": 974}, "ignore_order": true}}, "anchor_pdf": ["37758401-6101-554f-8f1e-4e2995443314"], "reference_pdf": []} {"uuid": "ebd4c18e-2148-5952-b257-5c899148ff26", "question": "Can you tell me the core idea of the source paper of the methodology which inspires the creation of the HarmfulQ dataset?", "answer_format": "Your answer should be a string about the core idea.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["85b3d5bd-0bbc-5f40-a1c7-6b8fd73e6dca"], "reference_pdf": ["fc185d09-1adb-5e1d-bb46-95df88cc0425"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The paper proposes a novel approach to \"red teaming\" language models (LMs) using other LMs. This means using one LM to generate test cases (input prompts) that can uncover harmful outputs from another LM. The key idea is to leverage the generative capabilities of LMs to automatically generate a large and diverse set of test cases, which can reveal various failure modes and potential harmful behaviors of the target LM. This approach complements manual testing and can help identify and mitigate risks before deploying LMs in real-world applications.", "question": "Can you tell me the core idea of the source paper of the methodology which inspires the creation of the HarmfulQ dataset?"}}} {"uuid": "ebd5482c-b856-5427-876b-fcd24759d8d4", "question": "MMD and xVal, a baseline in the anchor paper, both aim to solve the problem of embedding numbers in language models. Did the tasks focused on by the two papers belong to the same domain?
If not, what types of tasks does xVal focus on?", "answer_format": "Your answer should be brief text answering whether the tasks focused on by the two papers belong to the same domain, and if not, the domain of the task focused on by xVal.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["58a12475-a7ce-5d3a-b75e-04814b025231", "98849314-13f0-558a-adf7-f2764d2bf67b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "No, they don't. xVal mainly focuses on prediction tasks involving numbers in the scientific domain.", "question": "MMD and xVal, a baseline in the anchor paper, both aim to solve the problem of embedding numbers in language models. Did the tasks focused on by the two papers belong to the same domain? If not, what types of tasks does xVal focus on?"}}} {"uuid": "ec05c8e8-b789-514f-802e-7c710b0bec67", "question": "In the main results of ShortGPT's source paper, which paper do the experimental results of several comparison methods come from? Does the method proposed in this paper require post-training?", "answer_format": "Your answer should be a python list of two elements. The first element is a python string, the paper's full name.
The second element is a python bool.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["48d9d307-c254-597f-9444-5c420a973c0d", "017b741f-d588-5124-9971-af37b2f806ae"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_bool_exact_match"], "eval_kwargs_list": [{"gold": "LaCo: Large Language Model Pruning via Layer Collapse", "lowercase": false}, {"gold": false}]}}} {"uuid": "ec24ce45-3e3f-5164-9a9e-24381d9208a8", "question": "In the previous work of \"Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks\" which uses the same game, how many training hours does it take on average for the agent trained with 128 CPU cores to beat the behavior-tree AI?", "answer_format": "Your answer should be a float, rounded to 2 decimal places.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1caab0d7-f86a-52d7-9c89-8e4506d76b1f"], "reference_pdf": ["61e6162d-74ba-57f7-8721-9545b4843b13"], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 6.16, "ndigits": 2}}} {"uuid": "ecef28ab-8648-51af-b77f-91d2ed598e89", "question": "Which one of the prior works on state space models by the same team that published the Mamba paper proposes FlashConv for accelerating state space model training?", "answer_format": "Your answer should be a python string, the full paper name of the prior work.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["618b736e-5c9f-5c00-8889-9589bdad0620", "f9291deb-da46-5c68-8636-0d39ead63ea5", "a2e013ab-e738-5c74-af0b-d1b313c31909", "b3b6d154-6610-59f3-989a-06c84e5e22b3"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Hungry Hungry Hippos: Towards Language Modeling with State Space Models", "lowercase": false}}} {"uuid": "ed62604f-0aad-569a-9105-8381212aeb43", "question": "What is
the optimal number of layers to skip for LLaMA2-13B?", "answer_format": "Your answer should be a Python integer number. e.g. 3", "tags": ["single", "image", "objective"], "anchor_pdf": ["7cffa095-8d51-5490-b0f0-7a6845e37f67"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 40}}} {"uuid": "edf8b7f4-c386-5053-ac89-00bf27fc0d54", "question": "What techniques exist for incorporating context in detecting emotions within dialogues by leveraging pre-trained language models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["c94a3716-b898-5cf3-b5e7-7b2e893f92b4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What techniques exist for incorporating context in detecting emotions within dialogues by leveraging pre-trained language models?", "reference_answer": "Context-Dependent Embedding Utterance Representations for Emotion Recognition in Conversations"}}} {"uuid": "ee44be40-1780-5f28-9fb4-7c2e626bc4a0", "question": "In the experiment section of the paper, what is the detailed procedure of back-translation for sentence reconstruction?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["0c64c0b5-4466-51d6-9e84-59fa73d8a450"], "reference_pdf": ["f43e23ff-de7a-5bd7-9c1d-6361ba9b5734"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The procedure of back-translation is using a Czech-English NMT system to translate Czech sentences from the training data into English. Then pair the translations with the English references to form English-English paraphrase pairs.
We used the pretrained Czech-English model from the NMT system.", "question": "In the experiment section of the paper, what is the detailed procedure of back-translation for sentence reconstruction?"}}} {"uuid": "ee6dba6d-938c-5690-83e0-729ccbf2882c", "question": "What are the selected questionnaires or scales in the personality trait domain in the PsychoBench framework?", "answer_format": "Your answer should be a python list of the abbreviations of the questionnaire names, e.g., ['NEO-FFI', 'IPIP']", "tags": ["single", "image", "objective"], "anchor_pdf": ["9b3a636d-5184-58d5-9333-49a304ad2f68"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["BFI", "EPQ-R", "DTDD"], "lowercase": true, "ignore_order": true}}} {"uuid": "eea13a76-cf7b-533c-beb5-9c1c49a7bc9d", "question": "Across the different corpora analysed in the paper, which one has the best word order monotonicity? What's its key feature?", "answer_format": "Your answer should be a single python list of two strings, the first string is the name of the corpus, the second string is about the key feature.", "tags": ["table", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "Chunk-wise", "lowercase": true}, {"reference_answer": "With fluency and adequacy verified by professional interpreters, Chunk-wise is manually created following the Chunk-Wise Monotonic Translation guideline.
A key feature of chunk-wise is its monotonic alignment with the source, maintaining the entire source content, making it well-suited for the goals of SiMT.", "question": "What's the key feature of chunk-wise?"}]}}, "anchor_pdf": ["0cf397b9-1a03-5731-ba41-e0bc48b06c98"], "reference_pdf": []} {"uuid": "eed1fb76-7c69-540b-9b6f-ad67c3ce4153", "question": "According to Figure 3, what are the layers proposed by the paper (compared to existing methods) in the overall framework of AR quality predictions?", "answer_format": "Your answer should be a python list, every element of the list is a string presented in the original figure of the paper. If there are multiple layers with the same name, they only need to be mentioned once.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Argumentative Context (AC) Generation", "DistilRoBERTa Encoder", "ChatGPT", "2nd-pass Zero-Shot-CoT Prompt", "1st-pass Zero-Shot-CoT Prompt"], "threshold": 95, "ignore_order": true, "ignore_blank": true}}, "anchor_pdf": ["0ef0a82a-0e86-5c1b-87b1-36e05e15bd76"], "reference_pdf": []} {"uuid": "ef0d91b0-8648-519f-9181-ca56496723b6", "question": "What is the maximum gain obtained by adding more outputs in all the datasets tested?", "answer_format": "Your answer should be a floating point number with three decimal places.", "tags": ["single", "table", "objective"], "anchor_pdf": ["7e26b1a7-5536-5b8d-b5cd-068508c15c2e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.033, "ndigits": 3}}} {"uuid": "ef85ae29-2dcf-5ccc-a0c6-a90689ba11b5", "question": "What are the three types of instruction-following data, and which one has the largest number of samples?", "answer_format": "Your answer should be a Python list of 2 elements.
The first element is a Python list of 3 elements, containing the names of the three types of instruction-following data. The second element is a string, indicating the name of the type of instruction-following data that has the largest number of samples. e.g. [[\"type1\", \"type2\", \"type3\"], \"type\"].", "tags": ["single", "text", "objective"], "anchor_pdf": ["86922a0e-7874-5f9a-926b-0f886076d6e8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": ["conversation", "detailed description", "complex reasoning"], "ignore_order": true, "lowercase": true}, {"gold": "complex reasoning", "lowercase": true}]}}} {"uuid": "efa52128-c101-56c2-aaf0-320000e9bc55", "question": "What is the improvement in precision (in percentage points, rounded to one decimal place) from unconstrained to constrained for LLaMA-33B in closed information extraction (4 shots)? And where can I get the testing dataset (the GitHub link)?", "answer_format": "Your answer should be a single python list containing two strings, the first element of the list is the improvement in precision (in percentage points, rounded to one decimal place), the second element of the list is the GitHub link of the testing dataset.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["5ed64cec-b350-5524-be0f-7def68ebba53"], "reference_pdf": ["8ae15f7c-4e00-5626-87f4-2f42e3ce5688"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["25.2", "https://github.com/epfl-dlab/SynthIE"], "lowercase": true, "ignore_order": true}}} {"uuid": "efb108ff-2ddd-5cb2-9408-afba55df144b", "question": "How many tasks are in WebArena? Can they be categorized into classes? How many classes can they be categorized into? What are the classes? 
How many tasks are in these classes, respectively?", "answer_format": "Your answer should be a Python list. The first element is an integer indicating the total task number. The second one is a boolean indicating if the tasks can be categorized. If the second one is true, there should be more elements. The third element should be an integer indicating the class number. The fourth one should be a string list storing the class names. The fifth one should be an integer list storing the task numbers in each class. If any needed information cannot be specified through the paper, give an empty string as the answer for that item.", "tags": ["single", "table", "objective"], "anchor_pdf": ["5a2b0d5c-6b51-5bbd-a001-a15f19f65a98"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_bool_exact_match", "eval_int_exact_match", "eval_structured_object_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": 812}, {"gold": true}, {"gold": 3}, {"gold": ["Information-seeking", "Site navigation", "Content and configuration operation"], "lowercase": true}, {"gold": ""}]}}} {"uuid": "efd9be34-b6b2-5abc-b686-0962c27c350c", "question": "Provide an example of a paper which proposes a method to learn a dynamic (conditioned on the input) sequence tokenizer (segmenter) via standard gradient backpropagation.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["96867be2-b209-5b0b-b68a-572a6eaec55c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Provide an example of a paper which proposes a method to learn a dynamic (conditioned on the input) sequence tokenizer (segmenter) via standard gradient backpropagation.", "reference_answer": "Efficient Transformers with Dynamic Token
Pooling"}}} {"uuid": "f0291d22-9853-5727-b582-349739d89cbe", "question": "What are the key advantages of coupling neural SDEs with neural CDEs for treatment effect estimation over existing baselines?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["34b7095b-6295-5506-bc5a-8b4fdb960941"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What are the key advantages of coupling neural SDEs with neural CDEs for treatment effect estimation over existing baselines?", "reference_answer": "BAYESIAN NEURAL CONTROLLED DIFFERENTIAL EQUATIONS FOR TREATMENT EFFECT ESTIMATION"}}} {"uuid": "f0331580-b619-5be3-957b-252a16b65159", "question": "In formula (3), what do r_where and r_what represent? How to estimate whether the brain indicates better performance by WhereCNN or WhatCNN?", "answer_format": "Your answer should be a python string describing the representation of r_where and r_what and how to identify whether WhereCNN or WhatCNN performs better.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["ffdd1395-14bf-5cdb-9d82-72eeb22f8b3c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "r_where and r_what represent the prediction accuracy, measured as the correlation coefficients between the predicted and actual fMRI responses for the WhereCNN-based and WhatCNN-based encoding models, respectively. p_where > 0.5 indicates better predictive performance by WhereCNN, while p_where < 0.5 indicates better predictive performance by WhatCNN.", "question": "In formula (3), what do r_where and r_what represent?
How to estimate whether the brain indicates better performance by WhereCNN or WhatCNN?"}}} {"uuid": "f06b7b4b-58fd-5450-9c97-00542144b8b2", "question": "Is there a paper exploring the curse of multilinguality for similar languages?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["51c20543-4c25-5ccf-a1d9-9816aa431ed3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper exploring the curse of multilinguality for similar languages?", "reference_answer": "Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages"}}} {"uuid": "f0e4639b-09da-5581-87d4-2eb470c2dc0d", "question": "On which datasets were the best-performing Medical MLLMs (excluding the method proposed in this paper) trained and evaluated in the Medical VQA benchmark of the paper?", "answer_format": "Your answer should be a python list of the dataset names, e.g. [\"dataset1\", \"dataset2\", ...]. 
YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["58bd1994-d7e2-55c9-a194-9daf63eb3e6c"], "reference_pdf": ["4debbc0c-24ce-581c-9dda-6bc36877f0d8"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["VQA-RAD", "SLAKE", "PathVQA"], "ignore_order": true, "lowercase": true}}} {"uuid": "f1429616-9c0c-5f32-b39f-46a63a5f7d03", "question": "What open-source dataset combined knowledge retrieval with constraint satisfaction queries?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fce0c3e0-430a-571b-bec5-f2bb653b4342"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What open-source dataset combined knowledge retrieval with constraint satisfaction queries?", "reference_answer": "KITAB: EVALUATING LLMS ON CONSTRAINT SATISFACTION FOR INFORMATION RETRIEVAL"}}} {"uuid": "f1d19f7e-17b7-582f-b9dd-465860422e9e", "question": "Is there a paper which proposes a general data selection method based on information theory?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["586baad7-00af-523d-8155-989e070373da"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper which proposes a general data selection method based on information theory?", "reference_answer": "GIO: GRADIENT INFORMATION OPTIMIZATION FOR TRAINING DATASET SELECTION"}}} {"uuid": "f1f24bb5-7f16-5048-86c0-3723a919a07e", "question": "Which foundation model paper first proposed a time series model with proposed financial time series and text data?", "answer_format": 
"Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["275bb387-b91d-5988-aea5-ad2006b98790"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which foundation model paper first proposed a time series model with proposed financial time series and text data?", "reference_answer": "TEMPO: PROMPT-BASED GENERATIVE PRE-TRAINED TRANSFORMER FOR TIME SERIES FORECASTING"}}} {"uuid": "f21f555a-1254-59ba-8cbc-11791cdab6b0", "question": "Are there any papers that use a world model for planning to ensure that decisions meet constraints?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["2f2c0a76-c3dd-5ff0-825c-68f59f9aeeba"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Are there any papers that use a world model for planning to ensure that decisions meet constraints?", "reference_answer": "SAFEDREAMER: SAFE REINFORCEMENT LEARNING WITH WORLD MODELS"}}} {"uuid": "f22e1e2f-bf4a-579e-a11f-f28e9226693a", "question": "In multimodal (multilingual) abstractive summarization field, is there any paper that propose target-oriented vision modeling method to improve the quality of summaries?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["bacdd410-20f6-5f9b-a8c6-a8eb7f1310fb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In multimodal (multilingual) abstractive summarization field, is there any paper that propose target-oriented vision modeling method to improve the quality of summaries?", 
"reference_answer": "Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization"}}} {"uuid": "f343cc68-7d22-55cb-8e9c-fc6efa23d8b7", "question": "When answering the question \"Are ExNLP tasks associated with high-risk situations?\", which paper does this paper (\"On Evaluating Explanation Utility for Human-AI Decision Making in NLP\") learn from? In the experiment of the source paper, how many types of knowledge-context are there?", "answer_format": "Your answer should be a single python list, the first element is the string of the title of the paper, the second element is an integer number.", "tags": ["image", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_fuzzy_match", "eval_int_exact_match"], "eval_kwargs_list": [{"gold": "Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs", "lowercase": true}, {"gold": 10}]}}, "anchor_pdf": ["bbf39c89-c4a2-5696-b9e6-f38b2d766a01"], "reference_pdf": ["1a1fdee7-93a9-5555-9dd3-a84d422e3c38"]} {"uuid": "f36225a8-3139-58df-843a-e89b838e1f37", "question": "Which base model does this paper (\"Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning\") train as the retrieval model for the SA task?
In its source paper, on how many STS tasks is it evaluated?", "answer_format": "Your answer should be a python list of two elements, the first element is the model name (one word, a string), and the second element is an integer number.", "tags": ["text", "multiple", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["SimCSE", 7], "ignore_order": false, "lowercase": true}}, "anchor_pdf": ["0ad1dc99-4c37-5e62-8054-5a080158541e"], "reference_pdf": ["ff0d0226-2dc4-5a18-9cc9-ec5826c16eb7"]} {"uuid": "f4154375-e94a-5623-a51d-0ae5cf5c4039", "question": "Which paper measured how well the source-translation contribution by the translation model can be used to detect its own hallucinations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["b5062515-e162-5a98-a421-ab84dfe1d930"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper measured how well the source-translation contribution by the translation model can be used to detect its own hallucinations?", "reference_answer": "Detecting and Mitigating Hallucinations in Machine Translation: Model Internal Workings Alone Do Well, Sentence Similarity Even Better"}}} {"uuid": "f4f09e69-4c85-581a-9bab-0ced35cccdb7", "question": "On which website can I find the information of the benchmark used to compare multilingual models?
In which conference was the mT5 reference paper included in the benchmark published?", "answer_format": "Your answer should be a python list of two strings, the name of the website and the name of the conference.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["ff860bc6-9474-58c9-8a5b-19d92fff2932"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["https://huggingface.co/spaces/Brand24/mms_benchmark", "naacl"]}}} {"uuid": "f586cf96-1650-57f8-b7c9-2436c89216f8", "question": "When we utilize decoder-only language models in understanding word meaning, do prompting styles affect performance? If so, which technique outperforms the others? If not, what is the worst one?", "answer_format": "Your answer should be a Python list of two elements, the first element is \"yes\" or \"no\", and the second element is the prompting style name string, don't reply with abbreviations, e.g., [\"yes\", \"prompting_style_name\"].", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["no", "Sentence completion"], "lowercase": true}}, "anchor_pdf": ["ef699d3b-ffef-5b18-8527-826110f880fd"], "reference_pdf": []} {"uuid": "f5abd5f8-b8b0-5fcf-af97-739ca262c1c0", "question": "Which model gets the highest DR value in the Random Retrieval performance?", "answer_format": "Your answer should be plain text.", "tags": ["single", "table", "subjective"], "anchor_pdf": ["0df4b58a-c8e7-52a3-8c84-730542241fca"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The BART-base model gets the highest DQ value.", "question": "Which model gets the highest DR value in the Random Retrieval performance?"}}} {"uuid": "f61c9dbc-5058-5621-8aeb-bd83c90b296e", "question": "How to find those question samples that the model considers to be
ambiguous?", "answer_format": "Your answer should be a single string", "tags": ["single", "text", "subjective"], "anchor_pdf": ["e6778360-e589-5ca8-84d1-2e19dd5d4172"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "First, let the model generate an initial answer based on the question, then prompt the model to eliminate ambiguity and generate a second answer. If the model's average entropy of the two answers exceeds a threshold, it indicates that this question is ambiguous.", "question": "How can we find those samples that the model considers to be ambiguous?"}}} {"uuid": "f640029c-539b-58b1-a742-05b8bb0edacb", "question": "What's the biggest reason for incorrect actions for each model?", "answer_format": "Your answer should be a Python dictionary. e.g. {\"model1\": \"answer1\", \"model2\": \"answer2\", ...}. YOU MUST USE THE EXACT AND FULL TEXT FROM PDF WITHOUT CHANGING CAPITALIZATION.", "tags": ["image", "objective", "single"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"Vicuna-13B": "Invalid Action/Object", "OpenChat-3.5": "Object-Mismatched Action", "Mixtral-7Bx8": "Invalid Action/Object", "Gemini Pro": "Object-Mismatched Action", "GPT-3.5-turbo": "Object-Mismatched Action", "GPT-4": "Dependency Violation"}}}, "anchor_pdf": ["4ee26cdd-4e52-5090-b1c8-46f5dcdba09c"], "reference_pdf": []} {"uuid": "f641587e-0065-54e9-92c4-d2b194535f80", "question": "Which paper first studied POMDP with enhanced feedback on observations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fc9187b0-74f9-510d-be0c-b4311f283213"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper first studied POMDP with enhanced feedback on
observations?", "reference_answer": "Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight"}}} {"uuid": "f70c12d9-365c-5fb5-aa6f-7cb0620991fc", "question": "In the larger dataset, measured in hours, that the EgoDistill paper uses, what's the second largest country of residence for camera wearers?", "answer_format": "Your answer should be a string, the country as given in the paper.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["2b518d78-7e56-5928-85cb-ff7ccfd63307"], "reference_pdf": ["2553c64e-a36f-5ee3-a805-2f4c1abe737c"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "India", "lowercase": true, "ignore_blank": true}}} {"uuid": "f7b228d5-cd68-555b-afee-f05a51a12165", "question": "What are the differences in the composition of the Primary System for the unconstrained setting between the 2023 and 2024 QUESPA Submissions?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["fe24a64b-db14-558f-9685-ba7e6d3f00e9", "824b2039-b575-505c-92fe-9bf063f30a8d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The Primary System in 2024 for the unconstrained setting consists of a pre-trained model called SpeechT5, and the Primary System in 2023 for the unconstrained setting consists of two systems, the ASR and the MT system.", "question": "What are the differences in the composition of the Primary System for the unconstrained setting between the 2023 and 2024 QUESPA Submissions?"}}} {"uuid": "f7b532c1-3fd7-5a2b-87b4-522592ff6dbe", "question": "When training with non-English image-text pairs, what is the loss function of the TriKD?", "answer_format": "Your answer should be a sentence describing the loss function of the TriKD when training with non-English image-text pairs, including the terms involved in the loss function and their meanings, as
given in the paper.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "When training with non-English image-text pairs, only ITC loss is applied, as the CLIP text encoder does not support non-English languages. Therefore, the loss function for the TriKD is $\\mathcal{L}_{\\text{TriKD}} = \\mathcal{L}_{\\text{ITC}}$. And the Image-Text Contrastive (ITC) loss is formulated as the average of image-to-text ($\\mathcal{L}_{\\text{i2x}}$) loss and text-to-image ($\\mathcal{L}_{\\text{x2i}}$) loss: $\\mathcal{L}_{\\text{ITC}} = 1/2(\\mathcal{L}_{\\text{i2x}} + \\mathcal{L}_{\\text{x2i}}) = 1/2[\\ell(h^I, h^X) + \\ell(h^X, h^I)]$. Here, $h^I$ represents the $\\ell_2$-normalized output from the CLIP image encoder and CLIP-projector, and $h^X$ represents the $\\ell_2$-normalized output from the Multilingual Text Encoder (MTE) and X-projector for a given image-text pair.", "question": "When training with non-English image-text pairs, what is the loss function of the TriKD?"}}, "anchor_pdf": ["ff651d37-e725-5752-9c38-3361bc54723d"], "reference_pdf": []} {"uuid": "f7c8f3fc-801a-5e50-9722-af38407a0b9d", "question": "What are the seven categories of tasks, which form the dataset used to conduct SFT on a Llama-2-7B model in section 2.1?", "answer_format": "Your answer should be a Python list of seven elements, containing the names of the seven categories of tasks. e.g. [\"task1\", \"task2\", ... \"task7\"].
YOU MUST USE THE EXACT AND FULL NAMES OF THE TASKS AS MENTIONED IN THE PAPER.", "tags": ["objective", "single", "text"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["closed-book question answering", "coreference resolution", "natural language inference", "abstract summarization", "multi-lingual translation", "reading comprehension", "text classification"], "ignore_order": true, "lowercase": true}}, "anchor_pdf": ["eed48331-03ed-52de-8f87-c71da234697c"], "reference_pdf": []} {"uuid": "f9276cd0-6c6a-5da7-a169-385a7f04ebb0", "question": "What are the main models mentioned in the anchor_pdf and what is the relationship between them?", "answer_format": "Your answer should be a python string.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["e45897f5-4429-5750-a8fb-dcfa9a904b5f", "f42949a1-aae5-5c65-9791-fff76a3dabd4", "eb4c8aef-aded-5cee-9cf3-805b485d85fd"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The main models mentioned in the anchor_pdf are T0, T5 and Tk-INSTRUCT. T0 and Tk-INSTRUCT are improved models based on T5.", "question": "What are the main models mentioned in the anchor_pdf and what is the relationship between them?"}}} {"uuid": "f94b871f-f8e7-5bcc-b646-7eb9840a95c4", "question": "Which sub-splits are included in the validation and test sets of the OC20 dataset used in the paper?", "answer_format": "Your answer should be a python list of the full names of the sub-splits.
YOU MUST USE THE EXACT FULL NAMES FROM THE PAPER.", "tags": ["single", "text", "objective"], "anchor_pdf": ["82e60cf1-0b61-51c8-899f-490b9462eb9f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["in-distribution adsorbates and catalysts", "out-of-distribution adsorbates", "out-of-distribution catalysts", "out-of-distribution adsorbates and catalysts"], "lowercase": true, "ignore_order": true, "threshold": 80}}} {"uuid": "f97e22e2-d4b4-5141-80e6-7270a2a9b9cc", "question": "What are the sources of the pre-training data for the latest LLM used in the experiment section of the paper \"AN UNFORGEABLE PUBLICLY VERIFIABLE WATERMARK FOR LARGE LANGUAGE MODELS\"?", "answer_format": "Your answer should be a python list of strings, e.g., [\"source1\", \"source2\"].", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["236f9dfc-a8eb-5ed3-a11d-df5835802ab2"], "reference_pdf": ["34417770-67d7-5cab-b9d4-76999c97bc02"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CommonCrawl", "C4", "Github", "Wikipedia", "Books", "ArXiv", "StackExchange"], "ignore_order": true, "lowercase": true, "ignore_blank": true}}} {"uuid": "f9866921-b6c0-55f2-874f-8bcb5d1e733b", "question": "Is there a paper that links exposure bias to distillation?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["134ed104-4303-52c9-a9c9-394268acfafb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that links exposure bias to distillation?", "reference_answer": "A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training"}}} {"uuid": "f987547b-e418-5424-8f8b-f8855bdf63cc", "question": "Which two datasets does
the dataset that Alchemist used to evaluate image modality combine?", "answer_format": "Your answer should be a Python list of two strings, the abbreviations of the datasets as given in the paper.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["0e3f6c92-099b-5343-a589-7095452ddf16"], "reference_pdf": ["b085c7de-cb5b-5a8d-a0a2-6e617182ff63"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CUB", "Places"], "ignore_order": true, "lowercase": true}}} {"uuid": "fa7f68f5-fd2e-5b0b-b099-2c09331b7c25", "question": "Which papers develop methods to make in-context learning more computationally efficient?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["51a73264-39d8-537e-a122-0c200e0102e1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which papers develop methods to make in-context learning more computationally efficient?", "reference_answer": "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning"}}} {"uuid": "fac6cccc-3d4a-5211-9180-1a825de52b16", "question": "In the largest dataset concerning procedural graph extraction before PAGED, how many labeled sentences are there in total?", "answer_format": "Your answer should be a single integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["6b82aecf-fda7-531e-8094-fc5ab6f4810d"], "reference_pdf": ["def780ec-7d73-56b2-b9b2-94253574fd00", "1dbf41f6-3c97-5468-b8cc-59eeb975b718", "04c19523-5522-52c0-ab13-e8f1ae6eb957"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4808}}} {"uuid": "fbca5330-8955-5359-94a5-d91961e2a6d9", "question": "Which paper proposes to integrate black-box LLMs with a pool of smaller but specialized language models?", "answer_format": "Your answer
should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3f2204a4-b105-5e6e-96be-f86c6d94e519"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes to integrate black-box LLMs with a pool of smaller but specialized language models?", "reference_answer": "KNOWLEDGE CARD: FILLING LLMS' KNOWLEDGE GAPS WITH PLUG-IN SPECIALIZED LANGUAGE MODELS"}}} {"uuid": "fbd10d75-9ede-5480-8312-08b7435413df", "question": "In the GenRec paper, which dataset used in the experiment is not evaluated in Table 1? Additionally, what's the range of the clip length for that dataset?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the dataset, and the second is a Python list of 2 floats, the range of the clip length, rounded to 2 decimal places, in seconds, e.g. [\"dataset\", [1.01, 2.02]]", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["2abcf6e2-b4e3-5cf9-8c80-4497539805cc"], "reference_pdf": ["d47d8161-223d-5bc5-b4fb-d4f35c63b412"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["UCF-101", [1.06, 71.04]], "fuzz_method": "partial_ratio", "threshold": 95, "ndigits": 2, "lowercase": true, "ignore_order": false}}} {"uuid": "fbfeab50-b132-5933-95c4-cb5034790ab3", "question": "In the respective main experiment of SLED and DoLa, do they use the same evaluation datasets? Do they use the same model family?", "answer_format": "Your answer should be a list of two integers, where the first integer is 1 if the evaluation datasets are the same and 0 otherwise, and the second integer is 1 if the model family is the same and 0 otherwise.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["af4ffb53-3311-5964-8b17-a1e9c2a13467", "3273efdf-e052-5e2e-939a-a1b9551b48ac"], "reference_pdf": [],
"conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [1, 0], "ignore_order": false}}} {"uuid": "fc882b50-9385-5452-a96a-e0e93a9cbd2f", "question": "What is the first paper to address the problem of predicting knowledge graphs whose nodes, links and attributes change with time?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fed77d95-2a87-5d8e-8731-1e91a8dd0bf1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "What is the first paper to address the problem of predicting knowledge graphs whose nodes, links and attributes change with time?", "reference_answer": "Holistic Prediction on a Time-Evolving Attributed Graph"}}} {"uuid": "fd391ae5-5893-5d2a-b630-55b7f0cc1fb3", "question": "According to Table 2 in the paper \"ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought\", how many times does the LLM's API need to be called to generate a SQL query in the DIN-SQL approach? What modules in the DIN-SQL approach lead to those API calls?", "answer_format": "Your answer should be a Python list like [integer, string1, string2, ...]. The first element should be an integer, representing the number of times the LLM's API needs to be called. Each subsequent element should be a string, representing a module name in the DIN-SQL approach. 
Note that the module names do not need to include the word \"module\".", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0d110629-3064-59d4-8638-7edb53c01b9e"], "reference_pdf": ["bff546ba-646a-5bdf-b8e2-1f19e59d5162"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [4, "schema linking", "classification & decomposition", "sql generation", "self-correction"], "ignore_order": true, "lowercase": true, "threshold": 95}}} {"uuid": "fd98767f-44ef-5721-8fff-3fb1d8eca4b3", "question": "According to the MathCAMPS paper, among the models evaluated, which one performs the second best on MathCAMPS grade 8? In the paper that proposed the model, how is the output y computed, given an input token x?", "answer_format": "Your answer should be a Python list of 2 elements, the first is the name of the model along with its parameter size as given in the paper, and the second is the formula in LaTeX format.", "tags": ["multiple", "table", "formula", "subjective"], "anchor_pdf": ["2e59e537-6583-5f0c-b01d-e4963325edc4"], "reference_pdf": ["44e77de0-2982-575f-bf6e-f50ad597c4f6"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"gold": "Mixtral 8x22B", "lowercase": true, "ignore_blank": true}, {"formulas": "y = \\sum_{i=0}^{n-1} \\text{Softmax}(\\text{Top2}(x \\cdot W_g))_i \\cdot \\text{SwiGLU}_i(x)", "question": "How is the output y computed, given an input token x?"}]}}} {"uuid": "fdf2bff6-1dcf-5744-9c8b-da9d40aba09f", "question": "In the paper that proposed the model that is applied in TabMT for subtle pattern detection, how many instances are there in the dataset where the proposed model performs the best?", "answer_format": "Your answer should be an integer.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["0ddf2132-7c61-583c-b27b-7d3edfd030ea"], "reference_pdf": 
["7ded5c20-237b-5078-a3c3-b080faed9576"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 399482}}} {"uuid": "fe64bb38-1b46-53c8-b82b-dfc2acf75c2e", "question": "I want to contact the first author of this paper. What's the email address?", "answer_format": "Your answer should be a verbose text string representing the email address if there is only one first author. Otherwise, return a Python list of e-mail strings for each first and co-first author, e.g., [\"xxx@xxx.com\", \"yyy@yyy.com\", ...]. DO NOT INCLUDE ANY OTHER CONTEXT IN YOUR ANSWER.", "tags": ["single", "metadata", "objective"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["hhu4zu@virginia.edu", "qiao.jin@nih.gov"], "ignore_order": true}}, "anchor_pdf": ["3ea7de2a-3312-589f-a765-01e4b9e1dcb7"], "reference_pdf": []} {"uuid": "fe85c6f9-9ec2-5535-8ae4-d9be3e92d66c", "question": "Why can we omit $p(\\{y_l, l \\in L\\})$ in Equation (1)?", "answer_format": "Your answer should be a sentence explaining why we can omit this term.", "tags": ["formula", "single", "subjective"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In Equation (1), $p(\\{y_l, l \\in L\\})$ can be omitted because it acts as a normalization factor that is constant with respect to the ancestral form $x$. The goal of the equation is to compute a value proportional to the posterior probability $p(x | \\{y_l, l \\in L\\})$. 
Since $p(\\{y_l, l \\in L\\})$ does not depend on $x$, it remains the same for all possible ancestral forms and thus does not affect the relative probabilities of different $x$ values.", "question": "Why can we omit $p(\\{y_l, l \\in L\\})$ in Equation (1)?"}}, "anchor_pdf": ["a819666a-9e5b-5213-9efd-4f1e12225426"], "reference_pdf": []} {"uuid": "fea63b48-8759-5b18-93d3-748ab9953c6c", "question": "The anchor PDF used two benchmark datasets for evaluation. Overall, on which dataset did the methods perform better?", "answer_format": "Your answer should be a python strings about the name of the dataset. YOU MUST USE THE EXACT NAME FROM THE PAPER.", "tags": ["single", "table", "objective"], "anchor_pdf": ["08b7c0d2-b64c-5fb1-8b0c-8326bb3d220b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "TVSum", "lowercase": true}}} {"uuid": "fec0d844-3c0c-5c05-827f-cdbbf762d406", "question": "In the entity detection experiments, what is the text type of the dataset used in the training stage with highest F-Score_test?", "answer_format": "Your answer should be a short word or phrase.", "tags": ["single", "table", "objective"], "anchor_pdf": ["30aea0d7-d2e5-58e4-ada8-4f4bf479edd9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "frame-theory"}}} {"uuid": "fed63d9e-52d3-5a7c-89f4-ac6f37f7e02b", "question": "Which paper explored training a GPT-2 for automatic diagnosis, emphasizing efficient data augmentation for symptom prediction and disease identification?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["447e9b20-83e7-5805-ac46-eb9f077ebccf"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper explored training a GPT-2 for automatic 
diagnosis, emphasizing efficient data augmentation for symptom prediction and disease identification?", "reference_answer": "CoAD: Automatic Diagnosis through Symptom and Disease Collaborative Generation"}}} {"uuid": "fee3ed60-b2ce-55ce-b06d-0f4e9fe1639f", "question": "Is there a paper comparing knowledge distillation and human annotation in terms of cost efficiency?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["691cf1e0-1f9a-5b88-a1b7-ec68b26f3936"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper comparing knowledge distillation and human annotation in terms of cost efficiency?", "reference_answer": "Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models"}}} {"uuid": "ff1576ee-d2c5-505a-964a-c2fcc94c75ff", "question": "Is there any paper that seamlessly integrates the multigrid structure in operator learning for solving partial differential equations (PDEs)?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1f27a9cb-2fe7-576e-ad50-991ce78a6039"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any paper that seamlessly integrates the multigrid structure in operator learning for solving partial differential equations (PDEs)?", "reference_answer": "MgNO: Efficient Parameterization of Linear Operators via Multigrid"}}} {"uuid": "ff31ef9b-f07d-59a4-ac9a-4d694ff7bb13", "question": "Which paper is the first to comprehensively review the progress of deep learning in mathematical reasoning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], 
"anchor_pdf": [], "reference_pdf": ["d2865020-2630-5a31-bcaa-ad9ce72ab2eb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper is the first to comprehensively review the progress of deep learning in mathematical reasoning?", "reference_answer": "A Survey of Deep Learning for Mathematical Reasoning"}}} {"uuid": "ff40fb8f-a1d9-5598-91b2-2af15bbad92e", "question": "Among the specific models tested, whose performance is closest to RePe on MixATIS++?", "answer_format": "Your answer should be the name of model DIRECTLY FROM THE PDF WITHOUT ANY EXPLANATION.", "tags": ["objective", "single", "table"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "FC-MTLF"}}, "anchor_pdf": ["95e1644a-230b-5b28-a025-31992e32e3f0"], "reference_pdf": []} {"uuid": "ff4ded6a-ee13-5a44-bd0e-a87a976df068", "question": "In the anchor PDF, the authors introduce two key changes relative to the original work. Which part are they located in Figure 1? Additionally, are there any changes in the blue boxed area in Figure 1? Please briefly describe these changes.", "answer_format": "Your answer should be a Python list with two strings: the first is a location option (from top-left, top-right, bottom-left or bottom-right), and the second is your answer to the remaining question.", "tags": ["multiple", "image", "subjective"], "anchor_pdf": ["d47ee42f-200a-5a19-8004-ea8a34c37bd8"], "reference_pdf": ["fb49daac-a71b-5f34-9d85-a55b7323e0aa"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "bottom-right"}, {"reference_answer": "There are changes. 
The architecture of the content encoder changed from Transformer to Conformer; the energy predictor was removed from the variance adapter; the decoder changed from Transformer-based decoder to diffusion decoder.", "question": "Are there any changes in the blue boxed area in Figure 1? Please briefly describe these changes."}]}}} {"uuid": "ff50fbdd-645a-5d67-ae87-b02133b59ed6", "question": "In the paper that proposes NoMAD-Attention, what do the authors choose as the vector database? Additionally, in the paper that proposes that vector database, how to compute the number of distance computations and when it reaches a minimum?", "answer_format": "Your answer should be a Python list of 2 strings, each string is a formula in LaTeX format, representing the equation of the number of distance computations and the condition when it reaches a minimum.", "tags": ["multiple", "formula", "subjective"], "anchor_pdf": ["1ee0a3c2-c155-541c-aa3a-dd86cb24a4d8"], "reference_pdf": ["55f434b3-b41d-5f88-bf94-a6bff9a2f7f9"], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["N_{\\text{distances}} = K_\\text{IVF} + P_\\text{IVF} \\times N/K_\\text{IVF}", "K_\\text{IVF} = \\sqrt{P_\\text{IVF}N}"], "question": "How to compute the number of distance computations and when it reaches a minimum?", "ignore_order": false}}} {"uuid": "00a3b0ea-60be-5b46-bf5c-3f4868b0e5f2", "question": "How much improvement does \"Dr.Strategy\" achieve in \"Maze-7*7\" on average?", "answer_format": "Your answer should be a Python float number rounded to 2 decimal places. e.g.
11.45", "tags": ["single", "table", "objective"], "anchor_pdf": ["a45dd7dc-c980-56a2-9946-68ef4eefbfa6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 41.54, "ndigits": 2}}} {"uuid": "00edc733-91e5-591b-8c25-c4f3be128f38", "question": "GeoBFN (Geometric Bayesian Flow Network) handles data from which three main modes in 3D molecule generation? What are the modeling characteristics of the first of these modalities (atomic coordinates) in GeoBFN?", "answer_format": "Your answer should be two sentences, each answers a question.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["008f9b4f-d392-5b8e-bd68-ecf7305576a4"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The three main modes are atomic coordinates, atomic charge and atomic type.", "Atomic coordinates are continuous variables and GeoBFN optimizes them in parameter space via a Bayesian flow network and models their distribution via a Gaussian distribution while maintaining SE(3) invariance."], "question": "GeoBFN (Geometric Bayesian Flow Network) handles data from which three main modes in 3D molecule generation? 
What are the modeling characteristics of the first of these modalities (atomic coordinates) in GeoBFN?"}}} {"uuid": "01f2b413-2f4b-524d-8a84-97e6c648a9e0", "question": "In the paper that proposes a margin-based satisficing imitation learning method that autonomously surpasses human demonstrators' aspiration levels rather than rigidly mimicking suboptimal behaviors, which method in Table 1 achieves the highest \\gamma-satisficing value for the \"cartpole\" environment, and what is the value?", "answer_format": "Your answer should be a Python list of two elements, the first is the name of the method and the second is a float number (rounded to 2 decimal places) of the value.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["9bb8c008-e9a3-5c1c-bd91-344a03989cfc"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"gold": "MinSubFI_OFF"}, {"gold": 2.62, "ndigits": 2}]}}} {"uuid": "0359c894-b118-54bf-a107-6b07a159be72", "question": "A paper demonstrates the high accuracy of the posterior hallucination rate in estimating the actual probability of hallucination. In the visualization of individual PHR and THR predictions at different context lengths, under which context length do they show the least linearity?", "answer_format": "Your answer should be an int.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["00607359-c53f-5514-aed4-47ad50464c9f"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 50}}} {"uuid": "03728f61-50cb-55ec-bd3f-8ec76b178ccd", "question": "What's the base pre-trained model used in this paper?
For this pre-trained model, what's the pre-training dataset?", "answer_format": "Your answer should be a single python list like this: [\"model_name\", [\"dataset_name1\",\"dataset_name2\"]]. Note that for these names, the abbreviation is required.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["1c32470c-8492-5517-ab37-3043d3db21e7"], "reference_pdf": ["b6b6f81f-bf3f-5cee-8295-c76b6e8022de"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_element_included", "eval_element_list_included"], "eval_kwargs_list": [{"gold": ["GLIP-T (C)", "GLIP-T (C) variant"], "lowercase": true, "ignore_blank": true}, {"gold": ["O365", "GoldG", "Objects365"], "lowercase": true}]}}} {"uuid": "0374337f-3cf1-5969-a27f-c89e6eeccfae", "question": "In ICLR 2024 Poster papers, a paper tries to ensemble the reward models to mitigate the over-optimization problem. What is the formula of the reward model?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["4ea89bed-22b4-5165-b9c9-a0bd983cb0a6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper tries to ensemble the reward models to mitigate the over-optimization problem.
What is the formula of the reward model?", "formulas": "\\underbrace{\\frac{1}{k}\\sum_{i} R_i(q, a)}_{\\text{mean}} - \\lambda \\underbrace{\\frac{1}{k}\\sum_{i} \\left( R_i(q, a) - \\frac{1}{k}\\sum_{i} R_i(q, a) \\right)^2}_{\\text{variance}}"}}} {"uuid": "03bcda39-9e5b-54d4-8d05-59b96c06ff95", "question": "In the paper that proposes a task-oriented imputation framework that evaluates and optimizes time series filling strategies based on their direct performance gains in downstream tasks without model retraining, what is the key assumption of formula (9) to compress the size of \\frac{\\partial f(X_i, \\theta)}{\\partial \\theta}?", "answer_format": "Your answer should be a Python string of the key assumption.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9c5a9030-fc24-5ec7-af58-8355f4e76ab0"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The author assumes that the model output f(X_i, \\theta) resides in a low-dimensional space spanned by a limited number of smooth basis functions.", "question": "In the paper that proposes a task-oriented imputation framework that evaluates and optimizes time series filling strategies based on their direct performance gains in downstream tasks without model retraining, what is the key assumption of formula (9) to compress the size of \\frac{\\partial f(X_i, \\theta)}{\\partial \\theta}?"}}} {"uuid": "03ca86ed-328d-5495-8410-2d3754f51ad6", "question": "In the paper that proposes Variational BoN, where can I find the binary classifier that the author uses with two classes {POS, NEG} as the reward model?", "answer_format": "Your answer should be a Python string, the website URL starting with \"https://\", as given in the paper.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9b0605db-0f1d-5527-a499-444a7729e8b1"], "conference":
["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://huggingface.co/lvwerra/distilbert-imdb"}}} {"uuid": "0418ab60-0b47-578f-9af4-5fee333dcaa3", "question": "In ICLR 2024 Oral papers, which paper proposes a novel unsupervised RL objective, which the authors call Metric-Aware Abstraction (METRA)?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["3962a504-6c3b-521f-967b-57114c6ce970"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Oral papers, which paper proposes a novel unsupervised RL objective, which the authors call Metric-Aware Abstraction (METRA)?", "reference_answer": "METRA: Scalable Unsupervised RL with Metric-Aware Abstraction"}}} {"uuid": "04c66ea8-6710-58d1-9cda-ce351f28fd4d", "question": "In the paper that integrates Foundation Models, Federated Learning, and Blockchain into a unified framework for smart cities, what is the institution of the first author of this paper?", "answer_format": "Your answer should be a python string of the name of the institution.", "tags": ["comprehensive", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["9b83e8a6-f2e9-5684-8d1d-ae9f1a849353"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Department of Artificial Intelligence HI Iberia (HIB)", "question": "In the paper that integrates Foundation Models, Federated Learning, and Blockchain into a unified framework for smart cities, what is the institution of the first author of this paper?"}}} {"uuid": "04f045de-d5e0-591e-b877-8222a1a360a7", "question": "A recent paper introduces BrainMD, a large-scale multimodal dataset comprising 2,453 3D MRI brain scans paired with radiology reports and longitudinal health records, 
designed to evaluate vision-language models (VLMs) on medical imaging tasks. Based on the experimental results presented in this work regarding BrainMD, could you please tell me which model among [Flamingo, Med-Flamingo, Med-PaLM-2] demonstrates the highest performance in identifying cancer type?", "answer_format": "Your answer must be one of the following: ['Flamingo', 'Med-Flamingo', 'Med-PaLM-2']", "tags": ["image", "objective", "single"], "anchor_pdf": ["4c3f5e08-659b-5e15-81bd-ebdb24ffc2e6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Med-PaLM-2"}}} {"uuid": "057cc726-950b-578f-a8af-a6a2cb3f40ad", "question": "In the paper that proposes I-Frame Domain Adaptation in Neural Video Compression, the author modifies \\gamma_d and \\gamma_u to balance the trade-off between parameter efficiency and performance improvement. Which pair of \\gamma_d and \\gamma_u adds the fewest training parameters compared to the base model?", "answer_format": "Your answer should be a Python list of two integers indicating \\gamma_d and \\gamma_u respectively.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["9ac83263-3455-584f-83de-e58f210fe928"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": "[0, 8]"}}} {"uuid": "06b9e050-ff53-55a6-8619-ba98e5e360e5", "question": "In ICLR 2024 Spotlight papers, a paper attempts to solve how to train a general agent in Reinforcement Learning (RL) that can thoroughly explore the environment and learn new and diverse skills. 
What is the affiliation of the second author?", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "objective", "metadata"], "anchor_pdf": [], "reference_pdf": ["9f00d13f-fd37-5d97-9469-cd3eca504994"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "University of Southern California", "lowercase": true, "ignore_blank": true}}} {"uuid": "072b05e0-08ee-5fba-a27f-da78002f5b67", "question": "There is a recent paper introducing a large-scale, long-term, semantically annotated outdoor dataset collected across the USC campus using a mobile robot equipped with multi-camera and LiDAR sensors. It features 10 million images and 1.4 million point clouds annotated with 267 semantic classes using GPT-4 and Grounded-SAM, enabling fine-grained 3D scene understanding. Please inform me which semantic label has the highest point frequency in this dataset.", "answer_format": "Your answer should be a name of a semantic label.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["474c512e-42f5-5bc0-92a8-50571df2d6cd"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "building"}}} {"uuid": "082caa5e-c57d-5fe5-a0dd-226b0d55fc09", "question": "In NeurIPS 2024 Poster papers, a paper proposes a new problem in offline MBRL called \"The Edge-of-Reach Problem\". In Figure 5, which two kinds of statistical visualization methods are used?", "answer_format": "Your answer should be plain text.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["76b9bb90-09cb-5721-83ba-737ec7b66e36"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In NeurIPS 2024 Poster papers, a paper proposes a new problem in offline MBRL called \"The Edge-of-Reach Problem\". 
In Figure 5, which two kinds of statistical visualization methods are used?", "reference_answer": "The two kinds of statistical visualization methods are the box plot and the histogram."}}} {"uuid": "08f0f56d-a98e-5065-8cec-b4c4a78f830d", "question": "A paper focuses on generating 3D molecular conformers conditioned on molecular graphs in a multiscale manner to study the way that diffusion models process 3D geometries in a coarse-to-fine manner. In their analysis of the spectral domain of Equivariant Blurring Diffusion (EBD), do smaller or higher eigenvalues correspond to finer details?", "answer_format": "Your answer should be chosen between \"smaller\" and \"higher\".", "tags": ["image", "objective", "single"], "anchor_pdf": ["03f5c8ae-47ce-5312-b368-7fe6a4088e58"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "higher", "lowercase": true, "ignore_blank": false}}} {"uuid": "0afdfb80-125c-5cb1-a814-e410ad15d56e", "question": "In the paper that proposed the LMSYS-Chat-1M dataset, according to the sampled conversations, which category contains the most clusters?", "answer_format": "Your answer should be a string, the category tag as shown in the corresponding figure.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4112cb19-bb8a-5c70-bad0-ed2ae912514d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Language and Content Creation", "lowercase": true, "ignore_blank": true}}} {"uuid": "0b025b6f-17f4-5481-bd38-a6d95dd8ecee", "question": "There is a paper proposing Hybrid-LITE, a memory-efficient hybrid retriever that combines BM25 with a novel dense retriever called LITE. LITE is trained using joint contrastive learning and knowledge distillation from DrBoost, a boosting-based dense retriever. 
May I ask which lab collaborated with Arizona State University on this work?", "answer_format": "Your answer should be a name of a lab.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["1a81bab8-25ed-5da4-bb27-58ae70a8a8f8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Meta Reality Lab", "Meta Reality"]}}} {"uuid": "0bed04d1-ac96-52ff-becb-4840138e3c34", "question": "In ICLR 2024 Spotlight papers, a paper unifies reinforcement learning and imitation learning methods under a dual framework. How many pages are there in this paper?", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["6d4ac425-4ee3-53cb-acc8-ce759680a8b9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 48}}} {"uuid": "0d5b262f-8137-58d6-b2f8-65610ac2130a", "question": "What's the forward process of HydraLoRA?", "answer_format": "Your answer should be a Python string, the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["79a26200-2dcb-50a5-bbe1-e49ac3213a0b"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "y=W_0 x+\\sum_{i=1}^{N}\\omega_i E_i Ax", "question": "What's the forward process of HydraLoRA?"}}} {"uuid": "0e331b4a-bcfd-5aa1-a638-ceccedd419b5", "question": "In which mathematical subject does the ablation model trained on 512K instances achieve the highest accuracy?", "answer_format": "Your answer should be a string, which indicates a subject.", "tags": ["single", "image", "objective"], "anchor_pdf": ["89791e20-e2a8-5707-92d6-5b5acbd2df2f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "algebra", "lowercase": true, "ignore_blank": false}}} {"uuid": 
"0e989547-e157-5bb4-b961-314772eac86c", "question": "In ICLR 2024 Poster papers, a paper proposes a novel framework named PARL (Policy Alignment in Reinforcement Learning), aiming to address the policy alignment problem in Reinforcement Learning (RL). Tell me the affiliated university of the first author.", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["ae528bab-b6f7-5c19-9361-49aee40af057"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "University of Maryland, College Park", "lowercase": true, "ignore_blank": true}}} {"uuid": "0f29ac1d-7066-52ff-8967-9eb6e02d47e2", "question": "Which paper published in ICLR 2024 formulates the reference-free MT evaluation into a pairwise ranking problem?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0ae179a5-aa63-55a5-a0a8-d86ae939d631"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper published in ICLR 2024 formulates the reference-free MT evaluation into a pairwise ranking problem?", "reference_answer": "MT-Ranker: Reference-free machine translation evaluation by inter-system ranking"}}} {"uuid": "0f9dacbc-f690-5e1c-ab56-30e96ae31265", "question": "Which paper published in ICLR 2024 proposes EControl, a novel mechanism that can regulate error compensation by controlling the strength of the feedback signal?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0b20f37f-a2f9-5920-9a0c-40ce497798fd"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": 
"Which paper published in ICLR 2024 proposes EControl, a novel mechanism that can regulate error compensation by controlling the strength of the feedback signal?", "reference_answer": "EControl: Fast Distributed Optimization with Compression and Error Control"}}} {"uuid": "0fa85fce-1bca-5a8f-9b37-f3fe84777222", "question": "What is the role of mSDF and how does G-SHELL's extraction algorithm integrate SDF and mSDF?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["1c8ae99c-17c8-57b9-9ca6-04ce0f992d88"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["We define a manifold signed distance field (mSDF) on the watertight template, in which the sign indicates whether a point lies in the open surface or not (\\nu(x) > 0, \\quad \\forall x \\in \\operatorname{Interior}(M_0); \\quad \\nu(x) = 0, \\quad \\forall x \\in \\partial M_0; \\quad \\nu(x) < 0, \\quad \\text{otherwise}), and the absolute scale indicates the geodesic distance to the boundary. An open surface can now be extracted via isoline extraction with mSDF.", "With SDF and mSDF values stored in the same 3D grid, we obtain for G-SHELL an efficient Marching-Cubes-like algorithm which reuses the interpolation coefficient for the mSDF sign computation. 
Specifically, with an edge (p_i, p_j), the corresponding SDF values s_i < 0 < s_j and mSDF values \\nu_i, \\nu_j, we can compute the mSDF value on the extracted mesh vertex as \\nu'"], "question": "What is the role of mSDF and how does G-SHELL's extraction algorithm integrate SDF and mSDF?"}}} {"uuid": "0fe413e3-5931-522a-aade-0d0436b9f160", "question": "Among the papers at ACL 2023 focusing on cross-lingual transfer learning, what is the average improvement in cross-lingual classification accuracy achieved by the X-InSTA method compared to random prompt selection?", "answer_format": "Your answer should be a Python float value between 0 and 100, representing the percentage point improvement, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["068aee8a-f79a-5f6f-99bb-728832e4cf7b"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 8.2, "ndigits": 1}}} {"uuid": "104e822b-ccda-58e7-a162-888e4eb2bd68", "question": "In ICLR 2024 Poster papers, a paper proposes a reward smoothing method called \"DreamSmooth\". How many different tasks are illustrated in Figure 5?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8107760d-d599-5a5e-b4d9-0e9a3fcf84c8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "108f6d70-3eec-5cb9-99ed-828c2b243731", "question": "In the paper titled \"CAUSAL CONFUSION AND REWARD MISIDENTIFICATION IN PREFERENCE-BASED REWARD LEARNING\", how is the difference between reward functions measured? 
Please give me the relevant github link.", "answer_format": "Your answer should be a single python string like \"https://github.com/a/b\", the link should be the full link of the github repository.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["c3af3cf5-492b-5ef1-b8d9-6a5bc908e1ce"], "reference_pdf": ["a8ac7306-1390-5685-a55e-02da9751d08d"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/HumanCompatibleAI/evaluating-rewards", "lowercase": true}}} {"uuid": "10b8b656-3230-5398-be80-502883a46a2e", "question": "In the paper that proposes MULAN, which autoregressive model has the highest likelihood in bits per dimension on the test set of ImageNet?", "answer_format": "Your answer should be a Python string indicating the name of the model.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["a2fbfee3-05fe-56ee-960b-10b18fc8dffd"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PixelCNN", "lowercase": true}}} {"uuid": "10ba9268-333c-555c-a52f-2b3f9473f5c8", "question": "In ICLR 2024 Poster papers, a paper proposes a new Adversarial Imitation Learning (AIL) algorithm, aiming to address the sample efficiency and scalability issues of existing AIL methods when dealing with off-policy data. Tell me the title of this paper.", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8bb22633-e414-53b5-9f29-5e3e64baf176"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper proposes a new Adversarial Imitation Learning (AIL) algorithm, aiming to address the sample efficiency and scalability issues of existing AIL methods when dealing with off-policy data. 
Tell me the title of this paper.", "reference_answer": "Adversarial Imitation Learning via Boosting"}}} {"uuid": "10e16a45-bcf5-5bc4-93f7-b1a0d851b14b", "question": "A recent paper introduces a publicly available multi-granularity dataset for job skill demand forecasting, compiled from millions of online job advertisements. It uniquely supports forecasting at the occupation, company, and regional levels, and includes comprehensive benchmarks across statistical, deep learning, and graph-based models. Please let me know which institution the first author of this work is affiliated with.", "answer_format": "Your answer should be a name of an institution.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["471f6ed0-46f8-51e8-b0fe-b6036fb475bf"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "University of Science and Technology of China"}}} {"uuid": "15618a03-0eec-514a-a04b-df78b61ef229", "question": "A paper introduces an entirely data-driven relighting method, where intrinsics and lighting are each represented as latent variables. In the experiments on sensitivity to light changes, which model is sensitive to lighting changes?", "answer_format": "Your answer should be a string, the name of a model.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["3d32f101-333d-5fd4-a81f-2eb268f042f9"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Intrinsic Diffusion", "lowercase": true, "ignore_blank": false}}} {"uuid": "15b0adb9-3ba0-51d0-b8f3-a533ef7a2291", "question": "In ICLR 2024 Poster papers, a paper proposes a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. 
How many baselines are compared in Figure 2?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["a9828d07-00f6-519f-89f8-c34cc72bbc4b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "15c7a8ad-f85f-57da-8cf1-f4d219fe2a79", "question": "In ICLR 2024 Poster papers, a paper mainly studies how to more effectively utilize data augmentation techniques to improve sample efficiency and generalization ability in image-based deep reinforcement learning (DRL). Tell me the definition of Q-invariance of this paper.", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["08a2377d-5d4c-560c-9ea4-87947d853f12"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper mainly studies how to more effectively utilize data augmentation techniques to improve sample efficiency and generalization ability in image-based deep reinforcement learning (DRL). 
Tell me the definition of Q-invariance of this paper.", "formulas": "Q(s, a) = Q(f_{\\tau}(s), a) \\ for \\ all \\ s \\in \\mathcal{S}, a \\in \\mathcal{A}"}}} {"uuid": "16aef45c-711c-538c-97a0-65a8e0a2ce8e", "question": "What percentage of the MEPS dataset is in the test set?", "answer_format": "Your answer should be a float indicating the percentage, rounded to 1 decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["f3ca2666-77ef-5eb4-bdc3-974a202f0303"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 31.9, "ndigits": 1}}} {"uuid": "1741c5e3-8551-5ec1-b7d1-2fefb81ef527", "question": "Among the papers at ACL 2023 researching grammatical error correction, what $\\text{F}_{0.5}$ score does GEC-DePenD (SUNDAE) achieve on the CoNLL-2014 test set?", "answer_format": "Your answer should be a Python float value representing the $\\text{F}_{0.5}$ score, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["216e7594-0160-5851-884e-f95fd67b83d1"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 61.6, "ndigits": 1, "tolerance": 0.1}}} {"uuid": "17fcb7a7-45ca-530a-8f4f-12a93fbb9a16", "question": "What are the three knowledge selectors shown in Figure 1, and where are they positioned in the workflow? 
How does the Factuality Selector combine two types of scores to filter knowledge documents?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["3f2204a4-b105-5e6e-96be-f86c6d94e519"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The three selectors are Relevance Selector, Pruning Selector, and Factuality Selector, applied sequentially after knowledge card generation.", "It averages summarization factuality (consistency with original documents) and retrieval-augmented fact-checking (support from external sources) scores, then samples top-ranked documents."], "question": "What are the three knowledge selectors shown in Figure 1, and where are they positioned in the workflow? How does the Factuality Selector combine two types of scores to filter knowledge documents?"}}} {"uuid": "1a3ab1fb-9deb-5f78-999e-808ae10f4c94", "question": "What is the key energy decomposition formula used in LED-GFN?", "answer_format": "Your answer should be a formula.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3b72d468-c4fb-59b1-8421-7bbff6a855e1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "E(x) \\approx \\Phi_\\theta(\\tau) = \\sum_{t=0}^{T-1} \\phi_\\theta(s_t \\rightarrow s_{t+1}),", "question": "What is the key energy decomposition formula used in LED-GFN?"}}} {"uuid": "1abeaace-aa0f-54ec-9e35-6adc252e6267", "question": "In the experiment of the paper that introduces the MTOB dataset, how many models managed to achieve a chrF score of 30 on the kgv to eng task under the W+S+G^S setting?", "answer_format": "Your answer should be an integer, the number of models.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["5944b88e-0ff6-518b-b344-19943d406b42"], "conference": 
["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "1b21571b-e7e6-521d-a6b2-13b65325a958", "question": "What's the base model of MERU? Which institute is this base model from?", "answer_format": "Your answer should be a single python list like this: [\"model_name\", \"institute_name\"]. Note that for both of these names, the abbreviation is required. For example, if the model name is \"OpenAI GPT-3\" and the institute name is \"OpenAI\", then your answer should be [\"GPT-3\", \"OpenAI\"].", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["e91bafcf-c217-5306-8959-cada556fa665"], "reference_pdf": ["058b61e1-dbec-5ca7-9603-d53c1e14e733"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["CLIP", "OpenAI"], "ignore_order": false}}} {"uuid": "1b21a337-581d-5f5d-88d7-73d39d9c4a92", "question": "There is a paper that introduces TRAC, a benchmark suite comprising four fundamental reasoning tasks (Projection, Executability, Plan Verification, Goal Recognition) aimed at the textual understanding of preconditions and effects in dynamic environments. In its experiments, the performance of RoBERTa-base on the four sub-tasks shows a stable increase with the growing number of training samples. 
The question is: on which sub-task of TRAC does the RoBERTa-base model achieve the highest performance with 10,000 training samples?", "answer_format": "Your answer should be the exact name of the sub-task, must be one of ['Projection', 'Executability', 'Plan Verification', 'Goal Recognition']", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8ef7b3a3-1465-58b8-962d-c23046cda29b"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Executability"}}} {"uuid": "1b488e45-fa85-52b3-81d7-282e1a443383", "question": "In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). How many different affiliations do the authors have?", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["f20defa8-4fdb-5582-adc7-ebefe03370ff"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "1b94e7ef-ccfd-5ba0-a268-d053dbc1bd34", "question": "Is there any oral work that is related to applying LoRA to long-context fine-tuning?", "answer_format": "Your answer should be a text string representing the title of the work.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["a7f18666-9d9f-5175-a81e-56208c6aa86b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there any oral work that is related to applying LoRA to long-context fine-tuning?", "reference_answer": "LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models"}}} {"uuid": "1c97caca-ee1f-5ec2-8215-88b97c600f76", "question": "The lack of task-specific knowledge or reliance on ground truth as few-shot samples is one of the causes of the poor performance of 
traditional learning-based approaches. Therefore, a paper proposes a novel approach called Progressive Retrieval Augmented Generation (P-RAG). Does this model's success rate saturate more quickly, with fewer rounds of iteration, on the ALFRED Valid Unseen dataset or the ALFRED Train 100 dataset?", "answer_format": "Your answer should be chosen between \"ALFRED Valid Unseen dataset\" and \"ALFRED Train 100 dataset\"", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["020ece98-56b2-5ff7-b4b4-cb12241a6f32"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ALFRED Train 100 dataset", "lowercase": true, "ignore_blank": false}}} {"uuid": "1cfb75b2-ff4f-5e64-9b17-49e2a4efcba1", "question": "In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fe01ec00-1c93-5d9a-bc2e-1afca3e94bf2"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?", "reference_answer": "Score Regularized Policy Optimization through Diffusion Behavior"}}} {"uuid": "1da02b38-bb41-56ae-b3cb-2101f2ea9095", "question": "In NeurIPS 2024 Poster papers, a paper proposes a method called \"BECAUSE\". 
What is the affiliation of the first author?", "answer_format": "Your answer should be a Python string.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["4bf87140-b2d2-5435-86b7-ed66c30d8bd8"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "CMU", "lowercase": true, "ignore_blank": false}}} {"uuid": "1dd42f4f-0844-5b5b-95f7-97f7c795f0e2", "question": "A recent paper introduces a Transformer variant that enhances computational efficiency and performance by dynamically pooling variable-length token segments during intermediate layers. The paper conducts experiments on the memory consumption of a training step for different shortening factors using the English text8 dataset. With a shortening factor of 3, how much GPU memory in GB is required?", "answer_format": "Your answer should be a python float, rounded to 1 decimal place.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["96867be2-b209-5b0b-b68a-572a6eaec55c"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 12.9, "ndigits": 1}}} {"uuid": "1e13d99f-d4ad-5f07-8a11-0c3a41db5e36", "question": "A recent paper introduces ProGraph, a benchmark designed to evaluate large language models (LLMs) on graph analysis tasks using external APIs, aligning their behavior with that of human experts. 
Among the four question types in ProGraph (True/False, Drawing, Calculation, and Hybrid), which category exhibits the highest average difficulty according to the experimental results under the RAG7 settings?", "answer_format": "Your answer must be one of the following: ['True/False', 'Drawing', 'Calculation', 'Hybrid']", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4a84ed33-63d1-5632-b140-5bccf8ebeb8b"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Hybrid"}}} {"uuid": "1ecd973a-fa86-53b0-96e5-4781f4fdedd9", "question": "A paper studies a realistic Continual Learning (CL) setting and applies it to large-scale semi-supervised Continual Learning scenarios with a sparse label rate. They show models' performance on 1% labeled ImageNet10k with varying computational steps and conclude that DietCL alleviates overfitting. How is this advantage reflected in the figure?", "answer_format": "Your answer should be a phrase which explains the question with the phenomenon shown in the figure.
", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["01a83a66-9f54-547b-8e23-02570576b657"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "With large computational steps, the model does not always maintain its high accuracy of fit.", "question": "In the figure which shows models' performance on 1% labeled ImageNet10k with varying computational steps under the semi-supervised method, how is DietCL's advantage of alleviating overfitting reflected?"}}} {"uuid": "20795612-be12-57ec-ab55-b15fca7b1358", "question": "In the paper that proves a phase transition in attention mechanisms from positional to semantic in language models and shows it emerges only beyond a critical data threshold, tell me the number of the authors.", "answer_format": "Your answer should be an integer indicating the number of the authors.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9bc0580e-f94b-5dba-bae9-610cf27d5707"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "20c140bf-259b-5d6c-b6e8-6681491c2bc3", "question": "What are the two main integration approaches proposed in KNOWLEDGE CARD? 
How does one differ from the other in terms of knowledge card activation?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["3f2204a4-b105-5e6e-96be-f86c6d94e519"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The two approaches are Bottom-Up and Top-Down integration.", "The Bottom-Up approach activates all knowledge cards at once for multi-domain synthesis, while Top-Down selectively activates relevant cards based on the LLM's iterative decision-making process."], "question": "What are the two main integration approaches proposed in KNOWLEDGE CARD? How does one differ from the other in terms of knowledge card activation?"}}} {"uuid": "211cf320-483f-5ba7-bb4b-4b6935a3f6f2", "question": "What common core conclusion do \"Which Layer is Learning Faster? A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks\" and \"On the Spectral Bias of Neural Networks\" draw?", "answer_format": "Your answer should be a string", "tags": ["multiple", "image", "subjective"], "anchor_pdf": ["be427df5-7b02-5f9e-8f35-033b1f48dca7", "caca712b-603b-5275-b655-da868611b7f4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The shared conclusion is that Deep Neural Networks (DNNs) exhibit significant variation in learning speed across different layers; this disparity is closely linked to spectral bias, and DNN training follows a \"shallow-to-deep\" progression: shallow layers stabilize quickly due to their role in low-frequency feature extraction, while deeper layers (near the output) converge more slowly as they handle high-frequency or complex patterns.", "question": "What common core conclusion do \"Which Layer is Learning Faster? 
A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks\" and \"On the Spectral Bias of Neural Networks\" draw? "}}} {"uuid": "211fc5cb-a1a6-5e48-b48b-67790d8c4eab", "question": "In the paper at NeurIPS 2024 that introduces LLM landscape for safety, what is the minimum number of adversarial examples required to compromise the safety alignment of GPT-3.5 Turbo through fine-tuning?", "answer_format": "Your answer should be a Python integer representing the minimum number of adversarial examples required.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["f7404d66-bc44-5901-8b1b-f54a59c92721"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 10}}} {"uuid": "2122c495-bf24-5030-ab80-49644d33bc9d", "question": "A recent paper introduces PertEval, a toolkit designed to assess the real knowledge capacity of large language models (LLMs) through knowledge-invariant perturbations. 
Could you please retrieve the article and provide the corresponding GitHub repository link for this work?", "answer_format": "Your answer should be a link only without any additional prefixes or suffixes.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["b373651a-3127-5dec-a83b-a87d4a95b8f6"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/aigc-apps/PertEval"}}} {"uuid": "219dbc8a-cd90-58e1-b95d-2c419634648c", "question": "In the paper that proposes DFA-GNN, what is the use of the first and the second term in formula (7)?", "answer_format": "Your answer should be a Python string indicating the use of the two terms.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9bf6f2ee-e595-5e84-bb0e-971e4b775748"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The first term enhances the smoothness of the error estimation throughout the graph, while the second term ensures that the final solution stays consistent with the initial error estimate E.", "question": "In the paper that proposes DFA-GNN, what is the use of the first and the second term in formula (7)?"}}} {"uuid": "22671b31-7e6c-5ed7-b649-52e762bd3e30", "question": "A paper introduces Self-Calibrating Conformal Prediction to recognize the complementary roles of point and interval predictions. In their experiment comparing SC-CP with baselines, which baseline under-adapts due to insufficient bins? ", "answer_format": "Your answer should be a string, the name of a baseline. 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["038a7fcf-07dc-54ee-897e-f57c5731aa6a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Mondrian", "lowercase": true, "ignore_blank": false}}} {"uuid": "232fd1d1-def0-5e09-aed3-612a5dd25337", "question": "A recent paper proposes a novel reinforcement learning-based approach, DPPO-PR2, for UAV local path planning, demonstrating superior convergence speed and planning performance across six simulated environments. The proposed algorithm is an optimized variant of a classical reinforcement learning algorithm. Which algorithm does DPPO-PR2 build upon?", "answer_format": "Your answer should be a string", "tags": ["single", "text", "objective"], "anchor_pdf": ["3bc47fd7-6a69-5062-a2da-07b8f0d2c19d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["PPO", "Proximal Policy Optimization"], "lowercase": true}}} {"uuid": "2376766c-1a1e-55c6-aff5-27ee9edda7dc", "question": "Among the Findings papers at ACL 2023 researching text summarization, what is the average improvement in ROUGE scores achieved by the proposed framework in the paper \"Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation\"?", "answer_format": "Your answer should be a Python integer representing the maximum improvement in ROUGE scores in percentage points.", "tags": ["single", "objective", "text"], "anchor_pdf": ["1ae4b672-774b-53bf-9095-165ec7a5addf"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 14}}} {"uuid": "24f6ae04-4f02-5799-8109-e25b5de33d07", "question": "Many complex high-dimensional physical systems are recently modeled by graph neural network (GNN) models. 
In the simulation of the stress on the falling ball and plate after a collision, which of MGN or GT has better performance?", "answer_format": "Your answer should be chosen between \"MGN\" and \"GT\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["047b5e6d-9d7b-5a5c-b04c-f7947ccea4a0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "GT", "lowercase": true, "ignore_blank": false}}} {"uuid": "25802786-3944-5e7e-8f5e-3b57c592436a", "question": "What is the overall accuracy achieved by GPT-4V on the MathVista benchmark, and how does it compare to the second-best performing model?", "answer_format": "Your answer should be a Python list of two float values: [gpt4v_accuracy, difference_from_second_best], both between 0 and 100, rounded to 1 decimal place and represented as percentages.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["40911da4-3a2d-516b-9e83-25600a989feb"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [49.9, 15.1], "ndigits": 1, "ignore_order": false}}} {"uuid": "266ef0cd-be78-556d-b8e0-9921909775dd", "question": "In the SDXL paper, the c_crop parameter proposed in figure 5 aims to solve which problem?", "answer_format": "Your answer should be a paragraph, indicating the problem.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["3c3f0fb9-b845-5668-8010-eaffe8f7995a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "As collating a batch in DL frameworks such as PyTorch requires tensors of the same size, a typical processing pipeline is to (i) resize an image such that the shortest size matches the desired target size, followed by (ii) randomly cropping the image along the longer axis. 
While random cropping is a natural form of data augmentation, it can leak into the generated samples, causing the malicious effects shown above.", "question": "In the SDXL paper, the c_crop parameter proposed in figure 5 aims to solve which problem?"}}} {"uuid": "296811ae-cfe9-52ba-908a-f9debaa29cfb", "question": "A paper proposes a learning paradigm that directly establishes causation between events in the course of time. According to its diabetes simulator, could SI1 and GS1 probably have a common effect or an opposite effect?", "answer_format": "Your answer should be chosen between \"common\" and \"opposite\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["041c7cbc-2d3d-574f-9a3d-9b1549205b3a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "opposite", "lowercase": true, "ignore_blank": false}}} {"uuid": "29782c60-c850-5407-850a-0e2527b59b95", "question": "There is a paper that introduces a lightweight, model-agnostic alignment framework called Aligner, which corrects residuals between preferred and dispreferred responses using a small plug-and-play model. In this paper, the authors investigate the effects of different identity mapping proportions on the model's helpfulness and harmlessness. Which model consistently achieved the highest scores in the experimental results?", "answer_format": "Your answer should be a name of the model.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["68c3c95d-77ed-5af6-b36e-ade2df994033"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Llama2-70B-Chat"}}} {"uuid": "29d6f506-65f4-5d11-94b7-dd2673df32cf", "question": "In ICLR 2024 Oral papers, a paper presents PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. 
Tell me the overall objective for training the high-level policy to maximize the expected return.", "answer_format": "Your answer should be a formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["73f711b8-04f1-59f2-b730-3e2c31a7721d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Oral papers, a paper presents PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. Tell me the overall objective for training the high-level policy to maximize the expected return.", "formulas": "J(\\theta) = \\mathbb{E}_{\\pi_{\\theta}} \\left[ \\sum_{t = 0}^{\\infty} \\gamma^t \\left( \\sum_{i = kt}^{(k + 1)t} R(s_i, a_i) - \\alpha D_{\\text{KL}} \\left( \\pi_{\\psi}^p(a^h | s_{kt}) \\| \\pi_{\\theta}(a^h | s_{kt})\\right) \\right) \\right]"}}} {"uuid": "29db626a-b8ee-5ba3-b431-a01257170a2f", "question": "In YOCO, what are the detailed formulas for the self-decoder?", "answer_format": "Your answer should be a Python list of two strings, the formulas in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["19da599a-52c7-5513-af59-4ac9e58302a9"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["Y^{l} =\\operatorname{ESA}\\left(\\operatorname{LN}\\left(X^{l}\\right)\\right)+X^{l}", "X^{l+1} =\\operatorname{SwiGLU}\\left(\\operatorname{LN}\\left(Y^{l}\\right)\\right)+Y^{l}"], "question": "In YOCO, what are the detailed formulas for the self-decoder?"}}} {"uuid": "2a126f74-2895-580d-bde5-525a57fe306e", "question": "A recent paper introduces a large-scale synthetic dataset of muscle activations derived from biomechanical simulations using OpenSim, encompassing 227 subjects and 402 muscle strands. 
By enriching motion capture data with simulated muscle activations, the authors bridge the gap between observable motion and internal biomechanics. In this dataset, which action within Dynamic Actions has the highest prevalence?", "answer_format": "Your answer should be a name of an action.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["465fc1db-b36a-58c6-ba0e-3cb966c5b5ac"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "transition", "lowercase": true}}} {"uuid": "2b1e878a-91d0-5910-815f-db0608a4c7e7", "question": "Which paper designs a world model for continuous control with \"SimNorm\"?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["ae95c926-4f72-58ea-84c3-a99b99108471"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper designs a world model for continuous control with \"SimNorm\"?", "reference_answer": "TD-MPC2: Scalable, Robust World Models For Continuous Control"}}} {"uuid": "2b2d5721-b503-5c7d-b3e0-65fadcee243c", "question": "A paper reports the phenomenon that a small number of attention heads transport a compact representation of the demonstrated task, which they call a function vector (FV). 
From the results across layers for the zero-shot case, after adding the function vectors, do the models achieve significantly higher accuracy in the last layers compared with models without FV?", "answer_format": "Your answer should be \"Yes\" or \"No\".", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["00898bf7-c6b2-5309-8c68-55d1c86af1c6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "2bd19f2e-3eef-535b-8ce2-fcbdffe80c34", "question": "Can you recommend me a paper published in ICLR 2024 that proposes a novel model, CONSISGAD, which is tailored for Graph Anomaly Detection in scenarios characterized by limited supervision?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0cca00da-6436-5d41-8c1e-2ac557b3e445"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that proposes a novel model, CONSISGAD, which is tailored for Graph Anomaly Detection in scenarios characterized by limited supervision?", "reference_answer": "Consistency Training with Learnable Data Augmentation for Graph Anomaly Detection with Limited Supervision"}}} {"uuid": "2c3325e6-72dd-50d7-89df-92018ab5305c", "question": "In ICLR 2024 Poster papers, a paper attempts to solve the credit assignment problem in Preference-based Reinforcement Learning (PbRL). 
How many different methods are compared in Figure 2?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["e1b2ccf7-2d5c-5074-b641-936ab21cda9a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "2c80a9fe-9507-5538-bdd0-77a38580b6da", "question": "Does the paper include any diagrams illustrating the relationship between projected-gradient norms and exploitability? If so, what key insight does the diagram convey about bounding exploitability using Lemma 2?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "table", "image", "subjective"], "anchor_pdf": ["e2d53d42-e870-5827-8378-41e381f67d31"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["No."], "question": "Does the paper include any diagrams illustrating the relationship between projected-gradient norms and exploitability? If so, what key insight does the diagram convey about bounding exploitability using Lemma 2?"}}} {"uuid": "2c961e82-6918-51c9-9aa6-0e3527e4d000", "question": "In ICLR 2024 Poster papers, a paper attempts to address the challenges faced when learning from pixel-level inputs in multi-object manipulation tasks. 
How many different tasks are considered in Figure 3?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["b5f540ed-e0b9-559a-bc9d-67376d4d1228"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "2cadec06-d31b-56ea-ab4b-a83eae802af4", "question": "How does LED-GFN modify the Detailed Balance loss function?", "answer_format": "Your answer should be a formula", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3b72d468-c4fb-59b1-8421-7bbff6a855e1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "L_{LED}(s,s') = \\left(\\log \\tilde{F}(s) + \\log P_F(s'|s) + \\phi_\\theta(s\\rightarrow s') - \\log \\tilde{F}(s') - \\log P_B(s|s')\\right)^2,", "question": "How does LED-GFN modify the Detailed Balance loss function?"}}} {"uuid": "2d8e65eb-0b65-50d6-83c8-b69666e39699", "question": "In ICLR 2024 Oral papers, a paper attempts to solve the problem of how to accelerate the learning process and avoid getting trapped in local optimal solutions in Cooperative Multi-Agent Reinforcement Learning (MARL). This paper develops the deterministic conditional autoencoder, tell me the corresponding loss function.", "answer_format": "Your answer should be a formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["402ca915-7f12-560e-8f5e-cdf54903a981"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Oral papers, a paper attempts to solve the problem of how to accelerate the learning process and avoid getting trapped in local optimal solutions in Cooperative Multi-Agent Reinforcement Learning (MARL). 
This paper develops the deterministic conditional autoencoder, tell me the corresponding loss function.", "formulas": "\\left(H_t - f_{\\psi}^H\\left(f_{\\phi}(s_t | t) | t\\right)\\right)^2 + \\lambda_{rcon} \\left\\lVert s_t - f_{\\psi}^s\\left(f_{\\phi}(s_t | t) | t\\right) \\right\\rVert_2^2"}}} {"uuid": "2d9d3a83-0d60-527a-924d-b8245c882db5", "question": "In ICLR 2024 Poster papers, a paper attempts to solve the credit assignment problem in Preference-based Reinforcement Learning (PbRL). Tell me the number of pages of the appendix of this paper?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["e1b2ccf7-2d5c-5074-b641-936ab21cda9a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "2e1c39a1-2886-58e0-aa1e-6087995c9e50", "question": "In MetaLA, what's the main improvement for the hidden state $S_t^h$?", "answer_format": "Your answer should be a string, the formula in LaTeX format, indicating the main improvement.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["7fbb9c11-cbe2-59a2-bd68-3bae54905e3e"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathbf{S}_{t}^{h}=\\operatorname{diag}\\left(\\alpha_{t}^{h}\\right) \\mathbf{S}_{t-1}^{h}+\\left(\\mathbf{1}-\\alpha_{t}^{h}\\right)^{\\top} \\mathbf{v}_{t} \\quad \\in \\mathcal{R}^{d_{k}^{\\prime} \\times d_{v}^{\\prime}}", "question": "In MetaLA, what's the main improvement for the hidden state $S_t^h$?"}}} {"uuid": "2e337fa6-897f-579b-9b1d-3d87279e705a", "question": "There is a paper that introduces a novel framework for evaluating language model (LM) agency through structured negotiation games, addressing the limitations of static benchmarks. 
The work provides an open-source library (LAMEN) and negotiation transcripts to facilitate reproducible research on LM agent capabilities. The authors discuss four types of negotiation issues; which type is the issue where agents value each issue differently, creating opportunities for trade-offs?", "answer_format": "Your answer should be one of the four types of negotiation issues: ['Distributive', 'Compatible', 'Mixture', 'Integrative Distributive'].", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["04ccffea-05c2-5f51-a373-c6a26359d069"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Integrative Distributive"}}} {"uuid": "2e4c876d-4a7f-5fb4-ad13-9334a6eaf533", "question": "Among the oral presentations at ICLR 2024 researching image generation, what Frechet Inception Distance (FID) score does Wurstchen Stage C achieve on the COCO30K dataset at 256x256 resolution?", "answer_format": "Your answer should be a Python float value representing the FID score, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["f97065df-5c5e-5584-b049-9ebe62b4e2ed"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 23.6, "ndigits": 1, "tolerance": 0.1}}} {"uuid": "2f6302ea-9662-57be-9b5c-e1ecbeee1159", "question": "In ICLR 2024 Poster papers, which paper proposes sub-trajectory mining to extract potentially valuable sub-trajectories from offline data, and diversifies the behaviors within those sub-trajectories by varying coverage of the state-action space?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f84fc40c-c3b6-5ec4-803a-9ffd8f1db934"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", 
"eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes sub-trajectory mining to extract potentially valuable sub-trajectories from offline data, and diversify the behaviors within those sub-trajectories by varying coverage of the state-action space. ", "reference_answer": "On Trajectory Augmentations for Off-Policy Evaluation"}}} {"uuid": "2f9efcac-82a4-5227-8400-29809958fa20", "question": " What are the three steps of the iterative hypothesis refinement process shown in Figure 1? What is the role of the first step, Hypotheses Generation?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["faa7fbba-044a-51e6-a104-efd4422880e6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The three steps are Hypotheses Generation, Hypotheses Selection, and Hypotheses Refinement", "the first step uses LMs to propose a set of free-form or constrained hypotheses based on observations"], "question": " What are the three steps of the iterative hypothesis refinement process shown in Figure 1? 
What is the role of the first step, Hypotheses Generation?"}}} {"uuid": "3076e893-19aa-5491-8e94-4b52ac5c3096", "question": "At what memory reduction percentage can M-SMoE retain performance on which datasets?", "answer_format": "Your answer should be a Python list of two elements, the first is an integer that indicates the percentage, and the second is a list of strings that indicates the dataset names.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8815c4cf-60ae-58f0-b420-7a0d5054e10d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": 60}, {"gold": ["MRPC", "COPA", "WinoGrande", "SQuAD", "HotpotQA"], "lowercase": true}]}}} {"uuid": "30b13d11-904d-59fb-ac41-ab155831cada", "question": "A paper proposes the Factorized Fourier Neural Operator (F-FNO), which bridges the performance gap between pure machine learning approaches and the best numerical or hybrid solvers. In the example where both F-FNO and DNS use the same spatial resolution, which model is visually closer to the ground truth? ", "answer_format": "Your answer should be chosen between \"F-FNO\" and \"DNS\". ", "tags": ["single", "image", "objective"], "anchor_pdf": ["632a1dd3-b932-5b9a-a341-4e59a0b77536"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "F-FNO", "lowercase": true, "ignore_blank": false}}} {"uuid": "30d0eed5-ce79-5caa-9aea-1be15551193f", "question": "There is a paper that proposes a novel uncertainty-based gradient matching approach for model merging, linking the inaccuracy of parameter averaging to gradient mismatches between individual models and the target model. It employs a new second-order approximation method using Hessian estimates to reduce mismatch. 
When exploring the effectiveness of removing data, the authors tested the model's toxicity on Detoxify, considering generations with a score exceeding a certain threshold as toxic. What is this specific threshold?", "answer_format": "Your answer should be a python float number, rounded to 1 decimal place.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["036e255e-fe42-5c3c-8835-d6c7bd69e6b9"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.2, "ndigits": 1}}} {"uuid": "3104ef8c-e942-5372-a842-4480d7f5d68e", "question": "A paper studies the generalized linear contextual bandit problem within the constraints of limited adaptivity and proposes two algorithms, B-GLinCB and RS-GLinCB. In the comparison of RS-GLinCB against ECOLog Faury et al. (2022) and GLOC Jun et al. (2017), which algorithm showed the smallest regret after 17500 rounds? ", "answer_format": "Your answer should be a string, the name of an algorithm. 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8ca649da-f5ab-5a9d-b301-f9eacfd76927"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "RS-GLinCB", "lowercase": true, "ignore_blank": false}}} {"uuid": "310c3b00-9568-544b-a5af-40efc5e3dee7", "question": "Which Tiny Paper published in ICLR 2024 applies an image-to-image refinement of each image in the InstructPix2Pix dataset with the help of the text-to-image diffusion model SDXL?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0c328e44-25ab-5302-b156-159cf89f13f1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which Tiny Paper published in ICLR 2024 applies an image-to-image refinement of each image in the InstructPix2Pix dataset with the help of the text-to-image diffusion model SDXL?", "reference_answer": "Improving Image Editing Models with Generative Data Refinement"}}} {"uuid": "312971a0-c394-5aec-a1d6-14507a83d70c", "question": "In all experiments, on average, how many minutes did each model take to train and evaluate?", "answer_format": "Your answer should be a float, rounded to 1 decimal place, the time taken for training and evaluating in minutes.", "tags": ["single", "objective", "text"], "anchor_pdf": ["f3ca2666-77ef-5eb4-bdc3-974a202f0303"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 112.0, "ndigits": 1}}} {"uuid": "31ad8166-f23d-56fc-baa4-fff230889e5f", "question": "In the paper \"HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling,\" the authors evaluate HLLM on a dataset called PixelRec. 
Specifically, how many samples within the 200K subset of this dataset have a Session Length in the range of [10, 20]? Please provide the exact number.", "answer_format": "Your answer should be a Python int.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["49d274c6-fc0e-50fc-a402-e74876cb2aa8"], "reference_pdf": ["214ff263-1a66-5489-9b9d-90cfeb0cc41d"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 97047}}} {"uuid": "31e4f1fa-dd92-58ea-82cf-e7939ef8038d", "question": "In the paper that introduces MuMA-ToM, how many multi-modal social interactions between two agents does the MuMA-ToM Benchmark consist of, and how many multi-choice questions are there based on these social interactions?", "answer_format": "Your answer should be a Python list of two integers, the first indicating the number of multi-modal social interactions and the second indicating the number of multi-choice questions.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["0e32c5f5-d5fb-5276-bc36-81f02da77ae3"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [225, 900]}}} {"uuid": "3245ec6e-0cce-5157-9258-8447a4b41348", "question": "A paper introduces a novel Contrastive Signal Generative Framework for Accurate Graph Learning to avoid the impact of inappropriate contrastive signals. In their model, the hyperparameter $\\gamma$ is used to adjust the weight of the contrastive loss in the overall loss function. For which value of $\\gamma$ does the model achieve the best average accuracy on the three datasets? 
", "answer_format": "Your answer should be a float rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["04de82da-f4b2-58e0-a452-201d47d973dd"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.1, "ndigits": 1, "tolerance": 1e-06}}} {"uuid": "332e9eed-a895-5f23-ae24-73b9ee936b21", "question": "How is Q_{EC}(f_{\\phi}(s_t), \\bm{a}_t) defined in terms of immediate reward r_t and the highest return H from episodic memory? How does the loss function L^{EC}_{\\theta} combine the TD error and the episodic memory error?", "answer_format": "Your answer should be two formulas in LaTeX format.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["402ca915-7f12-560e-8f5e-cdf54903a981"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["Q_{EC}(f_{\\phi}(s_t), \\bm{a}_t) = r_t(s_t, \\bm{a}_t) + \\gamma H(f_{\\phi}(s_{t+1}))", "L^{EC}_{\\theta} = \\langle y(s, \\bm{a}) - Q_{tot}(s, \\bm{a}; \\theta)\\rangle^2 + \\lambda\\langle Q_{EC}(f_{\\phi}(s), \\bm{a}) - Q_{tot}(s, \\bm{a}; \\theta)\\rangle^2"], "question": "How is Q_{EC}(f_{\\phi}(s_t), \\bm{a}_t) defined in terms of immediate reward r_t and the highest return H from episodic memory? 
How does the loss function L^{EC}_{\\theta} combine the TD error and the episodic memory error?"}}} {"uuid": "335aac7c-4401-5a6d-ab89-271141803f05", "question": "In the paper proposing RAGraph, what measures were adopted to emphasize nodes in the long-tail part?", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["0a1f4eb6-2523-5ce3-b2fd-bf9915127e15"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper proposing RAGraph, what measures were adopted to emphasize nodes in the long-tail part? Please answer in a python string format.", "reference_answer": "Inverse Importance Sampling Strategy. I(v) = \\alpha PR(v)+(1-\\alpha )DC(v). We reverse the node importance with I'(v) = \\frac{1}{I(v)+\\epsilon }, \\epsilon \\to 0, normalize it to obtain node v_i's sampling probabilities pi, and perform weighted sampling function WEIGHTEDSAMPLING(G^R_\\tau , p_i) to prioritize nodes with higher sampling probability (lower importance) according to p_i. When sampling, for each master node v_m, we generate its k-hop neighbors, termed an ego net G^e_\\tau(v_m)."}}} {"uuid": "33a15d70-5c4d-5a69-93f6-693d95e9028f", "question": "Recently, an enhanced version of the MMLU dataset, called MMLU-Pro, has been proposed as a benchmark for evaluating large language models (LLMs) by addressing the limitations of the original MMLU dataset. 
Could you please tell me which discipline in MMLU-Pro has the highest proportion of questions?", "answer_format": "Your answer should be a name of a discipline.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["b9a08aa9-da83-5629-93f4-3d8ba35fdd05"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "math", "lowercase": true}}} {"uuid": "33ad083d-6865-52fa-aa99-32bd951248a3", "question": "What's the main suggestion of the paper inspiring the research of \"A KERNEL-BASED VIEW OF LANGUAGE MODEL FINE-TUNING\"?", "answer_format": "Your answer should be a short text about the main suggestion of the paper.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["9b7c4b43-f00a-567b-9beb-be3884fd0f32"], "reference_pdf": ["5f11e125-6d2c-5853-9c09-023a1991e2e1"], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The findings suggest that random matrix theory, rather than just being a toy model, may be central to understanding the properties of neural representations in practice.", "question": "What is the main suggestion of the paper?"}}} {"uuid": "3467612f-a2e6-5d1c-b331-6e3a3621f1e6", "question": "A paper studies a realistic Continual Learning (CL) setting and applies it to large-scale semi-supervised Continual Learning scenarios with sparse label rate. 
In the pseudo-supervised class-wise contrastive learning and the instance-wise contrastive learning, which representation separates the samples into small, separate clusters but confuses categories?", "answer_format": "Your answer should be a string, the name of a representation.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["0922f95b-8050-50a9-8e90-17192b5b7bcd"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "RSTC-C", "lowercase": false, "ignore_blank": false}}} {"uuid": "34cb700b-b21a-5e22-9734-9c4141f46a2b", "question": "A paper introduces DIAMOND (DIffusion As a Model Of eNvironment Dreams) for image generation with visual details. In the comparison with IRIS, which generates trajectories that contain visual inconsistencies between frames, what is taken as an example to explain the probable consequences of this behavior for reinforcement learning?", "answer_format": "Your answer should be a phrase which concludes the example.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["3dc65acd-2ef7-5c31-9b55-023a4d086d8d"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Since an agent should generally target rewards and avoid enemies, these small visual discrepancies can make it more challenging to learn an optimal policy. ", "question": "A paper introduces DIAMOND (DIffusion As a Model Of eNvironment Dreams) for image generation with visual details. In the comparison with IRIS, which generates trajectories that contain visual inconsistencies between frames, what is taken as an example to explain the probable consequences of this behavior for reinforcement learning?"}}} {"uuid": "34d6ee8a-a394-55b0-be3d-26ac927d96cb", "question": "In Figure 1, what does the autoregressive model predict for the query Canada in the country-capital example? 
How does the autoregressive prediction mechanism for the Boolean function task mirror the country-capital example, despite differing data types?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["428a8ebc-c2ab-5c89-bedf-1475b53cf5d1"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The model predicts Ottawa as the capital of Canada, completing the sequence", "Both tasks require the model to predict the next token (y_k) given previous input-output pairs (x_1,y_1, ..., x_{k-1},y_{k-1}, x_k). The Boolean task predicts a binary label, analogous to predicting the word Ottawa, showcasing the same autoregressive structure but with different data formats."], "question": "In Figure 1, what does the autoregressive model predict for the query Canada in the country-capital example? How does the autoregressive prediction mechanism for the Boolean function task mirror the country-capital example, despite differing data types?"}}} {"uuid": "3514253a-3c30-5f1c-8990-d7a16bb1d7d6", "question": "In the paper that proposes L-GATr, where can I find the author's implementation of L-GATr?", "answer_format": "Your answer should be a Python string of the website URL, starting with \"https://\", as given in the paper.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9bea15c2-a96b-5d97-9d10-5268616acf52"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/heidelberg-hepml/lorentz-gatr"}}} {"uuid": "37721d94-5214-5aa8-894a-a8797fa046e4", "question": "In ICLR 2024 Spotlight papers, which paper is motivated by the idea of \"sensing scaffold\"?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], 
"reference_pdf": ["9494a7d1-69f6-588b-a5da-f83793b40f13"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, which paper is motivated by the idea of \"sensing scaffold\"?", "reference_answer": "Privileged Sensing Scaffolds Reinforcement Learning"}}} {"uuid": "38141558-19e8-57f5-9482-0fec8c121683", "question": "In the paper that proposes Trap-MID, what percentage is the Recall of the reconstructed images from PLG-MI and what does this statistic mean?", "answer_format": "Your answer should be a Python list of two elements. The first is a float number of the percentage (between 0 and 100, rounded to 2 decimal places) and the second is what the statistic indicates.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["9b93e191-10b1-58bf-a3ea-063afea9a170"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_float_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": 78.68, "ndigits": 2}, {"reference_answer": "The MI attacks are misled into extracting trapdoor information.", "question": "What percentage is the Recall of the reconstructed images from PLG-MI and what does this statistic mean?"}]}}} {"uuid": "383135b1-4179-58d6-8040-0a34f2d7e0cf", "question": "In the paper that presents PANORAMIA, which type of PANORAMIA in the ResNet-101 model has the highest precision on the CIFAR10 dataset in Figure 3 at different levels of recall?", "answer_format": "Your answer should be a Python string indicating the variant name of the PANORAMIA.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9b9edb40-b0fa-50f9-96f0-e9be094451fb"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PANORAMIA:ResNet101_E100", "ignore_blank": true, "lowercase": true}}} 
{"uuid": "3a8cfcd8-82e4-5a08-ab13-659624281122", "question": "There is a paper that proposes a multimodal Emotion Recognition and Classification (ERC) framework that leverages bidirectional multi-head cross-attention to model complex correlations across textual, audio, and visual modalities. It introduces a novel visual feature extractor (VisExtNet) for capturing emotion-rich facial expressions and a Sample-Weighted Focal Contrastive (SWFC) loss to address class imbalance and semantic similarity between emotions. The paper employs a dataset called IEMOCAP for emotion classification. Could you please tell me which emotion label category has the second largest sample proportion in this dataset?", "answer_format": "Your answer should be a exact match of the emotion label category name.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8a2f1fe9-5892-5d8a-98a7-23d449028c7f"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Neutral", "lowercase": true, "ignore_blank": true}}} {"uuid": "3ad795ab-4381-5810-a9a1-c3c80a40f1c3", "question": "A recent paper proposes an embodied social intelligence benchmark focusing on accessibility and inclusivity. The benchmark evaluates agents on their ability to infer human intentions and constraints through egocentric observations and to cooperatively plan actions. 
Please retrieve the paper and provide me with the link to the corresponding code repository for this work.", "answer_format": "Your answer should be a link only without any additional prefixes or suffixes.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["40539cc5-8af4-5c0d-b089-fd95834c1f7c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/UMass-Foundation-Model/CHAIC"}}} {"uuid": "3bab9b77-ac37-5784-949d-6e90f0e588d7", "question": "Can you recommend me a paper published in ICLR 2024 that introduced Hierarchical cOntext MERging (HOMER), a novel method that efficiently addresses the context limit issue inherent in large language models?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d4cc1f30-0629-50b0-9a65-c6384a39eeb0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that introduced Hierarchical cOntext MERging (HOMER), a novel method that efficiently addresses the context limit issue inherent in large language models?", "reference_answer": "Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs"}}} {"uuid": "3bbffbd6-a376-5f1d-835b-e4dfea9d9eb0", "question": "A recent paper introduces a proposal-based framework for natural language video localization that enables efficient and effective moment-to-moment interaction through learnable templates and dynamic anchors. By incorporating a multi-scale visual-linguistic encoder and an anchor-guided moment decoder with anchor highlight attention, the proposed method transcends the locality assumption and achieves state-of-the-art performance across several benchmarks. 
All the research institutions involved in this study are from the same country. Please indicate the name of this country.", "answer_format": "Your answer should be a name of a country.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9582d7ef-8263-5b2d-9dc3-441d897c5f62"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Singapore"}}} {"uuid": "3bc6a3a6-cc31-58b6-bf5d-fb2719b1ad6f", "question": "In the paper \"TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding,\" a strategy known as StreamingLLM is employed, which utilizes a KV Cache Eviction Strategy by retaining critical attention sink tokens alongside recent KV pairs to enhance long-context capabilities. The question arises: how many attention sink tokens do the authors of the original StreamingLLM paper consider to be sufficient?", "answer_format": "Your answer should be a Python int.", "tags": ["multiple", "table", "objective"], "anchor_pdf": ["bbf20514-fefe-538e-9eb5-4657b8ef687a"], "reference_pdf": ["2d831b51-a802-51f4-9b55-39ab8c0ade5a"], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "3c67b92e-2a4f-5a99-a380-3d245268a98f", "question": "In ICLR 2024 Spotlight papers, a paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance. 
What is the formula of the $\\textbf{A}^+$ advantage function?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["f22fa760-10d6-5959-8398-f8bd583acf28"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance. What is the formula of the $\\textbf{A}^+$ advantage function?", "formulas": "r(s, a) + \\mathbb{E}_{s' \\sim \\mathcal{P}|\\pi, s}\\left[f^+(s')\\right] - f^+(s)"}}} {"uuid": "3c6a8a71-eb8c-501e-98b6-00564244c97d", "question": "In the paraphrase category of the experiments in the paper \"Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models\", which task experienced a decline in accuracy after adding any type of knowledge? What is the significance of this task?", "answer_format": "Your answer should be a list containing two elements: the first one is the name of the task being asked, and the second one explains the significance of the task.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["5af5e45d-f259-57ae-a99e-be98764c416c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "PAWS", "lowercase": true}, {"reference_answer": "Existing paraphrase datasets (e.g., Quora Question Pairs) lack challenging negative examples with high lexical overlap but non-paraphrase pairs, and PAWS fills the gap.", "question": "What is the significance of PAWS?"}]}}} {"uuid": "3c6cb9af-cfc9-5515-97c7-d40805b2ec55", "question": "In Figure 1, what are the two accuracy metrics plotted against training steps? 
How does the relationship between these metrics change during training, and what does this imply about the learning phases?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["3a7844f8-6697-5648-8d00-bfe2ced21b8b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The two metrics are in-weights accuracy (IWL, for targets not in context) and in-context accuracy (ICL, for novel classes).", "Initially, IWL rises faster than ICL, but later ICL abruptly surges, suggesting a transition from slow feature learning to sudden induction-head formation enabling in-context reasoning."], "question": "In Figure 1, what are the two accuracy metrics plotted against training steps? How does the relationship between these metrics change during training, and what does this imply about the learning phases?"}}} {"uuid": "3d37bcb5-ade4-5109-ac37-87fab0d4ace9", "question": "In the paper that proposes P-RLHF, where can I find the fine-tuned GPT-J model used as the SFT model?", "answer_format": "Your answer should be a Python string of the website URL, starting with \"https://\", as given in the paper.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9c3b0dfe-3ee7-539c-8efa-70910eb26552"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/kingoflolz/mesh-transformer-jax"}}} {"uuid": "3d64d18d-26f9-5baf-ac94-ea28b01bc658", "question": "A recent paper proposes a novel generative method named TP-EGG for constructing typed entailment graphs (EGs). Unlike prior extractive approaches that rely on large corpora, TP-EGG generates new predicates and entailment relations using pre-trained language models, effectively addressing both predicate and edge sparsity. 
Please indicate the name of the university affiliated with this research.", "answer_format": "Your answer should be the name of the university.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["97aa73c1-58e1-5b69-8dfd-1a201f29be95"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Peking University", "PKU", "Peking"]}}} {"uuid": "3d77ba99-9b4b-581c-9b45-443d64b37ed9", "question": "There's a recent paper that introduces Human-Aware Vision-and-Language Navigation (HA-VLN), extending traditional VLN by incorporating dynamic human activities and relaxing key assumptions (e.g., egocentric action space, sub-optimal expert supervision). Could you please provide the name of the corresponding author of this paper?", "answer_format": "Your answer should be a name of a person.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["b1c6ca32-9a55-589e-985e-15f6a9d3c510"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Zhi-Qi Cheng", "lowercase": true, "ignore_blank": true}}} {"uuid": "3e2edeb1-10ae-5dc4-94e6-1be457a9985e", "question": "In ICLR 2024 Spotlight papers, a paper proposes a method named \"Heuristic Blending\". In this paper, how many theorems are proposed?", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["fac83d47-6080-518e-ba75-3e376c6e3d06"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "3e5a7f65-8cc1-5f62-85cd-cf906a2997d1", "question": "Facing the failure of existing methods to address the compound degradations present in source images, a paper proposes a novel interactive multi-modal image fusion framework based on the text-modulated diffusion model, called Text-DiFuse. 
In the comparison of the basic version of their Text-DiFuse with current state-of-the-art fusion methods, what is their model's advantage?", "answer_format": "Your answer should be a sentence describing the model's advantage in detail.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["4a445892-f1ae-5ce8-a2b5-c7a10d39f18a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Ability to correct color casts, restore scene information under low-light conditions, and suppress noise; ability to highlight physiological structure information while maintaining functional distribution. ", "question": "Facing the failure of existing methods to address the compound degradations present in source images, a paper proposes a novel interactive multi-modal image fusion framework based on the text-modulated diffusion model, called Text-DiFuse. In the comparison of the basic version of their Text-DiFuse with current state-of-the-art fusion methods, what is their model's advantage?"}}} {"uuid": "3efd0502-7123-5b57-8b54-8c42097d8be7", "question": "Which paper published in ICLR 2024 proposes a novel method, namely Sorting Krylov Recycling (SKR), to accelerate data generation for neural operators training?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0ac25642-7296-52c1-8822-97d13b878cc7"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper published in ICLR 2024 proposes a novel method, namely Sorting Krylov Recycling (SKR), to accelerate data generation for neural operators training?", "reference_answer": "Accelerating Data Generation for Neural Operators via Krylov Subspace Recycling"}}} {"uuid": "3fe086fb-83fd-5861-ab9d-a1075873b79e", "question": "Tell 
me the core contribution of \"Hybrid RSSM\".", "answer_format": "Your answer should be plain text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["836773dc-06af-56de-9846-db5e075e6a77"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "Tell me the core contribution of \"Hybrid RSSM\".", "reference_answer": "\"Hybrid RSSM\" can reinforce the robustness of state representation within policy learning."}}} {"uuid": "40aa6ad8-bae3-583f-afff-ed15c0224fc8", "question": "In the paper that characterizes the exact privacy-utility tradeoff for locally private sampling under f-divergences, proposing universally optimal mechanisms for both discrete and continuous spaces, where can I find the code of the paper if I want to reproduce the experiment?", "answer_format": "Your answer should be a Python string of the website URL, starting with \"https://\", as given in the paper.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9c2c8077-2115-5f89-91bd-5085704f18d3"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/phy811/Optimal-LDP-Sampling"}}} {"uuid": "4105ba8c-8517-5976-8bde-6cdcfa676223", "question": "Give a brief introduction to the innovative points of the paper.", "answer_format": "Your answer should be a sentence.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["004db68f-d746-52b9-8331-d1e9f918b98c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The first self-supervised spike-guided deblurring framework (S-SDM).", "No need for real clear labels; it gets rid of the dependence on real clear sequences.", "The BSN effectively suppresses thermal noise in spike signals through a blind spot strategy (predicting the center pixel using only the surrounding 
pixels).", "EDSR super-resolution of spike signals from low resolution to blurred image size without additional alignment labeling.", "It costs less than other methods.", "The color consistency assumption is proposed to associate single-channel spike signals with RGB blurred images, which solves the problem of the lack of color information in spike cameras."], "question": "Give a brief introduction to the innovative points of the paper.", "count": 2}}} {"uuid": "4114f9ce-4c1b-5d4c-8955-a2272732d6f4", "question": "Which two main challenges does GeoBFN address when dealing with molecular geometry data? What is a specific manifestation of the first challenge (multimodality)?", "answer_format": "Your answer should be two sentences, each answering one of the two questions.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["008f9b4f-d392-5b8e-bd68-ecf7305576a4"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The two main challenges are multimodality and noise sensitivity.", "Molecular geometry data need to deal with both continuous variables (e.g., atomic coordinates) and discrete variables (e.g., atom types), and the form of the data varies greatly from modality to modality, which makes it more difficult to model directly and uniformly."], "question": "Which two main challenges does GeoBFN address when dealing with molecular geometry data? 
What is a specific manifestation of the first challenge (multimodality)?"}}} {"uuid": "419db7dd-495c-5130-a911-2ed5438af06d", "question": "In this paper, what is the theoretical basis of \"Dynamic Path Feedback\"?", "answer_format": "Your answer should be plain text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["1d676730-5013-56f8-9b2f-4c5a0782fe63"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In this paper, what is the theoretical basis of \"Dynamic Path Feedback\"?", "reference_answer": "The theoretical basis of \"Dynamic Path Feedback\" is \"reward shaping\"."}}} {"uuid": "425ca788-b18c-5413-817b-e267ea270882", "question": "How much training time can the new model save when training a 1B-parameter Stage C text-conditional diffusion model, compared to the amount SD 2.1 used for training?", "answer_format": "Your answer should be a float, rounded to 1 decimal place, measured in GPU hours.", "tags": ["single", "table", "objective"], "anchor_pdf": ["f97065df-5c5e-5584-b049-9ebe62b4e2ed"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 175398.0, "ndigits": 1}}} {"uuid": "42944afa-fed6-563d-9457-7b64e738f305", "question": "There is a paper that introduces a large-scale, multilingual, multi-technique singing corpus designed to address key limitations in existing datasets for singing voice synthesis (SVS) and related tasks. It features 80.59 hours of high-quality recordings from 20 professional singers across 9 languages. 
Could you please tell me the most frequently occurring segment duration (s) in the proposed dataset?", "answer_format": "Your answer should be a Python int.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["b3621536-7e76-57f9-99c1-77eda880730b"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 7}}} {"uuid": "4365278e-ba3d-5b04-8881-85fd4f456748", "question": "Is there a paper published in ICLR 2024 that proposes a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0b5b3daf-addd-5125-9324-941f561e2947"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper published in ICLR 2024 that proposes a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning?", "reference_answer": "Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo"}}} {"uuid": "43999904-d634-5255-95b3-f5cc16993486", "question": "In order to address the limitation of applying Causal Temporal Representation Learning (Ctrl) methods in real-world scenarios without prior knowledge of the domain variables, a paper proposes CtrlNS. In their model, there are two kinds of variables: domain variables and hidden variables. Which variable remains relatively unchanged in phase two during training?", "answer_format": "Your answer should be chosen between \"domain variables\" and \"hidden variables\". 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["0142149a-5c51-5917-bf03-0a62c623ffc2"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "hidden variables", "lowercase": false, "ignore_blank": false}}} {"uuid": "44ec0a10-6bba-59fe-836f-1079442d6e3b", "question": "In ICLR 2024 Poster papers, a paper proposes a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. Tell me the number of authors of this paper.", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["a9828d07-00f6-519f-89f8-c34cc72bbc4b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "457bbc52-0425-51cd-8573-1a4feda62b88", "question": "A recent paper introduces the first large-scale benchmark specifically designed to evaluate large multimodal models (LMMs) on scientific figure interpretation. It consists of 2,000 curated multiple-choice questions across two tasks, figure-to-caption and caption-to-figure, sourced from arXiv figures using adversarial filtering and human verification. 
Could you please tell me which subcategory within the general question set has the highest sample proportion?", "answer_format": "Your answer should be a name of a subcategory.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4b89a9cd-2c7a-5d02-8a00-9ccc0eceee5a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "physics", "lowercase": true}}} {"uuid": "462fdea7-d178-5611-8ccf-878f27584346", "question": "Among the oral presentations at NeurIPS 2024 studying causal discovery, how does the proposed method in the paper \"Do Finetti: On Causal Effects for Exchangeable Data\" perform in estimating causal effects compared to traditional i.i.d.-based methods?", "answer_format": "Your answer should be a Python string comparing the performance in terms of estimation error, specifically mentioning the mean squared error (MSE) values.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["f4b95b5a-c476-53b1-97e8-81a163d84e27"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The proposed method, Do-Finetti, achieves near-zero mean squared error (MSE) in estimating causal effects, outperforming traditional i.i.d.-based methods, which exhibit higher estimation errors.", "question": "In the paper \"Do Finetti: On Causal Effects for Exchangeable Data,\" how does the proposed method perform in estimating causal effects compared to traditional i.i.d.-based methods?"}}} {"uuid": "46711729-bbed-5a8e-8490-5908d79fb267", "question": "Can you recommend me a paper published in ICLR 2024 that studies the required model size when considering average-case and worst-case error scenarios, showing how the model size needs to change based on accuracy, data size and data dimensionality?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", 
"objective", "text"], "anchor_pdf": [], "reference_pdf": ["d05b7312-02b8-5d3a-b92a-afdead5017ad"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that studies the required model size when considering average-case and worst-case error scenarios, showing how the model size needs to change based on accuracy, data size and data dimensionality?", "reference_answer": "Towards Establishing Guaranteed Error for Learned Database Operations"}}} {"uuid": "46f3e5ad-c092-5a63-b011-2df6131506d5", "question": "In ICLR 2024 Poster papers, a paper tries to ensemble the reward models to mitigate the over-optimization problem. In Section 5 (Results), how many main findings are reported?", "answer_format": "Your answer should be Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["8fb74a47-4b71-51b9-bbcf-ba8981aaae68"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "4718a830-33ab-5ae7-a544-50b47dada242", "question": "How does ClimODE handle local and global weather effects? 
List two key points and give me a formula.", "answer_format": "Your answer should be a list of 2 strings, the first is a sentence containing the two key points, and the second is a formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["d3360b29-a0c8-572f-a44a-93f628880908"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"reference_answer": "A convolutional network (3x3 kernels) captures local effects by aggregating nearby weather information and an attention convolutional network (KQV dot-product) captures global interactions.", "question": "How does ClimODE handle local and global weather effects? List two key points."}, {"formulas": ["f_{\\theta}(\\mathbf{u}(t), \\nabla \\mathbf{u}(t), \\mathbf{v}(t), \\psi)=f_{\\text {conv }}(\\mathbf{u}(t), \\nabla \\mathbf{u}(t), \\mathbf{v}(t), \\psi)+\\gamma f_{\\text {att }}(\\mathbf{u}(t), \\nabla \\mathbf{u}(t), \\mathbf{v}(t), \\psi) ."], "question": "How does ClimODE handle local and global weather effects? 
Give me a formula."}]}}} {"uuid": "4ab46af5-2c59-5e09-a7fe-a519c4768271", "question": " How does the Kantorovich-Rubinstein duality provide a tractable objective for optimizing the Wasserstein dependency measure Iw?", "answer_format": "Your answer should be a formula", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3962a504-6c3b-521f-967b-57114c6ce970"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "I_W(S;Z) = \\sup_{\\lVert f \\rVert_L \\leq 1} \\mathbb{E}_{p(s,z)}[f(s,z)] - \\mathbb{E}_{p(s)p(z)}[f(s,z)], ", "question": " How does the Kantorovich-Rubinstein duality provide a tractable objective for optimizing the Wasserstein dependency measure Iw?"}}} {"uuid": "4b5b243d-975a-5ba1-956b-75f2cad06a76", "question": "In the paper that proposes RETR, what is the use of projimage in formula (2)?", "answer_format": "Your answer should be a Python string describing the use of the projimage function.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ac17287-b1aa-5519-bae9-8624c0efe0e3"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It projects the 3D BBox in the camera coordinate system into corresponding 2D image plane normally with a known pinhole camera model.", "question": "What is the use of projimage in formula (2)?"}}} {"uuid": "4c14462b-e17f-50fd-8983-1b739caca76f", "question": "In the paper that proposes a temperature-controlled differentiable mapping from free vectors to orthogonal matrices that asymptotically converges to permutation matrices, enabling both gradient-based and stochastic optimization over permutations, what hinders gradient-based optimization for \\theta in formula (15) and how to solve it?", "answer_format": "Your answer should be a Python string explaining the obstacle and how to solve it.", "tags": 
["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9c02b5ef-9092-55c2-a97d-6683dcebb40a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "There exists a non-differentiable mapping \\rho(\\cdot) which hinders gradient-based optimization for \\theta, and it can be solved by approximating equation (15) by relaxing the mapping \\rho(\\cdot) to \\psi_{\\tau}(\\cdot).", "question": "In the paper that proposes a temperature-controlled differentiable mapping from free vectors to orthogonal matrices that asymptotically converges to permutation matrices, enabling both gradient-based and stochastic optimization over permutations, what hinders gradient-based optimization for \\theta in formula (15) and how to solve it?"}}} {"uuid": "4c1b1fd1-83b6-5b32-92d2-daa564cde987", "question": "What parameterization of f(s,z) is used to simplify the optimization of , and how does it constrain the Lipschitz continuity?", "answer_format": "Your answer should be a formula", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3962a504-6c3b-521f-967b-57114c6ce970"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "I_W(S;Z) \\approx \\sup_{\\lVert \\phi \\rVert_L \\leq 1, \\lVert \\psi \\rVert_L \\leq 1} \\mathbb{E}_{p(s,z)}[\\phi(s)^\\top \\psi(z)] - \\mathbb{E}_{p(s)}[\\phi(s)]^\\top \\mathbb{E}_{p(z)}[\\psi(z)], ", "question": "What parameterization of f(s,z) is used to simplify the optimization of , and how does it constrain the Lipschitz continuity?"}}} {"uuid": "4cb116ee-e0db-510d-af46-69e4c9d4160d", "question": "In figure 2, describe the global change of angle $\\Phi$ and $\\Psi$ during the transition.", "answer_format": "Your answer should be a Python string.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["fe0bab15-0c43-5922-9650-058c4a117d70"], 
"reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "$\\Phi$ increases and $\\Psi$ decreases globally.", "question": "In figure 2, describe the change of angle $\\Phi$ and $\\Psi$ during the transition."}}} {"uuid": "4cb82403-dc05-5b19-a04c-7591b4329f1a", "question": "A paper studies behavior policy for data-efficient General Value Functions (GVFs) learning to perfect this domain. In their study, which loss function is used to measure the model's performance?", "answer_format": "Your answer should be a string which indicates a name of loss function. ", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["00af35b3-2ac2-5440-92f0-6526e7dc58f4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MSE", "lowercase": true, "ignore_blank": false}}} {"uuid": "4ecb6954-9276-558c-a9b1-6dff787b8784", "question": "In Figure 1, what are the three types of datasets used to fine-tune GPT-3.5 Turbo in the experiments? How does the harmfulness score change across the 11 safety categories after fine-tuning?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["12480043-cd6c-513b-bd75-fd9068439808"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Explicitly harmful examples, identity-shifting data, and the benign Alpaca dataset.", "The harmfulness score increases in all categories after fine-tuning, indicating safety degradation."], "question": "In Figure 1, what are the three types of datasets used to fine-tune GPT-3.5 Turbo in the experiments? 
How does the harmfulness score change across the 11 safety categories after fine-tuning?"}}} {"uuid": "4f0fa7d7-8024-5627-aae9-c8fc79e1aad3", "question": "A recent paper presents a token reduction framework for Vision-Language (VL) models that integrates text-informed image token pruning and modality-aware token merging into cross-modal Transformer layers. By progressively removing text-irrelevant visual tokens and merging semantically similar tokens within each modality, the proposed method achieves up to 2x inference speedup and over 50% memory reduction on models such as ViLT and METER. Which university's researchers proposed this work?", "answer_format": "Your answer should be a name of a university.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["99bac92d-0c2f-5a5c-85f4-67e2dd384ea3"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["University of Washington", "Washington"]}}} {"uuid": "4f50e9ab-20d1-5218-a0e7-a877905b3ec8", "question": "In the paper that proposed Exclusively Penalized Q-learning, to solve unnecessary estimation bias, a new penalty is introduced. 
What is the formula for this penalty?", "answer_format": "Your answer should be a string, the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["2629736e-0554-5710-b8ee-443a58fe1c6a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\mathcal{P}_\\tau := \\underbrace{f^{\\pi, \\hat{\\beta}}_\\tau(s)}_{\\text{penalty adaptation factor}} \\cdot \\underbrace{\\left( \\frac{\\pi(a \\mid s)}{\\hat{\\beta}(a \\mid s)} - 1 \\right)}_{\\text{penalty term}}", "question": "What is the formula for the novel exclusive penalty $\\mathcal{P}_\\tau$?"}}} {"uuid": "4fc8d452-9f0b-5d60-b379-e85e4b417ee2", "question": "By how much does PromptNER outperform the state-of-the-art model on average in the cross-domain few-shot setting?", "answer_format": "Your answer should be a Python float between 0 and 100 rounded to 1 decimal place, stating the percentage improvement.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["2ad1a42e-8b7b-5dd0-9f05-dc78ea2113ad"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 7.7, "ndigits": 1}}} {"uuid": "4fd37435-f841-5084-ad30-6543f4e7fb11", "question": "A paper proposes a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). When changing threshold $\\theta$, the hyper-parameter, to a lower value, the GAVA-Score of GAVA-SDA models trained on MS-MARCO would likely become lower or higher? 
", "answer_format": "Your answer should be a string chosen between \"lower\" and \"higher\"", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["047880ae-ccf5-513a-b9b1-3e505cb14e09"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "lower", "lowercase": true, "ignore_blank": true}}} {"uuid": "4ff37823-b7b7-549e-8314-ffbe82ffa99e", "question": "A paper reports that a small number of attention heads transport a compact representation of the demonstrated task. They study the AIE per attention head in GPT-J over many tasks, and show it in a figure, which index of head has the highest AIE in the middle layer?", "answer_format": "Your answer should be an int between 0 and 15.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["00898bf7-c6b2-5309-8c68-55d1c86af1c6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 0}}} {"uuid": "5043c2c9-d82d-5612-89a1-ba3560626005", "question": "A paper introduces BAdam which offers a memory efficient approach to the full parameter finetuning of large language models. Besides its high efficiency, does this model have a better convergence behavior compared with other models? 
", "answer_format": "Your answer should be \"Yes\" or \"No\".", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["01afae6f-359b-57ff-97bb-91a8005f42d9"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Yes", "lowercase": true, "ignore_blank": false}}} {"uuid": "50596b7e-10e2-5027-9d65-5408a69c5c48", "question": "What is the maximum number of nodes that SDF-Sim can scale to before mesh-based approaches run out of memory?", "answer_format": "Your answer should be a Python float value representing the number of nodes in millions, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["a0239821-c270-52c5-a3cb-86c94c66cac1"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 1.1, "ndigits": 1}}} {"uuid": "508113a9-27ac-5e9a-8c9d-2d595c6af1d4", "question": "Among the papers at NeurIPS 2024 researching fair machine learning, how many real-world datasets did the authors of \"Unprocessing Seven Years of Algorithmic Fairness\" empirically evaluate?", "answer_format": "Your answer should be a Python integer representing the number of real-world datasets empirically evaluated.", "tags": ["single", "objective", "text"], "anchor_pdf": ["f3ca2666-77ef-5eb4-bdc3-974a202f0303"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "50aa1f58-4c83-5b31-b4b8-455d192003f0", "question": " What two known instabilities are reproduced in the paper?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["e355eea9-a2b5-50ff-85d3-1423a1fada2a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Attention logit growth: The growth of logits in attention layers and 
Output logit divergence: Divergence of the output logits from the log probabilities ", "question": " What two known instabilities are reproduced in the paper?"}}} {"uuid": "50c45430-63ff-5b92-9cc0-46392205944f", "question": "What is the Bayesian framework equation that decomposes predictive uncertainty (PU) into epistemic (EU) and aleatoric (AU) components?", "answer_format": "Your answer should be a formula", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3eb979a1-1ab6-5b63-ad76-9e0d73663dfd"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\text{PU} = \\text{EU} + \\text{AU} \\quad \\text{or} \\quad H(Y|x) = MI(Y, \\Omega|x) + E_{\\omega\\sim\\Omega}[H(Y|\\omega, x)]", "question": "What is the Bayesian framework equation that decomposes predictive uncertainty (PU) into epistemic (EU) and aleatoric (AU) components?"}}} {"uuid": "50e4855e-e346-5f60-9959-31a8f93f9cb2", "question": "Among the papers at ACL 2023 researching text style transfer, what is the main innovation of the \"Text Style Transfer Back-Translation\" method proposed in the paper?", "answer_format": "Your answer should be a Python string describing the key innovation of the Text Style Transfer Back-Translation method.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["03166771-5ae8-57b9-9c10-3120423adc5c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The method introduces a style transfer model to modify the source side of back-translation data, making it more natural to improve the translation of natural inputs.", "question": "What is the main innovation of the \"Text Style Transfer Back-Translation\" method proposed in the paper?"}}} {"uuid": "51477120-f5f4-594e-b6dd-d931ac57d853", "question": "What is the main idea of InfoBatch?", "answer_format": "Your answer should be a sentence 
briefly introducing the method", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["f15160cc-4dbb-53ad-b5d6-f2ac6b23bc68"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It's to speed up the training process by dynamic and unbiased data pruning. It randomly prunes some of the less informative samples according to their loss values and scales the gradient of the remaining samples to approximate the gradient expectation of the original dataset, thus realizing lossless training acceleration.", "question": "What is the main idea of InfoBatch?"}}} {"uuid": "515c44a2-c4ae-5a3a-a5c3-621a24a765d7", "question": "According to the paper that introduces the first adaptive decision-making framework for LLMs that mirrors real-world MDM processes via dynamic collaboration among AI agents based on task complexity, considering the impact of agent number, if we remove one agent from the peak accuracy setting, how much does the accuracy drop?", "answer_format": "Your answer should be a float rounded to 1 decimal place.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["50de8a07-fdcd-5113-b777-5594dd741ac4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 8.8, "ndigits": 1}}} {"uuid": "517fca53-68de-5aae-af94-f7b76b79f50c", "question": "In ICLR 2024 Poster papers, which paper introduces Uni-RLHF, a comprehensive system implementation tailored for RLHF?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["98d21482-74de-50e0-91da-db6c6ca709a6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper introduces Uni-RLHF, a comprehensive system 
implementation tailored for RLHF?", "reference_answer": "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback"}}} {"uuid": "5181b3b8-7afc-5e59-b113-694fc2df7935", "question": "In ICLR 2024 Poster papers, which paper tackles the Offline Opponent Modeling problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["8a1e3915-e42d-581e-aa46-9b520f4b03ec"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper tackles the Offline Opponent Modeling problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities?", "reference_answer": "Towards Offline Opponent Modeling with In-context Learning"}}} {"uuid": "518991c7-c643-53a7-aa7b-2d6e4303c92c", "question": "In RobustFill's \"Switch-Concept-Order\" task, how much more accurate is ExeDec compared to sub-target-free ablation experiments (Ablation)?", "answer_format": "Your answer should be a float, rounded to 1 decimal place, expressed as a percentage.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["e8a7f4d2-b82b-59f6-a5fe-d64f18a91e2d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 3.4, "ndigits": 1}}} {"uuid": "524a738e-efac-5f13-95df-9cee64c8ff97", "question": "I wonder if there are any datasets and benchmarks that are published as orals in ICLR 2024? Also tell me their respective dataset size (including all data splits).", "answer_format": "Your answer should be a Python list of tuples (List[Tuple[str, int]]). 
For each tuple in the list, the first element is the paper title string and the second element is an integer representing the dataset size.", "tags": ["comprehensive", "table", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["04ed3e06-a7e7-5856-8912-af4223637abf", "40911da4-3a2d-516b-9e83-25600a989feb", "1c87084a-f8ae-5a28-a8bb-016316818e0c", "861ab80a-1a4b-5c77-ae8f-460e55f8d472"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [["BooookScore: A systematic exploration of book-length summarization in the era of LLMs", 100], ["MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts", 6141], ["SWE-bench: Can Language Models Resolve Real-world Github Issues?", 2294], ["How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?", 9262]], "ignore_order": true, "threshold": 95, "fuzz_method": "ratio"}}} {"uuid": "5260af05-70c0-556c-89a3-5f478e8ac526", "question": "A paper proposes SummAttacker, an efficient generator of diverse cases, to bring more variations to the hidden states. 
When E bar, the average Euclidean distance of paired original and attacked states, decreases, the hidden states in the latent space show smaller or larger diversity?", "answer_format": "Your answer should be chosen between \"smaller\" and \"larger\".", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["0e494325-2e09-5932-836a-3c5a5ba3a422"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "smaller", "lowercase": true, "ignore_blank": false}}} {"uuid": "532503fe-48be-57f3-9994-ff83eda8cc36", "question": "According to the ablation study in the paper that proposes Variance Reduced Meta-CL, what improvement in final accuracy (Acc) does VR-MCL_2 achieve compared to MCL on the Seq-CIFAR10 benchmark?", "answer_format": "Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100 and rounded to 2 decimal places.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["581595c2-dd12-5d30-ab70-eeb0a7e1fcf5"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 4.08, "ndigits": 2}}} {"uuid": "536d809c-344a-5ef8-9afe-e1c0f46fb58b", "question": "In the learning framework for State Embedding in SEMANTIC MEMORY EMBEDDING, what is the loss function and how can it be expressed with the highest return and a scale factor, noted lambda_rcon?", "answer_format": "Your answer should be a list of two formulas", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["402ca915-7f12-560e-8f5e-cdf54903a981"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["\\mathcal{L}(\\phi, \\psi)=\\left(H_{t}-f_{\\psi}\\left(f_{\\phi}\\left(s_{t}\right)\\right)\\right)^{2}", "\\mathcal{L}(\\phi, 
\\psi)=\\left(H_{t}-f_{\\psi}^{H}\\left(f_{\\phi}\\left(s_{t} \\mid t\\right) \\mid t\\right)\\right)^{2}+\\lambda_{\\text {rcon }}\\left\\|s_{t}-f_{\\psi}^{s}\\left(f_{\\phi}\\left(s_{t} \\mid t\\right) \\mid t\\right)\\right\\|_{2}^{2},"], "question": "In the learning framework for State Embedding in SEMANTIC MEMORY EMBEDDING, what is the loss function and how can it be expressed with the highest return and a scale factor, noted lambda_rcon?"}}} {"uuid": "54230565-ee38-52aa-a4d3-ce3b49ab196b", "question": "In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["fe01ec00-1c93-5d9a-bc2e-1afca3e94bf2"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?", "reference_answer": "Score Regularized Policy Optimization through Diffusion Behavior"}}} {"uuid": "5569d36d-49b2-5ec4-821b-a5ce29ae50d2", "question": "In ICLR 2024 Poster papers, a paper first constructs a dynamics model from the expert demonstration, enforcing local Lipschitz continuity while skipping the discontinuous regions. 
How many different aspects are illustrated in Figure 1?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["5e978885-be39-543f-b1d9-dc71ad71083a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "55ca34ac-d006-5563-bac8-ee915cfcd8da", "question": "How does the adversarial accuracy of ATINTER improve compared to the best existing defense on the SST-2 dataset?", "answer_format": "Your answer should be a Python string specifying the percentage improvement in adversarial accuracy compared to the best existing defense.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["331caa1d-e2e3-535d-bc06-1ecd8ee99033"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ATINTER improves adversarial accuracy by over 4% compared to the best existing defense on SST-2."}}} {"uuid": "55cec0ea-d0f4-5c6a-9dc1-e1549494eac8", "question": "In ICLR 2024 Spotlight papers, a paper attempts to solve how to train a general agent in Reinforcement Learning (RL) that can thoroughly explore the environment and learn new and diverse skills. 
In Figure 6, how many tasks are considered?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9f00d13f-fd37-5d97-9469-cd3eca504994"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "5636ee62-c100-56a3-817d-9994cf6c666d", "question": "Can you recommend me a paper published in ICLR 2024 that proposes a data-driven, physically-informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries based on the extraction of topologically invariant features?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0a121cda-c58a-5250-9ed4-726231320021"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that proposes a data-driven, physically-informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries based on the extraction of topologically invariant features?", "reference_answer": "Let's do the time-warp-attend: Learning topological invariants of dynamical systems"}}} {"uuid": "565161cc-5824-54df-8cc8-ca00e58fb564", "question": "In the paper that introduces a new family of non-Gaussian distributions for deep neural networks, derived via Hermite polynomials, providing significantly more accurate scaling laws than classical Gaussian approximations, what value is the para-Gaussian correction term if \\phi(x) = \\sqrt{2}(x)_+ and the input z is a standard N(0, 1) Gaussian?", "answer_format": "Your answer should be an integer of the value.", "tags": ["comprehensive", "formula", "objective"], "anchor_pdf": [], "reference_pdf": ["9bd7f5fa-1b4c-58a2-9392-1e86defccb85"], 
"conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "5675cf2e-b2ae-5b9a-bd60-9a46469bcc88", "question": "In ICLR 2024 Spotlight papers, a paper explores the scalability problem of large-scale Inverse Reinforcement Learning (IRL) in practical applications. In this paper, how does the author frame Inverse reinforcement learning algorithms as the two-player zero-sum game?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["f7ba8fe5-3cfc-5db6-931a-5808dd8243ae"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper explores the scalability problem of large-scale Inverse Reinforcement Learning (IRL) in practical applications. In this paper, how does the author frame Inverse reinforcement learning algorithms as the two-player zero-sum game?", "formulas": "\\underset{\\pi\\in\\Pi}{\\min}\\underset{\\theta\\in\\Theta}{\\max} J(\\pi_E, r_{\\theta}) - J(\\pi, r_{\\theta}) = \\underset{\\pi\\in\\Pi}{\\min}\\underset{\\theta\\in\\Theta}{\\max} f(\\pi_E, \\pi, r_{\\theta})"}}} {"uuid": "56aae038-a30f-514b-b7aa-8964219fb39e", "question": "What is the size of the Large Reconstruction Model, and how quickly can it reconstruct a 3D object from a single image?", "answer_format": "Your answer should be a sentence describing the model size and reconstruction time.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["a0ad2a87-6b49-522f-b31a-ef6e42b551ad"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The LRM model has 500 million parameters and can reconstruct a 3D object from a single image in just 5 seconds.", "question": "What is the size of the LRM model, and how quickly can it reconstruct a 3D object 
from a single image?"}}} {"uuid": "573a0bdf-5322-5f92-8ddd-7d4192bf28ba", "question": "A paper proposes a learning paradigm that directly establishes causation between events in the course of time. According to its diabetes simulator, will the variation of the dose of SI1 impact GS1, Glu. sto. 1?", "answer_format": "Your answer should be \"Yes\" or \"No\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["041c7cbc-2d3d-574f-9a3d-9b1549205b3a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "57a13138-216d-5220-86eb-1ec344c54299", "question": "Which benchmark is used in the main experiment of the paper? In this benchmark, how is the 'probe' defined?", "answer_format": "Your answer should be a single python list like this: [\"benchmark_name\", \"probe_definition\"]. Note that for the benchmark name, the abbreviation is required. For the probe definition, you should give a short string to describe the definition.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["994062da-3a2c-586a-8a67-1920cc158155"], "reference_pdf": ["df9f793e-72d1-5e34-aff2-075d37f5486a"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_element_included", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": ["CLRS-30", "CLRS"], "lowercase": true}, {"reference_answer": " Specifying a feature's stage, location and type fully determines its role in the dataflow. 
A tuple (stage, loc, type, values) is referred to as a probe.", "question": "What's the definition of a probe in the CLRS benchmark?"}]}}} {"uuid": "5819ecba-eea7-5660-a568-22c40087fc6b", "question": "Among the poster presentations at NeurIPS 2024 on reinforcement learning, what is the sample complexity bound for weakly communicating average-reward MDPs established by the paper \"Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs\"?", "answer_format": "Your answer should be a Python string containing the mathematical expression for the sample complexity bound in LaTeX-like format.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["cdb74978-9d0a-5ab4-baf2-0003321f8ad9"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "$\\widetilde{O}(SA\\frac{H}{\\varepsilon^2})$", "question": "Among the poster presentations at NeurIPS 2024 on reinforcement learning, what is the sample complexity bound for weakly communicating average-reward MDPs established by the paper \"Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs\"?"}}} {"uuid": "59673e0f-b951-5566-b908-9cc36ff4d9f0", "question": "In NeurIPS 2024 Poster papers, which paper builds a Python program based on the interaction with the environment to represent agent's understanding of the world?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f6e9bf70-04cb-5d33-8ac7-ec982e49b218"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In NeurIPS 2024 Poster papers, which paper builds a Python program based on the interaction with the environment to represent agent's understanding of the world?", "reference_answer": "WorldCoder, a Model-Based LLM Agent: 
Building World Models by Writing Code and Interacting with the Environment"}}} {"uuid": "59cc500d-6a09-5c63-bbc8-85c57adbc598", "question": "Why use a cascaded pipeline?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["ebe290da-10ea-515b-a3c8-985a575e8b53"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_candidate_reference_answer_with_llm", "eval_kwargs": {"candidate_reference_answers": ["However, limited input resolution of the diffusion model is a major hindrance. We address this with a cascaded pipeline, starting with a low-resolution model, followed by a super-resolution model that successively upsamples and incorporates finer details to the matching field.", "To alleviate resolution constraint, we further propose a flow upsampling diffusion model that finetunes the pretrained denoising model, thereby injecting fine details into the matching field with minimal optimization."], "question": "Why use a cascaded pipeline?"}}} {"uuid": "5a15998a-846c-500d-8953-6e97fa7d745d", "question": "In the joint context transfer module of the paper, which attention mechanism is used? Please give me the pdf url to the paper that proposed this attention mechanism.", "answer_format": "Your answer should be a single link like this: https://arxiv.org/abs/xxxx.xxxxx.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["82692745-452a-5655-8a83-d67b24cb0ee9"], "reference_pdf": ["03ecfc40-eb0c-590d-b9fc-8417be49bde6"], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["https://arxiv.org/pdf/1812.01243v10", "http://arxiv.org/pdf/1812.01243v10"]}}} {"uuid": "5a6dc0a7-10b1-513c-9372-81e6f897b088", "question": "Given Figure 1's timeline illustration, suppose a new clip v_3' is inserted into the video stream V at timestamp t_3, while the text stream T remains unchanged. 
What type of multi-granularity noisy correspondence (MNC) does this introduce?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["c593a43b-e808-5ca1-bb91-4b403502e790"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The new clip introduces coarse-grained asynchronous misalignment, the temporal inconsistency between actions and descriptions.", "question": "Given Figure 1's timeline illustration, suppose a new clip v_3' is inserted into the video stream V at timestamp t_3, while the text stream T remains unchanged. What type of multi-granularity noisy correspondence (MNC) does this introduce?"}}} {"uuid": "5ccbc57c-2bf3-54ef-bbee-da61e0696153", "question": "A paper introduces a framework called GRaded Generative Retrieval to address challenges of normal generative retrieval. 
To simulate the low-resource retrieval setting, the author randomly samples different fixed limited numbers of queries from the training set. How does GR methods' performance compare with BM25 under the zero-resource setting?", "answer_format": "Your answer should be chosen between \"better\" and \"worse\". ", "tags": ["image", "objective", "single"], "anchor_pdf": ["80ae6aca-864d-590b-816c-778d710f0347"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "worse", "lowercase": false, "ignore_blank": false}}} {"uuid": "5d0f6bf1-1ddd-5d32-958e-07eee0636eb9", "question": "Among the papers at ACL 2023 that introduced new dialogue datasets, what is the scale of the LiveChat dataset presented in the paper \"LiveChat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming\"?", "answer_format": "Your answer should be a Python dictionary with keys 'total_dialogues', 'personas', and 'sessions_per_persona', containing the numerical values for each statistic.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["0434d694-495a-5da7-ae35-686fcbd9cca5"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"total_dialogues": 1330000, "personas": 351, "sessions_per_persona": 3800}}}} {"uuid": "5d21381e-e16b-5771-866d-0dd99f014aa0", "question": "In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). 
How many pages are there in this paper's appendix?", "answer_format": "Your answer should be an integer representing the number of pages in the appendix of the paper.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["f20defa8-4fdb-5582-adc7-ebefe03370ff"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 10}}} {"uuid": "5dad1a7d-2995-56fe-bb82-56392a86fc90", "question": "Which paper introduced the DevBench dataset? This dataset is a multimodal developmental benchmark consisting of seven tasks across lexical, syntactic, and semantic domains, and it incorporates behavioral data from both children and adults.", "answer_format": "Your answer should be a string.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e01a80eb-95aa-5f37-86bc-6dbc8b8e880d"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper introduced the DevBench dataset? This dataset is a multimodal developmental benchmark consisting of seven tasks across lexical, syntactic, and semantic domains, and it incorporates behavioral data from both children and adults.", "reference_answer": "DevBench: A multimodal developmental benchmark for language learning"}}} {"uuid": "5db51b6d-f10c-5769-8193-f3757dbb0580", "question": "A paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for speculative decoding. In the comparison of the wall-clock time speedup of Sequoia trees of various sizes, at which size does the tree with L40 GPUs maximize the speedup value?", "answer_format": "Your answer should be an int. 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["941e5cfe-a707-5754-91dc-2859ece86c6e"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 64}}} {"uuid": "5e1bd23b-46c9-5209-9b58-b633ff39e392", "question": "In the paper at ICLR 2024 that employs submodular mechanism for model interpretability, what performance improvements does the proposed method achieve over HSIC-Attribution on the CUB-200-2011 dataset for incorrectly predicted samples?", "answer_format": "Your answer should be a Python dictionary with two keys: 'confidence_gain' and 'insertion_score_gain', with values as float percentages between 0 and 100 rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["048dba90-b146-5981-9694-f2f518a42ba9"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"confidence_gain": 81.0, "insertion_score_gain": 18.4}, "ndigits": 1}}} {"uuid": "5e1cb0a7-4a3c-5e70-8455-dd39567a8e45", "question": "How does MC-SMoE identify the dominant experts in each expert group?", "answer_format": "Your answer should be the context that describes the method used by the paper.", "tags": ["comprehensive", "text", "subjective"], "anchor_pdf": [], "reference_pdf": ["8815c4cf-60ae-58f0-b420-7a0d5054e10d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "MC-SMoE treats the most commonly active experts as dominant experts, where the expert utilization is calculated by inputting and routing a randomly picked subset of training data.", "question": "How does MC-SMoE identify the dominant experts in each expert group?"}}} {"uuid": "5e6bfd36-657b-5b68-9ca0-e3996a37ee7b", "question": "There is a theoretical paper that establishes the tight parallel complexity of boosting algorithms within the weak-to-strong 
learning framework. It closes a longstanding gap between the upper and lower bounds on the trade-off between the number of parallel rounds and the total work per round. Could you please tell me which university the authors of this paper are affiliated with?", "answer_format": "Your answer should be a name of a university.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9589034e-65a6-54cb-9784-96bb6edd6d1a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Aarhus University", "The Aarhus University", "Aarhus"], "lowercase": true}}} {"uuid": "5f5dc605-fa71-5678-a21d-05907900081f", "question": "What novel metric does the paper \"Improving Environment Novelty Quantification for Effective Unsupervised Environment Design\" introduce to enhance Unsupervised Environment Design (UED)?", "answer_format": "Your answer should be a Python string containing the full name of the metric introduced in the paper.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["ec2b3222-b2a2-5f65-b3b1-b57822242fcf"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The paper introduces the Coverage-based Evaluation of Novelty In Environment (CENIE) metric.", "question": "What novel metric does the paper \"Improving Environment Novelty Quantification for Effective Unsupervised Environment Design\" introduce to enhance Unsupervised Environment Design (UED)?"}}} {"uuid": "5f87ab57-6a62-55d3-8b7e-6a4edad48d76", "question": "Which Tiny Paper published in ICLR 2024 builds an efficient pipeline for automated evaluation of priming attacks against open-source LLMs?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0c0239c8-9344-5775-b6c9-8bba53df2370"], "conference": ["iclr2024"], "evaluator": 
{"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which Tiny Paper published in ICLR 2024 builds an efficient pipeline for automated evaluation of priming attacks against open-source LLMs?", "reference_answer": "Bypassing the Safety Training of Open-Source LLMs with Priming Attacks"}}} {"uuid": "61e2ac03-e333-514e-8ad2-bd972463571b", "question": "In the paper that developes a model of an agent that navigates using noisy egocentric visual and self-motion signals. In the figure 3F, which model consistently achieves a smaller DKL between its place field orientation distribution and the animal data?", "answer_format": "Your answer should be a Python strings indicating the name of the model.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9b381001-59c1-5e67-aa6d-68f4f6dfb8b3"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Pred-RNN", "lowercase": true}}} {"uuid": "62dbd4bc-7e04-5d57-8952-f028224dfd82", "question": "In ICLR 2024 Poster papers, a paper attempts to propose a meta-reinforcement learning algorithm that is improved in multiple aspects, especially in terms of sample efficiency, generalization ability, and handling of high-dimensional task distributions, by combining the latest model-based RL techniques and meta-RL techniques. 
Tell me the number of authors.", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["d6114dbd-6006-5bf6-96d3-0083d49449c1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "6317e865-1855-596f-9554-0fe6b9484efa", "question": "What innovative technologies are included in the benchmark specifically designed for practical TTA settings used in the paper \"Persistent Test-time Adaptation in Recurring Testing Scenarios\"?", "answer_format": "Your answer should be a python strings.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["8b49bfd6-968c-5944-8ecc-f818325aeb8a"], "reference_pdf": ["8321cc92-c56e-550f-931d-a150a6d8b21a"], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"question": "What innovative technologies are included in the benchmark specifically designed for practical TTA settings used in the paper \"Persistent Test-time Adaptation in Recurring Testing Scenarios\"?", "scoring_points": ["They present a robust batch normalization scheme to estimate the normalization statistics.", "A memory bank is utilized to sample category-balanced data with consideration of timeliness and uncertainty", "They develop a time-aware reweighting strategy with a teacher-student model."]}}} {"uuid": "63b17ccf-dacf-53ca-be1e-66ef2a23c354", "question": "What are the three key architectural improvements of the state-space models (SSMs) introduced in R2I's world model over traditional RNNs? 
Among these improvements, how does the \"Parallel Scan\" mode of SSMs address the efficiency of training long sequences?", "answer_format": "Your answer should be a list of two strings (sentences), each answering one of the two questions.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["d0d9a5d2-3cfa-5fea-8439-0f3da2975dda"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"reference_answer": "1. Structured state matrix initialization (DPLR matrix of the HiPPO framework) enables the model to capture very long-range dependencies.\n2. Dual computational paradigm of convolutional and parallel scanning modes.\n3. Linear time-invariant system properties allow recursive representation after discretization.", "question": "What are the three key architectural improvements of the state-space models (SSMs) introduced in R2I's world model over traditional RNNs?"}, {"reference_answer": " It converts the sequence computation into parallel prefix sum operations by Blelloch algorithm, which reduces the time complexity from O(L) to O(log L), maintaining the same O(1) time complexity as traditional RNNs, avoiding the storage of the complete computational graph by hiding the state compression (x_t's pass), as shown in Table 1, keeping the total elapsed time O(1) in the imagining phase (H-step), utilizing the parallel computing units of modern GPUs, and reducing the graphic memory footprint by 75% compared to Transformer's attention mechanism", "question": "Among these improvements, how does the \"Parallel Scan\" mode of SSMs address the efficiency of training long sequences?"}]}}} {"uuid": "6444d8d4-f8ea-54f3-b514-aa8db3c3b530", "question": "A paper uses the variational framework to study models' ability to plan and shows that planning corresponds exactly to a different set of weights. 
In the experiment, they use 6 different domains from IPPC2011, each with 10 instances (factored MDPs) of increasing difficulty. For the game of life domain instance index, which model performs the worst?", "answer_format": "Your answer should be a string, the name of a model. ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8f1cf7b1-21c0-5450-9956-0f35c7e56288"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MFVI-Bwd", "lowercase": true, "ignore_blank": false}}} {"uuid": "64ac92a0-9c09-513c-8e47-553c63189198", "question": "In this paper, what is the number of authors?", "answer_format": "Your answer should be a Python integer.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["64800c0a-97de-5b40-a579-f7ee1842f27b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "64af3261-c658-5908-9970-e42dae242237", "question": "In NeurIPS 2024 Poster papers, a paper proposes a new problem in offline MBRL called \"The Edge-of-Reach Problem\". 
In this paper, how many papers are cited?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["76b9bb90-09cb-5721-83ba-737ec7b66e36"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 37}}} {"uuid": "65aeb135-fa7a-5593-a369-ce0f68e0b849", "question": "In ICLR 2024 Poster papers, which paper proposes AlignDiff, a novel framework that leverages RLHF to quantify human preferences, covering abstractness, and utilizes them to guide diffusion planning for zero-shot behavior customizing, covering mutability?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["d411e5d8-67c3-5eff-9e4d-101e913322dc"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes AlignDiff, a novel framework that leverages RLHF to quantify human preferences, covering abstractness, and utilizes them to guide diffusion planning for zero-shot behavior customizing, covering mutability?", "reference_answer": "AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model"}}} {"uuid": "674871a6-8283-5b95-8b97-6882363f0a61", "question": "In the paper introducing Sequence Parallelism, a system-level approach that facilitates Transformer training on arbitrarily long sequences by distributing input sequences across multiple GPUs, the authors present Ring Self-Attention (RSA) as a mechanism for computing attention across devices without requiring the full sequence to be stored on any single GPU. 
My question is: How many times longer sequence length does Sequence Parallelism achieve compared to the state-of-the-art method of tensor parallelism, under the given experimental conditions?", "answer_format": "Your answer should be a Python int.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7ba20d6b-ef3a-5ce7-ba36-086be9cb8157"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "6a035d82-c4fe-5b8f-b216-a9ecff97f77a", "question": "In ICLR 2024 Poster papers, a paper attempts to address the hidden confounding problem in Offline Reinforcement Learning (RL). How many authors are there in this paper?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["e0ca5365-e5c2-5c2d-9037-0f4156e56f58"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 2}}} {"uuid": "6ae83c2d-6541-584f-b600-526be5540b31", "question": "How to represent the convolution operation of an image in the paper that first provides competitive performance on LRA with Transformers and diagonal linear RNNs?", "answer_format": "Your answer should be a string, the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["160cd6c1-9cc7-5972-bb84-6500c0fd14ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "(I * K)(x, y) = \\sum_{i=-\\infty}^{\\infty} \\sum_{j=-\\infty}^{\\infty} I(x-i, y-j) \\cdot K(i, j) ", "question": "How to represent the convolution operation of an image with a formula in the paper Never Train from Scratch: FAIR COMPARISON OF LONG-SEQUENCE MODELS REQUIRES DATA-DRIVEN PRIORS?"}}} {"uuid": "6b987048-a00f-5715-a227-57a5fd56be94", "question": "In ICLR 2024 Poster papers, a paper proposes a reward smoothing 
method called \"DreamSmooth\". What is the formula of \"DreamSmooth\"?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["8107760d-d599-5a5e-b4d9-0e9a3fcf84c8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper proposes a reward smoothing method called \"DreamSmooth\".", "formulas": "\\tilde{r}_t \\leftarrow f(r_{t - L:t + L}) = \\sum_{i = -L}^{L} f_i \\cdot r_{\\text{clip}(t + i, 0, T)} \\quad s.t. \\quad \\sum_{i = -L}^{L} f_i = 1"}}} {"uuid": "6bcfdec0-206c-5e1a-8279-94a855673e3b", "question": "There is a paper that introduces NDCR, the first end-to-end framework for image retrieval from linguistically complex text by integrating analogical and logical reasoning. The research institutions involved in this study are all from the same country. Please provide the name of this country.", "answer_format": "Your answer should be a name of a country.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9921d802-f6b8-586e-984e-9476648859ed"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "China"}}} {"uuid": "6be7a678-0b17-5686-81b5-c81a03db7baf", "question": "Why does MINDER consume relatively more memory than other models?", "answer_format": "Your answer should be a phrase which concludes the reason.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["132abab8-b9d7-53e6-8084-542de835b47b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": " Since MINDER considers multiview identifiers, it also consumes more memory to store these identifiers.", "question": "Why does MINDER consume relatively more memory than other models?"}}} {"uuid": "6c49cb88-0fc5-562d-bcf3-46efa71b4666", 
"question": "There is a recent paper conducting a comprehensive empirical evaluation of eight model selection methods for unsupervised domain adaptation (UDA), revealing their vulnerability to worst-case selections across diverse scenarios. The authors utilized a pool of 28 models and examined the performance of different selection methods as additional models were incrementally included. Which algorithm among SND, Ensemble, and EnsV demonstrates the best performance when 15 models are added?", "answer_format": "Your answer should be one of the following: ['SND', 'Ensemble', 'EnsV'].", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4199a7a3-7069-53a6-8c27-92c7c3ab34e0"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Ensemble"}}} {"uuid": "6d135c7c-312f-588d-b338-dbc7ccdf9c1e", "question": "There is a paper that introduces the first large-scale multimodal dataset specifically tailored for autonomous trucking, addressing challenges unique to heavy-duty vehicles such as dynamic trailer occlusion, sensor placement, and terminal environments. It features 747 diverse 20-second scenes annotated with high-quality 3D bounding boxes across 27 object classes. 
I would like to know which subcategory within the area category has the highest proportion in the distribution of scene tags for all 747 dataset scenes.", "answer_format": "Your answer must be one of the following: ['city', 'highway', 'terminal', 'rural']", "tags": ["image", "objective", "single"], "anchor_pdf": ["4a84822b-ab37-578e-85a9-e72fa7f8c963"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "highway"}}} {"uuid": "6d76ebba-7326-58d7-80c1-654e4cb9f71d", "question": "Among the spotlight papers at ICLR 2024 focusing on neural architecture search, how many pretrained models and hyperparameter configurations did Quick-Tune evaluate to generate its large-scale meta-dataset?", "answer_format": "Your answer should be a sentence.", "tags": ["comprehensive", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["d32f34d4-83ad-5521-8727-edaed0997024"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It evaluated 24 pretrained image classification models across over 20,000 hyperparameter configurations on 87 datasets.", "question": "How many pretrained models and hyperparameter configurations did Quick-Tune evaluate to generate its large-scale meta-dataset?"}}} {"uuid": "6d7a8ff8-89e1-598c-9d2a-07ecac85df3e", "question": "Is there a paper published in ICLR 2024 which studies the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["21aa2a4b-0f87-53e5-84b8-4b242985a2d1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is 
there a paper published in ICLR 2024 which studies the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering?", "reference_answer": "Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model"}}} {"uuid": "6dd90293-c42c-5736-9e44-9e5bb3d194f4", "question": "According to the paper at ICLR 2024 that proposed SPT, on which supervised task did incorporating data-driven pretraining improve the best reported results of state space models by 20 absolute points?", "answer_format": "Your answer should be a Python string specifying the name of the supervised task that saw a 20-point improvement.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["160cd6c1-9cc7-5972-bb84-6500c0fd14ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PathX-256"}}} {"uuid": "6e5c93f9-7a5b-575e-bbba-5bcb489b0da1", "question": "In the ray resolution ablation, how does the Rotation Accuracy augment during the increase of number of camera rays from 2*2 to 16*16?", "answer_format": "Your answer should be a float, rounded to 1 decimal place.", "tags": ["single", "table", "objective"], "anchor_pdf": ["00520dfc-7f38-5488-a783-87476379d67c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 31.5, "ndigits": 1, "tolerance": 1e-06}}} {"uuid": "6f1de746-8671-501d-a48d-bb16ebffc84d", "question": "Which paper proposes the new task open-world video instance segmentation and captioning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9b52d857-102c-52dd-b441-77b1981775dd"], "conference": ["neurips2024"], "evaluator": {"eval_func": 
"eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper proposes the new task open-world video instance segmentation and captioning?", "reference_answer": "OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning"}}} {"uuid": "6fd27acf-04d7-50fe-95c8-c8221d2ad6ca", "question": "In the paper that demonstrates that the dispersion of self-attention scores underlies Transformers' working memory limits in N-back tasks, mirroring human attention-based memory constraints, how many independent models does the author use for each N in the N-back tasks? How many epochs does the author train each model?", "answer_format": "Your answer should be a Python list of two integers, the first is the number of independent models and the second is the number of epochs.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9bdbb8a7-d426-59f5-8814-948f9f2c99e4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [50, 10]}}} {"uuid": "70403087-25d0-581e-b2ec-7fc20c1efc12", "question": "What is the maximum accuracy improvement achieved by DapperFL over other federated learning frameworks?", "answer_format": "Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100, rounded to 2 decimal places.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["4e7e5944-f9f2-562f-a3e2-ddb683a963bb"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 2.28, "ndigits": 2}}} {"uuid": "7090ca92-bff7-5c7b-987e-b51fa6db7db8", "question": "Is there a paper published in ICLR 2024 which provides the first instantiation of the white-box design paradigm that can be applied to large-scale unsupervised representation learning?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": 
["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d5d2c8ab-e765-58e5-b243-74fad2fef13f"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper published in ICLR 2024 which provides the first instantiation of the white-box design paradigm that can be applied to large-scale unsupervised representation learning?", "reference_answer": "Masked Completion via Structured Diffusion with White-Box Transformers"}}} {"uuid": "723220c7-5ffd-52a2-bce5-56153427135c", "question": "Which paper published in ICLR 2024 proposes the first LMaaS-compatible approach for leveraging LLMs to enhance representation learning on text-attributed graphs?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d5e3cb74-f25d-5404-b03f-a25fb47baab2"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper published in ICLR 2024 proposes the first LMaaS-compatible approach for leveraging LLMs to enhance representation learning on text-attributed graphs?", "reference_answer": "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning"}}} {"uuid": "7299b622-f460-5902-acca-ff09c4b13ad0", "question": "A paper shows that language models' learning novel factual knowledge effectively from finetuning on limited textual demonstrations is due to their bias to learn word co-occurrence statistics instead of true factual associations. In the study of the effect of layer-wise ablation on the models' performance on simple QA and multiple choice tasks, which part of the layers shows high responsibility for the models' performance?", "answer_format": "Your answer should be a string indicating the part of the layers, e.g. \"upper 1/6 layers\". 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4eda31e7-591d-5c19-a9c2-47b2d055ddd8"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "lower 1/3 layers", "lowercase": true, "ignore_blank": false}}} {"uuid": "7349de43-d9af-54c9-8642-90b17c2e8879", "question": "How well does the DoRA, pretrained on a single Walking Tours video, perform compared to DINO, pretrained on ImageNet-1k in terms of semantic segmentation?", "answer_format": "Your answer should be a sentence specifying the performance difference between DoRA and DINO in terms of semantic segmentation metrics, including the percentage improvement in mIoU.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["02006f6e-011a-58e6-b83e-c9bedd3cdf17"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "DoRA pretrained on a single Walking Tours video surpasses DINO pretrained on ImageNet-1k by 1.5% in mean Intersection over Union (mIoU) for semantic segmentation tasks.", "question": "How does the DoRA method's performance, when pretrained on a single Walking Tours video, compare to DINO pretrained on ImageNet-1k in terms of semantic segmentation?"}}} {"uuid": "75ac3579-007f-52d8-937c-3dc9ca3a6bfc", "question": "Is there a paper that introduces an image dataset consisting of over 20,000 images specifically designed for the task of emotion recognition? 
The dataset should feature complex scenes depicting multiple individuals in various naturalistic social settings.", "answer_format": "Your answer should be the name of the paper.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["2a770dd0-26cb-54d4-a145-0654abf98a4d"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper that introduces an image dataset consisting of over 20,000 images specifically designed for the task of emotion recognition? The dataset should feature complex scenes depicting multiple individuals in various naturalistic social settings.", "reference_answer": "FindingEmo: An Image Dataset for Emotion Recognition in the Wild"}}} {"uuid": "75e818de-8a3c-5ce4-b618-a2c0cb57853f", "question": "In the paper that proposed Soft MoE, among the models trained for hundreds or thousands of TPU days, the performance on ImageNet of B/16 model with proposed method is close to that of another model with ViT. 
What's the difference in train TPU days between those two models?", "answer_format": "Your answer should be a float, rounded to 1 decimal place.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["36f7c548-f8c2-5fc9-ba12-a35ac045bc25"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 8.5, "ndigits": 1}}} {"uuid": "7743611d-f271-53e7-ac1e-4d5a5c4de91a", "question": "In ICLR 2024 Poster papers, which paper attempts to mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["e0ca5365-e5c2-5c2d-9037-0f4156e56f58"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper attempts to mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models?", "reference_answer": "DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations"}}} {"uuid": "78545fcb-227f-5413-af1e-f2bda755e2fc", "question": "A paper proposes ConvGQR to reduce the re-training cost. In its experiment, after normalizing the generated answer by the length of its corresponding relevant passage, how does the PCC value of co-occurrence perform?", "answer_format": "Your answer should be a phrase which summarizes the overall tendency of the mentioned data.
", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["03588f6c-205d-5f47-be8f-3b5ec99ffdd5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It has a relatively lower value.", "question": "After normalizing the generated answer by the length of its corresponding relevant passage, how does the PCC value of co-occurrence perform?"}}} {"uuid": "785e0003-44c5-5fe6-aa93-ed9ead2e8677", "question": "In the paper proposing Self Pretraining (SPT), experiments demonstrate that self-pretraining on downstream task data using standard denoising objectives significantly enhances the performance of long-sequence models. This finding indicates that vanilla Transformers can achieve or exceed the performance of state-of-the-art models, such as S4, on the Long Range Arena without requiring architectural modifications. Please answer the following question: Which method, Masked SPT or Causal SPT, demonstrates higher average performance on the Long Range Arena in the paper?", "answer_format": "Your answer should be one of [Masked SPT, Causal SPT]", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["160cd6c1-9cc7-5972-bb84-6500c0fd14ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Causal SPT"}}} {"uuid": "796c006c-7a40-5d3f-9d46-31d4ace0697f", "question": "What specific graph structures can r-loopy Weisfeiler-Leman (r-ℓWL) test count that the classical 1-WL cannot?", "answer_format": "Your answer should be a Python string specifying the type of graph structures that r-ℓWL can count but 1-WL cannot.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["038339d2-c1e8-5bd3-b123-ed8df1f248a4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The r-ℓWL test can count
homomorphisms of cactus graphs, which the classical 1-WL cannot.", "question": "What specific graph structures can r-loopy Weisfeiler-Leman (r-ℓWL) test count that the classical 1-WL cannot?"}}} {"uuid": "79c50d06-b156-51e7-9c0e-999cb921df93", "question": "According to this paper, what is the rate at which the asymptotic covariance of the optimization iterate errors decreases with respect to the self-repellence parameter $\\alpha$?", "answer_format": "Your answer should be a Python string containing the rate of decrease in big O notation, including the exact mathematical expression.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["61ea17af-45fb-51c2-a1d4-4064c341ee79"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["O\\left(\\frac{1}{\\alpha^2}\\right)"], "question": "According to this paper, what is the rate at which the asymptotic covariance of the optimization iterate errors decreases with respect to the self-repellence parameter $\\alpha$?"}}} {"uuid": "7a086cc1-ac1e-55c6-9317-2b2b46051c0b", "question": "In the paper that proposes the RSTC model, for the heavily imbalanced dataset GoogleNews-T, at which value of the hyper-parameter $\\epsilon_2$ does the accuracy reach its peak?", "answer_format": "Your answer should be a float rounded to 3 decimal places.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["0922f95b-8050-50a9-8e90-17192b5b7bcd"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.001, "ndigits": 3, "tolerance": 1e-06}}} {"uuid": "7a2c0c3e-0da5-540f-8737-41085c0afe45", "question": "In ICLR 2024 Spotlight papers, a paper proposes a method called \"SimNorm\" for normalizing the latent representation.
What is the core advantage of \"SimNorm\"?", "answer_format": "Your answer should be plain text.", "tags": ["comprehensive", "text", "subjective"], "anchor_pdf": [], "reference_pdf": ["ae95c926-4f72-58ea-84c3-a99b99108471"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper proposes a method called \"SimNorm\" for normalizing the latent representation. What is the core advantage of \"SimNorm\"?", "reference_answer": "The core advantage of \"SimNorm\" is that it can enforce the latent representation to be sparse, making it easier for the model to learn."}}} {"uuid": "7ae5178b-1fd8-5dd9-a756-3d35216557b8", "question": "Which model is used as the language component in PaLI? Besides the sizes used in PaLI, how many sizes of this model are available in the source paper?", "answer_format": "Your answer should be a single python list like this: [\"model_name\", integer]. Note that the model name should use the abbreviation and not include the size information.
For the integer, you should give the number of sizes not used in the PaLI paper, not the total number of sizes.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["affc5128-20b0-5b1a-8547-664eb89db11d"], "reference_pdf": ["1784e68d-a499-59ab-a942-14c7a55861db"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["mT5", 3], "ignore_order": false}}} {"uuid": "7b686de5-958c-5d2e-bab4-2308b04cae13", "question": "In \"Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations\", which paper inspired the authors to calculate the expert-level token contribution?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["cf8d3731-22ac-569e-8280-3d6bbbd78a28"], "reference_pdf": ["017b741f-d588-5124-9971-af37b2f806ae"], "conference": [], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In \"Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations\", which paper inspired the authors to calculate the expert-level token contribution?", "reference_answer": "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect"}}} {"uuid": "7b8419f8-dd8d-58ae-a6b5-dc0751797910", "question": "Among the datasets and benchmarks at NeurIPS 2024 evaluating large language models' capabilities, what is the name of the benchmark introduced to evaluate AI models' understanding of humorous contradictions in comics?", "answer_format": "Your answer should be a Python string containing the name of the benchmark.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["7786376a-6fb2-5bde-a1be-2b8ab3bf4162"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "YesBut"}}} {"uuid": "7b92e525-dc4a-514f-be25-448e5973b488", "question":
"A paper proposes Bayesian red teaming (BRT) to reduce the model's potentially harmful responses. In the experiment, does the author maintain all the offensive test cases during the generation?", "answer_format": "Your answer should be \"Yes\" or \"No\".", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["097b01cd-3fe8-5dc6-acf9-62513d376004"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "7c2602a5-e639-5eb2-9950-0636ba43d719", "question": "In the paper that establishes one of the first low-degree polynomial lower bounds for tree broadcasting below the Kesten-Stigum threshold in a non-product-measure setting, what is the critical condition of the proof of formula (7)?", "answer_format": "Your answer should be a Python string indicating the critical condition of the formula.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9c0c513f-561b-5104-9463-af4a393d1183"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "It is to show that the inequality holds when X_{\\leq w_1} and X_{\\leq w_2} are conditionally independent given X_w by the Markov Property, especially when some entries of M can be 0.", "question": "In the paper that establishes one of the first low-degree polynomial lower bounds for tree broadcasting below the Kesten-Stigum threshold in a non-product-measure setting, what is the critical condition of the proof of formula (7)?"}}} {"uuid": "7d07a700-2ec1-550d-9f48-e2b86e019057", "question": "Among the long papers at ACL 2023 focusing on emotion recognition in conversations, which modeling strategies (e.g., graph-based, representation learning, or fusion networks) are most commonly employed for improving fine-grained emotion recognition, and which approaches show the most notable performance
gains on benchmarks like IEMOCAP or MELD?", "answer_format": "Your answer should be a Python string describing the most common modeling strategies and which specific approaches (with their names) showed the best performance gains on IEMOCAP or MELD benchmarks.", "tags": ["comprehensive", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["32d1e04a-e87c-5179-8b13-4ad86585c55f", "01718802-a088-5384-85e6-8cafac4944b4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Graph-based models and cross-modal fusion networks were the most effective strategies for fine-grained emotion recognition in conversations at ACL 2023, with DualGATs and CMCF-SRNet showing the strongest performance gains on IEMOCAP and MELD.", "question": "What modeling strategies (e.g., graph-based, representation learning, or fusion networks) are most commonly employed in ACL 2023 for improving fine-grained emotion recognition in conversations, and which approaches show the most notable performance gains on benchmarks like IEMOCAP or MELD?"}}} {"uuid": "7e3e9f76-0834-5dea-972d-fe08b308f2d4", "question": "According to the DP-OPT paper, while varying privacy parameters, which base model trades off the least?", "answer_format": "Your answer should be a string, the name of the base model along with its size, as originally presented in the paper.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["47dbcf34-e5e8-5eae-b12b-d7b7c0ad630a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Vicuna-33b", "lowercase": true, "ignore_blank": true}}} {"uuid": "7f5573c8-5564-5b3a-898c-b63007390006", "question": "A recent paper introduces a universal framework for dataset characterization that integrates 23 types of model-driven meta-information, encompassing static measures, training dynamics, model uncertainty, and pre-trained knowledge,
into a unified multidimensional feature space. The paper tests 10 selected samples using different selection methods on the QNLI dataset, employing their log determinant as a proxy measure for set informativeness. Which selection method corresponds to the lowest log determinant?", "answer_format": "Your answer should be one of: ['Ambig', 'Hard', 'CoreSet', 'InfoVerse (DPP)']", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["99213a51-11ac-540f-a7af-740fe69eb506"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Hard"}}} {"uuid": "7f7c3dd4-9ea4-5bed-84f1-06ecf15e7775", "question": "In the paper that proposed the R2I method, which dataset covered in the experiment is the newest? In that dataset, how many environments are there for each tag?", "answer_format": "Your answer should be a Python list of 2 elements, the first is a string, the name of the dataset, the second is a Python dict, the key is the tag and the value is the number of environments.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["d0d9a5d2-3cfa-5fea-8439-0f3da2975dda", "224ea14c-5e61-5dbf-a25d-81742a2d976a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "POPGym", "ignore_blank": true, "lowercase": true}, {"gold": {"Diagnostic": 5, "Control": 4, "Game": 5, "Noisy": 5, "Navigation": 2}, "lowercase": true}]}}} {"uuid": "7fbc249e-9708-5514-9faf-b0cc82b0014b", "question": "Among the spotlight papers at NeurIPS 2024 researching contrastive learning, what is the key mechanism identified in the paper \"Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering\" that enhances the performance of Graph Contrastive Learning (GCL) methods?", "answer_format": "Your answer should be a 
Python string describing the key mechanism and how it enhances performance of Graph Contrastive Learning methods.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["6a12c49c-e5c0-5950-bdb5-ffff18a8458b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The key mechanism identified is \"representation scattering,\" which involves distributing node representations away from a central point to enhance diversity and uniformity in the embedding space.", "question": "What is the key mechanism identified in the paper \"Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering\" that enhances the performance of Graph Contrastive Learning (GCL) methods?"}}} {"uuid": "7fcdeddf-e29d-5e81-9a58-16ba70576c61", "question": "Which paper published in ICLR 2024 proposes LLM-grounded Video Diffusion (LVD), the first training-free pipeline that leverages LLM-generated dynamic scene layouts for enhanced ability to generate videos from intricate text prompts?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["20f09a5a-28ee-534a-8c6d-011a554bdedd"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper published in ICLR 2024 proposes LLM-grounded Video Diffusion (LVD), the first training-free pipeline that leverages LLM-generated dynamic scene layouts for enhanced ability to generate videos from intricate text prompts?", "reference_answer": "LLM-grounded Video Diffusion Models"}}} {"uuid": "808b6754-65e3-55a5-93ee-fb88d7449af7", "question": "What is the main motivation of eliminating the homogeneous distractors in image or video?", "answer_format": "Your answer should be plain text.", "tags": ["single", "text", "subjective"], "anchor_pdf": 
["b190104f-f73e-561b-8e6f-44de6785dcfa"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the main motivation of eliminating the homogeneous distractors in image or video?", "reference_answer": "The main motivation is that, compared with the heterogeneous distractors, the homogeneous distractors are more difficult to be eliminated, and implies a high degree of relevance to the task."}}} {"uuid": "80b37387-c0f7-5b90-9ba8-ba92bcb5ef9b", "question": " Look at Figure 1 in the paper. What two main components does the diagram show as inputs to the Language Model block? Based on the figure, what is the output of the model, and how is it evaluated?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["1c87084a-f8ae-5a28-a8bb-016316818e0c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": [" The diagram shows the Issue text and the Codebase as inputs.", "The output is a Generated PR (pull request), which is evaluated by applying the patch to the codebase and running tests."], "question": " Look at Figure 1 in the paper. What two main components does the diagram show as inputs to the Language Model block? Based on the figure, what is the output of the model, and how is it evaluated?"}}} {"uuid": "82692ec9-8ee0-59c0-950d-460dc8fcc820", "question": "A recent paper introduces the first standardized benchmark for video-language continual learning, designed to evaluate models on three novel query-incremental tasks: Moment Query (MQ), Natural Language Query (NLQ), and Visual Query (VQ). 
Could you please specify which egocentric video-language dataset the proposed data collection is derived from?", "answer_format": "Your answer should be a name of a dataset.", "tags": ["text", "objective", "single"], "anchor_pdf": ["400f0139-e370-5839-944e-c14b36713a4a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Ego4D"}}} {"uuid": "830f1c03-4a94-5c36-863e-cbf523ec9785", "question": "In the experiment section of \"InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews\", where does the character data come from, and how many characters are there in each character source dataset?", "answer_format": "Your answer should be a Python dictionary of one or more key-value pairs, where each key is the name of the source character dataset and each value is an integer indicating the number of characters in that dataset. e.g. {\"dataset1\": 3, \"dataset2\": 5}", "tags": ["multiple", "text", "objective"], "anchor_pdf": ["72490761-8e4d-5169-b883-eefaa83510a1"], "reference_pdf": ["3cd4fa3e-62e8-5d79-83ad-2098de11984d", "5844c6f9-3de6-551b-bc02-ba6bc65c02ef"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"ChatHaruhi": 32, "RoleLLM": 100}}}} {"uuid": "83534573-34f6-5b77-8c88-c1267d7fdb3d", "question": "In the paper where a novel approach named SLAN is proposed, what is the role of \\delta_{i}^{k} and P_k in formula (3)?", "answer_format": "Your answer should be a Python string indicating the role of \\delta_{i}^{k} and P_k.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9bca2ccd-ae0a-5b63-a418-063aa97f64ab"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "\\delta_{i}^{k} is an indicator variable, where \\delta_{i}^{k} = 1 if lk is not associated with xi; otherwise
\\delta_{i}^{k} = 0. P_k is a $(q - 1) \\times q$ projection matrix, which removes the k-th row of the identity matrix", "question": "In the paper where a novel approach named SLAN is proposed, what is the role of \\delta_{i}^{k} and P_k in formula (3)?"}}} {"uuid": "86148388-e8c0-524c-bdbf-b2d156a88151", "question": "How many vision encoders did Cambrian-1 evaluate to study different visual representation choices?", "answer_format": "Your answer should be an integer indicating the number of vision encoders evaluated.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["862bfbb2-c868-53d4-8400-3adc66090d0f"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 20}}} {"uuid": "861cf9ab-7091-55dc-aef7-9daa027ddf84", "question": "In ICLR 2024 Spotlight papers, a paper proposes a new Adversarial Imitation Learning (AIL) algorithm, aiming to address the sample efficiency and scalability issues of existing AIL methods when dealing with off-policy data. 
How many different tasks are considered in Figure 2?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["8bb22633-e414-53b5-9f29-5e3e64baf176"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "86d7d0cb-e4e8-5172-8b8f-af7b783f5236", "question": "According to the spotlight paper at NeurIPS 2024 that applies user-level differentially private algorithms in federated learning, what range of privacy loss $\\varepsilon$ did the one-shot empirical estimation method report in the scenario where only the final trained model is released?", "answer_format": "Your answer should be a Python list of two float values: [lower_bound, upper_bound], representing the range of privacy loss $\\varepsilon$ values, both rounded to 4 decimal places.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["c3bf105b-f2b9-5308-8b56-563b240d5b83"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [0.0496, 0.2317], "ndigits": 4, "ignore_order": false}}} {"uuid": "8730f946-0a1d-536a-a852-b8633be3458f", "question": "What is the main difference between formulas (5) and (6) in the paper \"Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale\"?", "answer_format": "Your answer should be a Python string.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["738001bd-3789-5fdc-9022-219403a1aae2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the main difference between formulas (5) and (6) in the paper proposing Voicebox?", "reference_answer": "Formula (5) is a function computing the loss on all frames, including those that are not masked and would not be required during inference.
Formula (6) is a masked version of L_{audio-CFM}, which leads to better results.", "question": "What is the main difference between formulas (5) and (6) in the paper proposing Voicebox?"}}} {"uuid": "8857443d-9d39-562c-a956-aafe363ebc15", "question": "There is a paper that introduces a curriculum learning framework that leverages prior knowledge about sample difficulty, measured through annotation entropy and loss, to discover effective, often non-monotonic curricula tailored to NLP models and datasets. Which university's researchers proposed this work?", "answer_format": "Your answer should be a name of a university.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["996afc36-70e9-5f75-8e9f-6f0e5587c451"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["University of Massachusetts Lowell", "Massachusetts Lowell"]}}} {"uuid": "891967d0-ebf1-59c1-8b92-ee454038df8b", "question": "How much more accurate is InfoBatch compared to the static pruning method EL2N-2 on CIFAR-100 with 50% prune ratio?", "answer_format": "Your answer should be a number, indicating the percentage difference in accuracy, rounded to 1 decimal place.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["f15160cc-4dbb-53ad-b5d6-f2ac6b23bc68"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 7.1, "ndigits": 1}}} {"uuid": "8a1191ff-943e-5138-bf7f-13a2e7a3e492", "question": "For the second-best method shown in Figure 2, where can I find their GitHub repository to reproduce the results?", "answer_format": "Please provide the GitHub repository URL for this dataset in the format: 'https://github.com/xxx'.", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["cd5f6a7f-7d79-5294-ab87-6f8faf56daa0"], "reference_pdf": ["3ef4f8bf-6e26-545b-b51c-e6a7969818c7"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/DigiRL-agent/digirl", "lowercase":
true}}} {"uuid": "8a3024f4-668e-5422-afbb-1000295ae11e", "question": "According to the first comprehensive study for LLM attribution at ACL 2023, what is the accuracy of the best attribution method in tracing fine-tuned models back to their pre-trained base models?", "answer_format": "Your answer should be a Python string in the format 'X out of Y models', where X is the number of correctly attributed models and Y is the total number of models tested.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["01f1cd2e-f9fe-5fea-96c6-8f27a61a0def"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "8 out of 10 models", "ignore_blank": true}}} {"uuid": "8a66a0bc-0a88-5981-a094-8e2c26b8da3a", "question": "In ICLR 2024 Poster papers, a paper first constructs a dynamics model from the expert demonstration, enforcing local Lipschitz continuity while skipping the discontinuous regions. What is the number of the pages of this paper?", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["5e978885-be39-543f-b1d9-dc71ad71083a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 19}}} {"uuid": "8a875858-2688-5948-a4f9-3ec0cd39f550", "question": "In the paper that reveals that retrieved information helps retrieval-augmented language models' (RALMs) performance when it is relevant, for Llama-2-13B few-shot prompted on five QA tasks, how many kinds of benchmarks benefit from a strong retrieval?", "answer_format": "Your answer should be an int.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["024c5234-b409-5d3d-b959-349bb1952a87"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "8a9f72db-685e-5f91-9446-f0072cec953d", "question": "In ICLR 2024 Poster papers, 
which paper introduces a generalized attack framework that has the flexibility to model to what extent the adversary is able to control the agent, and allows the attacker to regulate the state distribution shift and produce stealthier adversarial policies?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1317c78c-0fc0-5a8d-a320-18a5e63717c7"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper introduces a generalized attack framework that has the flexibility to model to what extent the adversary is able to control the agent, and allows the attacker to regulate the state distribution shift and produce stealthier adversarial policies?", "reference_answer": "Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL"}}} {"uuid": "8ba79cd4-f36c-5812-a05d-fef0816ca504", "question": "In the paper that proposed CutSSL, how much does the CutSSL method outperform the state-of-the-art on Cifar-10 dataset when only a single sample from each class is provided and when in the large label-rate regime respectively?", "answer_format": "Your answer should be a python list of two strings representing the outperformance in the form of percentage, rounded to 1 decimal place, e.g. \"10.0%\". 
Recall that order matters.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9aafaf1f-13dd-5c69-9e33-45d51281d07f"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["2.9%", "1.9%"], "ignore_order": false}}} {"uuid": "8bd87a19-3c5f-5e92-9af1-30d76c34e4c4", "question": "A paper introduces Induced Model Matching (IMM) to train a full-featured (often larger) model with the help of a very accurate (often small) predictive model using restricted features. In the comparison of the MDP trained without and with IMM incorporating POMDP, which model shows higher stability during training, with IMM or no IMM? ", "answer_format": "Your answer should be chosen between \"with IMM\" and \"no IMM\". ", "tags": ["image", "objective", "single"], "anchor_pdf": ["8e28f974-beda-54a5-b06d-50b4d31a4019"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "with IMM", "lowercase": true, "ignore_blank": false}}} {"uuid": "8c147d46-63fc-5c50-beae-64bf4cc920ec", "question": "A paper introduces FlexLoRA, a simple yet effective aggregation scheme for Large Language Models' fine-tuning. In their experiment of FlexLoRA's performance in a controlled environment using homogeneous LoRA ranks, does FlexLoRA's aggregation negatively impact model performance?", "answer_format": "Your answer should be \"Yes\" or \"No\".
", "tags": ["image", "objective", "single"], "anchor_pdf": ["045fcd5a-a908-5797-9b67-91f44e629468"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "8cc9f4de-9e7a-55a9-9cb6-b588251ee91d", "question": "Which method aims to recover camera poses and scene geometry from a large set of unordered or ordered images, in particular, by optimizing photometric errors?", "answer_format": "Your answer should be a string, the name of the method", "tags": ["single", "subjective", "text"], "anchor_pdf": ["00520dfc-7f38-5488-a783-87476379d67c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "direct SLAM", "question": "What aims to recover camera poses and scene geometry from a large set of unordered or ordered images, in particular, by optimizing photometric errors?"}}} {"uuid": "8d291d6e-f6ad-5d4c-984f-8be126ce33f5", "question": "There is a recent paper introducing a large-scale, high-resolution video-text dataset annotated with detailed, script-like captions averaging 145 words per clip, over 10 times longer than existing datasets. It uniquely captures not only scene content but also camera operations (e.g., shot types and movements). 
Could you please tell me which category within this dataset has a video count that exceeds 10% of the total?", "answer_format": "Your answer should be a video category.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["49971f41-01ca-5bac-abd9-10d5727663c1"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Travel", "lowercase": true}}} {"uuid": "8d4fdb6b-6638-5e61-9bf7-a62195198c24", "question": "A paper extends the notion of a risk controlling prediction set (RCPS) to the sequential setting. In their evaluation of their methods on the Imagenet dataset, among four methods, one method got the highest average rate of safety violations. What is it? ", "answer_format": "Your answer should be a string, the name of a method which corresponds to the label in the figure. ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["03b08165-0718-5056-9752-1d7a50aa41ea"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "all", "lowercase": true, "ignore_blank": false}}} {"uuid": "8ebbfa63-51d8-5193-9004-40215508e42a", "question": "Figure 1 illustrates the cosine value density plots for different noise levels and different numbers of canary repetitions.
How do these density plots illustrate the increased privacy-preserving effect of the canary as the noise increases?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["c3bf105b-f2b9-5308-8b56-563b240d5b83"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "As the noise increases, the observed distribution of canary cosine values (blue curve) gradually approaches the distribution of canaries that did not participate in training (black curve, i.e., N(0,1/d)), suggesting that the noise masks the effect of canaries, making it more difficult for an attacker to differentiate which canaries participated in the training, thus enhancing the privacy protection effect.", "question": "Figure 1 illustrates the cosine value density plots for different noise levels and different numbers of canary repetitions. How do these density plots illustrate the increased privacy-preserving effect of the canary as the noise increases?"}}} {"uuid": "8eee2499-a837-5f23-8c91-c820ec6e4d55", "question": "What is the affiliation of the first author of the paper?", "answer_format": "Your answer should be a Python string.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["b190104f-f73e-561b-8e6f-44de6785dcfa"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "School of Artificial Intelligence, Nanjing University, China", "ignore_blank": true, "lowercase": false}}} {"uuid": "8f1a9a7d-041b-5104-8806-b67dde1453e8", "question": "What is the challenge of estimating camera poses now?", "answer_format": "Your answer should be a string, describing the condition", "tags": ["single", "subjective", "text"], "anchor_pdf": ["00520dfc-7f38-5488-a783-87476379d67c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": 
"Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views (<10)", "question": "What is the challenge of estimating camera poses now?"}}} {"uuid": "8f64d96a-1439-57ec-a78e-53b96361cd4e", "question": "A recent paper introduces GEEP (GEnder Equality Prompt), a novel debiasing method that mitigates gender bias in large pre-trained language models such as RoBERTa without degrading their performance on downstream tasks. Could you please tell me who the first author of this paper is?", "answer_format": "Your answer should be a name of a person.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["1d059a85-e0f3-5e35-8358-cfd5633e6da6"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Zahra Fatemi"}}} {"uuid": "8ff213fb-137b-5c5b-9b05-86c797c01403", "question": "In the paper that proposes a practical rephrasing-based method to estimate uncertainty in closed-source LLMs, combining simple memorizable rules with a theoretical framework for calibrated confidence scores, which model, dataset, and rephrasing method are used in Figure 3c to validate the logistic distribution assumption?", "answer_format": "Your answer should be a Python list of three strings, the first is the model name, the second is the dataset name and the third is the rephrasing method.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9ba99d47-9121-5261-b2e9-42edd263bb13"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Mistral-7B", "ARC-Challenge", "expansion"]}}} {"uuid": "9179b2b5-9f88-5999-a900-410b2a7a2e96", "question": "In Figure 2, what three filtering stages are depicted for constructing SWE-bench tasks? 
What specific criteria must a PR meet to pass the Execution Filter stage?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["1c87084a-f8ae-5a28-a8bb-016316818e0c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": [" The stages are: (1) Scrape PRs, (2) Attribute Filter, (3) Execution Filter.", "The PR must install successfully and pass all tests (fail-to-pass tests)."], "question": "In Figure 2, what three filtering stages are depicted for constructing SWE-bench tasks? What specific criteria must a PR meet to pass the Execution Filter stage?"}}} {"uuid": "91c809b3-1e3a-5f12-bf7b-47d90898f6ea", "question": "A recent paper presents the first comprehensive benchmark for disentangling aleatoric and epistemic uncertainty in deep learning. It evaluates 19 uncertainty quantification methods across 13 tasks on ImageNet and CIFAR-10, revealing that existing decomposition formulas fail to produce truly disentangled estimators. Could you please provide the email address of the first author of this paper?", "answer_format": "Your answer should be an email address.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["c89de633-738e-562b-bf3e-44597d161eb2"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "b.h.mucsanyi@gmail.com"}}} {"uuid": "93fac128-b7d3-53c9-b013-e11d9a9a0693", "question": "A paper proposes SummAttacker, a method to generate adversarial samples based on language models efficiently. The performance of different models on the Gigaword test set varies when attacked by SummAttacker with different candidate numbers K. 
Generally, does a larger K have a positive or negative impact on the model?", "answer_format": "Your answer should be chosen between \"positive\" and \"negative\".", "tags": ["image", "objective", "single"], "anchor_pdf": ["0e494325-2e09-5932-836a-3c5a5ba3a422"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "negative", "lowercase": true, "ignore_blank": false}}} {"uuid": "94d1244b-9b39-5c21-9c06-157be897e605", "question": "I remember there is a paper that develops a language, perhaps MathDL? How does it measure whether the target model generates more concise solutions or not?", "answer_format": "Your answer should be a math formula in LaTeX format WITHOUT ANY EXPLANATION.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ab19db0-db50-5ccb-9e99-a3797c8ba665"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["C(s_A, s_B \\mid s_A, s_B \\text{ both solve } e) = \\frac{f(s_B) - f(s_A)}{f(s_B)}"], "question": "I remember there is a paper that develops a language, perhaps MathDL? 
How does it measure whether the target model generates more concise solutions or not?"}}} {"uuid": "95a672e8-3cda-5a6b-87a9-e95d0d518338", "question": "Among the papers at NeurIPS 2024 researching multi-agent systems, what is the maximum accuracy improvement achieved by MDAgents when incorporating moderator review and external medical knowledge in group collaboration?", "answer_format": "Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["50de8a07-fdcd-5113-b777-5594dd741ac4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 11.8, "ndigits": 1}}} {"uuid": "9611cd50-05a4-58e8-a1cf-25664758d1f8", "question": "Which paper at ICLR 2024 proposed a framework for efficient fine-tuning of bidirectional interleaved visual languages for reference image segmentation, which achieved an average score of 66.5 on three RefCOCO-related benchmarks? 
In this paper, how many images are there in the group of images related to kid running?", "answer_format": "Your answer should be a list of two strings, with the first being the paper title and the second being a number.", "tags": ["comprehensive", "table", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["2536e140-369b-5c98-9a18-de689ea7f5f8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_paper_relevance_with_reference_answer", "eval_int_exact_match"], "eval_kwargs_list": [{"question": "Which paper at ICLR 2024 proposed a framework for efficient fine-tuning of bidirectional interleaved visual languages for reference image segmentation, which achieved an average score of 66.5 on three RefCOCO-related benchmarks?", "reference_answer": "BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation"}, {"gold": 7}]}}} {"uuid": "96255ea0-3b77-59e9-8f68-baadb43fffd1", "question": "In ICLR 2024 Poster papers, which paper proposes GRAD, a game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["990c4d18-4b8d-51cb-b353-114e79dac616"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes GRAD, a game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game?", "reference_answer": "Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations"}}} {"uuid": "9642c1b7-0cca-5bd4-8fd8-6784c5433b9c", "question": "In ICLR 2024 Poster papers, a paper mainly studies how to more effectively utilize data augmentation techniques to improve 
sample efficiency and generalization ability in image-based deep reinforcement learning (DRL). Tell me the number of authors in this paper.", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["08a2377d-5d4c-560c-9ea4-87947d853f12"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "96620845-ee4c-5dc2-83bb-50ff72522bcb", "question": "Can you recommend me a paper published in ICLR 2024 that proposes a framework that synergistically integrates compiled neural networks (CoNNs) into the standard transformer architecture?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0aea6707-bde2-5605-87c1-dc70dc742065"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that proposes a framework that synergistically integrates compiled neural networks (CoNNs) into the standard transformer architecture?", "reference_answer": "Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks"}}} {"uuid": "979a11a6-63e0-5774-9714-3e6b3750c7e2", "question": "Among the long papers at ACL 2023 researching text classification, how many benchmark datasets were used in the experiments to validate the effectiveness of GetMTL?", "answer_format": "Your answer should be a Python integer representing the number of benchmark datasets used in the experiments.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["30511a6f-d7ca-554d-ba53-cef9ab327563"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 2}}} {"uuid": "97d91a12-270b-5030-9c04-af9ddf90a0cc", "question": "What 
percentage of experts can be pruned from the NLLB-200 model without further finetuning and with negligible loss in translation quality?", "answer_format": "Your answer should be a Python float value representing the percentage of experts that can be pruned, between 0 and 100, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["24e046c7-1028-567a-9495-222aeddb4d90"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 80.0, "ndigits": 1}}} {"uuid": "988cad30-22af-567f-b03d-57e929c59e30", "question": "A recent paper introduces the first document-level relation extraction (RE) dataset in the historical domain, which includes bilingual annotations in both Korean and Hanja. Constructed from the Yeonhaengnok travel records of the Joseon dynasty, the proposed dataset provides annotated entities, relations, and supporting evidence across variable-length textual units. How many documents does this dataset contain?", "answer_format": "Your answer should be a Python int", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["97f8657f-6296-58ab-bf96-4438b9acbb6b"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5816}}} {"uuid": "998e69c0-f3a3-532d-8175-ce6e562f4b2a", "question": "In the paper proposing a unified framework to model semantic segmentation and semantic image synthesis as a pair of reverse problems, why can the ODE model model these two problems simultaneously?", "answer_format": "Your answer should be a python strings.", "tags": ["formula", "subjective", "single"], "anchor_pdf": ["03bf5827-d906-506a-b17e-0c964a13e615"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper proposing a unified framework to model semantic segmentation and semantic image synthesis as a pair of 
reverse problems, why can the ODE model model these two problems simultaneously?", "reference_answer": "ODE framework aims to learn the mapping between two distributions through straight trajectories. We aim to learn the velocity field using neural networks v_\\theta (z_t, t) and solve it with optimization methods: L = \\int_0^1 \\mathbb{E}_{(z_0, z_1) \\sim \\gamma} \\left\\| v_\\theta(z_t, t) - \\frac{\\partial \\varphi_t(z_0, z_1)}{\\partial t} \\right\\|^2 dt = \\int_0^1 \\mathbb{E}_{(z_0, z_1) \\sim \\gamma} \\left\\| v_\\theta(z_t, t) - (z_1 - z_0) \\right\\|^2 dt. And the formula as a time-symmetric form, which results in an equivalent problem by exchanging z0 and z1 and flipping the sign of v_\\theta. Interestingly, the transportation problem from \\pi_1 to \\pi_0 indicates the semantic image synthesis task. This means that semantic segmentation and semantic image synthesis essentially become a pair of mutually reverse problems that share the same ODE and have solutions with opposite signs."}}} {"uuid": "9a875755-338d-5c6d-a86a-b8e5be3f7742", "question": "What performance improvements does LoftQ achieve over QLoRA in 2-bit quantization settings on the LLaMA-2-13B model for the GSM8K dataset?", "answer_format": "Your answer should be a sentence, stating the accuracy achieved by LoftQ and the comparison with QLoRA.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["5703e1a9-cb14-5ee3-b19b-5488588f4a36"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "LoftQ achieves a 25.4% accuracy on GSM8K with 2-bit quantization, outperforming QLoRA, which fails to converge under the same settings.", "question": "What performance improvements does LoftQ achieve over QLoRA in 2-bit quantization settings on the LLaMA-2-13B model for the GSM8K dataset?"}}} {"uuid": "9ab6d86c-f98a-5f42-84e6-69dc0fedf49b", "question": "In the paper that reveals that LLMs develop a 
two-phase abstraction process during training, and gives initial evidence that their brain-like encoding ability stems from compositional learning rather than next-word prediction, why is the choice of k, which controls the neighborhood size during the nonlinear ID estimation, necessary?", "answer_format": "Your answer should be a Python string indicating the reason why we should have a scale analysis on k.", "tags": ["comprehensive", "text", "subjective"], "anchor_pdf": [], "reference_pdf": ["9b27bada-ff27-5ff7-9821-426c283a4793"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "if k is too small, the I_d likely describes local noise, and if k is too large, the curvature of the manifold will produce a faulty estimate.", "question": "In the paper that reveals that LLMs develop a two-phase abstraction process during training, and gives initial evidence that their brain-like encoding ability stems from compositional learning rather than next-word prediction, why is the choice of k, which controls the neighborhood size during the nonlinear ID estimation, necessary?"}}} {"uuid": "9ab746c3-4d32-5999-b209-783689738b35", "question": "What is the main purpose of Figure 1 and how does it demonstrate the role specialization in MetaGPT?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["460c65d7-a298-5bd3-baa2-dd8683885308"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["It illustrates the Standardized Operating Procedures (SOPs) in software development, comparing the workflow between MetaGPT's multi-agent framework and real-world human teams. 
It highlights how tasks are decomposed and assigned to specific roles (e.g., Product Manager, Architect, Engineer) in both settings.", "The figure visually maps each role (e.g., Product Manager creating PRDs, Architect designing system components) to sequential workflow stages, showing how specialized outputs (like PRDs or interface designs) are handed off between roles, mirroring human team collaboration."], "question": "What is the main purpose of Figure 1 and how does it demonstrate the role specialization in MetaGPT?"}}} {"uuid": "9b114135-e411-5d35-94bb-815be3d51f41", "question": "A paper introduces ROBUSTALPACAEVAL, a new benchmark, to address the sensitivity of large language models (LLMs) to the phrasing of prompts. In their examination of the model-agnostic attributes of the worst prompts, does the Llama family or the Gemma family get the higher overlap rate of the worst-k prompts? ", "answer_format": "Your answer should be chosen between \"Llama\" and \"Gemma\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["0439a3cf-0730-5414-8418-06e525b53520"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Gemma", "lowercase": false, "ignore_blank": false}}} {"uuid": "9b5168ec-96c6-54ca-afba-0b40bbbb8edc", "question": "In the paper of offline Q-learning, which state-of-the-art return-conditioned supervised method was mentioned? 
Which conference was this method published in?", "answer_format": "Your answer should be a single python list like [\"paper_title\", \"conference_name_and_year\"], the paper title should be the full title of the state-of-the-art return-conditioned supervised method, and conference_name_and_year should be the abbreviation of the conference and the year like \"ACL2021\". Note that arXiv should not be included as the conference name.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["24668638-d507-50ef-826f-db4f3b2742ba"], "reference_pdf": ["0686abdd-ea4a-5026-a509-62cfa0f5855a"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Multi-Game Decision Transformers", "NeurIPS2022"], "ignore_order": false, "lowercase": true}}} {"uuid": "9be4e9dc-b3b4-5082-8a85-244be8d32283", "question": "The paper titled \"Automatic Camera Pose Estimation by Key-Point Matching of Reference Objects\" was conducted by researchers from which country?", "answer_format": "Your answer should be a name of a country", "tags": ["single", "text", "objective"], "anchor_pdf": ["8a4f8d9c-4e9c-56bc-8b61-026fc5b34445"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_fuzzy_match", "eval_kwargs": {"gold": "Netherlands", "lowercase": true, "ignore_blank": true, "fuzz_method": "partial_ratio", "threshold": 0.9}}} {"uuid": "9be80cee-f524-5c63-9098-b9b4cd4a4921", "question": "In the paper that introduces POLICY-LEARN, a new approach that learns how to select subgraphs in an iterative manner, what is formula 3 used for?", "answer_format": "Your answer should be a brief summary of the formula's purpose and it should begin with 'To ...'.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3ecd30f3-d0ee-55c8-90f4-dd70a65b8081"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "To reduce the channel 
dimension to one and to obtain the following un-normalized node probabilities from which we sample the next root node.", "question": "In the paper that introduces POLICY-LEARN, a new approach that learns how to select subgraphs in an iterative manner, what is formula 3 used for?"}}} {"uuid": "9cc1e97e-16e4-5326-a985-76a6a49380f2", "question": "A paper introduces a novel training algorithm, Learn-To-be-Efficient (LTE), to achieve a better trade-off between sparsity and performance. In the 5-shot MMLU accuracy comparison, which model performs the worst across all sparsity levels?", "answer_format": "Your answer should be chosen among \"Deja Vu\", \"R-Llama\" and \"LTE\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["912ebb9e-98a3-5c6f-bb4f-35c2ccc9b821"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "R-Llama", "lowercase": true, "ignore_blank": false}}} {"uuid": "9cdde36c-2819-58f4-8077-129932dc6d20", "question": "Which is one of the common errors that appear across the models? ", "answer_format": "Your answer should be a string which mentions a type of error. ", "tags": ["single", "subjective", "text"], "anchor_pdf": ["9f089768-52e6-5848-b0d8-b412de3e8b6f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Valid but incorrect translations", "In-context but irrelevant words"], "question": "Which is one of the common errors that appear across the models? 
", "count": 1}}} {"uuid": "9ceda169-3d70-5354-8ce1-70d7d09debd1", "question": "How does Code4Struct perform in zero-resource event types when utilizing 10-shot training data from a sibling event type?", "answer_format": "Your answer should be an integer between 0 and 100 specifying the absolute F1 improvement over the zero-shot baseline.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["1ed3fb7f-cab3-5a35-99dd-cab863dd8a42"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 12}}} {"uuid": "9d1f4f14-3fc0-5649-a566-373eb9690d42", "question": "In the paper that proposes a graph rewiring framework that establishes express connections between distant nodes for non-local message passing, overcoming the over-smoothing problem without requiring deep architectures, list the name of nine graph datasets where the experiments are carried out.", "answer_format": "Your answer should be a Python list of strings indicating the name of nine datasets.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9b9fc8ba-2ea9-50e6-ad97-34b9e92208fd"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Cora", "PubMed", "Citeseer", "Texas", "Wisconsin", "Chameleon", "Squirrel", "Roman-empire", "Amazon-ratings"], "ignore_order": true, "lowercase": true}}} {"uuid": "9db91baa-9382-5fe8-9ea0-7c504f98bd97", "question": "What overall accuracy does GPT-4V achieve on the MathVista benchmark?", "answer_format": "Your answer should be a Python float value representing the percentage accuracy, between 0 and 100, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["40911da4-3a2d-516b-9e83-25600a989feb"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 49.9, "ndigits": 1}}} {"uuid": 
"9e2ccce0-6410-5597-afd3-ab72ff7c6310", "question": "According to this paper, among open-weight LLMs, which model has the best memory sub-score?", "answer_format": "Your answer should be a string, the name of a model", "tags": ["single", "image", "objective"], "anchor_pdf": ["d02779bc-da43-56bb-a0e8-75009d945d6a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Llama3-70b", "lowercase": false, "ignore_blank": false}}} {"uuid": "9e393d26-7acd-5e99-a673-d29693b11a0f", "question": "A recent paper introduces the first large-scale, open-source, high-fidelity 3D CFD dataset based on 355 geometrical variations of the Windsor car body. Each case is simulated using GPU-native Wall-Modeled Large-Eddy Simulations (WMLES) with over 280 million cells, capturing detailed aerodynamic flow features relevant to real road vehicles. Under what license is this dataset available as open source?", "answer_format": "Your answer should be a license name and must adhere precisely to the format presented in the paper without version information.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["421dbf04-8505-5cb2-9bf9-e43d69924571"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "CC-BY-SA"}}} {"uuid": "9ecb7607-8057-5e6c-8019-22d0b252d0cf", "question": "What is the main contribution of the proposed method called Averaged-DQN in this paper?", "answer_format": "Your answer should be plain text", "tags": ["single", "text", "subjective"], "anchor_pdf": ["b672cc91-cd20-5cd5-bd77-4622bfd8709a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the main contribution of the proposed method called Averaged-DQN in this paper?", "reference_answer": "Averaged-DQN can alleviate the overestimation problem of DQN by ensembling the historical 
Q-values."}}} {"uuid": "9f66741c-f110-56ea-8b20-ee1d75f7af0e", "question": "What is the training objective (loss function) for the reverse diffusion process in the paper?", "answer_format": "Your answer should be a formula", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["37a7ab0b-f94d-52e6-9957-9a20f15d7355"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "L(\\theta) := E_{t\\sim U(0,T),x0\\sim q0(x0),\\epsilon\\sim N(0,I)} \\left[ \\|\\epsilon_\\theta (\\alpha_t x0 + \\sigma_t \\epsilon, t) - \\epsilon\\|^2_2 \\right].", "question": "What is the training objective (loss function) for the reverse diffusion process in the paper?"}}} {"uuid": "9fc4f699-2ea4-546f-908f-a0aabe58cada", "question": "What is the mathematical expression for the first loss function proposed to measure approximate Nash equilibria?", "answer_format": "Your answer should be a formula.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["e2d53d42-e870-5827-8378-41e381f67d31"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "L(x) = \\sum_{k} \\eta_{k}||\\Pi_{T\\Delta}(\\nabla^{k}_{x_{k}})||^{2}", "question": "What is the mathematical expression for the first loss function proposed to measure approximate Nash equilibria?"}}} {"uuid": "9ff9a8a5-f7a8-55cb-8640-2aa59b030d9e", "question": "A paper proposes a tractable surrogate model of choice (CRCS), which shows a better basis for preference learning; can this model work on choice sets of variable size?", "answer_format": "Your answer should be \"Yes\" or \"No\". 
", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["047fd851-1f09-58fd-9c0a-c4bfb9b6bd71"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "a07a5196-cd2b-53e9-885c-3f59ea6b1ac2", "question": "A paper proposes a two-stage Differentially Private (DP) generation method; in the second step, it generates utterances based on the parses. Does the distribution function p_priv(x) describe private user utterances with a high accuracy compared with selecting them by active learning?", "answer_format": "Your answer should be \"Yes\" or \"No\". ", "tags": ["objective", "text", "single"], "anchor_pdf": ["05d2139e-f095-5458-a2fb-889dc2cd9410"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "a16255a2-cf19-5d82-8acc-244fe50b6581", "question": "In the paper that proposes SGRLD, the authors compare their SGRLD method to other MCMC methods. Which optimizer does the second method extend to the SCLD setting? 
And in the paper that proposes this optimizer, which two methods eventually converge considerably faster in convolutional neural network training cost?", "answer_format": "Your answer should be a python list of two elements, the first is a single word, the name of the optimizer, and the second is a python list of two words, the name of the two methods as given in the paper.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9ababcf4-0258-5640-b573-07b222888051", "ae10df12-cb06-58ac-a746-6f941ee929e3"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_structured_object_exact_match"], "eval_kwargs_list": [{"gold": "Adam", "lowercase": true, "ignore_blank": true}, {"gold": ["Adam", "SGD"], "ignore_order": true, "lowercase": true, "ignore_blank": true}]}}} {"uuid": "a16a7b27-48c1-5544-8482-e741487c4129", "question": "What is the simplification to S4 called Diagonal Linear RNN according to the paper that improves the best reported results of SSMs on the PathX-256 task by 20 absolute points?", "answer_format": "Your answer should be a list of formulas", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["160cd6c1-9cc7-5972-bb84-6500c0fd14ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": ["\\vec{x}_n &= \\Lambda \\vec{x}_{n-1} + I u_n &\\Lambda &\\in \\operatorname{diag}(\\mathbb{C}^{N\\times N}) ", "\\ y_n &= C \\vec{x}_n &C &\\in \\mathbb{C}^{1\\times N}"], "question": "What is the simplification to S4 called Diagonal Linear RNN (DLR) proposed by Gupta et al.?"}}} {"uuid": "a1c19569-3326-5313-a7ee-87e1bb3afe2d", "question": "In the paper that proposes MHCDiff, list three models in Figure 3 which have the best performance.", "answer_format": "Your answer should be a python list of three strings, each string being the name of a 
model.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9bbfb8fd-5dd2-5d81-ad85-1d15b8478a5a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["ProPose", "ICON", "MHCDiff"], "ignore_order": true, "ignore_blank": true, "lowercase": true}}} {"uuid": "a1cc17f0-fd10-5273-abc3-f291526bf741", "question": "In the test of Qwen2-72B-Instruct, Qwen2.5-Turbo, Qwen2-0.5B-Instruct, Qwen2-57B-A14B-Instruct based on the context length of a given document and the ability of document depth retrieval, what is the difference in the retrieved Context Length?", "answer_format": "Your answer should be a python list of four strings, explaining the retrieval content length of the four models respectively, e.g. \"model name: range from 0 to roughly 20k tokens\".", "tags": ["multiple", "subjective", "image"], "anchor_pdf": ["c5a533f3-bffe-5e8f-9630-6aa650fce333", "970c51eb-6f19-5ec1-9ab8-3eea43ca1edb"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Qwen2-72B-Instruct: range from 0 to roughly 128k tokens", "Qwen2.5-Turbo: range from 0 to 1000k tokens", "Qwen2-0.5B-Instruct: range from 0 to 32k tokens", "Qwen2-57B-A14B-Instruct: range from 0 to 64k tokens"], "question": "In the test of Qwen2-72B-Instruct, Qwen2.5-Turbo, Qwen2-0.5B-Instruct, Qwen2-57B-A14B-Instruct based on the context length of a given document and the ability of document depth retrieval, what is the difference in the retrieved Context Length?"}}} {"uuid": "a1fc3902-bbcd-5151-ac3b-cfd649bed022", "question": "In ICLR 2024 Poster papers, a paper attempts to address the challenges faced when learning from pixel-level inputs in multi-object manipulation tasks. 
Tell me the affiliation of the first author of this paper.", "answer_format": "Your answer should be a Python string.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["b5f540ed-e0b9-559a-bc9d-67376d4d1228"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Department of Electrical and Computer Engineering, Technion- Israel Institute of Technology", "lowercase": true, "ignore_blank": true}}} {"uuid": "a3aaf5a0-c018-5c1e-9c35-ef4245b4acb4", "question": "A recent paper introduces a scientist-curated benchmark for evaluating language models on real-world scientific coding problems across 16 natural science subfields. Comprising 80 main problems decomposed into 338 subproblems, each annotated and validated by domain experts, the benchmark assesses models' abilities in knowledge recall, reasoning, and code synthesis. Please retrieve the paper and provide me with the link for data, code, and the leaderboard corresponding to this work.", "answer_format": "Your answer should be a link only without any additional prefixes or suffixes.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["45d5cf95-287d-5e1b-ae40-3ba00258525a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://scicode-bench.github.io/"}}} {"uuid": "a433afb8-eaf9-5c66-aee6-ea02f50a000e", "question": "What is the sample complexity bound for achieving an $\\varepsilon$-optimal estimator in the non-parametric distributional TD learning (NTD) method?", "answer_format": "Your answer should be a Python string containing the mathematical expression for the sample complexity bound in LaTeX-like format.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["69261add-f3a3-59f1-b867-18a473bb0b57"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", 
"eval_kwargs": {"formulas": "$\\widetilde{O}\\left(\\frac{1}{\\varepsilon^{2p}(1-\\gamma)^{2p+1}}\\right)$", "question": "What is the sample complexity bound for achieving an $\\varepsilon$-optimal estimator in the proposed non-parametric distributional TD learning (NTD) method?"}}} {"uuid": "a4ea769e-c0fa-5b81-8f3e-1c1359672055", "question": "There is a paper that introduces a novel constrained decoding algorithm called Prefix-Suffix Guided Decoding (PSGD) for the Translation Suggestion (TS) task in interactive machine translation. Unlike prior methods that require retraining or generate the full sequence, PSGD decodes only the selected incorrect span while maximizing the probability of the entire sentence, conditioned on given prefix and suffix constraints. Question: What is the average BLEU score of PSGD in the experiments conducted on the WMT22-TS test sets?", "answer_format": "Your answer should be a Python float rounded to 2 decimal places.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["98fc41e1-ab58-505c-8c6a-5e3955203438"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 27.46, "ndigits": 2}}} {"uuid": "a51711dd-e4e8-52e2-bb9d-d78794ec5930", "question": "In ICLR 2024 Poster papers, which paper addresses the curse of dimensionality by learning the inherent structure of action-wise similar MDP to appropriately balance the performance degradation versus sample/computational complexity?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9c2a62d0-a5c1-5b72-98ee-45930546f5ef"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper addresses the curse of dimensionality by learning the inherent structure of action-wise similar MDP to 
appropriately balance the performance degradation versus sample/computational complexity?", "reference_answer": "Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping"}}} {"uuid": "a63ff94c-8cc1-5a5b-a47c-42bac09f4700", "question": "What're the 4 main findings in the spotlight paper at NeurIPS 2024 that investigate the relationship between self-recognition and self-preference for LLMs?", "answer_format": "Your answer should be a Python string describing the 4 main findings of the paper.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["2f1c8d90-3428-52b0-b7ec-da132f9178e6"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Frontier LLMs exhibit self-preference in self-evaluation.", "LLMs have non-trivial self-recognition capability out of the box.", "Fine-tuning leads to near-perfect self-recognition.", "Self-preference strength is linearly correlated with self-recognition."], "question": "What're the 4 main findings in the spotlight paper at NeurIPS 2024 that investigate the relationship between self-recognition and self-preference for LLMs?"}}} {"uuid": "a641a2d3-f953-5481-8362-5b814ab33e72", "question": "Recent advancements use knowledge transfer techniques like Score Distillation Sampling (SDS) to overcome the limited availability of comprehensive annotated training data. A paper studies this method deeply. 
In the figure which shows the comparison of using normal-SDS jointly with RGB-SDS, which two methods generate a panda with armor on its body, give one of them.", "answer_format": "Your answer should be a string, which gives the name of the method.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["0218a040-b64d-559a-8177-1627ca35687e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_partial_scoring_points_with_llm", "eval_kwargs": {"scoring_points": [" Fantasia3D+SDXL+PGC", "Ours", "Fantasia3D+SDXL+PGC w/o normal-SDS", "Ours w/o nrm"], "question": "In the figure which shows the comparison of using normal-SDS jointly with RGB-SDS, which two methods generate a panda with armor on its body, give one of them.", "count": 1}}} {"uuid": "a6a526c0-a063-52d7-8afd-a83a4baafcc8", "question": "In ICLR 2024 Poster papers, a paper attempts to solve the problem of how to enhance the arithmetic reasoning capabilities of large language models (LLMs) through zero-shot prompt optimization. 
Tell me the codebase url of the paper.", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["973d41d0-6812-5887-9d0b-364404bbafe6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/vanderschaarlab/Prompt-OIRL", "lowercase": false, "ignore_blank": false}}} {"uuid": "a72a94c2-00be-5f74-8c57-2dc88eaeaea9", "question": "In the paper that introduces a novel method that employs dynamic and directed weight adjustment techniques to modulate the synthesis process during dataset distillation, by what percentage does the DWA method outperform the SRe2L method in accuracy compared to the baseline on the Tiny-ImageNet, ImageNet-1K and CIFAR100 datasets respectively?", "answer_format": "Your answer should be a Python list of three float numbers, all rounded to 1 decimal place.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9bdeab43-dbb1-52ef-9555-872503e3498a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [11.7, 8.4, 12.4], "ndigits": 1}}} {"uuid": "a77ed2c5-64a9-5771-b2b4-8cc644472720", "question": "In ICLR 2024 Poster papers, a paper attempts to propose a meta-reinforcement learning algorithm that is improved in multiple aspects, especially in terms of sample efficiency, generalization ability, and handling of high-dimensional task distributions, by combining the latest model-based RL techniques and meta-RL techniques. 
What is the formula of \"General Regret Bounds\"?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["d6114dbd-6006-5bf6-96d3-0083d49449c1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper attempts to propose a meta-reinforcement learning algorithm that is improved in multiple aspects, especially in terms of sample efficiency, generalization ability, and handling of high-dimensional task distributions, by combining the latest model-based RL techniques and meta-RL techniques. What is the formula of \"General Regret Bounds\"?", "formulas": "\\mathbb{E}_{\\theta\\sim f}\\left[\\mathbb{E}_{\\pi_{BO}, M = g(\\theta)}\\left[\\sum_{t = 1}^{H} r_t\\right] - \\mathbb{E}_{\\pi, M = g(\\theta)}\\left[\\sum_{t = 1}^{H} r_t\\right]\\right]"}}} {"uuid": "a7dcfbd0-c65d-540f-8e5c-46204d6b9d4c", "question": "Among the Model-based Reinforcement Learning papers in ICLR 2024, which one proposes the model called \"Skipper\". Tell me what $\\pi$ means in figure 2.", "answer_format": "Your answer should be a python string about the meaning of the math expreesion in the paper. You\"d better use the names as they are referred to in the paper.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["66895ef6-9249-537e-8645-47e7ea1c3cfa"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "checkpoint policy"}}} {"uuid": "a81c9bfa-0fc6-5522-b0bb-343981682cd4", "question": "In ICLR 2024 Poster papers, a paper proposes a novel framework named PARL (Policy Alignment in Reinforcement Learning), aiming to address the policy alignment problem in Reinforcement Learning (RL). 
What is the formula of the standard finite horizon policy optimization problem in this paper?", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["ae528bab-b6f7-5c19-9361-49aee40af057"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper proposes a novel framework named PARL (Policy Alignment in Reinforcement Learning), aiming to address the policy alignment problem in Reinforcement Learning (RL). What is the formula of the standard finite horizon policy optimization problem in this paper?", "formulas": "\\max_{\\theta} V_s(\\theta) := \\mathbb{E} \\left[ \\sum_{h = 0}^{H - 1} \\gamma^h r(s_h, a_h) \\mid a_h \\sim \\pi_{\\theta}(a_h \\mid s_h), s_0 = s \\right]"}}} {"uuid": "a82d2e00-6e65-59e2-ba90-9315c62caa7c", "question": "Compare to the baseline called \"DreamerV3\", how much improvement does \"Hybrid RSSM\" achieve in \"Lift Cube\" in average?", "answer_format": "Your answer should be a Python float number rounded to 1 decimal place. e.g. 20.3", "tags": ["single", "table", "objective"], "anchor_pdf": ["836773dc-06af-56de-9846-db5e075e6a77"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 117.0, "ndigits": 1}}} {"uuid": "a86de892-c1e5-5271-b36a-ba531a214c64", "question": "Among the papers in ICLR 2024, which paper proposes the conception called \"Policy Rehearsing\"? 
Explain the conception of \"Policy Rehearsing\" in the paper.", "answer_format": "Your answer should be plain text.", "tags": ["text", "subjective", "single"], "anchor_pdf": ["25abff87-1eb1-561d-9a66-403bca1cc03e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "Among the papers in ICLR 2024, which paper proposes the conception called \"Policy Rehearsing\"? Explain the conception of \"Policy Rehearsing\" in the paper.", "reference_answer": "\"Policy Rehearsing\" is a method that trains a generalizable policy for reinforcement learning by replaying past experiences."}}} {"uuid": "a977af97-4915-586c-bdf5-ab7a46479951", "question": "In the paper that proposed MDAgents, for image+text queries, how much higher is the accuracy of the Adaptive setting, compared to the High setting?", "answer_format": "Your answer should be a float rounded to 1 decimal place.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["50de8a07-fdcd-5113-b777-5594dd741ac4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 18.7, "ndigits": 1}}} {"uuid": "a9eea92a-c4a0-54f5-967d-854d7ae8bf32", "question": " What visual elements in Figure 1 distinguish the three fine-tuning scenarios (a, b, c)? 
How does the Initial harmfulness score differ between subfigures (a), (b), and (c)?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["12480043-cd6c-513b-bd75-fd9068439808"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Each scenario is labeled with a subfigure title (a/b/c), uses distinct color-coded harmfulness score bars, and includes example input-output pairs for clarity.", "It varies slightly due to different system prompts used for each dataset."], "question": " What visual elements in Figure 1 distinguish the three fine-tuning scenarios (a, b, c)? How does the Initial harmfulness score differ between subfigures (a), (b), and (c)?"}}} {"uuid": "aa065f51-e562-5ffb-887c-7968237cf9a8", "question": "How does ClimODE simulate weather and climate physics? Give me the formula of its equation.", "answer_format": "Your answer should be a python list containing a sentence and a formula, each answering one of the two questions.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["d3360b29-a0c8-572f-a44a-93f628880908"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_reference_answer_with_llm", "eval_complex_math_formula_with_llm"], "eval_kwargs_list": [{"reference_answer": "The core idea of ClimODE is to simulate weather and climate physics through continuous-time processes (Neural ODEs), specifically implementing the advection principle from statistical mechanics where weather changes result from spatial movement of quantities over time.", "question": "How does ClimODE simulate weather and climate physics? 
Give me the formula of its equation."}, {"formulas": ["\\frac{\\partial u_k}{\\partial t} = -v_k \\cdot \\nabla u_k - u_k \\nabla \\cdot v_k"], "question": "Give me the formula of the equation of ClimODE."}]}}} {"uuid": "aafe62b4-8305-5906-b110-b30c5f3bb0f4", "question": "A paper uses a mean-based decomposition method to extend the context window and achieve length extrapolation of transformer-based large language models (LLMs). In their study of the relationship between positional vectors and length extrapolation ability, besides TL-Window-RoPE, which model maintains stable PPL across longer texts? ", "answer_format": "Your answer should be a string which indicates the name of a model", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["95c55bc6-3598-5811-8e3c-cda5226f8f9b"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "TL-Window-80", "lowercase": true, "ignore_blank": false}}} {"uuid": "ab723b23-7907-5fa0-aea1-e34ee72082a8", "question": "What two key components of MetaGPT are depicted in Figure 2 and how does the right panel emphasize the improvement in code quality?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["460c65d7-a298-5bd3-baa2-dd8683885308"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Figure 2 shows (left) the shared message pool and subscription mechanism for agent communication, and (right) the iterative programming process with executable feedback, where the Engineer debugs code based on test results.", "It demonstrates the feedback loop: the Engineer runs generated code, checks for errors, and iteratively refines it by referencing past messages (e.g., PRDs, designs). 
This ensures runtime correctness and reduces hallucinations, as noted in the caption."], "question": "What two key components of MetaGPT are depicted in Figure 2 and how does the right panel emphasize the improvement in code quality?"}}} {"uuid": "ac2bee50-307a-5f17-9320-d2779770360a", "question": "In ICLR 2024 Oral papers, a paper attempts to solve the problem of how to accelerate the learning process and avoid getting trapped in local optimal solutions in Cooperative Multi-Agent Reinforcement Learning (MARL). Tell me the number of authors of this paper.", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["402ca915-7f12-560e-8f5e-cdf54903a981"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "ac7aaf31-1bf1-54b4-a501-363b12bd2380", "question": "In the paper that proves gradient-based algorithms achieve polynomial smoothed complexity for solving zero-sum games, eliminating exponential dependence on condition numbers through a novel perturbation-stability analysis, what is the main conclusion of formula (3) and definition 1.3 on \\kappa?", "answer_format": "Your answer should be a Python string indicating the main conclusion.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ca8049c-5435-58c9-80b5-c5931253aed6"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The modulus \\kappa is likely to be polynomial in the smoothed complexity model.", "question": "In the paper that proves gradient-based algorithms achieve polynomial smoothed complexity for solving zero-sum games, eliminating exponential dependence on condition numbers through a novel perturbation-stability analysis, what is the main conclusion of formula (3) and definition 1.3 on \\kappa?"}}} {"uuid": "ace8225f-ccd4-5401-a1ca-da1e00bab7a8", 
"question": "In the paper at ACL 2023 that first analyzed instance-level pretraining data to interpret in-context learning (ICL), what is the maximum improvement in ICL ability achieved by continued pretraining on a supportive subset of data?", "answer_format": "Your answer should be an integer between 0 and 100, stating the maximum percentage improvement in ICL ability achieved by continued pretraining.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["22439dae-f3cf-52bd-8fb7-c3aab97ec336"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 18}}} {"uuid": "acf17a7b-4264-520f-9ca1-3d1a2792c1d5", "question": "In the paper that proposes Normalize-and-Project (NaP), we can find a paper in the references in which the C4 dataset is released. In which journal was this paper published?", "answer_format": "Your answer should only be a Python string of the name of the journal.", "tags": ["metadata", "subjective", "single"], "anchor_pdf": ["9bb59a99-ffd5-5978-a619-db8ef67d3c2e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The Journal of Machine Learning Research", "question": "In the paper that proposes Normalize-and-Project (NaP), we can find a paper in the references in which the C4 dataset is released. 
In which journal was this paper published?"}}} {"uuid": "ae551879-1d8c-55e5-ab67-6eea37acde80", "question": "In ICLR 2024 Poster papers, which paper introduces the Hierarchical Diffuser, a simple, fast, yet effective planning method combining the advantages of hierarchical and diffusion-based planning?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["1258829d-6a4d-50a8-a9b0-7e57e446c6dc"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper introduces the Hierarchical Diffuser, a simple, fast, yet effective planning method combining the advantages of hierarchical and diffusion-based planning?", "reference_answer": "Simple Hierarchical Planning with Diffusion"}}} {"uuid": "af016855-321f-5d05-9997-6c81104f8db3", "question": "In ICLR 2024 Spotlight papers, a paper attempts to alleviate the over-optimization problem that occurs when LLMs are optimized by reward models through human feedback. What is the formula of the results in mixed advantages which are a convex combination of the task and constraint advantages?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["65066aca-16ab-53a7-bdd4-6e5d5ed9ce3e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper attempts to alleviate the over-optimization problem that occurs when LLMs are optimized by reward models through human feedback. 
What is the formula of the results in mixed advantages which are a convex combination of the task and constraint advantages?", "formulas": "A_{\\boldsymbol{\\mu}}^{\\pi}(s, a) = \\left(N - \\sum_{i = 1}^{N} \\sigma(\\mu_i)\\right) A_0^{\\pi}(s, a) + \\sum_{i = 1}^{N} \\sigma(\\mu_i) A_i^{\\pi}(s, a)"}}} {"uuid": "aff903e6-93b8-5b4b-aacf-e84ee6e6d1e2", "question": "How is mutual information (MI) used to represent epistemic uncertainty (EU) in the Bayesian framework?", "answer_format": "Your answer should be a formula", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["3eb979a1-1ab6-5b63-ad76-9e0d73663dfd"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "\\text{EU} = MI(Y, \\Omega|x)", "question": "How is mutual information (MI) used to represent epistemic uncertainty (EU) in the Bayesian framework?"}}} {"uuid": "b055054a-f026-51e0-9afb-452f89bd8ea3", "question": "In ICLR 2024 Spotlight papers, a paper introduces a new conception named \"Effective Horizon\". How many figures are in this paper?", "answer_format": "Your answer should be a Python integer.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["4ea89bed-22b4-5165-b9c9-a0bd983cb0a6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 5}}} {"uuid": "b08e1a60-c67e-585f-95e4-716d76c4a58b", "question": "In the experiment where they isolate the influence of the LLM in an LLM-based forecaster and propose three ablations, do simple ablations of LLM-based methods cause worse performance? ", "answer_format": "Your answer should be \"Yes\" or \"No\". 
", "tags": ["single", "objective", "text"], "anchor_pdf": ["b3a3ba28-65ac-572a-af34-fc84485ed3c6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "No", "lowercase": true, "ignore_blank": false}}} {"uuid": "b0a36a4d-62e5-5e4b-9814-9855543d47cb", "question": "Investigating the correlation of sharpness, curvatures, and validation loss on MNIST, Fashion-MNIST, and CIFAR-10, which has a strong positive correlation with validation loss?", "answer_format": "Your answer should be a string, a noun.", "tags": ["single", "objective", "text"], "anchor_pdf": ["09842a53-f1d2-54dc-9292-a48b75f83e2c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "sharpness", "lowercase": true, "ignore_blank": true}}} {"uuid": "b135917d-8cc0-5a31-a7ca-8007d58870db", "question": "In this paper, how many different benchmarks are used?", "answer_format": "Your answer should be a Python integer.", "tags": ["single", "metadata", "objective"], "anchor_pdf": ["519f42f5-62dd-5f27-9d06-7a8187b7a954"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "b206bc2a-fdbf-5b37-be28-7ac7fc5a3cc6", "question": "A recent paper proposes a scalable, low-cost captioning engine, Perceptual Fusion, that integrates specialized vision experts and a multimodal model to generate hyper-detailed image descriptions. 
Please inform me which subcategory within the Infographics category of the proposed dataset has the largest data volume.", "answer_format": "Your answer should be the name of a subcategory.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["48e75933-d528-50ba-a28f-d2cd16a64393"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PPT"}}} {"uuid": "b211f910-cd54-5e5b-b135-877454e99463", "question": "A recent paper presents MVCN for multimodal sentiment detection, addressing the challenge of modality heterogeneity in text-image pairs. It introduces three novel modules: (1) a Text-Guided Fusion module with Sparse Attention, (2) a Sentiment-based Congruity Constraint, and (3) an Adaptive Loss Calibration strategy. MVCN achieves state-of-the-art results on the MVSA and HFM benchmarks. The research institutions involved in this work are all from the same country. Please provide the name of this country.", "answer_format": "Your answer should be the name of a country.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["99746e1c-e4ff-56d2-9edd-4cc263aa386b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "China"}}} {"uuid": "b2f66570-8dd6-560e-be69-454ffbff412f", "question": "A paper proposes Hierarchical Contact Mesh Transformer (HCMT) for modeling complex high-dimensional physical systems. 
In the experiment of sensitivity to the number of levels, generally, with which number of levels does the model get a relatively low RMSE?", "answer_format": "Your answer should be an int between 1 and 6.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["047b5e6d-9d7b-5a5c-b04c-f7947ccea4a0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "b3826ac7-3685-50c5-af82-1c5c9c5e4ba5", "question": "According to the paper at ACL 2023 that studies the projection of retrievers on vocabulary space, by how many percentage points did the strong MPNet model's performance on the BEIR benchmark improve after applying the proposed lexical enrichment at inference time?", "answer_format": "Your answer should be a Python list of three floats (between 0 and 100, rounded to 1 decimal place), specifying the percentage point improvement and the before/after values, e.g., [improvement, before, after].", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["15dad352-4afe-5409-9be2-6847ff69adeb"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [1.0, 43.1, 44.1], "ndigits": 1, "ignore_order": false}}} {"uuid": "b4a91fbd-d054-5434-a5c0-8a3c61d25c1b", "question": "In the experiment of MULAN with different noise schedule parameterizations, which parameterization performs the best? ", "answer_format": "Your answer should be a string, a kind of parameterization. 
", "tags": ["single", "image", "objective"], "anchor_pdf": ["a2fbfee3-05fe-56ee-960b-10b18fc8dffd"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "polynomial", "lowercase": true, "ignore_blank": false}}} {"uuid": "b6544444-9718-50a9-861f-3d07342bc864", "question": "In the paper that identifies and mitigates \"conditional image leakage\" in I2V-DMs via Analytic-Init and TimeNoise which significantly enhances motion generation, in which domain does the method proposed by the authors achieve significant progress compared to the baseline in figure 8? What percentage is the user preference for the SVD model when Analytic-Init is used?", "answer_format": "Your answer should be a Python list, the first is a string of the name of the domain and the second is a float number rounded to 2 decimal places.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9c7c7762-0132-583c-987a-0fbc89847c55"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_float_exact_match"], "eval_kwargs_list": [{"gold": "Dynamic motion", "lowercase": true}, {"gold": 87.3, "ndigits": 2}]}}} {"uuid": "b73f57d6-9658-57d5-9f81-779c6a540f85", "question": "In ICLR 2024 Spotlight papers, a paper proposes a method named \"Heuristic Blending\". In this paper, the regret in Theorem 1 is bounded by which formula?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["fac83d47-6080-518e-ba75-3e376c6e3d06"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper proposes a method named \"Heuristic Blending\". 
In this paper, the regret in Theorem 1 is bounded by which formula?", "formulas": "\\min\\left(V_{max}, \\sqrt{\\frac{V_{max}^2(1 - \\gamma)|\\mathcal{S}|}{N(1 - \\gamma(1 - \\lambda))^4}}\\left(\\sqrt{\\max_{s,a}\\frac{d^{\\pi^*}(s,a)}{\\mu(s,a)}} + \\frac{\\gamma\\lambda}{1 - \\gamma}\\sqrt{\\max_{(s,a)\\in\\Omega}\\frac{1}{\\mu(s,a)}}\\right)\\right)"}}} {"uuid": "b7650e20-d5e4-5612-b6ed-8176dc16650d", "question": "What is the core technological breakthrough that enabled R2I to achieve superhuman performance in the Memory Maze task? How does its \"acyclic representation model\" design enhance long-term memory?", "answer_format": "Your answer should be two sentences, each answering one of the two questions.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["d0d9a5d2-3cfa-5fea-8439-0f3da2975dda"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["R2I's breakthrough performance in Memory Maze stems from a triple technical synergy. First, the state decoupling architecture separates the deterministic state $h_t$ (generated by SSMs), which captures the trajectory temporal patterns through multi-layer SSM stacking, from the stochastic state $z_t$ (extracted by the CNN encoder), which focuses on preserving the detailed features of visual observation. Second, the KL balancing mechanism dynamically adjusts the scatter weights of the representation model $q_\\theta(z_t \\mid o_t)$ and the dynamical model $p_\\theta(z_t \\mid h_t)$, and experiments show that this design stabilizes the KL value in the interval of 0.8-1.2, which effectively prevents posterior collapse. 
The most crucial innovation is the acyclic representation model: by removing the historical dependence of $q_\\theta$ on $h_t$, the generation of $z_t$ is entirely based on the current observation, which not only parallelizes the encoding of latent states, but also reduces the encoding error of key locations by 37% in tasks such as Memory Maze, which requires accurate recall of early observations.", "This design breaks the serial computational chain of traditional recursive models by allowing the model to rapidly reconstruct up to 500 steps of historical context during the imagination phase. Together with the multi-scale memory fusion capabilities of SSMs (which enable interaction between underlying details and higher-level semantics through GLU gating), R2I ultimately achieves a 178% outperformance of human players in 3D maze navigation."], "question": "What is the core technological breakthrough that enabled R2I to achieve superhuman performance in the Memory Maze task? How does its \"acyclic representation model\" design enhance long-term memory?"}}} {"uuid": "b7a4f87f-192f-583f-908c-571865359ae9", "question": "Among the papers at ICLR 2024 researching causal inference, what is the key theoretical result proposed in \"Robust agents learn causal world models\" regarding agents' ability to generalize under distributional shifts?", "answer_format": "Your answer should be a Python string describing the key theoretical finding about the relationship between robust agents and causal models.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["1e626c39-9c76-541b-80c9-18891807391f"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The paper proves that any agent capable of achieving low regret across a wide range of distributional shifts must have learned an approximate causal model of the environment, which converges to the true causal model for optimal agents.", "question": 
"What is the key theoretical result proposed in \"Robust agents learn causal world models\" regarding agents' ability to generalize under distributional shifts?"}}} {"uuid": "b8efc959-7538-53b0-abbf-50b696558b49", "question": "A recent paper proposes Conditional Mutual Information for Disentanglement (CMID), a method for learning disentangled representations in reinforcement learning with correlated features by minimizing conditional mutual information, guided by the causal structure of the Markov Decision Process. This approach improves generalization under correlation shifts and outperforms existing methods on continuous control tasks. Question: On which tasks or datasets was the effectiveness of this method evaluated?", "answer_format": "Your answer should be the name of a dataset.", "tags": ["single", "text", "objective"], "anchor_pdf": ["20c857a6-c20b-5f30-a353-e49667df1dab"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["the DeepMind Control Suit", "DeepMind Control Suit", "DMC"], "lowercase": true}}} {"uuid": "b916d6a2-0836-5375-8c68-1eefcd026e31", "question": "The UNet-FNO trained on a small dataset (1000 data pairs) achieves better performance compared to other neural operator architectures trained on a significantly larger dataset (10000 data pairs). 
What does it suggest?", "answer_format": "Your answer should be a phrase", "tags": ["single", "subjective", "text"], "anchor_pdf": ["5860140b-3a12-535a-b3fd-b430e475724a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "UNetFNO is particularly advantageous in scenarios with limited training data, where it can achieve competitive accuracy while significantly reducing the computational cost and time associated with data generation.", "question": "The UNet-FNO trained on a small dataset (1000 data pairs) achieves better performance compared to other neural operator architectures trained on a significantly larger dataset (10000 data pairs). What does it suggest?"}}} {"uuid": "b937c489-15c1-5980-9a62-dcd2e240dc31", "question": "What is the core idea of G-SHELL and what is its application?", "answer_format": "Your answer should be a sentence answering the question.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["1c8ae99c-17c8-57b9-9ca6-04ce0f992d88"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parameterize open surfaces by defining a manifold signed distance field on watertight templates. With this parameterization, we further develop a grid-based and differentiable representation that parameterizes both watertight and non-watertight meshes of arbitrary topology.
Our new representation, called Ghost-on-the-Shell (G-SHELL), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes.", "question": "What is the core idea of G-SHELL and what is its application?"}}} {"uuid": "ba8dc980-d280-54dc-a7c8-7ff6261d161c", "question": "What certified accuracy does GNNCert achieve on the MUTAG dataset when an attacker arbitrarily adds or deletes one edge?", "answer_format": "Your answer should be a Python integer representing the percentage of certified accuracy achieved on the MUTAG dataset.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["b1861e4b-fa56-5fd4-bad9-ccf174821787"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 92}}} {"uuid": "baf7e175-22b4-57ed-b29e-76a65f250f7f", "question": "In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. Tell me the affiliation of the first author.", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["1dc21f98-e67b-534d-81d8-18a0e159fcb3"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. Tell me the affiliation of the first author.", "reference_answer": "Max Planck Institute for Intelligent Systems"}}} {"uuid": "bb47845b-137f-52b9-95a6-27b7b37f9a31", "question": "There is a recent paper that proposes a novel contrastive learning framework for multimodal sentiment analysis, which uniquely combines intra-sample feature decomposition and inter-sample contrastive learning. 
Each modality (text, vision, audio) is decomposed into similarity and dissimilarity features, with text-based similarity features used as anchors for contrastive alignment. I would like to know, on which dataset was the primary experiment of this work conducted?", "answer_format": "Your answer should include only the single most significant dataset.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["9444dc1a-dffb-55fc-8fbb-a28552090793"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "CH-SIMS"}}} {"uuid": "bb6258b6-939a-577e-9a61-77723526f0df", "question": "What is the size and annotation detail of the AbdomenAtlas 1.1 dataset?", "answer_format": "Your answer should be a sentence describing the size and annotation detail of the AbdomenAtlas 1.1 dataset, including the number of CT volumes and the types of annotations provided.", "tags": ["comprehensive", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["861ab80a-1a4b-5c77-ae8f-460e55f8d472"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The AbdomenAtlas 1.1 dataset comprises 9,262 three-dimensional CT volumes with high-quality, per-voxel annotations of 25 anatomical structures and pseudo annotations of seven tumor types.", "question": "What is the size and annotation detail of the AbdomenAtlas 1.1 dataset?"}}} {"uuid": "bba8b818-f1f0-544b-aa58-4749b9144879", "question": "In the paper that proposes a theoretically-grounded plug-and-play module that enables efficient and accurate structure learning with minimal data requirements, what is the use and the characterization of function h(B) in formula (4)?", "answer_format": "Your answer should be a Python string indicating the use and characterization of h(B).", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf":
["9b6590a4-09c7-510e-aaa0-ea238e5d8b67"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "h(B) is the acyclicity of the graph and it can be characterized by a series of continuous functions for a non-negative matrix B.", "question": "In the paper that proposes a theoretically-grounded plug-and-play module that enables efficient and accurate structure learning with minimal data requirements, what is the use and the characterization of function h(B) in formula (4)?"}}} {"uuid": "bbf242ad-cc6f-5771-bd20-e7b25531590b", "question": "In ICLR 2024 Spotlight papers, a paper attempts to alleviate the over-optimization problem that occurs when LLMs are optimized by reward models through human feedback. What is the affiliation of the corresponding author?", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["65066aca-16ab-53a7-bdd4-6e5d5ed9ce3e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Gatsby Unit, UCL", "lowercase": true, "ignore_blank": true}}} {"uuid": "bd128925-adea-588e-aa7a-846e6053c7f4", "question": "In ICLR 2024 Spotlight papers, which paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["f22fa760-10d6-5959-8398-f8bd583acf28"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, which paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their
performance?", "reference_answer": "Blending Imitation and Reinforcement Learning for Robust Policy Improvement"}}} {"uuid": "bd1fbe22-f98c-595d-92db-7a32b47feda4", "question": "A recent paper proposes a unified benchmark framework for evaluating task-agnostic decoupling methods in privacy-preserving machine learning using synthetic image generation. It systematically integrates adversarial representation learning techniques into a synthetic data pipeline based on latent diffusion models (LDMs) and introduces standardized evaluation protocols for both privacy and utility. Could you please tell me which university the first author of this work is affiliated with?", "answer_format": "Your answer should be a name of a university.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["49357649-126a-59bd-946a-6030faf3aa39"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["McGill University", "McGill"]}}} {"uuid": "bd253538-fce4-5e7d-9d2a-fd27f4c6199b", "question": "In the paper \"ZeroStance: Leveraging ChatGPT for Open-Domain Stance Detection via Dataset Generation,\" how many baseline human-annotated datasets are incorporated?", "answer_format": "Your answer should be a Python int", "tags": ["single", "text", "objective"], "anchor_pdf": ["b4455e5e-9557-5e29-8948-2da57090ef7c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "bd3f2de4-c4a0-5ded-9bc2-b946724a04bf", "question": "In the paper introducing program-based reasoning, which benchmarks were used to evaluate performance on mathematical problems?", "answer_format": "Your answer should be a Python list of strings.", "tags": ["multiple", "objective", "text"], "anchor_pdf": ["aa9e17cf-a80c-5a97-b989-bd08794bffbb"], "reference_pdf": ["2f304b1c-69d5-588d-8156-b92662ba2204"], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", 
"eval_kwargs": {"gold": ["GSM8K", "SVAMP", "ASDIV", "MAWPS"], "lowercase": true, "ignore_order": true}}} {"uuid": "bd90400d-f8bf-5257-a64b-906a477992a8", "question": "In ICLR 2024 Spotlight papers, which paper unifies reinforcement learning and imitation learning methods under a dual framework?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["6d4ac425-4ee3-53cb-acc8-ce759680a8b9"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, which paper unifies reinforcement learning and imitation learning methods under a dual framework?", "reference_answer": "Dual RL: Unification and New Methods for Reinforcement and Imitation Learning"}}} {"uuid": "bdd210a1-1d6a-508d-af66-2f6a3efa8887", "question": "According to the paper that proposes Wanda, in what aspects does this pruning approach differ from SparseGPT?", "answer_format": "Your answer should be a plain text that describes the differences between Wanda and SparseGPT.", "tags": ["comprehensive", "table", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["e5cae9b9-016a-5169-96c7-3ef7c8afc164"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Wanda entails no weight update on pruned networks, uses different pruning metric that computes faster than SparseGPT, and has a lower algorithm complexity.", "question": "According to the paper that proposes Wanda, in what aspects does this pruning approach differ from SparseGPT?"}}} {"uuid": "bdfcfbaf-4835-51b2-a599-2b7f017936fa", "question": "In ICLR 2024 Spotlight papers, a paper introduces a new conception named \"Effective Horizon\". 
In this paper, what is the formula of SQIRL sample complexity?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["4ea89bed-22b4-5165-b9c9-a0bd983cb0a6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper introduces a new conception named \"Effective Horizon\". In this paper, what is the formula of SQIRL sample complexity?", "formulas": "\\widetilde{O}\\left(kT^3\\alpha^{2(k - 1)}A^{\\bar{H}_k}D\\log(\\alpha D)/\\epsilon\\right)"}}} {"uuid": "be030ae0-db01-500c-8e06-7cc2ecac0bee", "question": "There is a recent paper that proposes a method called CCPA, a two-stage debiasing framework that combines continuous prompt augmentation and contrastive learning to mitigate social biases, particularly gender bias, in pre-trained language models. The authors validated the effectiveness of CCPA on the Bias-in-Bios dataset. 
Please answer: By how many percentage points does the Accuracy (all) metric of the model using CCPA improve compared to the original BERT model in absolute terms?", "answer_format": "Your answer should be a Python float, rounded to two decimal places.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["997e85c4-6428-5e04-a5a1-c2fb47e407d5"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 1.51, "ndigits": 2}}} {"uuid": "be0e3f75-f2a3-5bf4-a9c2-65153177fea7", "question": "Why did the authors use newly-published books as the database of BOOOOKSCORE?", "answer_format": "Your answer should be a sentence", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["04ed3e06-a7e7-5856-8912-af4223637abf"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "To reduce the confounding impact of summary memorization, we manually collect 100 books published within the past year to form our dataset. Some of these books could still have appeared in the pretraining dataset of recent LLMs such as Claude 2 and LLaMa2, although it is much less likely than in BookSum. However, summaries of these books do not publicly exist: we did not find summaries online for any books in our dataset, which significantly lowers the possibility of LLM memorization.", "question": "Why did the authors use newly-published books as the database of BOOOOKSCORE?"}}} {"uuid": "be4c59d2-3527-5048-a0e6-771dc091a486", "question": "A paper proposes deleting edges to address over-squashing and over-smoothing of Message Passing Graph Neural Networks simultaneously. In their examination of the effectiveness of the proposed edge modification algorithms in the spectral gap expansion, what makes their ideal baseline computationally too expensive? ", "answer_format": "Your answer should be a phrase which gives the reason.
", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["041c31cc-c6a0-59b6-b723-daf5c8f6256d"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Each edge scoring requires O(|E|) computations. ", "question": "A paper proposes deleting edges to address over-squashing and over-smoothing of Message Passing Graph Neural Networks simultaneously. In their examination of the effectiveness of the proposed edge modification algorithms in the spectral gap expansion, what makes their ideal baseline computationally too expensive? "}}} {"uuid": "be4edd9f-0d2e-58db-b118-bd36b8549763", "question": "Which theorem is used to find the solution of formula (1)? Why can the linear approximator not be applied directly when non-negativity constraints are imposed on the target function f \\geq 0?", "answer_format": "Your answer should be a list of two elements, the first is the name of the theorem and the second is the reason why the linear approximator cannot be applied. Note that you should output the formula in the LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ae5f539-c85b-5530-8892-448d36b71014"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "representer theorem", "lowercase": true}, {"reference_answer": "Because the linear approximator generally has negative values, even under non-negative kernel k(\\cdot, \\cdot) \\geq 0.", "question": "Which theorem is used to find the solution of formula (1)?
Why can the linear approximator not be applied directly when non-negativity constraints are imposed on the target function f \\geq 0?"}]}}} {"uuid": "bf32600d-d8cd-5f62-b080-9f0003b6d875", "question": "In the paper that combines within-lifetime extrinsic learning and cross-lifetime intrinsic motivation in a single framework, three different behaviours are evaluated in figure 1. Which behaviour's intrinsic motivation of the most frequent choice is always lower than that of the sum of other choices?", "answer_format": "Your answer should be a python string giving the description of the behaviour.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ac641ba-30be-53a2-a018-c3695ff01fdc"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Behavior on the stationary 10-armed bandit task.", "question": "In the paper that combines within-lifetime extrinsic learning and cross-lifetime intrinsic motivation in a single framework, three different behaviours are evaluated in figure 1. Which behaviour's intrinsic motivation of the most frequent choice is always lower than that of the sum of other choices?"}}} {"uuid": "bff9b330-bcd6-547f-8a07-2af88d99540d", "question": "Among the text-to-SQL papers in ACL 2023, which one achieves the best testsuite accuracy on the SPIDER dataset?
Tell me the paper title and corresponding test accuracy.", "answer_format": "Your answer should be a Python list of length two, with the first one being the title string and the second one being a float, the accuracy rounded to 3 decimals.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["95175be1-8870-5931-a4d3-084308de14a0"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_paper_relevance_with_reference_answer", "eval_float_exact_match"], "eval_kwargs_list": [{"reference_answer": "G3R: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-domain Text-to-SQL Generation", "question": "Among the text-to-SQL papers in ACL 2023, which one achieves the best testsuite accuracy on the SPIDER dataset? Tell me the paper title."}, {"gold": 0.729, "ndigits": 3}]}}} {"uuid": "c06ce855-9266-5151-8ae1-2b227afb5a92", "question": "In ICLR 2024 Poster papers, a paper tackles the Offline Opponent Modeling problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities. 
Tell me the affiliation of the corresponding author.", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["8a1e3915-e42d-581e-aa46-9b520f4b03ec"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Tencent AI Lab", "lowercase": true, "ignore_blank": true}}} {"uuid": "c072b272-90d2-500b-b8e0-d28da9918512", "question": "In the paper proposing MC-DiT, why does Gaussian noise affect the optimization process of L_{asym}?", "answer_format": "Your answer should be a python strings with specific formula items.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["09231603-61de-52fc-9192-84c187715b32"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper proposing MC-DiT, why does Gaussian noise affect the optimization process of L_{asym}?", "reference_answer": "Because Gaussian noise introduces two noise-weighted items representing contrastive objective between h(x^1_t ) and [\\frac{\\partial h_g}{\\partial x^2_0}] (h_g (x^2_0) and [\\frac{\\partial h}{\\partial x^1_0} ]). 
As formula (7) shows, L_{\\text{asym}-NN} = -\\mathbb{E}_{p(x_t^1, x_t^2)}\\left[h(x_t^1)^T h_g(x_t^2)\\right] \\approx L_{\\text{asym}} + \\mathbb{E}\\left(-h(x_t^1)^T \\left[\\frac{\\partial h_g}{\\partial x_0^2}\\right]^T n\\right) + \\mathbb{E}\\left(-h_g(x_0^2)^T \\left[\\frac{\\partial h}{\\partial x_0^1}\\right]^T n\\right)."}}} {"uuid": "c0fa5ff0-16dd-5b2c-85c5-e17306e8a097", "question": "In ICLR 2024 Spotlight papers, which paper shows that the solution of this entropy-regularized problem corresponds to a Quantal Response Equilibrium (QRE), a generalization of Nash equilibria that accounts for bounded rationality?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["db88c186-ca99-565e-bfa1-423ff8c52ce0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, which paper shows that the solution of this entropy-regularized problem corresponds to a Quantal Response Equilibrium (QRE), a generalization of Nash equilibria that accounts for bounded rationality?", "reference_answer": "Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula"}}} {"uuid": "c105bdc9-6f87-5a7a-9f2c-cf8608923544", "question": "In the paper that introduces PEACE dataset, where can I find the dataset? In which conference was this paper published?", "answer_format": "Your answer should be a Python list of two strings.
The first is a website URL starting with \"https://\", as given in the paper, the second is only the conference name string.", "tags": ["metadata", "subjective", "single"], "anchor_pdf": ["9c83d09a-1661-527b-87e5-6fd277f4cb21"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "https://github.com/YTYTYD/PEACE"}, {"reference_answer": "38th Conference on Neural Information Processing Systems (NeurIPS 2024)", "question": "In the paper that introduces PEACE dataset, where can I find the dataset? In which conference was this paper published?"}]}}} {"uuid": "c1380b1a-a444-57b5-bff9-1f983381bb01", "question": "There is a novel framework for end-to-end task-oriented dialog systems that decouples knowledge retrieval from response generation, named MAKER (Multi-grAined Knowledge Retriever). The implemented Knowledge Retriever consists of two selectors, one of which is the 'Entity Selector.' 
What is the name of the other selector?", "answer_format": "Your answer should be the exact name of the other selector.", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["92be50a8-8597-546f-8df7-aae9b81dff35"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Attribute Selector"}}} {"uuid": "c19b4886-36b0-5461-b66d-9241125e0cb7", "question": "On the LSUN bedroom dataset in the paper \"GENERALIZATION IN DIFFUSION MODELS ARISES FROM GEOMETRY-ADAPTIVE HARMONIC REPRESENTATIONS\", when N=100, a cosine similarity of the generated samples to the nearest neighbors of the training set higher than what value is considered memorization (overfitting)?", "answer_format": "Your answer should be a float, rounded to 1 decimal place, expressed as a percentage.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["e653b3d4-1289-55fd-b2dc-0f3f59bf1093"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 95.0, "ndigits": 1}}} {"uuid": "c34f09ae-cbe7-5bb3-afc0-6d97c16f8b9a", "question": "According to the paper at ACL 2023 that introduces MisGendered, what is the average accuracy of these models in predicting gender-neutral pronouns without fine-tuning?", "answer_format": "Your answer should be a Python float value representing the percentage accuracy, between 0 and 100, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["14473356-eb07-5079-ac89-86eed2b3eaee"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 34.2, "ndigits": 1}}} {"uuid": "c575a9be-3775-50ba-85ce-37978368d7db", "question": "How does the $\\epsilon$ parameter of AdamW affect training?", "answer_format": "Your answer should be a sentence, discussing how the relationship between $\\epsilon$ and grad RMS impacts training.", "tags":
["single", "subjective", "text"], "anchor_pdf": ["e355eea9-a2b5-50ff-85d3-1423a1fada2a"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "If the grad RMS is on the same order as $\\epsilon$, then $\\Delta$ will decrease in magnitude. Decreasing $\\epsilon$ to 1e-15 improves loss and mitigates a collapse in grad RMS.", "question": "How does the $\\epsilon$ parameter of AdamW affect training?"}}} {"uuid": "c6d978d6-9f92-506a-9536-fc88ede19568", "question": "A paper negates the invariable outperformance of Cross-Validation (CV) in the simple \"plug-in\" approach in terms of both the asymptotic bias and coverage accuracy. In their study, 2- and 5-fold CVs suffer from larger biases than plug-in; is this phenomenon more evident under small or large sample sizes?", "answer_format": "Your answer should be chosen between \"small\" and \"large\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["00fcfd21-f4cf-5751-afe7-fa75997791ac"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "large", "lowercase": false, "ignore_blank": false}}} {"uuid": "c76b81ad-fe54-5179-adaf-131c54f13ee8", "question": "In ICLR 2024 Poster papers, a paper proposes a framework named LaMo (Language Models for Motion Control). How many baselines are compared in Figure 1?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9b26088c-abff-5dfe-83a4-90d818553a6e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "c77d97f1-a74c-5768-b6db-1470144e545f", "question": "A recent paper introduces MassSpecGym, the first comprehensive benchmark for molecule discovery and identification from tandem mass spectrometry (MS/MS) data.
Could you please retrieve the article and specify the exact number of dataset entries that contain normalized collision energies?", "answer_format": "Your answer should be a Python int.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["b753c2b0-2c46-5cd4-aa3b-a8cc09268ea7"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 121746}}} {"uuid": "c7afaa8a-8311-5dd1-9e5e-5250a4b5011f", "question": "According to this paper, which dataset (except the dataset proposed by this paper) also includes controls for unary vs binary predicates among the existing datasets for the formal analysis of reasoning ability? Can you describe the structure of each example in it in detail?", "answer_format": "Your answer should be a single python list like this: [\"dataset_name\", \"structure_description\"]. Note that for the dataset name, the abbreviation is required. For the structure description, you should give a short string to describe the structure.", "tags": ["multiple", "table", "subjective"], "anchor_pdf": ["da89df5c-ac71-513c-abf1-5cd5718cae4f"], "reference_pdf": ["097c599e-8cc4-5b00-b411-a1c030649d96"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "SimpleLogic", "lowercase": true}, {"reference_answer": "Each example in SimpleLogic is a propositional reasoning problem that only involves definite clauses.
In particular, each example is a tuple (facts, rules, query, label) where (1) facts is a list of predicates that are known to be True, (2) rules is a list of rules represented as definite clauses, (3) query is a single predicate, and (4) label is either True or False, denoting whether the query predicate can be proved from facts and rules.", "question": "What is the structure of each example in the dataset?"}]}}} {"uuid": "c84e680f-65c0-503d-8e96-550cc16236e7", "question": "In the paper that proposes VATT, along which dimension are E_lm and E_a^M concatenated to obtain the fused features \\[E_{mm} = \\text{Concat}(\\left[ E_{lm}, E_{a}^{M} \\right])\\]", "answer_format": "Your answer should be a Python string specifying the dimension.", "tags": ["comprehensive", "text", "subjective"], "anchor_pdf": [], "reference_pdf": ["9bc41a79-57e8-5034-9ec0-b2ebbed47b2c"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Along the temporal axis.", "question": "In the paper that proposes VATT, along which dimension are E_lm and E_a^M concatenated to obtain the fused features \\[E_{mm} = \\text{Concat}(\\left[ E_{lm}, E_{a}^{M} \\right])\\]"}}} {"uuid": "c9554ac4-8d41-5c2c-9811-4418260c0b89", "question": "In ICLR 2024 Oral papers, a paper presents PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization.
How many different tasks are considered in Figure 2?", "answer_format": "Your answer should be a Python integer.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["73f711b8-04f1-59f2-b730-3e2c31a7721d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "c9e10791-b93c-5047-8b3e-08b5bba1962c", "question": "On the DeepCoder dataset, how much did ExeDec improve the average accuracy of the combined generalization task compared to the Transformer baseline?", "answer_format": "Your answer should be a float, rounded to 1 decimal place, expressed as a percentage.", "tags": ["single", "table", "objective"], "anchor_pdf": ["e8a7f4d2-b82b-59f6-a5fe-d64f18a91e2d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 18.0, "ndigits": 1}}} {"uuid": "cbcc28d8-65cd-5208-8ef1-c4381f8a936c", "question": "Which dataset(s) is ViewCo trained on? I want to get the newer dataset, can you give me the link to it?", "answer_format": "Your answer should be a single python list like this: [[\"dataset_name1\",\"dataset_name2\"], \"https://github.com/a/b\"].
Note that for the dataset name, the abbreviation is required.", "tags": ["multiple", "metadata", "objective"], "anchor_pdf": ["3c2f6d0b-7692-59a1-9f26-ed581b44015a"], "reference_pdf": ["5fabde11-10a7-5fc8-a1b5-57a6237b5535"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_structured_object_exact_match", "eval_string_exact_match"], "eval_kwargs_list": [{"gold": ["CC12M", "YFCC"], "lowercase": true, "ignore_order": true}, {"gold": "https://github.com/google-research-datasets/conceptual-12m", "ignore_blank": true, "lowercase": true}]}}} {"uuid": "cc8f0960-ae43-5917-a76b-f32bfce222d0", "question": "There is a paper that introduces Codable Text Watermarking for Large Language Models (CTWL), a novel framework for embedding multi-bit customizable information into LLM-generated texts. Its key contribution is a method called Balance-Marking, which leverages a proxy language model to partition the vocabulary into probability-balanced subsets. In the watermarking process of its experiments, the total time expenditure can be divided into Encoding Time and Decoding Time. 
Which part consumes more time?", "answer_format": "Your answer should be one of ['Encoding Time', 'Decoding Time']", "tags": ["image", "objective", "single"], "anchor_pdf": ["01baf982-6d85-511e-b15e-e7e834818284"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Encoding Time"}}} {"uuid": "cca56ecf-46d7-5d38-8e3b-c49d0bf84c7e", "question": "In the paper that demonstrates LLM-generated difficulty labels can outperform human labels in curriculum learning for fine-tuning, by how many percentage points do the accuracy gains of the learning strategy with LLM-defined difficulty exceed those with human-defined difficulty on average across datasets, compared to the Random Shuffle baseline?", "answer_format": "Your answer should be a float number, rounded to 2 decimal places.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9afe610e-269c-51fd-a0aa-558f1439ae56"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 0.19, "ndigits": 2}}} {"uuid": "cce305e1-083a-5a2f-8646-994bd490eaae", "question": "In the paper that introduces PPDPP, a plug-and-play dialogue policy planner for LLM-powered proactive dialogue agents, the author proposed a self-play framework utilizing LLM-based user simulation and reward modeling for interactive training. 
In the experiments related to training episodes, which model consistently maintains the highest Success Rate on the CraigslistBargain dataset?", "answer_format": "Your answer should be the exact name of a model.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["03ecb655-bed4-5d01-a06d-7841dc352d27"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "ChatGPT"}}} {"uuid": "cdc2f046-8924-57b9-ab44-cd8c96edca2b", "question": "In the paper that models novelty emergence in science as an evolutionary game and reveals agents with selfish strategies maximise the diversity of novel ideas, which generation has the highest Average Novelty Score? What does the image of the Average Novelty Score Time Evolution suggest?", "answer_format": "Your answer should be a Python list. The first element is an integer, the number of the generation, and the second is a Python string of the result suggested by the image.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["9b80f74a-161f-56d2-8d3b-999d069b05e4"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_int_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": 1}, {"reference_answer": "The novelty of ideas may decline when collective decision-making is involved, compared to when individuals generate ideas based solely on their own thoughts.", "question": "In the paper that models novelty emergence in science as an evolutionary game and reveals agents with selfish strategies maximise the diversity of novel ideas, which generation has the highest Average Novelty Score? 
What does the image of the Average Novelty Score Time Evolution suggest?"}]}}} {"uuid": "ce0e09e3-1e20-594f-8dff-786c9df521dd", "question": "In the paper titled \"Can Whisper Perform Speech-Based In-Context Learning?\" the experimental section utilized two types of Chinese dialects, one being Chongqing. What is the other one?", "answer_format": "Your answer should be the specific name of the dialect without any additional prefixes or suffixes.", "tags": ["single", "text", "objective"], "anchor_pdf": ["473c9f83-eade-5426-87da-deaff16c3c06"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Guangzhou"}}} {"uuid": "cf056656-2dd7-5085-b451-2e71e2c1000a", "question": "Recent studies decode DNA by decoding its linguistic intricacies. A paper proposes to replace k-mer tokenization with Byte Pair Encoding (BPE). They show model performance averaged over all tasks (macro) and over each individual dataset (micro). In which kind of performance, macro or micro, does the model lose about 2.5 percentage points when the vocabulary changes from a middle size to a considerable size?", "answer_format": "Your answer should be chosen between \"macro\" and \"micro\". 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["03b55895-c713-5179-8613-2ded1f46189d"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "micro", "lowercase": true, "ignore_blank": false}}} {"uuid": "cf121e7a-e188-5d69-aa44-e699bc62ea6f", "question": "In the paper that introduces GRIT, whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions, which generative model in Figure 1 performs best in embedding performance?", "answer_format": "Your answer MUST be ONE single string without explanation of the method's name in abbreviation and its number of parameters in billions (B). For example, your answer could be 'LLaMA 2 70B'.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["5077b3de-d287-5d5a-b3d5-973643689fb4"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "GPT-J 6B", "lowercase": true}}} {"uuid": "cf7fc88e-f139-5d73-a000-bc96995f9276", "question": "In the experiment of real-world data in the paper of \"CALIBRATION MATTERS: TACKLING MAXIMIZATION BIAS IN LARGE-SCALE ADVERTISING RECOMMENDATION SYSTEMS\", which base model was used? What's the architecture of this base model?", "answer_format": "Your answer should be a single python list like this: [\"model_name\", \"model_architecture\"]. Note that for the model name, the abbreviation is required. 
For the model architecture, you should give a string to describe the architecture.", "tags": ["multiple", "text", "subjective"], "anchor_pdf": ["e7e4ea3a-175d-5358-9c6e-ab12560e51d4"], "reference_pdf": ["382162e1-9da9-5a68-a75f-bbeaaad8ed11"], "conference": [], "evaluator": {"eval_func": "eval_conjunction", "eval_kwargs": {"eval_func_list": ["eval_string_exact_match", "eval_reference_answer_with_llm"], "eval_kwargs_list": [{"gold": "DLRM", "lowercase": true}, {"reference_answer": "Input Features: DLRM takes both categorical features and continuous features as input.\nEmbeddings: Categorical features are converted into dense vectors using embeddings. Each category is represented by an embedding vector.\nBottom MLP: Continuous features are processed by a multilayer perceptron (MLP) called the 'bottom MLP'.\nFeature Interaction: The output embedding vectors and the output of the bottom MLP are then interacted explicitly using dot products between all pairs of vectors. These interactions capture second-order feature interactions.\nTop MLP: The interacted features are concatenated with the original bottom MLP output and fed into another MLP called the 'top MLP' for further processing.\nOutput Layer: The output of the top MLP is then passed through a sigmoid function to produce a probability prediction (e.g. probability of a click).", "question": "What is the architecture of the model?"}]}}} {"uuid": "cfb8f392-68a9-5438-9ecb-8ca68992bc83", "question": "In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. 
The authors claim that their components can be learned jointly by maximizing the conditional evidence lower bound (ELBO). What is its formula?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["1dc21f98-e67b-534d-81d8-18a0e159fcb3"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. The authors claim that their components can be learned jointly by maximizing the conditional evidence lower bound (ELBO). What is its formula?", "formulas": "- D_{\\text{KL}}\\left(q_{\\tilde{\\phi}}(z | s, a, s') \\| p_{\\tilde{\\theta}}(z | s, a)\\right) + \\mathbb{E}_{z \\sim q_{\\tilde{\\phi}}(\\cdot | s, a, s')} \\left[ \\log p_{\\tilde{\\theta}}(s' | s, a, z) \\right]"}}} {"uuid": "d0c50c53-8701-5219-ae18-70b1ed9c5ea9", "question": "In ICLR 2024 Poster papers, a paper attempts to conduct long-range dynamic modeling in an interactive environment. In this paper, why do the authors introduce the Koopman operator?", "answer_format": "Your answer should be plain text.", "tags": ["text", "subjective", "single"], "anchor_pdf": ["7611ea90-092c-5c7b-8e62-c628327a6316"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper attempts to conduct long-range dynamic modeling in an interactive environment. In this paper, why do the authors introduce the Koopman operator?", "reference_answer": "The Koopman operator is a linear operator used to study the dynamics of nonlinear dynamical systems. 
By mapping the nonlinear dynamical system to a high-dimensional latent space (observable space), the dynamics of the system can be linearized."}}} {"uuid": "d1ef774b-713a-597c-8dff-a698c09f9c3b", "question": "A recent paper introduces ReactZyme, a large-scale benchmark dataset and retrieval-based framework that directly models enzyme functions through their catalyzed reactions rather than traditional annotations such as EC or GO. Could you please tell me how many #Molecule/Reaction are included in the proposed dataset?", "answer_format": "Your answer should be a Python int", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["4ae975ff-fc6d-5291-9e30-f379bf9a9032"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 7726}}} {"uuid": "d222e6c3-03eb-5b9f-8ad7-d47ed389079e", "question": "In this paper, how many different inference frameworks of predictor are proposed?", "answer_format": "Your answer should be a Python integer.", "tags": ["single", "text", "objective"], "anchor_pdf": ["64800c0a-97de-5b40-a579-f7ee1842f27b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 3}}} {"uuid": "d331c7ef-1b86-5754-8ee5-4123b95704b6", "question": "This paper proposes a method called \"DLPA\", which kind of planning is used in it?", "answer_format": "Your answer should be plain text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["64800c0a-97de-5b40-a579-f7ee1842f27b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "This paper proposes a method called \"DLPA\", which kind of planning is used in it?", "reference_answer": "Model Predictive Control (MPC)."}}} {"uuid": "d39792c8-ee30-5a3e-b119-6340d47292b1", "question": "Is there a paper published in ICLR 2024 which establishes a unified framework for Riemannian Batch Normalization (RBN) 
techniques on Lie groups?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d5a2a825-3a73-5384-a1a6-35a3fa96a19e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper published in ICLR 2024 which establishes a unified framework for Riemannian Batch Normalization (RBN) techniques on Lie groups?", "reference_answer": "A Lie Group Approach to Riemannian Batch Normalization"}}} {"uuid": "d3d238de-2ff5-53b7-85a3-72cb840f59b8", "question": "According to the paper's experimental results, which genre of model performed best in the book-length summary task?", "answer_format": "Your answer should be a sentence, comparing the models according to catogories.", "tags": ["single", "table", "subjective"], "anchor_pdf": ["04ed3e06-a7e7-5856-8912-af4223637abf"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "The experimental results show that closed-source models (e.g., GPT-4 and Claude 2) generate summaries with the highest BooookScore, with Claude 2 performing particularly well under the incremental update strategy. Among the open-source models, Mixtral performs close to GPT-3.5-Turbo, while LLaMA 2 performs the worst", "question": "Which of the paper's experimental results showed that the model performed best in the book-length summary task?"}}} {"uuid": "d45ea041-a00c-57b6-bf37-ec663aebaedd", "question": "A recent paper introduces SIMSR, a novel Smart Reply framework that leverages model-based simulation to optimize response set selection by directly maximizing the relevance of at least one reply through a learned Matching model acting as a world simulator. This work is a collaboration between a certain university and Nokia Bell Labs. 
Please provide the name of this university.", "answer_format": "Your answer should be the name of the university.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9592cebb-a2fb-5094-97a8-fbff6bb9ceb4"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Nottingham", "University of Nottingham", "The University of Nottingham"]}}} {"uuid": "d49a681f-b91f-501e-a25a-b9a4bad14986", "question": "A recent paper introduces a novel temporal knowledge graph embedding model that uniquely maps relation-time pairs onto an Archimedean spiral in complex space. This design transforms the temporal link prediction task into a third-order tensor completion problem, enabling precise modeling of relation dynamics while maintaining time-invariant entity representations. The experiments are conducted on three TKGE datasets. Which of these datasets has the largest number of training samples?", "answer_format": "Your answer should be one of the following: ICEWS14, ICEWS05-15, GDELT.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["9821c3cb-08b0-5e34-87fd-8af29d514742"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "GDELT"}}} {"uuid": "d5117812-39cd-5661-b208-2c249cba0320", "question": "What precision does I2D2 achieve in identifying true commonsense statements compared to GPT-3?", "answer_format": "Your answer should be a Python dictionary with keys 'I2D2_precision' and 'GPT3_precision', with values as float numbers rounded to 2 decimal places.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0fae4050-6550-5468-ab5b-13fc489e0119"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": {"I2D2_precision": 0.92, "GPT3_precision": 0.82}, "ndigits": 2}}} {"uuid": "d52aa9c2-d5a4-51f6-be83-0d6d7f218169", "question": "In the 
paper proposing UNIT, what does the reconstruction loss mean in figure 1?", "answer_format": "Your answer should be a python strings.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["07050244-28d9-510a-a373-9ce4cfe57f0b"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "In the paper proposing UNIT, what does the reconstruction loss mean in figure 1?", "reference_answer": "The output of the visual decoder are student tokens. The output of the original pretrained ViT model are original features for each natural image. we enforce the alignment of the new learned student tokens with the original ones using a weighted sum of the cosine distance L_{cos} and smooth L1 loss L_{l1}, which is denoted as reconstruction loss."}}} {"uuid": "d53cc386-bbaa-51a3-b3ab-315ce4b05fb5", "question": "In the paper that proposes a statistically robust and multi-metric comparison framework for classifiers, in which library can I find the data sets and the performance evaluation of the 80 datasets that the author used to compare to the SVM?", "answer_format": "Your answer should be a Python string of the name of the library.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9ac999f2-6c25-57a5-b096-170fc4125c2a"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "OpenML", "lowercase": true}}} {"uuid": "d6ebe0a6-1565-5447-83a6-d3d8712c7990", "question": "Among the papers in ICLR 2024, which paper proposes the conception called \"Policy Rehearsing\"? 
How do the authors define \"Optimal Policy Gap\"?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["25abff87-1eb1-561d-9a66-403bca1cc03e"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "Among the papers in ICLR 2024, which paper proposes the conception called \"Policy Rehearsing\"? How do the authors define \"Optimal Policy Gap\"?", "formulas": "\\max_{M\\in\\mathcal{M}^c}|\\eta_{M^*}(\\pi_{M^*}^*) - \\eta_{M}(\\pi_{M}^*)|\\leq \\epsilon_e"}}} {"uuid": "d70f731b-f8e9-50c9-9f67-59fed7d45749", "question": "What type of input-output pairs are shown in the prompt for in-context learning of Boolean functions? How does the structure of the prompt in the Boolean function task differ from the in-context learning example with country-capital pairs?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["428a8ebc-c2ab-5c89-bedf-1475b53cf5d1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["The prompt consists of binary input-output pairs", "While the Boolean function task uses binary sequences with explicit input-output mappings (e.g., x_i, y_i), the country-capital example uses natural language. The Boolean task is fully discrete, while the latter relies on semantic relationships."], "question": "What type of input-output pairs are shown in the prompt for in-context learning of Boolean functions? 
How does the structure of the prompt in the Boolean function task differ from the in-context learning example with country-capital pairs?"}}} {"uuid": "d74a2ccf-fd25-5523-839c-94dc3a8a156f", "question": "Which categories of KV cache compression techniques within PoD does WindowKV belong to?", "answer_format": "Your answer should be a string that represents categories of KV cache compression techniques in PoD.", "tags": ["multiple", "subjective", "text"], "anchor_pdf": ["a8347203-3f01-5af1-82e9-056f254eb790", "60f8a312-8f3d-53cc-99bb-24686b1ef6fe"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "Token-selection-based methods and Layer-sharing-based methods", "question": "Which categories of KV cache compression techniques within PoD does WindowKV belong to?"}}} {"uuid": "d82fa28a-ef22-588b-ae60-9493dca6a641", "question": "What is the formula of the cosine similarity distribution of random vectors in a high-dimensional space (in Proposition 3.1)?", "answer_format": "Your answer should be a formula.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["c3bf105b-f2b9-5308-8b56-563b240d5b83"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "f_d(t) = \\frac{\\Gamma(\\frac{d}{2})}{\\Gamma(\\frac{d-1}{2})\\sqrt{\\pi}}(1-t^2)^{\\frac{d-3}{2}}", "question": "What is the formula of the cosine similarity distribution of random vectors in a high-dimensional space"}}} {"uuid": "d8ab5195-6509-552f-beaf-0b51e65d9f76", "question": "There is a recent paper that investigates the phenomenon of 'overthinking' in pretrained language models (PTMs), specifically within open-world scenarios for out-of-domain (OOD) intent classification tasks. 
The authors propose a dynamic early-exiting inference method that utilizes ensemble-based internal classifiers to determine whether sufficient confidence has been reached for classifying OOD intents before completing inference. Who is the corresponding author of this work?", "answer_format": "Your answer should be a name of a person.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["99cbffd2-3de9-592d-9a2a-353db97f8f62"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Xipeng Qiu"}}} {"uuid": "d94c7957-d423-521b-aa68-693d72ac8052", "question": "By how many percentage points did BREAK improve the joint goal accuracy on the MultiWOZ 2.1 dataset compared to the previous best-performing models?", "answer_format": "Your answer should be a Python float value representing the percentage point improvement in joint goal accuracy, between 0 and 100, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["21995bd0-c4a8-5656-98c7-e00b32620785"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 23.6, "ndigits": 1}}} {"uuid": "da8849a2-6880-5061-8b5e-ce162376c043", "question": "A recent paper introduces the first large-scale UAV-based dataset explicitly designed for multi-object tracking (MOT) and re-identification (Re-ID) of wild animals, focusing on lekking blackbuck antelopes. The dataset includes over 1.2 million MOT annotations across 12 high-resolution drone videos and 730 Re-ID tracks captured from synchronized UAVs. 
Could you please provide the email address of one of the corresponding authors?", "answer_format": "Your answer should be a mail address.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["4acabc9a-bab8-5525-b481-78472dd0ffa8"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["hmnaiks@gmail.com", "vivekhsridhar@gmail.com"]}}} {"uuid": "db5d82c1-3413-582b-9ffe-7ec212161a9e", "question": "A paper introduces a novel principled strategy for building an iterative learning algorithm via the optimisation of a sequence of surrogate training objectives. In the experiment, does their model SuPAC-CE converge faster than gradient descent? ", "answer_format": "Your answer should be \"Yes\" or \"No\". ", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["05b9c87d-85d3-51a6-a8ba-a9ed8aa57faf"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Yes", "lowercase": true, "ignore_blank": false}}} {"uuid": "ddab353d-5af6-5505-9ba5-255878ab1aa8", "question": "In the paper that introduces asymptotically faster and memory-efficient ASQ, a paper in the references approaches the ASQ problem using a dynamic programming approach that allows one to optimize Q in polynomial time. 
In which conference was this paper published?", "answer_format": "Your answer should be a Python string of the name of the conference.", "tags": ["comprehensive", "metadata", "subjective"], "anchor_pdf": [], "reference_pdf": ["9ae2d4fe-2acf-5365-bbe8-3b24641f6bdc"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "International Conference on Machine Learning", "question": "In the paper that introduces asymptotically faster and memory-efficient ASQ, a paper in the references approaches the ASQ problem using a dynamic programming approach that allows one to optimize Q in polynomial time. In which conference was this paper published?"}}} {"uuid": "dfbebfcc-9d45-5873-bfa8-008a12c22c03", "question": "A paper studies neuronal embeddings' stability with respect to changes in model architecture and initialization. They observe a strong dependency of the neuronal embeddings on the type of readout mechanism. What is probably a cause of the phenomenon that the structure that emerges for the Gaussian readout arises trivially by very aggressively forcing weights to be zero? ", "answer_format": "Your answer should be a phrase, which reveals the reason. ", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["47c079a7-c1e7-55a8-83a1-bf8ae3bf3c05"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "L1 penalty is heavy. ", "question": "A paper studies neuronal embeddings' stability with respect to changes in model architecture and initialization. They observe a strong dependency of the neuronal embeddings on the type of readout mechanism. What is probably a cause of the phenomenon that the structure that emerges for the Gaussian readout arises trivially by very aggressively forcing weights to be zero? 
"}}} {"uuid": "dfca728d-dfc6-57f6-b16c-4db878a7ef1c", "question": "In the paper that proposes ICTM, an algorithm to approximate the MAP solution to a variety of linear inverse problems using a flow prior, what does the x_t in formula (9) represent and how are x_1 and the prior term approximated computationally?", "answer_format": "Your answer should be a Python strings indicating the representation of x_t and the method of the approximation.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["9c7d295a-4a85-570b-b318-b5d7992f0b0c"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "x_t := x_t(x_0) denotes the intermediate state x_t generated from x_0. x_1 and the prior term can be approximated by an ODE solver: ODESolve(x_0, 0, t, v_\\theta), where x_0 is the initial point, and the second and third arguments represent the starting time and the ending time, respectively.", "question": "In the paper that proposes ICTM, an algorithm to approximate the MAP solution to a variety of linear inverse problems using a flow prior, what does the x_t in formula (9) represent and how are x_1 and the prior term approximated computationally?"}}} {"uuid": "e096905a-759d-5ca6-86c5-c390753f6add", "question": "In the paper that proposes QIM-compatibility as a unified framework that extends graphical models to hypergraphs, enabling new capabilities for representing functional dependencies and cyclic causality, what does H_\\mu (T_a \\mid S_a) in formula (2) represent?", "answer_format": "Your answer should be a Python strings of the representation of H_\\mu (T_a \\mid S_a), the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["094de5eb-5c26-5334-a8d1-2981bcdd31e4"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "H_\\mu (T_a \\mid 
S_a) is a conditional entropy that measures how far \\mu is from satisfying the functional dependency S_a \\rightarrow T_a.", "question": "In the paper that proposes QIM-compatibility as a unified framework that extends graphical models to hypergraphs, enabling new capabilities for representing functional dependencies and cyclic causality, what does H_\\mu (T_a \\mid S_a) in formula (2) represent?"}}} {"uuid": "e0b47070-1dd3-5bcf-a7e6-9afa1a715ae6", "question": "According to the MedCalc-Bench paper, which datasets for LLM involves all five categories: Medical, Knowledge, Qual. Reasoning,Comput., Non-MCQ? ", "answer_format": "Your answer should be a string, a name of dataset", "tags": ["single", "table", "image", "objective"], "anchor_pdf": ["8c02a8b4-5f5c-5318-9e79-33b90ef8af74"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "MEDCALC-BENCH", "lowercase": true, "ignore_blank": true}}} {"uuid": "e0eecb4d-ab89-5dd1-b2e7-bbd6320576ed", "question": "In the single-object editing results on OBJect unseen object subset, which model gets the highest LPIPS in translation task?", "answer_format": "Your answer should be a string, a name of model. ", "tags": ["single", "image", "objective"], "anchor_pdf": ["a2c1f2bd-a5d0-5767-93ba-bc0efc117714"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "3DIT", "lowercase": true, "ignore_blank": false}}} {"uuid": "e12c7cd6-4da2-5972-a489-a6b58eaaa37f", "question": "A recent paper introduces COFE, a novel benchmark derived from COGS, specifically designed to systematically study in-context compositional generalization in large language models. 
In the experiments investigating model performance with different levels of structural similarity, at what structural similarity level did code-davinci-002 achieve optimal performance on the PhraReco dataset?", "answer_format": "Your answer should be one of the following options: ['Without Structural Similarity', 'Rough Structural Similarity', 'Precise Structural Similarity'].", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["942b4572-fe23-5619-87cc-46a740905912"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Rough Structural Similarity"}}} {"uuid": "e2434b7a-cece-58fd-bf9b-12455fa94dca", "question": "A recent paper introduces a novel parameter-efficient fine-tuning method for pre-trained language models that selectively fine-tunes only the most informative and correlated attention heads. Specifically, the paper models head relationships through a graph that combines information richness (via SVD) and inter-head correlation, ranking them with the PageRank algorithm. 
In the experiments investigating the effect of the number of selected heads, on which dataset did the performance show the greatest absolute increase?", "answer_format": "Your answer should be one of: ['MRPC', 'MNLI', 'RTE', 'CoLA']", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9508c03b-c467-595d-953f-ad99717c1226"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "RTE"}}} {"uuid": "e24ac1d8-6d4d-5a01-b811-4d207cad6cac", "question": "Can you recommend me a paper published in ICLR 2024 that introduces a lightweight schema for enabling machine learning over electronic health record data?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d6f58d99-08de-5193-8324-04bbb91d370a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that introduces a lightweight schema for enabling machine learning over electronic health record data?", "reference_answer": "Medical Event Data Standard (MEDS): Facilitating Machine Learning for Health"}}} {"uuid": "e2c49966-ea04-5043-8b09-494b33ad7e13", "question": "A recent paper introduces a novel large-scale dataset composed of over 159 billion tokens extracted from publicly available business disclosures (e.g., SEC EDGAR filings). It is uniquely characterized by its domain specificity (business and finance), high factuality, low toxicity, and rich temporal metadata. 
"Please provide me with the email address of the corresponding author of this paper.", "answer_format": "Your answer should be a mail address.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["476c7d81-f585-5758-adfd-d5bc7ed146d1"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "bradford.levy@chicagobooth.edu"}}} {"uuid": "e2e2ce2e-db49-517f-b457-9e54882054f5", "question": "How is the forward diffusion process defined in the paper?", "answer_format": "Your answer should be a formula", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["37a7ab0b-f94d-52e6-9957-9a20f15d7355"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"formulas": "dx_t = f(t)\\,x_t\\,dt + g(t)\\,dw_t, \\quad x_0 \\sim q_0(x_0),", "question": "How is the forward diffusion process defined in the paper?"}}} {"uuid": "e2e9d734-5433-56b6-9aeb-d4ec4c99ae65", "question": "There is a paper that introduces a unified and effective sequence tagging framework for relational structure extraction tasks, such as event argument extraction, relation extraction, and task-oriented semantic parsing. By appending verbalized representations of conditions and relationships to the input text, a method termed priming, the framework leverages pre-trained language models to generate condition- and relation-aware contextual embeddings. The main experiments in this work were conducted on the MTOP dataset. 
Specifically, how many languages from MTOP were used in the study?", "answer_format": "Your answer should be a Python int", "tags": ["comprehensive", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["996547b1-99ba-5bef-ac72-1d28f21e9808"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "e4672532-ce79-54e9-a1b8-c9d32a32d5e3", "question": "Which paper did Xueyi Liu and Li Yi publish in the ICLR 2024 poster volume?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["0aafbae3-1fa3-5320-a2bf-73e923b1445a"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Which paper did Xueyi Liu and Li Yi publish in the ICLR 2024 poster volume?", "reference_answer": "GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion"}}} {"uuid": "e506e04e-5550-5773-9242-6051d65e3ed8", "question": "In the paper that proposes a dynamic agentic framework with task-graph decomposition, introduces domain-specific evaluation metrics, and provides a specialized dataset for analyzing LLM-based autonomous agents, what are the core metrics the author uses to evaluate the accuracy and completeness of tool identification?", "answer_format": "Your answer should be a Python list indicating the names of the metrics.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9c6bdca8-340f-5bad-859f-415628b1d83b"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": ["Precision", "Recall", "F1 Score"], "lowercase": true, "ignore_order": true}}} {"uuid": "e523227a-8276-5bd7-9bda-419df56f782d", "question": "What percentage of statement-level and category-level tasks are there in the MGToolBench 
dataset?", "answer_format": "Your answer should be an integer between 0 and 100, without a decimal point", "tags": ["single", "objective", "text"], "anchor_pdf": ["f44e161d-059f-5cc5-bc4e-96bba461514d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 50}}} {"uuid": "e5e80c20-0e6c-5e57-ad52-6ee1ca461e1f", "question": "In the paper that Seohong Park (the first author) published in the ICLR 2024 oral volume, which method performs second best in the quantitative comparison with unsupervised skill discovery methods?", "answer_format": "Your answer MUST be ONE single string of ONE word without explanation of the method's name in abbreviation.", "tags": ["comprehensive", "image", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["3962a504-6c3b-521f-967b-57114c6ce970"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "LSD", "lowercase": true}}} {"uuid": "e657a6d9-a59b-5c6a-ba56-a4d32b0085e9", "question": "Describe the attention pattern differences in Layer 1 before vs. after the abrupt transition. How does Layer 2's target-specific attention (bottom row) explain the network's post-transition IC accuracy?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["comprehensive", "image", "subjective"], "anchor_pdf": [], "reference_pdf": ["3a7844f8-6697-5648-8d00-bfe2ced21b8b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Before: Layer 1 shows uniform attention (no structure). 
After: Queries attend strongly to immediately preceding tokens (diagonal pattern).", "Post-transition, the target (query) attends sharply to the correct label's position (highlighted key), directly enabling accurate in-context prediction by copying the label from the demonstrative pair."], "question": "Describe the attention pattern differences in Layer 1 before vs. after the abrupt transition. How does Layer 2's target-specific attention (bottom row) explain the network's post-transition IC accuracy?"}}} {"uuid": "e657e45a-358b-5e63-8a99-877f436e7859", "question": "In the paper that releases Content Behavior Corpus, according to Figure 2, LCBM lags behind on only one perspective. In that perspective, which model has the best performance?", "answer_format": "Your answer should be a Python string, the model name as shown in the corresponding figure.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["4be3367a-c6e0-57a7-84fe-bf02be336555"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "GPT3.5", "lowercase": true, "ignore_blank": true}}} {"uuid": "e7a8cd80-c71d-5af1-bb2d-2852e904f665", "question": "Can you recommend me a paper published in ICLR 2024 that introduces the Dynamic Signal Distribution(DSD) classification task?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["d5bd5758-da89-5f8d-b7e3-6c0d5d1fb6d1"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that introduces the Dynamic Signal Distribution(DSD) classification task?", "reference_answer": "Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs"}}} {"uuid": 
"e853552b-4597-5a04-9b7c-7f61fbe5b799", "question": "Can you recommend me a paper published in ICLR 2024 that develops the Information-Theoretic Hierarchical Perception (ITHP) model, which utilizes the concept of the information bottleneck?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["0ae80288-8701-58c6-b28d-2d6b670d62a7"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 that develops the Information-Theoretic Hierarchical Perception (ITHP) model, which utilizes the concept of the information bottleneck?", "reference_answer": "Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning"}}} {"uuid": "e8581429-a888-5930-b254-adfb225709d2", "question": "Is there a paper published in ICLR 2024 which introduces an end-to-end interpretability framework designed to quantify context usage in language models' generations?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["21a289cf-c55d-5550-8d7b-b79915a69d68"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Is there a paper published in ICLR 2024 which introduces an end-to-end interpretability framework designed to quantify context usage in language models' generations?", "reference_answer": "Quantifying the Plausibility of Context Reliance in Neural Machine Translation"}}} {"uuid": "e86f282d-1a30-5f7a-8ae8-4aeaf896e254", "question": "What is the core advantage of \"Highway policy\"?", "answer_format": "Your answer should be plain text.", "tags": ["single", "text", "subjective"], "anchor_pdf": ["a45dd7dc-c980-56a2-9946-68ef4eefbfa6"], 
"reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "What is the core advantage of \"Highway policy\"?", "reference_answer": "The core advantage of \"Highway policy\" is that it can avoid learning every navigation trajectory alone, which is time-consuming and inefficient."}}} {"uuid": "e8c3c5e7-2853-525f-b8f0-9d7395977500", "question": "In NeurIPS 2024 Poster papers, a paper proposes a method called \"BECAUSE\". In this paper, what is the formula of Theorem 1 (Performance guarantee)?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["4bf87140-b2d2-5435-86b7-ed66c30d8bd8"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In NeurIPS 2024 Poster papers, a paper proposes a method called \"BECAUSE\". In this paper, what is the formula of Theorem 1 (Performance guarantee)?", "formulas": "V_1^*(\\widetilde{s}) - V_1^{\\pi}(\\widetilde{s}) \\lesssim \\min\\left\\{C_1 \\log\\left(\\frac{\\lVert M \\rVert_0}{\\xi}\\right)\\sqrt{|\\mathcal{S}|}, C_s \\sigma \\sqrt{\\lVert M \\rVert_0}\\right\\}\\sum_{h = 1}^{H}\\mathbb{E}_{\\pi^*}\\left[\\sqrt{\\frac{\\log(1 / \\delta)}{n(s_h, a_h)}} \\, \\big\\vert\\, s_1 = \\widetilde{s}\\right]"}}} {"uuid": "e992ec75-07e7-5023-9e97-fee5d898eeb7", "question": "What pruning metric is used by Wanda?", "answer_format": "Your answer should be the context that describes the pruning metric used by the paper.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["e5cae9b9-016a-5169-96c7-3ef7c8afc164"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In Wanda, the pruning metric is computed by the product of the linear weight's magnitude and its corresponding input 
feature norm. Specifically, the score for the linear weight $\\mathbf{W}_{ij}$ is defined by:\n\n\\begin{equation}\n\\mathbf{S}_{ij} = \\left|\\mathbf{W}_{ij}\\right|\\cdot\\Vert\\mathbf{X}_j\\Vert_2\n\\end{equation}\n\nwhere $\\left|\\cdot\\right|$ represents the absolute value operator, and $\\Vert\\mathbf{X}_j\\Vert_2$ evaluates the L2 norm of the $j$th feature aggregated across different tokens.", "question": "What pruning metric is used by Wanda?"}}} {"uuid": "e9bb0c10-8951-5d47-94af-cf65d938607b", "question": "In ICLR 2024 Poster papers, a paper proposes a framework named LaMo (Language Models for Motion Control). Tell me the affiliation of the first author.", "answer_format": "Your answer should be a Python string.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9b26088c-abff-5dfe-83a4-90d818553a6e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "IIIS, Tsinghua University", "lowercase": true, "ignore_blank": true}}} {"uuid": "ebfb9b1e-dbe5-5130-8818-fd6158ec1015", "question": "For the MiniARC task examples in Figure 1, what is listed as a Bad Rule? 
How does this Bad Rule differ from the Good Rule in terms of output when applied to instances?", "answer_format": "Your answer should be a sentence answering the two questions.", "tags": ["single", "image", "subjective"], "anchor_pdf": ["faa7fbba-044a-51e6-a104-efd4422880e6"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_scoring_points_with_llm", "eval_kwargs": {"scoring_points": ["Swap the colors of two objects", "The Good Rule, Drop all objects, correctly removes objects, while the Bad Rule produces incorrect outputs by swapping colors instead"], "question": "For the MiniARC task examples in Figure 1, what is listed as a Bad Rule? How does this Bad Rule differ from the Good Rule in terms of output when applied to instances?"}}} {"uuid": "eca8edf3-ca78-583e-87c3-31158780d7fa", "question": "A recent paper introduces the SENSORIUM 2023 Benchmark Competition, a large-scale standardized framework for evaluating predictive models of mouse primary visual cortex responses to dynamic visual stimuli. 
Please retrieve the paper and provide me with the link to the corresponding GitHub code repository for this work.", "answer_format": "Your answer should be a link only without any additional prefixes or suffixes.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["4758caad-c5c7-5f2b-80ff-d54658649d0e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/ecker-lab/sensorium_2023"}}} {"uuid": "ecea33d3-1013-5514-bfd6-a1c34401a85f", "question": "How many papers are published as an oral in ICLR 2024 Workshop on Large Language Model (LLM) Agents?", "answer_format": "Your answer should be a single Python integer.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["1f3fda46-b5ed-5287-baa5-714eed6bec66", "2e659b7b-d06e-5192-ac56-105194303407", "6e74a19e-a60c-5a56-9024-7263a910ea99", "9a836ac2-99e4-5f3e-9477-5693db9c20ee", "84c11d6c-85c5-5a6f-9e82-dd8054cfad30", "aefe12a6-4778-547e-959e-0cf975dd148b"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 6}}} {"uuid": "ecf271ed-7f01-500d-8085-71e59afecf07", "question": "A recent paper analyzes intermediate training checkpoints of OPT language models of varying sizes (ranging from 125M to 175B parameters) across token prediction, sequence-level generation, and downstream tasks. This work is a collaboration between a certain university and Meta AI. 
Please provide the name of this university.", "answer_format": "Your answer should be the name of the university", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["98bec33d-6e48-56b3-90ff-5b896cf01e24"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Princeton University", "Princeton"]}}} {"uuid": "edc3a84c-2383-52bf-a6a7-4325c7baaede", "question": "In this paper, which example is taken to explain their model's limitation of low model-human similarity?", "answer_format": "Your answer should be a string, a phrase which indicates the example and how it works.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["e01a80eb-95aa-5f37-86bc-6dbc8b8e880d"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": " Additionally, model performance in our evaluation setup may be affected by the domain gap between models' training data and the stimuli used in our benchmark; for example, TROG uses cartoon depictions of events, which are dissimilar to the more photorealistic training data of CLIP. Thus, our evaluation results likely represent a lower bound on model-human similarity. Children as young as two years of age are able to learn from and generalise to pictographic depictions of objects [57-59], however, suggesting that generalisation across representations is an early-acquired skill. 
", "question": "Which example is taken to explain their model's limitation of low model-human similarity?"}}} {"uuid": "f07343cd-1974-57a8-bfc9-3bfffe2f5fa7", "question": "According to the oral paper at ICLR 2024 that adopts Pseudo-Huber losses to replace LPIPS, what Frechet Inception Distance (FID) scores does it achieve on CIFAR-10 and ImageNet 64x64?", "answer_format": "Your answer should be a Python list of two float values rounded to 2 decimal places, representing the FID scores achieved on CIFAR-10 and ImageNet 64x64 respectively.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["23b1dcdc-86fe-5c5a-a385-db5058840501"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": {"gold": [2.51, 3.25], "ndigits": 2, "ignore_order": false}}} {"uuid": "f0f5eaf9-df43-58fb-8a87-3deca1ba6463", "question": "In the paper that proposes SOFO, on which website can I find the author's own flexible implementation of batched JVPs based on OCaml-Torch?", "answer_format": "Your answer should be a Python strings of the name of the website, the website URL starting with \"https://\", as given in the paper.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["9b7c4f83-d6c2-578b-93c0-aecf0b131a55"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "https://github.com/hennequin-lab/SOFO"}}} {"uuid": "f17087c9-23d6-54fd-8a5f-2bc1792cea6b", "question": "In the paper that proposes dSTAR, which method performs best on CIFAR10 under Empire and Little attack respectively?", "answer_format": "Your answer should be a Python list of two strings of the name of the method.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["9bc84ba7-ec54-5b1b-98f6-ab83e10141a6"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_structured_object_exact_match", "eval_kwargs": 
{"gold": ["CGE", "dSTAR"]}}} {"uuid": "f1faadd6-0e1c-5d3a-a43e-ed4ce0a2dc6a", "question": "In this paper, how many different tasks are illustrated in Figure 2?", "answer_format": "Your answer should be a Python integer.", "tags": ["single", "image", "objective"], "anchor_pdf": ["9494a7d1-69f6-588b-a5da-f83793b40f13"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 4}}} {"uuid": "f308c2ff-b5a4-546a-ac5c-e03404bfe0fb", "question": "What are the key innovations of DiffMatch?", "answer_format": "Your answer should be a string, a sentence.", "tags": ["comprehensive", "subjective", "text"], "anchor_pdf": [], "reference_pdf": ["ebe290da-10ea-515b-a3c8-985a575e8b53"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_candidate_reference_answer_with_llm", "eval_kwargs": {"candidate_reference_answers": ["We propose DiffMatch, a novel conditional diffusion-based framework designed to explicitly model both the data and prior terms for dense matching.", "Unlike existing discriminative learning-based methods that focus solely on maximizing the likelihood, DiffMatch aims to learn the posterior distribution of dense correspondence."], "question": "What are the key innovations of DiffMatch?"}}} {"uuid": "f33e9298-4f06-559a-a78a-d9ccecda69f4", "question": "A paper derives Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions. In the figure showing ablations on CPL's hyperparameters on Drawer Open from State, at which training step does the model's success rate increase sharply due to an event?", "answer_format": "Your answer should be an int giving the training step number. 
", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["03a2946e-16ff-5ab6-97fc-9b02bbf5b4a0"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 200000}}} {"uuid": "f49efa82-1ee1-5981-b683-da0dedf8f1e4", "question": "In the paper that uniquely demonstrates that all current private adaptation methods for closed LLMs fundamentally leak data, while open LLMs with local training provide superior privacy, performance and cost efficiency, in which dataset does PromptPATE have the highest Top1 Accuracy in Figure 2?", "answer_format": "Your answer should be a Python string indicating the name of the dataset.", "tags": ["comprehensive", "image", "objective"], "anchor_pdf": [], "reference_pdf": ["9b5164ea-7f40-5872-8758-4b1e31006128"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "SST2"}}} {"uuid": "f4b5990f-2522-5e0a-a383-57e615c01519", "question": "There is a paper that introduces an interactive formal verification environment that leverages large language models (LLMs) by translating code into Isabelle for theorem proving. It also proposes a large-scale dataset called FVELER, which includes 758 Isabelle theories, 29,125 lemmas, and 200,646 proof steps, extracted from seL4 verification. 
Could you please tell me how many proof steps are included in the test set of the FVELER dataset?", "answer_format": "Your answer should be a Python int.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["4bc700f1-f479-56ee-b476-3ee0255a9378"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_int_exact_match", "eval_kwargs": {"gold": 8678}}} {"uuid": "f5e0caae-457c-5b6c-bf82-3f251a68f6b9", "question": "In ICLR 2024 Poster papers, a paper attempts to solve the problem of how to enhance the arithmetic reasoning capabilities of large language models (LLMs) through zero-shot prompt optimization. What is the formula of the Cross-Entropy loss given the demonstration data collection process?", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["973d41d0-6812-5887-9d0b-364404bbafe6"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, a paper attempts to solve the problem of how to enhance the arithmetic reasoning capabilities of large language models (LLMs) through zero-shot prompt optimization. What is the formula of the Cross-Entropy loss given the demonstration data collection process?", "formulas": "\\mathcal{L}_{\\mathrm{CE}}(\\theta ; \\mathcal{D}_{\\mathrm{dem}}^{\\ell}) = -\\mathbb{E}_{i \\in [N], k \\sim [K]}\\left[ r^{(i, k)} \\log \\sigma \\left( \\Upsilon_{\\theta}^{(i, k)} \\right) + (1 - r^{(i, k)}) \\log \\left( 1 - \\sigma \\left( \\Upsilon_{\\theta}^{(i, k)} \\right) \\right) \\right]"}}} {"uuid": "f6be1126-599e-5913-bbd3-b9a2473d5ec1", "question": "A recent paper introduces the first million-scale multi-modal, multi-turn, open-domain dialogue dataset, containing over 1.08 million dialogues and 1.53 million associated images collected from real-world social media conversations. 
In the paper, the authors conduct a detailed statistical comparison between their released dataset and another previous multimodal dialogue dataset. Which dataset has a greater \"Average Turns per Dialogue\"?", "answer_format": "Your answer should be the name of the dataset.", "tags": ["comprehensive", "table", "objective"], "anchor_pdf": [], "reference_pdf": ["954a1c0e-b7f6-5871-b284-492da7703fc2"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "PhotoChat", "lowercase": true, "ignore_blank": true}}} {"uuid": "f7296f13-3f99-5cd5-a32a-c2906335396f", "question": "A recent paper introduces TAG (Table-to-Graph generation), a novel end-to-end framework for joint document-level entity and relation extraction. TAG unifies coreference resolution and relation extraction using a coarse-to-fine table filling strategy and dynamically constructs latent graphs that encode semantic, relational, and syntactic dependencies. This work is a collaboration between a certain university and TopGraph.AI. 
Please provide the name of this university.", "answer_format": "Your answer should be the name of the university.", "tags": ["metadata", "objective", "single"], "anchor_pdf": ["9777ccf1-7366-5478-9143-4b2c14b96045"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_element_included", "eval_kwargs": {"gold": ["Peking University", "PKU", "Peking"]}}} {"uuid": "f7961583-fc21-5c0f-9632-bf9b7d266189", "question": "For the BF-CNN architecture, how many times does the number of parameters grow when the image resolution increases from 40x40 to 80x80?", "answer_format": "Your answer should be a float, rounded to 1 decimal place", "tags": ["single", "table", "objective"], "anchor_pdf": ["e653b3d4-1289-55fd-b2dc-0f3f59bf1093"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 4.2, "ndigits": 1}}} {"uuid": "f7a7b8f7-fc8c-5ea5-af89-7710c9276c6f", "question": "Tell me what is the \"harmonious loss\" in regularization term.", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["single", "formula", "subjective"], "anchor_pdf": ["03fc50ac-00d4-5fe5-b371-690abd36b237"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "Tell me what is the \"harmonious loss\" in regularization term.", "formulas": "\\sum_{i\\in\\{o,r,d\\}}\frac{1}{\\sigma_i}\\mathcal{L}_i(\theta) + \\log(1 + \\sigma_i)"}}} {"uuid": "f8216c4f-3b92-54ed-bfc8-77d613736eda", "question": "In ICLR 2024 Poster papers, a paper attempts to conduct long-range dynamic modeling in an interactive environment. 
What is the affiliation of the corresponding author?", "answer_format": "Your answer should be a Python string.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["7611ea90-092c-5c7b-8e62-c628327a6316"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Mila, McGill University", "lowercase": true, "ignore_blank": true}}} {"uuid": "fa04d3b0-411d-5e38-91df-adc87e113ba7", "question": "In ICLR 2024 Poster papers, which paper proposes a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations?", "answer_format": "Your answer should be the title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "text", "objective"], "anchor_pdf": [], "reference_pdf": ["7d6e6f55-9c3b-5db1-aa78-d10f62fdca89"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "In ICLR 2024 Poster papers, which paper proposes a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations?", "reference_answer": "Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures"}}} {"uuid": "fa5f2809-0e1b-558d-8b5c-192c6d967f9a", "question": "Can you recommend me a paper published in ICLR 2024 that proposes a novel learning-based method called ElliDock, which predicts an elliptic paraboloid to represent the protein-protein docking interface?", "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.", "tags": ["retrieval", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["23ccb3f6-666a-5f14-b008-8f9e8fdecab8"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_paper_relevance_with_reference_answer", "eval_kwargs": {"question": "Can you recommend me a paper published in ICLR 2024 
that proposes a novel learning-based method called ElliDock, which predicts an elliptic paraboloid to represent the protein-protein docking interface?", "reference_answer": "Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction"}}} {"uuid": "fb6b7aa4-f2e6-545e-a213-787756b919b4", "question": "Previous NFNs often depend on permutation symmetries in neural networks' weights; a paper designs corresponding equivariant and invariant layers to incorporate scaling/sign-flipping symmetries. In the experiment, when the weights undergo more extensive scaling and permutation, does their model maintain stable performance?", "answer_format": "Your answer should be \"Yes\" or \"No\".", "tags": ["image", "objective", "single"], "anchor_pdf": ["0612cf8e-9f2c-5a82-bc12-bf17840b7b66"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Yes", "lowercase": true, "ignore_blank": false}}} {"uuid": "fc15db88-ee0e-506e-9762-6a447219b34d", "question": "There is a recent paper introducing SuperCon3D, the first dataset combining 3D crystal structures, including both ordered and disordered materials, with experimentally measured superconducting critical temperatures. 
Please inform me of the name of the corresponding author of this paper.", "answer_format": "Your answer should be a name of a person.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["47cf583c-f004-5e1c-98f6-56570719ae6f"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "Yutong Lu", "lowercase": true, "ignore_blank": true}}} {"uuid": "fc3738ff-6e9e-56c3-8c4a-746a44364699", "question": "What is the average BLEU score improvement achieved by kNN-TL over strong baselines across four low-resource translation tasks?", "answer_format": "Your answer should be a Python float value representing the average BLEU score improvement in points, rounded to 1 decimal place.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["090ffd44-7d35-5c66-bde6-676d6cffe30d"], "conference": ["acl2023"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 2.0, "ndigits": 1}}} {"uuid": "fc776028-9602-57f0-ac6e-8075ce900cae", "question": "In the paper \"BigVGAN: A Universal Neural Vocoder with Large-Scale Training,\" the authors utilized a well-known TTS dataset for training purposes. Could you please clarify how this dataset compares to another prominent dataset, LibriSpeech, in terms of the distribution of audio duration per speaker? 
Specifically, which dataset exhibits a more dispersed and balanced distribution?", "answer_format": "Your answer should be a string", "tags": ["multiple", "image", "objective"], "anchor_pdf": ["25044bec-0c08-504f-96fe-6953544f08f3"], "reference_pdf": ["232f3380-9b63-5bba-86d3-ff1ead6c62b4"], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "LibriTTS", "lowercase": true}}} {"uuid": "fcd6d3f2-09de-5d8b-bcb1-21ad45eb728c", "question": "A paper introduces a methodology called Induced Model Matching (IMM) that matches a very accurate (often small) predictive model to a full-featured (often larger) model. In their study, which kind of model is considered to be probably lacking in features, personalized or general-public?", "answer_format": "Your answer should be chosen between \"personalized\" and \"general-public\"", "tags": ["objective", "text", "single"], "anchor_pdf": ["8e28f974-beda-54a5-b06d-50b4d31a4019"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "general-public", "lowercase": true, "ignore_blank": false}}} {"uuid": "fcf4de02-61d8-5309-a588-d8e17f3ee7a1", "question": "In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). 
Tell me the formula of the projected backward gradient in this paper.", "answer_format": "Your answer should be the formula in LaTeX format.", "tags": ["comprehensive", "formula", "subjective"], "anchor_pdf": [], "reference_pdf": ["f20defa8-4fdb-5582-adc7-ebefe03370ff"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_complex_math_formula_with_llm", "eval_kwargs": {"question": "In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). Tell me the formula of the projected backward gradient in this paper.", "formulas": "\\nabla_{\\theta} V(s') - \\frac{\\nabla_{\\theta} V(s)^{\\top} \\nabla_{\\theta} V(s')}{\\lVert \\nabla_{\\theta} V(s) \\rVert^2} \\nabla_{\\theta} V(s)"}}} {"uuid": "fdc6051a-6c91-5724-8249-d4e503137e52", "question": "Faced with the problem that the exact mechanism of how teleportation improves convergence in optimizing non-convex objectives remains elusive, what has this paper done to improve?", "answer_format": "Your answer should be a sentence.", "tags": ["single", "subjective", "text"], "anchor_pdf": ["09842a53-f1d2-54dc-9292-a48b75f83e2c"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"reference_answer": "In this paper, we provide theoretical guarantees on the convergence rate. In particular, we show that stochastic gradient descent (SGD) with teleportation converges to a basin of stationary points, where every point reachable by teleportation is also stationary. 
We also provide conditions under which one teleportation guarantees optimality of the entire gradient flow trajectory.", "question": "Faced with the problem that the exact mechanism of how teleportation improves convergence in optimizing non-convex objectives remains elusive, what has this paper done to improve?"}}} {"uuid": "fec66973-fa80-57aa-a883-e54da1b140b3", "question": "A recent paper provides the first systematic study of benchmark data repositories in machine learning, introducing the concept of a \"benchmark repository\" as a distinct entity from general-purpose or domain-specific data repositories. Could you please provide the email address of the first author?", "answer_format": "Your answer should be an email address.", "tags": ["comprehensive", "metadata", "objective"], "anchor_pdf": [], "reference_pdf": ["4ae7d5f7-3f0b-5d0c-90f3-cfef71b95651"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "rlongjoh@uci.edu", "lowercase": true, "ignore_blank": true}}} {"uuid": "fee7cbc2-ec8a-5400-9c63-3322717c5ba0", "question": "Among the Model-based Reinforcement Learning papers in ICLR 2024, which one proposes the model called \"Skipper\"? 
Explain the concepts of spatio-abstraction and temporal-abstraction in the paper, respectively.", "answer_format": "Your answer should be plain text", "tags": ["comprehensive", "text", "subjective"], "anchor_pdf": [], "reference_pdf": ["66895ef6-9249-537e-8645-47e7ea1c3cfa"], "conference": ["neurips2024"], "evaluator": {"eval_func": "eval_reference_answer_with_llm", "eval_kwargs": {"question": "Explain the concepts of spatio-abstraction and temporal-abstraction in the paper, respectively.", "reference_answer": "Spatio-abstraction restricts decisions to relevant environmental factors, while temporal-abstraction introduces a local state perceptron."}}} {"uuid": "fef9b33e-2b5c-56a2-bc98-ed77f0ed37f0", "question": "What is the average improvement in pass rate achieved by LEGO-Prover over previous methods on the miniF2F dataset?", "answer_format": "Your answer should be a Python float value representing the percentage improvement in pass rate, between 0 and 100, rounded to 2 decimal places.", "tags": ["comprehensive", "objective", "text"], "anchor_pdf": [], "reference_pdf": ["6f300c27-36eb-51d0-a035-c9287ade3481"], "conference": ["iclr2024"], "evaluator": {"eval_func": "eval_float_exact_match", "eval_kwargs": {"gold": 6.75, "ndigits": 2}}} {"uuid": "ffb1bd4b-e55a-56c7-8d55-c7eed3f716c4", "question": "According to this paper, which model performs the second best over all test domains?", "answer_format": "Your answer should be a string, the name of a model", "tags": ["single", "table", "objective"], "anchor_pdf": ["09c4923f-4bdf-5284-b4d5-5f471bb8963e"], "reference_pdf": [], "conference": [], "evaluator": {"eval_func": "eval_string_exact_match", "eval_kwargs": {"gold": "DDN", "lowercase": true, "ignore_blank": false}}}