uuid string | question string | answer_format string | anchor_pdf list | reference_pdf list | conference list |
|---|---|---|---|---|---|
e36edbbf-6630-5a7c-9706-9c4932d865cf | How many GPUs would be required to train a 70M model used in this paper if we had used the batch sizes from the GPT-3 suite? | Your answer should be a single integer. | [
"Emergent Inabilities? Inverse Scaling Over the Course of Pretraining"
] | [
"Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling"
] | [] |
e38767c8-5f41-52ea-91e9-8cc27220be14 | What challenges in mobile health does RoME handle? | Your answer should be in a well-formatted item list. | [
"RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions"
] | [] | [] |
e3a6b6b4-9899-53e5-b338-c77a80ee71a5 | How to achieve zero-shot lip reading? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment"
] | [
"acl2023"
] |
e4945000-28d9-5f63-b821-f3def54eb88c | Has there been any recent work or competitions focused on the development of methods to counteract clickbait through spoiling, such as revealing key information upfront? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"SemEval-2023 Task 5: Clickbait Spoiling"
] | [
"acl2023"
] |
e4b24c60-eafc-51ff-96ba-7fabb64fc15d | In the training of LangBridge, when adapting finetuned LMs, is the multilingual encoder trainable? | Your answer should be a Python bool of True or False. | [
"LangBridge: Multilingual Reasoning Without Multilingual Supervision"
] | [] | [] |
e5680763-aa2c-5686-9ed3-762d09067ad6 | What type of information is scored the highest when the loss equals 0, 0.01 and 0.06? What can be concluded from this result? | Your answer should be a python list of two strings, the first is the type of information, the second is the conclusion from the result. | [
"Lossy Compression and the Granularity of Causal Representation"
] | [] | [] |
e5b21555-1c9a-5275-be50-1a418f9a59d6 | What are the meanings of function $s$ and function $F$ in Equation (3)? | Your answer should be a sentence describing the meanings of function $s$ and function $F$ in Equation (3). | [
"Learning “O” Helps for Learning More: Handling the Unlabeled Entity Problem for Class-incremental NER"
] | [] | [] |
e652aa6f-5d78-56a5-8cad-549581d96c1f | For the model which suffered the biggest loss of response fidelity from adding an image to the query, how many hours did its training take? How many AI accelerators were used? | Your answer should be a python list with 2 elements. The elements should be integers, the first one giving the number of hours, and the second one giving the number of accelerators. | [
"Why do LLaVA Vision-Language Models Reply to Images in English?"
] | [
"LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model"
] | [] |
e6a69aa0-9915-5c51-a2e8-ba90140fe58e | Which paper first combines rewriting and expansion methods to reformulate a query for conversational search? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"ConvGQR: Generative Query Reformulation for Conversational Search"
] | [
"acl2023"
] |
e6a78d3c-1bfe-55ea-b070-712554f7cae9 | In the domain of prediction questions, what's the largest issue for LLAMA 2? | Your answer should be a word or phrase, indicating the largest issue. YOU MUST USE THE EXACT WORDS FROM PDF WITHOUT EXPLANATIONS. | [
"Leveraging Large Language Models for Learning Complex Legal Concepts through Storytelling"
] | [] | [] |
e6bd50e2-c698-520b-b4fe-a62d742c9d01 | When analysing the statistics of Chinese GEC datasets, which dataset used for fine-tuning has the largest number of sentences? | Your answer should be a single string of the dataset's name. | [
"MixEdit: Revisiting Data Augmentation and Beyond for Grammatical Error Correction"
] | [] | [] |
e6fcc866-61ca-5cba-985b-e64ceefdf84c | What are the 3 primary algorithmic strategies evaluated in the LLMC toolkit for quantizing large language models, and how do they differ in approach? | Your answer should be a sentence. | [
"LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit"
] | [
"LLM-FP4: 4-Bit Floating-Point Quantized Transformers"
] | [] |
e7356e42-a08e-5c65-abb0-6e00ea2a400a | What paper first proposes that simply reversing the output can significantly enhance the sample efficiency and the performance of the arithmetic capability of a decoder-only Transformer model? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Teaching Arithmetic to Small Transformers"
] | [
"iclr2024"
] |
e744adb1-ce7c-5d3e-9b2f-a8790dfb6cb7 | What are the baseline models used in the experiments of the two most recent papers and this paper? | Your answer should be a python list of elements, each element is the baseline model name string, e.g., ["model1", "model2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"AlignRE: An Encoding and Semantic Alignment Approach for Zero-Shot Relation Extraction"
] | [
"RE-Matching: A Fine-Grained Semantic Matching Method for Zero-Shot Relation Extraction",
"Revisiting Large Language Models as Zero-shot Relation Extractors"
] | [] |
e74fd71e-eaf4-59dc-a956-299cee5375e3 | In the paper that shares a similar world model loss function with the R2I paper, a competition concerning Minecraft is introduced. Where can I find the data of that competition? | Your answer should be a Python string starting with "https://", the URL of the dataset as given in the paper. You don't need to make sure the URL is still valid, just provide the URL as it is in the paper. | [
"Mastering Memory Tasks with World Models"
] | [
"Mastering Diverse Domains through World Models"
] | [] |
e8654b21-dff6-5447-90f4-afb0974ce94d | Which backdoor paper first used the CLIP to suppress benign features and enhance poisoning features to design triggers? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios"
] | [
"iclr2024"
] |
e87fa3e0-7d2f-5909-8e01-5c2d8de2e64c | Which dataset did not get improved performance after applying the proposed RECOST method to Alpaca-gpt4, compared to the Random baseline? Tell me this worst-performing dataset. And what's the remaining performance gap for our best-performing RECOST method compared to the reported human upper bound on the testset for that dataset? | Your answer should be a Python list of two elements, where the first element is the name of the dataset, and the second element is a float number rounded to 2 decimal places, calculated by subtracting the performance of the best-performing RECOST method from the reported human upper bound performance for that dataset. | [
"RECOST: External Knowledge Guided Data-efficient Instruction Tuning"
] | [
"HellaSwag: Can a Machine Really Finish Your Sentence?"
] | [] |
e89d9ee6-ed85-55bc-98fc-687823d1695f | What data augmentation strategies are used in the recently proposed dataset used in this paper? | Your answer should be a python string describing the detailed data augmentation strategies. | [
"How Vocabulary Sharing Facilitates Multilingualism in LLaMA?"
] | [
"BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages"
] | [] |
e8c52858-a386-5e87-a9ba-3a7ec32ae1e2 | What are the categories of label biases in in-context learning for text classification, and what are the definitions of these categories? | Your answer should be a Python list of text strings, with each element being one category that this paper defines, e.g., ["category 1: definition 1", "category 2: definition 2", ...]. | [
"Mitigating Label Biases for In-context Learning"
] | [] | [] |
e910df81-f6bc-5c88-8df2-b99ce1990a47 | Which work discusses an analysis of source and target contributions to output generation based on local interpretation when machine translation models experience hallucinations? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Local Interpretation of Transformer Based on Linear Decomposition"
] | [
"acl2023"
] |
e91cd875-a6a7-540d-80be-279f30dd2e4a | Which paper first found that when transformers are trained to in-context learn function classes, they might exhibit generalization followed by memorization, in certain settings? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"In-Context Learning through the Bayesian Prism"
] | [
"iclr2024"
] |
e9748de4-8290-5fbe-9814-9443d3f4075e | According to the paper that proposed Synapse, which other concurrent work performs the best? In the dataset that both papers applied, how many tasks are there? | Your answer should be a Python list of 2 elements, the first is a string, the name of method, and the second is an integer, the number of tasks. | [
"Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control"
] | [
"From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces",
"Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration"
] | [] |
ea28952f-c060-5e5a-b0bc-0269aaab57fe | The two anchor PDFs propose the AudioDec and LMCodec models, both of which improve on the same codec. What is the name of this codec? Also, do the RVQ parts in both works have the same single codebook size? | Your answer should be a Python list where the first item is the codec name (a string) and the second item is 'yes' or 'no' (a string). | [
"Audiodec: An Open-Source Streaming High-Fidelity Neural Audio Codec",
"LMCodec: A Low Bitrate Speech Codec with Causal Transformer Models"
] | [] | [] |
ea3a6252-6542-58e2-85e5-0c5274fac510 | Out of the four baselines used by the paper that proposes the MEQE method, which one is also utilized as a baseline by the other three papers? Additionally, what are the two other baselines that the three papers have in common? | Your answer should be a Python list with two elements. The first element should be the full name of the baseline shared by all four papers (the anchor PDF method and the three other baselines). The second element should be a Python list containing the full names of the two additional baselines shared by the three papers. Ensure the baseline names are the full names. | [
"Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints"
] | [
"Embedding Logical Queries on Knowledge Graphs",
"Query2Particles: Knowledge Graph Reasoning with Particle Embeddings",
"Neural Methods for Logical Reasoning Over Knowledge Graphs",
"Fuzzy Logic Based Logical Query Answering on Knowledge Graphs"
] | [] |
ea497ecc-8bd7-5954-a3ac-d212432c7feb | Which paper surveyed the datasets and tasks of asking clarification questions in conversational systems? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"A Survey on Asking Clarification Questions Datasets in Conversational Systems"
] | [
"acl2023"
] |
ea6c0002-4771-57a4-af92-55a590a92777 | Which model achieves superior performance with a large number of examples in the task of cancer type classification? | Your answer should be a single model name used in the corresponding figure. | [
"Enhancing vision-language models for medical imaging: bridging the 3D gap with innovative slice selection"
] | [] | [] |
ea965a94-3dc2-58e6-93e6-6da8d839e7e8 | What is a large event-coverage general-domain event argument extraction dataset? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles"
] | [
"acl2023"
] |
eb327f3c-93ea-5851-a544-9c05b109ac16 | In the experiments, which datasets did the authors use, and how many samples are there in the training set of each dataset? | Your answer should be a Python dictionary, where the keys are the names of datasets and the values are the number of samples in the respective training set, e.g. {"dataset1": 10, "dataset2": 20, ...}. | [
"MetaReflection: Learning Instructions for Language Agents using Past Reflections"
] | [] | [] |
eb3a5dd5-0008-5edf-b8e7-8cebd614f282 | In the survey of Large Language Models for NL2Code, what are the multi-lingual benchmarks to evaluate the NL2Code task, and how many instances do they contain per programming language? | Your answer should be a Python dictionary of entries, each dictionary key is a string, the benchmark name DIRECTLY FROM THE PDF WITHOUT CHANGING CAPITALIZATION, and each value is an integer of the corresponding instance number, e.g., {"benchmark1": 10, "benchmark2": 100, ...}. | [
"Large Language Models Meet NL2Code: A Survey"
] | [] | [] |
ebd4c18e-2148-5952-b257-5c899148ff26 | Can you tell me the core idea of the source paper of the methodology which inspired the creation of the HarmfulQ dataset? | Your answer should be a string about the core idea. | [
"On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning"
] | [
"Red Teaming Language Models with Language Models"
] | [] |
ebd5482c-b856-5427-876b-fcd24759d8d4 | MMD and xVal, a baseline in the anchor paper, both aim to solve the problem of embedding numbers in language models. Do the tasks focused on by the two papers belong to the same domain? If not, what types of tasks does xVal focus on? | Your answer should be brief text answering whether the tasks focused on by the two papers belong to the same domain, and if not, the domain of the task focused on by xVal. | [
"Interleaving Text and Number Embeddings to Solve Mathemathics Problems",
"xVal: A Continuous Numerical Tokenization for Scientific Language Models"
] | [] | [] |
ec05c8e8-b789-514f-802e-7c710b0bec67 | In the main results of ShortGPT's source paper, from which paper do the experimental results of several comparison methods come? Does the method proposed in this paper require post-training? | Your answer should be a python list of two elements. The first element is a python string, the paper's full name. The second element is a python bool. | [
"LaCo: Large Language Model Pruning via Layer Collapse",
"ShortGPT: Layers in Large Language Models are More Redundant Than You Expect"
] | [] | [] |
ec24ce45-3e3f-5164-9a9e-24381d9208a8 | In the previous work of "Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks" which uses the same game, how many training hours does it take on average for the agent trained with 128 CPU cores to beat the behavior-tree AI? | Your answer should be a float, rounded to 2 decimal places. | [
"Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks"
] | [
"Honor of Kings Arena: an Environment for Generalization in Competitive Reinforcement Learning"
] | [] |
ecef28ab-8648-51af-b77f-91d2ed598e89 | Which one of the prior works on state space models by the same team that published the Mamba paper proposes FlashConv for accelerating state space model training? | Your answer should be a python string, the full paper name of the prior work. | [
"Mamba: Linear-Time Sequence Modeling with Selective State Spaces",
"Hungry Hungry Hippos: Towards Language Modeling with State Space Models",
"Efficiently Modeling Long Sequences with Structured State Spaces",
"Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers"
] | [] | [] |
ed62604f-0aad-569a-9105-8381212aeb43 | What is the optimal number of layers to skip for LLaMA2-13B? | Your answer should be a Python integer number. e.g. 3 | [
"Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding"
] | [] | [] |
edf8b7f4-c386-5053-ac89-00bf27fc0d54 | What techniques exist for incorporating context in detecting emotions within dialogues by leveraging pre-trained language models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Context-Dependent Embedding Utterance Representations for Emotion Recognition in Conversations"
] | [
"acl2023"
] |
ee44be40-1780-5f28-9fb4-7c2e626bc4a0 | In the experiment section of the paper, what is the detailed procedure of back-translation for sentence reconstruction? | Your answer should be a python string. | [
"Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations"
] | [
"ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations"
] | [] |
ee6dba6d-938c-5690-83e0-729ccbf2882c | What are the selected questionnaires or scales on personality trait domain in the PsychoBench framework? | Your answer should be a python list of the abbreviations of the questionnaires names, e.g., ['NEO-FFI', 'IPIP'] | [
"On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs"
] | [] | [] |
eea13a76-cf7b-533c-beb5-9c1c49a7bc9d | Across the different corpora analysed in the paper, which one has the best word order monotonicity? What's its key feature? | Your answer should be a single python list of two strings, the first string is the name of the corpus, the second string is about the key feature. | [
"Simultaneous Interpretation Corpus Construction by Large Language Models in Distant Language Pair"
] | [] | [] |
eed1fb76-7c69-540b-9b6f-ad67c3ce4153 | According to Figure 3, what are the layers proposed by the paper (compared to existing methods) in the overall framework of AR quality predictions? | Your answer should be a python list, every element of the list is a string presented in the original figure of the paper. If there are multiple layers with the same name, they only need to be mentioned once. | [
"Predicting the Quality of Revisions in Argumentative Writing"
] | [] | [] |
ef0d91b0-8648-519f-9181-ca56496723b6 | What is the maximum gain obtained by adding more outputs in all the datasets tested? | Your answer should be a floating point number with three decimal places. | [
"Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness"
] | [] | [] |
ef85ae29-2dcf-5ccc-a0c6-a90689ba11b5 | What are the three types of instruction-following data, and which one has the largest number of samples? | Your answer should be a Python list of 2 elements. The first element is a Python list of 3 elements, containing the names of the three types of instruction-following data. The second element is a string, indicating the name of the type of instruction-following data that has the largest number of samples. e.g. [["type1", "type2", "type3"], "type"]. | [
"Visual Instruction Tuning"
] | [] | [] |
efa52128-c101-56c2-aaf0-320000e9bc55 | What is the improvement in precision (in percentage points, rounded to one decimal place) from unconstrained to constrained for LLaMA-33B in closed information extraction (4 shots)? And where can I get the testing dataset (the GitHub link)? | Your answer should be a single python list containing two strings, the first element of the list is the improvement in precision (in percentage points, rounded to one decimal place), the second element of the list is the GitHub link of the testing dataset. | [
"Grammar-Constrained Decoding for Structured NLP Tasks without Finetuning"
] | [
"Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction"
] | [] |
efb108ff-2ddd-5cb2-9408-afba55df144b | How many tasks are in WebArena? Can they be categorized into classes? How many classes can they be categorized into? What are the classes? How many tasks are in these classes, respectively? | Your answer should be a Python list. The first element is an integer indicating the total task number. The second one is a boolean indicating if the tasks can be categorized. If the second one is true, there should be more elements. The third element should be an integer indicating the class number. The fourth one should be a string list storing the class names. The fifth one should be an integer list storing the task numbers in each class. If any needed information cannot be specified through the paper, give an empty string as the answer for that item. | [
"WebArena: A Realistic Web Environment for Building Autonomous Agents"
] | [] | [] |
efd9be34-b6b2-5abc-b686-0962c27c350c | Provide an example of a paper which proposes a method to learn a dynamic (conditioned on the input) sequence tokenizer (segmenter) via standard gradient backpropagation. | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Efficient Transformers with Dynamic Token Pooling"
] | [
"acl2023"
] |
f0291d22-9853-5727-b582-349739d89cbe | What are the key advantages of coupling neural SDEs with neural CDEs for treatment effect estimation over existing baselines? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Bayesian Neural Controlled Differential Equations for Treatment Effect Estimation"
] | [
"iclr2024"
] |
f0331580-b619-5be3-957b-252a16b65159 | In formula (3), what do r_where and r_what represent? How to estimate whether the brain indicates better performance by WhereCNN or WhatCNN? | Your answer should be a python string describing what r_where and r_what represent and how to identify whether WhereCNN or WhatCNN performs better. | [
"A Dual-Stream Neural Network Explains the Functional Segregation of Dorsal and Ventral Visual Pathways in Human Brains"
] | [] | [] |
f06b7b4b-58fd-5450-9c97-00542144b8b2 | Is there a paper exploring the curse of multilinguality for similar languages? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages"
] | [
"acl2023"
] |
f0e4639b-09da-5581-87d4-2eb470c2dc0d | On which datasets were the best-performing Medical MLLMs (excluding the method proposed in this paper) trained and evaluated in the Medical VQA benchmark of the paper? | Your answer should be a python list of the dataset names, e.g. ["dataset1", "dataset2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale"
] | [
"LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day"
] | [] |
f1429616-9c0c-5f32-b39f-46a63a5f7d03 | What open-source dataset combined knowledge retrieval with constraint satisfaction queries? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval"
] | [
"iclr2024"
] |
f1d19f7e-17b7-582f-b9dd-465860422e9e | Is there a paper which proposes a general data selection method based on information theory? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"GIO: Gradient Information Optimization for Training Dataset Selection"
] | [
"iclr2024"
] |
f1f24bb5-7f16-5048-86c0-3723a919a07e | Which foundation model paper first proposed a time series model with financial time series and text data? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting"
] | [
"iclr2024"
] |
f21f555a-1254-59ba-8cbc-11791cdab6b0 | Are there any papers that use a world model for planning to ensure that decisions meet constraints? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"SafeDreamer: Safe Reinforcement Learning with World Models"
] | [
"iclr2024"
] |
f22e1e2f-bf4a-579e-a11f-f28e9226693a | In the multimodal (multilingual) abstractive summarization field, is there any paper that proposes a target-oriented vision modeling method to improve the quality of summaries? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization"
] | [
"acl2023"
] |
f343cc68-7d22-55cb-8e9c-fc6efa23d8b7 | When answering the question "Are ExNLP tasks associated with high-risk situations?", which paper does this paper ("On Evaluating Explanation Utility for Human-AI Decision Making in NLP") learn from? In the experiment of the source paper, how many types of knowledge-context are there? | Your answer should be a single python list, the first element is the string of the title of the paper, the second element is an integer number. | [
"On Evaluating Explanation Utility for Human-AI Decision Making in NLP"
] | [
"Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs"
] | [] |
f36225a8-3139-58df-843a-e89b838e1f37 | Which base model does this paper ("Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning") train as the retrieval model for the SA task? In its source paper, on how many STS tasks is it evaluated? | Your answer should be a python list of two elements, the first element is the model name string (one word), and the second element is an integer number. | [
"Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning"
] | [
"SimCSE: Simple Contrastive Learning of Sentence Embeddings"
] | [] |
f4154375-e94a-5623-a51d-0ae5cf5c4039 | Which paper measured how well the source-translation contribution by the translation model can be used to detect its own hallucinations? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Detecting and Mitigating Hallucinations in Machine Translation: Model Internal Workings Alone Do Well, Sentence Similarity Even Better"
] | [
"acl2023"
] |
f4f09e69-4c85-581a-9bab-0ced35cccdb7 | On which website can I find the information of the benchmark used to compare multilingual models? At which conference was the mT5 reference paper included in the benchmark published? | Your answer should be a python list of two strings, the name of the website and the name of the conference. | [
"Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark"
] | [] | [] |
f586cf96-1650-57f8-b7c9-2436c89216f8 | When we utilize decoder-only language models in understanding word meaning, do prompting styles affect performance? If so, which technique outperforms the others? If not, what is the worst one? | Your answer should be a Python list of two elements, the first element is "yes" or "no", and the second element is the prompting style name string; don't use abbreviations, e.g., ["yes", "prompting_style_name"]. | [
"Are Decoder-Only Language Models Better than Encoder-Only Language Models in Understanding Word Meaning?"
] | [] | [] |
f5abd5f8-b8b0-5fcf-af97-739ca262c1c0 | Which model gets the highest DR value in the Random Retrieval performance? | Your answer should be plain text. | [
"Model Analysis & Evaluation for Ambiguous Question Answering"
] | [] | [] |
f61c9dbc-5058-5621-8aeb-bd83c90b296e | How to find those question samples that the model considers to be ambiguous? | Your answer should be a single string. | [
"Aligning Language Models to Explicitly Handle Ambiguity"
] | [] | [] |
f640029c-539b-58b1-a742-05b8bb0edacb | What's the biggest reason for incorrect actions for each model? | Your answer should be a Python dictionary, e.g. {"model1": "answer1", "model2": "answer2", ...}. YOU MUST USE THE EXACT AND FULL TEXT FROM PDF WITHOUT CHANGING CAPITALIZATION. | [
"TimeArena: Shaping Efficient Multitasking Language Agents in a Time-Aware Simulation"
] | [] | [] |
f641587e-0065-54e9-92c4-d2b194535f80 | Which paper first studied POMDPs with enhanced feedback on observations? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight"
] | [
"iclr2024"
] |
f70c12d9-365c-5fb5-aa6f-7cb0620991fc | In the larger dataset (in terms of hours) that the EgoDistill paper uses, what's the second largest country of residence for camera wearers? | Your answer should be a string, the country as given in the paper. | [
"EgoDistill: Egocentric Head Motion Distillation for Efficient Video Understanding"
] | [
"Ego4D: Around the World in 3,000 Hours of Egocentric Video"
] | [] |
f7b228d5-cd68-555b-afee-f05a51a12165 | What are the differences in the composition of the Primary System for the unconstrained setting between the 2023 and 2024 QUESPA Submissions? | Your answer should be a python string. | [
"QUESPA Submission for the IWSLT 2024 Dialectal and Low-resource Speech Translation Task",
"QUESPA Submission for the IWSLT 2023 Dialect and Low-resource Speech Translation Tasks"
] | [] | [] |
f7b532c1-3fd7-5a2b-87b4-522592ff6dbe | When training with non-English image-text pairs, what is the loss function of the TriKD? | Your answer should be a sentence describing the loss function of the TriKD when training with non-English image-text pairs, including the terms involved in the loss function and their meanings, as given in the paper. | [
"mCLIP: Multilingual CLIP via Cross-lingual Transfer"
] | [] | [] |
f7c8f3fc-801a-5e50-9722-af38407a0b9d | What are the seven categories of tasks, which form the dataset used to conduct SFT on a Llama-2-7B model in section 2.1? | Your answer should be a Python list of seven elements, containing the names of the seven categories of tasks. e.g. ["task1", "task2", ... "task7"]. YOU MUST USE THE EXACT AND FULL NAMES OF THE TASKS AS MENTIONED IN THE PAPER. | [
"LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin"
] | [] | [] |
f9276cd0-6c6a-5da7-a169-385a7f04ebb0 | What are the main models mentioned in the anchor_pdf and what is the relationship between them? | Your answer should be a python string. | [
"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks",
"Multitask Prompted Training Enables Zero-Shot Task Generalization"
] | [] | [] |
f94b871f-f8e7-5bcc-b646-7eb9840a95c4 | Which sub-splits are included in the validation and test sets of the OC20 dataset used in the paper? | Your answer should be a python list of the full names of the sub-splits. YOU MUST USE THE EXACT FULL NAMES FROM THE PAPER. | [
"EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations"
] | [] | [] |
f97e22e2-d4b4-5141-80e6-7270a2a9b9cc | What are the sources of the pre-training data for the latest LLM used in the experiment section of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models"? | Your answer should be a python list of strings, e.g., ["source1", "source2"]. | [
"An Unforgeable Publicly Verifiable Watermark for Large Language Models"
] | [
"LLaMA: Open and Efficient Foundation Language Models"
] | [] |
f9866921-b6c0-55f2-874f-8bcb5d1e733b | Is there a paper that links exposure bias to distillation? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training"
] | [
"acl2023"
] |
f987547b-e418-5424-8f8b-f8855bdf63cc | Which two datasets does the dataset that Alchemist used to evaluate the image modality combine? | Your answer should be a Python list of two strings, the abbreviations of the datasets as given in the paper. | [
"The ALCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators"
] | [
"Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization"
] | [] |
fa7f68f5-fd2e-5b0b-b099-2c09331b7c25 | Which papers develop methods to make in-context learning more computationally efficient? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning"
] | [
"acl2023"
] |
fac6cccc-3d4a-5211-9180-1a825de52b16 | In the largest dataset concerning procedural graph extraction before PAGED, how many labeled sentences are there in total? | Your answer should be a single integer. | [
"PAGED: A Benchmark for Procedural Graphs Extraction from Documents"
] | [
"An Approach for Process Model Extraction By Multi-Grained Text Classification",
"Constructing Procedural Graphs with Multiple Dependency Relations: A New Dataset and Baseline",
"Knowing-how & Knowing-that: A New Task for Machine Comprehension of User Manuals"
] | [] |
fbca5330-8955-5359-94a5-d91961e2a6d9 | Which paper proposes to integrate black-box LLMs with a pool of smaller but specialized language models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models"
] | [
"iclr2024"
] |
fbd10d75-9ede-5480-8312-08b7435413df | In the GenRec paper, which dataset used in the experiment is not evaluated in Table 1? Additionally, what's the range of the clip length for that dataset? | Your answer should be a Python list of 2 elements, the first is a string, the name of the dataset, and the second is a Python list of 2 floats, the range of the clip length, rounded to 2 decimal places, in seconds, e.g. ["dataset", [1.01, 2.02]] | [
"GenRec: Unifying Video Generation and Recognition with Diffusion Models"
] | [
"UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild"
] | [] |
fbfeab50-b132-5933-95c4-cb5034790ab3 | In the respective main experiments of SLED and DoLa, do they use the same evaluation datasets? Do they use the same model family? | Your answer should be a list of two integers, where the first integer is 1 if the evaluation datasets are the same and 0 otherwise, and the second integer is 1 if the model family is the same and 0 otherwise. | [
"SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models",
"DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
] | [] | [] |
fc882b50-9385-5452-a96a-e0e93a9cbd2f | What is the first paper to address the problem of predicting knowledge graphs whose nodes, links and attributes change with time? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Holistic Prediction on a Time-Evolving Attributed Graph"
] | [
"acl2023"
] |
fd391ae5-5893-5d2a-b630-55b7f0cc1fb3 | According to Table 2 in the paper "ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought", how many times does the LLM's API need to be called to generate a SQL query in the DIN-SQL approach? What modules in the DIN-SQL approach lead to those API calls? | Your answer should be a Python list like [integer, string1, string2, ...]. The first element should be an integer, representing the number of times the LLM's API needs to be called. Each subsequent element should be a string, representing a module name in the DIN-SQL approach. Note that the module names do not need to include the word "module". | [
"ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought"
] | [
"DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction"
] | [] |
fd98767f-44ef-5721-8fff-3fb1d8eca4b3 | According to the MathCAMPS paper, among the models evaluated, which one performs the second best on MathCAMPS grade 8? In the paper that proposed the model, how is the output y computed, given an input token x? | Your answer should be a Python list of 2 elements, the first is the name of the model along with its parameter size as given in the paper, and the second is the formula in LaTeX format. | [
"MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula"
] | [
"Mixtral of Experts"
] | [] |
fdf2bff6-1dcf-5744-9c8b-da9d40aba09f | In the paper that proposed the model that is applied in TabMT for subtle pattern detection, how many instances are there in the dataset where the proposed model performs the best? | Your answer should be an integer. | [
"TabMT: Generating tabular data with masked transformers"
] | [
"CatBoost: unbiased boosting with categorical features"
] | [] |
fe64bb38-1b46-53c8-b82b-dfc2acf75c2e | I want to contact the first author of this paper. What's the email address? | Your answer should be a verbose text string representing the email address if there is only one first author. Otherwise, return a Python list of e-mail strings for each first and co-first author, e.g., ["xxx@xxx.com", "yyy@yyy.com", ...]. DO NOT INCLUDE ANY OTHER CONTEXT IN YOUR ANSWER. | [
"Benchmarking Retrieval-Augmented Generation for Medicine"
] | [] | [] |
fe85c6f9-9ec2-5535-8ae4-d9be3e92d66c | Why can we omit $p(\{y_l, l \in L\})$ in Equation (1)? | Your answer should be a sentence explaining why we can omit this term. | [
"Neural Unsupervised Reconstruction of Protolanguage Word Forms"
] | [] | [] |
fea63b48-8759-5b18-93d3-748ab9953c6c | The anchor PDF used two benchmark datasets for evaluation. Overall, on which dataset did the methods perform better? | Your answer should be a Python string with the name of the dataset. YOU MUST USE THE EXACT NAME FROM THE PAPER. | [
"M2SUM: Multi-Granularity Scale-Adaptive Video Summarizer towards Informative Context Representation Learning"
] | [] | [] |
fec0d844-3c0c-5c05-827f-cdbbf762d406 | In the entity detection experiments, what is the text type of the dataset used in the training stage with the highest F-Score_test? | Your answer should be a short word or phrase. | [
"WikiBio: a Semantic Resource for the Intersectional Analysis of Biographical Events"
] | [] | [] |
fed63d9e-52d3-5a7c-89f4-ac6f37f7e02b | Which paper explored training a GPT-2 for automatic diagnosis, emphasizing efficient data augmentation for symptom prediction and disease identification? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"CoAD: Automatic Diagnosis through Symptom and Disease Collaborative Generation"
] | [
"acl2023"
] |
fee3ed60-b2ce-55ce-b06d-0f4e9fe1639f | Is there a paper comparing knowledge distillation and human annotation in terms of cost efficiency? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models"
] | [
"acl2023"
] |
ff1576ee-d2c5-505a-964a-c2fcc94c75ff | Is there any paper that seamlessly integrates the multigrid structure in operator learning for solving partial differential equations (PDEs)? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"MgNO: Efficient Parameterization of Linear Operators via Multigrid"
] | [
"iclr2024"
] |
ff31ef9b-f07d-59a4-ac9a-4d694ff7bb13 | Which paper is the first to comprehensively review the progress of deep learning in mathematical reasoning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"A Survey of Deep Learning for Mathematical Reasoning"
] | [
"acl2023"
] |
ff40fb8f-a1d9-5598-91b2-2af15bbad92e | Among the specific models tested, whose performance is closest to RePe on MixATIS++? | Your answer should be the name of model DIRECTLY FROM THE PDF WITHOUT ANY EXPLANATION. | [
"Code-Switching Can be Better Aligners: Advancing Cross-Lingual SLU through Representation-Level and Prediction-Level Alignment"
] | [] | [] |
ff4ded6a-ee13-5a44-bd0e-a87a976df068 | In the anchor PDF, the authors introduce two key changes relative to the original work. In which part of Figure 1 are they located? Additionally, are there any changes in the blue boxed area in Figure 1? Please briefly describe these changes. | Your answer should be a Python list with two strings: the first is a location option (from top-left, top-right, bottom-left or bottom-right), and the second is your answer to the remaining question. | [
"PromptTTS++: Controlling Speaker Identity in Prompt-Based Text-To-Speech Using Natural Language Descriptions"
] | [
"Prompttts: Controllable Text-To-Speech With Text Descriptions"
] | [] |
ff50fbdd-645a-5d67-ae87-b02133b59ed6 | In the paper that proposes NoMAD-Attention, what do the authors choose as vector database? Additionally, in the paper that proposes that vector database, how to compute the number of distance computations and when it reaches a minimum? | Your answer should be a Python list of 2 strings, each string is a formula in LaTeX format, representing the equation of the number of distance computations and the condition when it reaches a minimum. | [
"NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention"
] | [
"The Faiss library"
] | [] |
00a3b0ea-60be-5b46-bf5c-3f4868b0e5f2 | How much improvement does "Dr.Strategy" achieve in "Maze-7*7" on average? | Your answer should be a Python float number rounded to 2 decimal places, e.g., 11.45 | [
"Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming"
] | [] | [] |
00edc733-91e5-591b-8c25-c4f3be128f38 | GeoBFN (Geometric Bayesian Flow Network) handles data from which three main modalities in 3D molecule generation? What are the modeling characteristics of the first of these modalities (atomic coordinates) in GeoBFN? | Your answer should be two sentences, each answering one question. | [] | [
"Unified Generative Modeling of 3D Molecules with Bayesian Flow Networks"
] | [
"iclr2024"
] |
01f2b413-2f4b-524d-8a84-97e6c648a9e0 | In the paper that proposes a margin-based satisficing imitation learning method that autonomously surpasses human demonstrators' aspiration levels rather than rigidly mimicking suboptimal behaviors, which method in Table 1 achieves the highest \gamma-satisficing value for the "cartpole" environment, and what is the value? | Your answer should be a Python list of two elements, the first is the name of the method and the second is a float number (rounded to 2 decimal places) of the value. | [
"Value-Aligned Imitation via focused Satisficing"
] | [
"neurips2024"
] |
0359c894-b118-54bf-a107-6b07a159be72 | A paper demonstrates the high accuracy of the posterior hallucination rate in estimating the actual probability of hallucination. In the visualization of individual PHR and THR predictions at different context lengths, under which context length do they show the least linearity? | Your answer should be an int. | [
"Estimating the Hallucination Rate of Generative AI"
] | [
"neurips2024"
] |
03728f61-50cb-55ec-bd3f-8ec76b178ccd | What's the base pre-trained model used in this paper? For this pre-trained model, what's the pre-train dataset? | Your answer should be a single Python list like this: ["model_name", ["dataset_name1","dataset_name2"]]. Note that for these names, the abbreviation is required. | [
"MEDICAL IMAGE UNDERSTANDING WITH PRETRAINED VISION LANGUAGE MODELS: A COMPREHENSIVE STUDY"
] | [
"Grounded Language-Image Pre-training"
] | [] |
0374337f-3cf1-5969-a27f-c89e6eeccfae | In ICLR 2024 Poster papers, a paper tries to ensemble the reward models to mitigate the over-optimization problem. What is the formula of the reward model? | Your answer should be the formula in LaTeX format. | [] | [
"The Effective Horizon Explains Deep RL Performance in Stochastic Environments"
] | [
"iclr2024"
] |
03bcda39-9e5b-54d4-8d05-59b96c06ff95 | In the paper that proposes a task-oriented imputation framework that evaluates and optimizes time series filling strategies based on their direct performance gains in downstream tasks without model retraining, what is the key assumption of formula (9) to compress the size of \frac{\partial f(X_i, \theta)}{\partial \theta}? | Your answer should be a Python string of the key assumption. | [
"Task-oriented Time Series Imputation Evaluation via Generalized Representers"
] | [
"neurips2024"
] |
03ca86ed-328d-5495-8410-2d3754f51ad6 | In the paper that proposes Variational BoN, where can I find the binary classifier that the author uses with two classes {POS, NEG} as the reward model? | Your answer should be a Python string, the website URL starting with "https://", as given in the paper. | [
"Variational Best-of-N Alignment"
] | [
"neurips2024"
] |