uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
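Each record below is a flattened row of this schema: a uuid, the question text, the answer-format instructions, then the anchor_pdf, reference_pdf, and conference lists. A minimal sketch, assuming nothing beyond the header above, of how one record could be represented in Python — the `BenchmarkRecord` name and the field comments are illustrative assumptions, not part of the dataset:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BenchmarkRecord:
    """One flattened record; field names mirror the schema header above."""
    uuid: str                  # stable identifier of the record
    question: str              # the benchmark question about one or more papers
    answer_format: str         # instructions constraining the shape of the answer
    anchor_pdf: List[str] = field(default_factory=list)     # paper(s) the question is anchored in
    reference_pdf: List[str] = field(default_factory=list)  # additional papers needed to answer
    conference: List[str] = field(default_factory=list)     # venue tags, e.g. ["acl2023"]

# Example: the first record of this section (long strings elided with "...").
record = BenchmarkRecord(
    uuid="416e608a-d105-5d0f-ad19-c046bc2e8a12",
    question='The paper "Language Models are Homer Simpson! ..." ...',
    answer_format="Your answer should be brief text giving the dataset's name ...",
    anchor_pdf=[
        "Language Models are Homer Simpson! Safety Re-Alignment of "
        "Fine-tuned Language Models through Task Arithmetic"
    ],
)
```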
416e608a-d105-5d0f-ad19-c046bc2e8a12
The paper "Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic" proposed a new safety evaluation benchmark. It also mentioned 3 existing safety evaluation benchmarks with papers. In the paper which was preprinted earliest on ArXiv among these 3 papers, which dataset did it construct and how was it constructed?
Your answer should be brief text giving the dataset's name in the paper and how it was constructed.
[ "Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic" ]
[ "On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning", "Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases", "Universal and Transferable Adversarial Attacks on Aligned Language Models" ]
[]
418eb1e7-9224-54f3-8146-ab0301d57974
Which encoder is used in the architecture of the paper titled "Self-Distilled Depth Refinement with Noisy Poisson Fusion"? In the source paper of this encoder, how many models are proposed with it as a series?
Your answer should be a single python list of two elements, the first is a string of the encoder name (abbreviation), the second is an integer. For example, ["encoderx", 3].
[ "Self-Distilled Depth Refinement with Noisy Poisson Fusion" ]
[ "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers" ]
[]
423fc0b3-7a09-5941-a3fb-220ae1d220ff
In terms of joint models for Hebrew parsing, compared to the new 'flipped pipeline' where decisions are made directly on the whole-token units by expert classifiers, what drawbacks does the model in the paper named "A truly joint neural architecture for segmentation and parsing" have?
Your answer should be a single python string about the drawbacks of the model.
[ "A Truly Joint Neural Architecture for Segmentation and Parsing" ]
[ "MRL Parsing Without Tears: The Case of Hebrew" ]
[]
424196cb-a6e0-5e75-8b3a-379e266bbcfb
In terms of multilingual lexical specialization for XLM-R, on which task(s) does Babel-FT get the highest score among the three tasks? Please give the full name of the task, not the abbreviation.
Your answer should be a single python list, every element of the list is a string of the full name of the task.
[ "Massively Multilingual Lexical Specialization of Multilingual Transformers" ]
[]
[]
425218df-3dac-5bc2-90d3-78005e9f6a9d
In the Bellman equation of formula (5), which part represents the cost function in state (s, x)?
Your answer should be a python string indicating the part that represents the cost function in state (s, x).
[ "Sample Efficient Reinforcement Learning in Mixed Systems through Augmented Samples and Its Applications to Queueing Networks" ]
[]
[]
432471a3-12dc-5238-99c0-67b83fe63ce9
In the LMRL-Gym domain, besides the task mentioned in the paper, what other interactive dialogue tasks are proposed here?
Your answer should be a python list, each element is the name of the task, e.g., ["task1", "task2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION AND INCLUDE THE FULL NAMES OF THE TASKS.
[ "Multi-turn Reinforcement Learning with Preference Human Feedback" ]
[ "LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models" ]
[]
43938b52-8259-5777-a088-4faa891a1ba6
How does this paper (titled "FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score") formulate the objective of OOD detection? If I want to contact the author(s) of the source paper of this formula, what email address can I refer to?
Your answer should be a single python list, the first element is a formula in LaTeX format, the second element is a string of the email address. Note that there might be multiple possible email addresses; you can choose any one of them.
[ "FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score" ]
[ "Falsehoods that ML researchers believe about OOD detection" ]
[]
43ab49eb-020d-5c64-a210-7f0931d39224
How can I get $h_i$ or $h_j$ in Equation (1)?
Your answer should be a paragraph describing the procedure to get $h_i$ or $h_j$ as given in the paper.
[ "DualGATs: Dual Graph Attention Networks for Emotion Recognition in Conversations" ]
[]
[]
43cfa1aa-ccbc-5008-8e4f-105b889ae74f
In the paper that proposes the component represented by a magnifier in the overview figure of DigiRL, after applying Reflexion for two rounds, using Oracle Evaluator, how much does the performance improve on WebArena?
Your answer should be a float between 0 and 1, rounding to 3 decimal places.
[ "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning" ]
[ "Autonomous Evaluation and Refinement of Digital Agents" ]
[]
44db1f84-1791-509e-91ae-79b2856153ee
What are the datasets and their metrics used in this paper according to the tables?
Your answer should be a Python dictionary, e.g., {"dataset1": "metric1", "dataset2": "metric2", ...}. YOU MUST USE THE EXACT TEXT AND FULL DATASET NAME FROM THE PAPER WITHOUT CHANGING CAPITALIZATION.
[ "On the Compositional Generalization in Versatile Open-domain Dialogue" ]
[]
[]
451c2edc-87aa-58c0-8b34-e3715ae66def
For the evaluation on the FLORES+ Karakalpak devtest set, which model has the best sacreBLEU score on the language pair en-kaa? How does it differ from other similar models?
Your answer should be a single python list of two strings, the first string is the model name, the second string is about its feature.
[ "Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak" ]
[]
[]
460ddccb-d53b-5ec2-9a20-6739ce65da29
How many labels are in the dataset used in the experiments section of the paper "DYST: TOWARDS DYNAMIC NEURAL SCENE REPRESENTATIONS ON REAL-WORLD VIDEOS"?
Your answer should be a single integer.
[ "DyST: Towards Dynamic Neural Scene Representations on Real-World Videos" ]
[ "The \"something something\" video database for learning and evaluating visual common sense" ]
[]
464cf29d-23db-51dc-b505-01e2bcc97151
In this paper, what are the maximum, average, and minimum lengths of utterances in the video datasets used for training and testing?
Your answer should be a python dictionary like {"maximum": 5.0, "average": 3.0, "minimum": 1.0}. THE NUMBERS SHOULD BE ROUNDED TO 1 DECIMAL PLACE.
[ "Unsupervised Learning of Facial Optical Flow via Occlusion-Aware Global-Local Matching" ]
[ "VoxCeleb: a large-scale speaker identification dataset" ]
[]
4684de7e-9fc6-5bfe-acd6-0b8d0fc97647
Which work shows that training large language models with purely mathematical and structural data can lead to faster emergence of causal reasoning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning Multi-Step Reasoning by Solving Arithmetic Tasks" ]
[ "acl2023" ]
4697c604-fb77-54a5-9a22-f1e8cf32351e
According to the paper that proposes JailbreakBench, what's the best defense against the PAIR attack? Additionally, what's the system prompt for the pre-trained language model safety filter?
Your answer should be a Python list of 2 elements, the first is a string, the name of the defense, and the second is a string, the system prompt for the pre-trained language model safety filter.
[ "JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models" ]
[ "Certifying LLM Safety against Adversarial Prompting", "Baseline Defenses for Adversarial Attacks Against Aligned Language Models", "SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks" ]
[]
46be0f62-9897-5042-b3c4-f67bdd0bed89
Is there an existing dataset of images with alt-text that also includes the text the image was originally posted with?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Alt-Text with Context: Improving Accessibility for Images on Twitter" ]
[ "iclr2024" ]
46d8670b-3464-5526-b9f9-d5d48dd5bfa1
Which paper first proposed to combine pretrained masked language models (BERT) and discrete diffusion language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models" ]
[ "acl2023" ]
46ea5bb8-9895-5439-8f45-8e1792b1ec8b
On the ALFWorld dataset experiments, how much did the success rate improve when the authors used their method compared to the original baseline model?
Your answer should be a floating-point number with one decimal place.
[ "Retrospex: Language Agent Meets Offline Reinforcement Learning Critic" ]
[]
[]
47389b0a-23c2-5a87-9ee6-cabde545a2ef
What're the three types of agents in IBSEN, and which agent involves the usage of a database?
Your answer should be a Python list of 2 elements. The first element is a Python list of 3 elements, containing the names of the three types of agents in IBSEN. The second element is a string, indicating the name of the agent that involves the usage of a database. e.g. [["agent1", "agent2", "agent3"], "agent"].
[ "IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation" ]
[]
[]
47492be6-a53e-5d04-8426-67e188aec7a9
What is the main innovation in the distillation methods employed by the models in the experimental section of the article "BEYOND UNIFORM SCALING: EXPLORING DEPTH HETEROGENEITY IN NEURAL ARCHITECTURES"?
Your answer should be a python string.
[ "Beyond Uniform Scaling: Exploring Depth Heterogeneity in Neural Architectures" ]
[ "Training data-efficient image transformers & distillation through attention" ]
[]
478ae300-f520-52dc-8d4b-385e268774af
Compared to vanilla ViT-Base, how much relative accuracy degradation does ECP-ViT result in on ImageNet? How much relative latency saving does ECP-ViT obtain?
Your answer should be a list of two floats rounded to 2 decimal places. Both floats should be in [0, 100] as percentages. The first float is the accuracy degradation and the second float is the latency reduction.
[ "Real-time Core-Periphery Guided ViT with Smart Data Layout Selection on Mobile Devices" ]
[]
[]
48471601-0130-52f7-8580-d15b057e1bbf
Is there any paper that constructs augmented training data based on the entity-to-entity correlations?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks" ]
[ "acl2023" ]
48dc6ebe-9dc2-5a7b-8f78-9030ab6ec5a1
What's the original form of the metric "w(S) = \frac{1}{2} \mathbb{E} \sup_{x, y \in S} \langle g, x - y \rangle"?
Your answer should be a Python string, the formula in LaTeX format.
[ "How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective" ]
[ "Estimation in high dimensions: a geometric perspective" ]
[]
48e7250f-a89b-524e-9e41-ef99314b5118
According to the paper that proposes the Subspace Identification Guarantee model, which method is used to estimate the label distribution from the target domain $p_{\hat{\mathbf{y}}}$? What's the formula of the loss they reweight using the distribution? In the paper that proposes the aforementioned method, as the dataset size increases, when does the proposed method first surpass the baseline method on MNIST with $\alpha=1.0$?
Your answer should be a Python list of 3 elements, the first is a string, the abbreviation of the method, the second is a string, the formula in LaTeX format, and the last is an integer, the approximate dataset size. Note that for the third sub-question, you don't need to figure out the exact value, just provide the approximate value that appears on the horizontal axis of the figure.
[ "Subspace Identification for Multi-Source Domain Adaptation" ]
[ "Detecting and Correcting for Label Shift with Black Box Predictors" ]
[]
4985e0e1-5249-5fba-81d1-3e8834b95d53
In the MetaMath paper, a bootstrapping method A is utilized in Example 3.4. In the paper that proposes method A, which baseline method surpasses the proposed method A on AQuA under some specific setting?
Your answer should be a string, the name of the baseline method.
[]
[ "MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models", "Forward-Backward Reasoning in Large Language Models for Mathematical Verification" ]
[ "iclr2024" ]
4a2b4ab6-a332-5d58-b58b-b2e8405edf77
What does formula (3) in this paper mean?
Your answer should be a Python string giving a detailed explanation of the formula.
[ "PAD-Net: An Efficient Framework for Dynamic Networks" ]
[]
[]
4a616ad5-43dd-5eb1-9787-8d5808f69bbe
In the experiments of the paper "SPZ: A Semantic Perturbation-based Data Augmentation Method with Zonal-Mixing for Alzheimer's Disease Detection" on the ADReSS challenge dataset, which single model method and ensemble method performed the best, excluding the method proposed in this paper?
Your answer should be a python list, with the first element being the best single model method and the second element being the best ensemble method. YOU MUST USE THE ABBREVIATIONS IN THE TABLE.
[ "SPZ: A Semantic Perturbation-based Data Augmentation Method with Zonal-Mixing for Alzheimer’s Disease Detection" ]
[]
[]
4a99d14d-69e7-55d7-b6fa-2878ad1a8e50
Which paper did a comprehensive survey of code large language models (code LLMs)?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Large Language Models Meet NL2Code: A Survey" ]
[ "acl2023" ]
4ab4e4dc-fc8a-5749-a2cf-171f0a0bc4e3
What are the meanings of $h_i, r_i, t_i$ in Equation (1)?
Your answer should be a precise sentence describing the meanings of $h_i, r_i, t_i$ as given in the paper.
[ "GreenKGC: A Lightweight Knowledge Graph Completion Method" ]
[]
[]
4b4877cd-4cdc-5d52-ac20-edfaa6dd7e32
Is there any paper that leverages knowledge distillation of language models for textual out-of-distribution detection or anomaly detection?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text" ]
[ "acl2023" ]
4c29808a-cdfa-5e4b-90ee-318b30636e7c
Which paper studies how current retrieval systems handle queries which contain multiple constraints?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations" ]
[ "acl2023" ]
4c3b2423-fcfb-5a2b-9eea-44d28196587b
In the experiment of the paper that proposed knowledge card, a model is used as the component denoted by a cube with a question mark in the overview figure. What're the training hyperparameters of this model according to the paper that proposed it?
Your answer should be a paragraph, the training hyperparameters of the model.
[]
[ "Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models", "Evaluating Large Language Models Trained on Code" ]
[ "iclr2024" ]
4c9a32c4-52df-56cf-bbcf-0a10a18d594f
According to Table 1, what is the ratio of the average number of tokens in the dataset with the highest average number of tokens to that of the dataset with the lowest?
Your answer should be a floating-point number with two decimal places.
[ "MDACE: MIMIC Documents Annotated with Code Evidence" ]
[]
[]
4ca40740-fa6c-50f2-b417-7a2ebfd0cc22
I would like to utilize the datasets introduced in the "DeakinNLP at BioLaySumm" paper. Could you tell me in which format the papers in the datasets were retrieved from each data source?
Your answer should be a string, the name of the format, e.g. JSON, HTML, MARKDOWN.
[ "DeakinNLP at BioLaySumm: Evaluating Fine-tuning Longformer and GPT-4 Prompting for Biomedical Lay Summarization" ]
[ "Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature" ]
[]
4da68474-8cf2-5077-aa1d-3b7ae74cc70e
Is there a paper that applies large language models to visual Raven's Progressive Matrices?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "In-Context Analogical Reasoning with Pre-Trained Language Models" ]
[ "acl2023" ]
4dbe770d-1734-5c99-b16d-af3242b8c0ee
Give me a paper proposing to circumvent a single-truth target in training generative language models.
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Soft Alignment Objectives for Robust Adaptation of Language Generation" ]
[ "acl2023" ]
4de3ce4f-4b12-59ea-9141-fe765b6e94b3
Are there sequential learning guarantees for configuring a linear system solver under a distributional assumption on the systems' target vectors?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances" ]
[ "iclr2024" ]
4e28e6b7-6761-5f7b-8e00-5b210498b0ba
In the paper that develops DisGAT and SpkGAT for ERC, what method was adopted to integrate these two modules?
Your answer should be a python string.
[]
[ "DualGATs: Dual Graph Attention Networks for Emotion Recognition in Conversations" ]
[ "acl2023" ]
4e44b819-6129-5b0f-a6cd-935f2eb6bb85
In the NeurIPS paper, mentioned in the RestoreAgent paper, that utilizes uniquely designed prompts to guide the network, what formula can the module in light yellow in Figure 3 be summarized as?
Your answer should be a string, the formula in LaTeX format.
[ "RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models" ]
[ "PromptIR: Prompting for All-in-One Image Restoration" ]
[]
4ea66ea8-4a7e-52a2-9c97-c900c9e55da6
How to faithfully and explicitly measure the helpfulness of human explanations to language models during finetuning and inference?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations" ]
[ "acl2023" ]
4f284188-a3d4-5a9a-a723-4f589f221cdd
Which paper systematically examined the input mismatch between training and sampling in diffusion models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Elucidating the Exposure Bias in Diffusion Models" ]
[ "iclr2024" ]
4f7ee674-3282-554e-a59c-2f911bd5d9e0
How many datasets are generated in the source paper of the dataset mainly used in the paper named "Steering Llama 2 via Contrastive Activation Addition"?
Your answer should be a single integer.
[ "Steering Llama 2 via Contrastive Activation Addition" ]
[ "Discovering Language Model Behaviors with Model-Written Evaluations" ]
[]
4fe2e01e-83c6-5121-80fc-7c937e0d73ae
What paper first uses decoupled workers in a distributed RL system?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "SRL: Scaling Distributed Reinforcement Learning to Over Ten Thousand Cores" ]
[ "iclr2024" ]
509aeeca-8801-5099-99e3-d896c499db43
According to the main body of the paper "Bayesian low-rank adaptation for large language models", which method is statistically significant on BoolQ under the ECE metric? In that method, how is the hyperparameter k selected?
Your answer should be a Python list of 2 strings, the first is the full name of the method, and the second is the formula in LaTeX format.
[ "Bayesian low-rank adaptation for large language models" ]
[ "Checkpoint Ensembles: Ensemble Methods from a Single Training Process" ]
[]
50f6e66a-aa2a-56ee-bb54-d2ade82a95ad
What success rate does MapGPT (with GPT-4V) achieve on the validation unseen set of the R2R dataset?
Your answer should be a single float, rounded to 1 decimal place.
[ "MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation" ]
[]
[]
510b8067-46e8-5783-a8a6-e752132a8a7a
In the proxy task of exploring lexical semantics in the paper "Fantastic Semantics and Where to Find Them: Investigating Which Layers of Generative LLMs Reflect Lexical Semantics", how many instances are there in total?
Your answer should be a python int.
[ "Fantastic Semantics and Where to Find Them: Investigating Which Layers of Generative LLMs Reflect Lexical Semantics" ]
[ "WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations" ]
[]
512fd6fd-1c6a-54a9-addf-51e622e99dfe
In terms of WER values with ASR across the six different methods tested in the paper, how much higher is DD2 compared to NV1?
Your answer should be a single float number, rounded to 3 decimal places.
[ "Investigating Phoneme Similarity with Artificially Accented Speech" ]
[]
[]
51690cda-38bb-51a8-8c7d-59e8a7f732eb
Which paper first conducted the positioned error test for the MAUVE metric?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "On the Blind Spots of Model-Based Evaluation Metrics for Text Generation" ]
[ "acl2023" ]
52bc8a41-b87d-56ad-b253-83e0fd05e698
Which work proposes an approach to improve candidate responses in the smart reply task by directly optimizing the metric to ensure that a response is selected by the user?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Model-Based Simulation for Optimising Smart Reply" ]
[ "acl2023" ]
536d890f-e245-5bc4-9265-820664e843d6
According to Figure 4, when generating Token 11, which tokens will the cache preserve, and what positions will be assigned to them?
Your answer should be a Python list containing two sublists. The first sublist should list the tokens that the cache preserves. The second sublist should contain the positions assigned to each corresponding token. Example format: [[0, 1, 2], [0, 1, 2]].
[ "Efficient Streaming Language Models with Attention Sinks" ]
[]
[]
539593f7-e17a-57d2-9030-b8e6690c27e3
Among the datasets used for experimentation in the TEXTEE paper and the "Small Models, Big Insights" paper, how many were proposed in 2022?
Your answer should be a python int.
[ "TextEE: Benchmark, Reevaluation, Reflections, and Future Challenges in Event Extraction", "Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs" ]
[]
[]
541382e2-2866-5c2c-9a53-36c96868b9f1
Which paper proposed the integration of human translators' considerations, such as length control, rhyme type control and suggestion, and enhancing compatibility between translation output and unseen melodies, into the design of machine translation models when translating lyrics?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Songs Across Borders: Singable and Controllable Neural Lyric Translation" ]
[ "acl2023" ]
541435a6-878e-540f-8b6a-86bf7920dc82
What is the main design of Auto-GUI framework from the aspects of the encoder, interaction, and decoder?
Your answer should be a Python list of text strings, with each element being one core stage of this framework; you'd better use the original text, e.g., ["stage 1", "stage 2", ...].
[ "You Only Look at Screens: Multimodal Chain-of-Action Agents" ]
[]
[]
542bbfde-bdf3-524e-9505-40c061a3590b
In the paper "Leveraging Behavioral Cloning for Representation Alignment in Cross-Domain Policy Transfer", the Portable Latent Policy (PLP) method is introduced. In Figure 5 depicting the alignment scores, how many PLP and its variant methods exhibit a P2P-medium accuracy that exceeds the P2P-obs-medium accuracy?
Your answer should be a python int.
[ "Leveraging Behavioral Cloning for Representation Alignment in Cross-Domain Policy Transfer" ]
[]
[]
546b830f-aca5-56e1-8ebc-cffda2bd6ad6
Except for WKM, which method performs the best on WebShop? Do the two methods' papers use the same evaluation datasets?
Your answer should be a Python list of two strings, the first is the abbreviation of the method, the second is either `true` or `false`.
[ "Agent Planning with World Knowledge Model" ]
[ "Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents" ]
[]
546c4f0c-bbb9-5de6-913b-c1685321039c
In the paper that develops KUCB-RL, which model-free algorithm applies the weakly communicating MDP assumption? What's the algorithm's main contribution in the online setting, regarding this assumption?
Your answer should be a Python list of two strings, the first string is the name of the algorithm, and the second string is its main contribution.
[ "Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm" ]
[ "Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes", "Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation" ]
[]
55534d55-ed7c-5240-96a5-cde7fd739de8
What're the related domains of this paper according to its related works section?
Your answer should be a Python list of strings where each string is a related domain. e.g. ["domain1", "domain2", ...]
[ "No clues good clues: out of context Lexical Relation Classification" ]
[]
[]
55c4fae8-375a-53eb-819d-e6d81a7c62ea
In terms of experimental results when unigrams are used for evaluation, which model gets the highest F1-score among Mbase, Mclf, Mcxt and Mclfcxt? What's its added module compared with Mbase according to figure 2?
Your answer should be a list of two strings, the first element is the name of the model(chosen from Mbase, Mclf, Mcxt and Mclfcxt), and the second element is the name of the added module presented in figure 2.
[ "Transformer-based Live Update Generation for Soccer Matches from Microblog Posts" ]
[]
[]
560cb9c7-cd1b-5574-947b-8a3da732d2e3
Which paper first aggregates statements to represent political actors and learns the mapping from languages to representation via pre-training?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language" ]
[ "acl2023" ]
561f7371-37d2-5940-9171-73472e33cded
Which core NLP problem is mentioned in the paper "Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification", and what is it usually structured as?
Your answer should be a python list of two elements. The first one is a core NLP problem name, for which you should use the abbreviation as given in the papers. The second one is a python string describing how the problem is usually structured.
[ "Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification" ]
[ "Language Models are Few-Shot Learners" ]
[]
56894e39-b1fc-5699-9c19-200e02c975f0
In the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?", token edit distance is introduced as an additional evaluation metric, what's the purpose of doing so?
Your answer should be a sentence explaining the purpose of introducing token edit distance as an additional evaluation metric in the context of evaluating emergent abilities in large language models.
[ "Are Emergent Abilities in Large Language Models just In-Context Learning?" ]
[ "Are Emergent Abilities of Large Language Models a Mirage?" ]
[]
56b65fba-a965-5e63-a409-4d834fe2926f
Is there a tool that can automatically segment speech and the corresponding text transcriptions, to obtain a finer grained alignment?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation" ]
[ "acl2023" ]
56d50d2a-9ade-583d-a3e9-277363538066
Which paper shows assessment of training instabilities at different levels for language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Measuring the Instability of Fine-Tuning" ]
[ "acl2023" ]
56f3ff15-de1c-5769-ac00-6218e9d9a0a6
Among the previous methods applied in FunCoder's experiments on open-source models, which one was proposed later? Additionally, which datasets were used in the evaluation of that method but not in FunCoder?
Your answer should be a Python list of 2 elements, the first is the name of the method, the second is a python list of strings, the name of the datasets.
[ "Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation" ]
[ "CodeT: Code Generation with Generated Tests", "Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions", "Reflexion: language agents with verbal reinforcement learning", "Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step by Step", "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework" ]
[]
57082e3d-1fcd-54f5-8985-370723fcc4c2
Which domain in the GRBench dataset has the most number of questions?
Your answer should be a single string representing the domain name.
[ "Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs" ]
[]
[]
5741c36f-3c84-51e1-80ac-960026dfba12
According to the results, in which interval of attack budget does the ASR of SCTS saturate? Note that the interval has been indicated directly in the text.
Your answer should be a python list of two floats rounded to 2 decimal places, e.g. [0.35, 0.40].
[ "Bypassing LLM Watermarks with Color-Aware Substitutions" ]
[]
[]
5752ba6d-a2f0-5672-90c8-919979dd4edf
Are there any papers that construct convolutional networks which are equivariant with respect to non-compact/non-abelian Lie groups?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Lie Group Decompositions for Equivariant Neural Networks" ]
[ "iclr2024" ]
58bdb8e3-b1ad-5e0a-9c11-ac8b7bf63570
On average, how many steps does a solution have in the training set of PRM800K, and how many solutions are provided per question?
Your answer should be a Python list containing two floating-point numbers, each rounded to two decimal places. The first number represents the steps per solution, and the second number represents the solutions per question. Example format: [1.23, 4.56].
[ "Let's Verify Step by Step" ]
[]
[]
59369806-b544-5f82-b668-1bd4b943e892
What research exists on incorporating knowledge graphs into language models to improve their complex question-answering capabilities?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Knowledge Graph-augmented Language Models for Complex Question Answering" ]
[ "acl2023" ]
5937898e-7f8e-5d38-9acc-c09060fbf7a5
In the paper that introduces TMID dataset, which pretrained model gets the best F1 score after fine-tuning on TMID dataset? And when pretraining this model, what percentage of tokens were used in the Self-Supervised Blank Infilling task?
Your answer should be a single python list containing two strings, the first element of the list is the pretrained model's name, and the second element of the list is the percentage as a string, for example, '56%'.
[ "TMID: A Comprehensive Real-world Dataset for Trademark Infringement Detection in E-Commerce" ]
[ "GLM-130B: An Open Bilingual Pre-trained Model" ]
[]
5960606a-4a02-5726-8048-bc2c52ad726b
Is there any paper that applies curriculum learning to various NLG tasks without depending on specific metrics?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "In-sample Curriculum Learning by Sequence Completion for Natural Language Generation" ]
[ "acl2023" ]
5be96361-68a3-5b32-8d15-668f306d33e7
According to the tables about Zero-shot performance, what is the range of accuracy of OPT-125M in different task tests (considering the data tested in all papers)?
Your answer should be a python list of two floats, rounding to one decimal place, e.g. [12.1, 23.4].
[ "Baby Llama: knowledge distillation from an ensemble of teachers trained on a small dataset with no performance penalty", "MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases" ]
[]
[]
5c49a736-420a-52b4-8188-ad80f375e948
From which subset of ExHVV was MemeMQACorpus chosen, and why? How many questions were selected? Also, provide the changes in each role-label for the chosen subset.
Your answer should be a Python list of 4 elements. The first element is the subset's name. The second element is the reason why the author chose this subset. The third element is an integer, denoting the number of questions chosen. The fourth element is a Python dict, containing role-labels and their corresponding changes, where each change is calculated as the new count minus the old count. e.g. ["answer1", "answer2", 3, {"role1": -2, "role2": 3, ...}]
[ "MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing", "What do you MEME? Generating Explanations for Visual Semantic Role Labelling in Memes" ]
[]
[]
5c4be3c8-e4ad-5154-83af-3e2ff896c210
How many words are in the train splits of the oldest dataset used by GeNE to evaluate language modeling?
Your answer should be a Python integer.
[ "Context Length Extension via Generalized Extrapolation Scale" ]
[ "LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding", "Random-Access Infinite Context Length for Transformers", "Compressive Transformers for Long-Range Sequence Modelling" ]
[]
5c967488-f464-5ab5-aa13-d1dc6be7e4e2
Is there any paper that proposes a set of criteria to comprehensively evaluate generated conversations?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation" ]
[ "acl2023" ]
5c98eeb0-ae95-530c-85b4-6e7dc3c12ecf
For Llama2 on DialogSum, which newly proposed module contributes more to the improvement of performance? How do the authors further explain why that module works?
Your answer should be a Python list of 2 elements, where the first element is the FULL NAME of the module that contributes more to the improvement of performance, and the second element is a string, explaining why that module works. e.g. ["module", "explanation"].
[ "Dialogue Summarization with Mixture of Experts based on Large Language Models" ]
[]
[]
5cae6dda-4a2d-52ec-b511-953f476c3600
Is there any paper that proposes a new multimodal video dataset on which image-level multimodal models do not work well?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Revealing Single Frame Bias for Video-and-Language Learning" ]
[ "acl2023" ]
5df175e3-e99c-5eb3-8a5f-8133701c474b
According to the paper that enhances traditional input transformations by mixing the input image with images from other categories to create admixed images, what're the related adversarial attacks?
Your answer should be a Python list of abbreviations of the attacks.
[ "OSLO: One-Shot Label-Only Membership Inference Attacks" ]
[ "Admix: Enhancing the Transferability of Adversarial Attacks" ]
[]
5e0aaa58-e5f7-54c6-9c1f-faccded1ba31
In the agent "MO-DDN" (Multi-object Demand-driven Navigation), what is the basic success rate for a specific demand instruction DI?
Your answer should be a formula string in latex format.
[]
[ "MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation" ]
[ "neurips2024" ]
5e2b676b-74ee-5a70-9852-7301615a7de0
What are the top 3 most main CONCEPTNET relations in the commonsense reasoning task dataset used in the paper "Can LLMs Learn From Mistakes? An Empirical Study on Reasoning Tasks"?
Your answer should be a python list. YOU MUST USE THE EXACT NAMES OF THE RELATIONS AS THEY APPEAR IN THE PAPER.
[ "Can LLMs Learn From Mistakes? An Empirical Study on Reasoning Tasks" ]
[ "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge" ]
[]
5f2de2c6-fbcd-561a-b7a4-be129671f5db
On which labeled dataset did the metric AMR not reduce to Acc? On that dataset, which model performs best on the metric AMR?
Your answer should be a Python list of three elements, the first element is the name of the labeled dataset, the second and third element is the model family and the variant of the model. e.g. ["answer1", "answer2", "answer3"].
[ "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs" ]
[]
[]
6098fb2b-f951-52c7-8cf9-e17aa7124833
What is the difference between Equation (1) and Equation (2)?
Your answer should be text describing the difference.
[ "CITADEL: Conditional Token Interaction via Dynamic Lexical Routing for Efficient and Effective Multi-Vector Retrieval" ]
[]
[]
60bf1f10-7280-54b7-b364-b7c322b69d51
Which paper utilized MMD flows with Riesz kernels to solve Bayesian inverse problems?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel" ]
[ "iclr2024" ]
60ef8ee5-97de-59a7-8c22-9fa45df8d152
Which subtask in NADI 2024 was the CUFE paper related to?
Your answer should be a python string. The string should be "Subtask 1", "Subtask 2" and so on.
[ "CUFE at NADI 2024 shared task: Fine-Tuning Llama-3 To Translate From Arabic Dialects To Modern Standard Arabic" ]
[ "NADI 2024: The Fifth Nuanced Arabic Dialect Identification Shared Task" ]
[]
610ee0be-3405-58a0-8b1b-247ee6018640
The article "DISTINGUISHED IN UNIFORM: SELF-ATTENTION VS. VIRTUAL NODES" employed certain datasets from the recent LRGB collection paper. Please specify which benchmarking datasets included in that paper were not utilized in this study.
Your answer should be a python list of strings, e.g., ["dataset1", "dataset2"].
[ "Distinguished In Uniform: Self-Attention Vs. Virtual Nodes" ]
[ "Long Range Graph Benchmark" ]
[]
61259bab-6b0f-5e36-8abf-8d3bf62994d1
In the EQA-MX dataset, which task takes the smallest proportion? In that task, which output appears the most?
Your answer should be a Python list of 2 strings, where the first element is the abbreviation of the task and the second element is the most frequent output in that task.
[]
[ "EQA-MX: Embodied Question Answering using Multimodal Expression" ]
[ "iclr2024" ]
612b0b3e-0f94-5dbe-85f8-708b9170b97e
In the paper that introduces Agent-as-a-Judge and proposes a dataset called DevAI, which evaluation method is the primary comparison target to Agent-as-a-Judge? And in this evaluation method's original paper, which LLM performs the best in Consistency?
Your answer **must** be a single python list containing two strings, the first element of the list is the method's name, and the second element is the name of the best model in Consistency.
[ "Agent-as-a-Judge: Evaluate Agents with Agents" ]
[ "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena" ]
[]
6167db60-98c3-5ad6-b051-9d79f76e065c
In the experiment section of this paper, the authors cite research showing that one evaluation method is better. What desired criteria are these conclusions based on?
Your answer should be a python list about the criteria, e.g. ["criterion1", "criterion2"]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "Text Augmentation Using Dataset Reconstruction for Low-Resource Classification" ]
[ "Measuring the Measuring Tools: An Automatic Evaluation of Semantic Metrics for Text Corpora" ]
[]
61e20be1-7b19-580f-a86e-2132be450bc3
Which paper examined the scalability of instruction-tuning with respect to Mixture of Expert models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models" ]
[ "iclr2024" ]
623fe210-95bc-536d-8300-f726ac45f7a1
What is the difference between DICE and TAILO with regard to unlabeled data, and what are the two steps of training a discriminator c(s)?
Your answer should be a Python list of 2 strings, the first describing the difference regarding unlabeled data, and the second describing the two steps of training the discriminator c(s).
[ "A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories" ]
[]
[]
63155a14-fe2e-5eb3-aacf-3a7e97368faf
Among the tested models, which model performs best on code problems?
Your answer should be a python string of the name of the model.
[ "CausalBench: A Comprehensive Benchmark for Evaluating Causal Reasoning Capabilities of Large Language Models" ]
[]
[]
639e6c07-c357-5660-91f0-feaaad8d7cd9
Which evaluation method is used in the paper against gold standards, despite having a low correlation with human judgments according to various studies?
Your answer must be ONE string of the evaluation method's name.
[ "GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews" ]
[]
[]
639f4526-9d30-5840-977f-900496bc4b09
How many datasets are evaluated in the work that the "BIG-Bench Mistake" paper follows to generate each step separately?
Your answer should be an integer.
[ "LLMs cannot find reasoning errors, but can correct them given the error location" ]
[ "ReAct: Synergizing Reasoning and Acting in Language Models" ]
[]
63dc113b-0220-5cb4-9bd3-17ba26c310b0
Which paper introduces a DRO (distributionally robust optimization)-like training objective for adversarial training without constructing adversarial samples?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization" ]
[ "acl2023" ]
646bc801-d082-54bf-b3f0-5437c6fad2be
On which downstream tasks did the authors experiment with their method, and by how much did it improve compared to the best existing methods?
Your answer should be a Python dictionary, where the keys represent the downstream tasks on which the authors conducted experiments, and each value is the numerical part of a percentage (between 0 and 100, rounded to 1 decimal place), indicating the improvement compared to the best existing method, e.g. {"task1": 1.9, "task2": 3.5, ...}.
[ "Crafting Personalized Agents through Retrieval-Augmented Generation on Editable Memory Graphs" ]
[]
[]
655b8b31-8ecd-5b34-9bc4-e9816b314c27
Could you recommend research that assesses how well language learning models, such as ChatGPT, perform in creating reading comprehension tasks for educational software?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Evaluating Reading Comprehension Exercises Generated by LLMs: A Showcase of ChatGPT in Education Applications" ]
[ "acl2023" ]
65a648a6-9bea-5467-84fd-2ca01dc52084
Which paper uses the latent diffusion model for the first time to solve offline reinforcement learning problems based on the sequential modeling framework?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Efficient Planning with Latent Diffusion" ]
[ "iclr2024" ]
65c25042-0b5e-5677-8c23-2374a72947c0
In the existing deblurring dataset compared in the GS-Blur paper that contains both real and synthetic data, what's the estimated average noise level?
Your answer should be a float, rounding to 4 decimal places.
[ "GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring" ]
[ "Realistic Blur Synthesis for Learning Image Deblurring" ]
[]
65c9fe88-46f0-579d-ba7b-ca58ee7c55f2
Which paper introduces the R-GCN technique into document-level joint entity and relation extraction?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction" ]
[ "acl2023" ]
65d3fbf5-5319-5490-9686-537924c3c4ee
I want to replicate the experiment in this paper. Please list all the datasets and baselines that I should prepare.
Your answer should be plain text.
[ "DetermLR: Augmenting LLM-based Logical Reasoning from Indeterminacy to Determinacy" ]
[]
[]