uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
215e6bbc-eec4-5912-849a-e9ec96850a60
What are the main differences between Mobile-Agent-v2 and Mobile-Agent?
Your answer should be in a well-formatted item list.
[ "Mobile-Agent-v2: Mobile Device Operation Assistant with Effective Navigation via Multi-Agent Collaboration", "Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception" ]
[]
[]
2166da5e-be09-5f2b-a8e9-7fed58ede51d
According to Table 2, which models perform the highest on each of the 8 tasks of GLUE?
Your answer should be a python list of the names of the models reaching the highest performance on MNLI, QQP, QNLI, SST-2, STS-B, MRPC, RTE, and CoLA, respectively. If two models get the same score, you can use "and" to connect their names, e.g. A and B.
[ "ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale" ]
[]
[]
21ba07ba-2e6d-5200-9764-f40cc4aa3a6d
In the comparison of APoLLo with SOTA methods on the Base-to-Novel Class Generalization task, what is the title of the source paper of the dataset where APoLLo outperforms MaPLe (the previous SOTA) the most?
Your answer should be a single string.
[ "APoLLo : Unified Adapter and Prompt Learning for Vision Language Models" ]
[]
[]
220ea46c-5777-52dd-a581-54513207a179
How many thousand conversations are there in the datasets used to train CONVAUG in total?
Your answer should be a python float with one decimal place.
[ "Generalizing Conversational Dense Retrieval via LLM-Cognition Data Augmentation" ]
[ "Open-Domain Question Answering Goes Conversational via Question Rewriting", "TopiOCQA: Open-domain Conversational Question Answering with Topic Switching" ]
[]
2214bdec-6cf4-5cce-a5fb-b531bb41e777
Which paper first proposed a shared adapter module across layers?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning" ]
[ "acl2023" ]
231728d1-f6b7-5cd5-862e-ee831b2c4ed4
In the paper that proposes the best-performing model on hMOF evaluated in the LLM4Mat-Bench paper, which baseline that outperforms CGCNN on both the validation and test sets is not evaluated in the LLM4Mat-Bench paper?
Your answer should be a string, the name of the baseline.
[ "LLM4Mat-Bench: Benchmarking Large Language Models for Materials Property Prediction" ]
[ "LLM-Prop: Predicting Physical And Electronic Properties Of Crystalline Solids From Their Text Descriptions" ]
[]
234f08cf-8a52-53cc-947e-e508f711e87a
Which model reaches the highest accuracy under zero-shot setting in CARES, considering the dimension shown in the bottom-middle of Figure 1? Additionally, in the paper that proposes the model, which dataset for pre-training is also released? What's the largest data source that CARES uses but this dataset doesn't?
Your answer should be a Python list of 3 elements, the first is the name of the model, the second and the third are the abbreviations of the datasets.
[ "CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models" ]
[ "Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data" ]
[]
2351ad69-2ee2-5348-a305-1b7bc5a8fb3a
Which paper first found that REINFORCE works better than actor critic algorithms like PPO for RL finetuning of pretrained chemistry language models (Transformers and RNNs)?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Searching for High-Value Molecules Using Reinforcement Learning and Transformers" ]
[ "iclr2024" ]
235fcdd4-ea08-51ed-8a01-c9637eecfcab
Which three VQA benchmarks does the paper use for evaluation? Among the training datasets, which has the largest number of images?
Your answer should be a list of four strings; the last element is the name of the largest training dataset.
[ "Visually-Situated Natural Language Understanding with Contrastive Reading Model and Frozen Large Language Models" ]
[]
[]
23ca6f0a-69de-55f5-9489-c0d7ddd50b18
Which paper was among the first to explore the task of targeted training data extraction?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "ETHICIST: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation" ]
[ "acl2023" ]
23cb6726-c7b6-56f0-86bc-4939eac49e1d
What is the innovation of the formula (6) in this paper?
Your answer should be a Python string describing the innovation of the formula.
[ "Connective Prediction for Implicit Discourse Relation Recognition via Knowledge Distillation" ]
[]
[]
247b6978-be01-50c8-92fb-e27122c244f0
Is there any paper that explores using only an encoder-only masked language model for open-ended long text generation (such as story generation)?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Open-ended Long Text Generation via Masked Language Modeling" ]
[ "acl2023" ]
2536a846-15c8-5b2a-bedf-8b878bff149a
Which institution is the corresponding author of the paper "DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering" affiliated with?
Your answer should be a string containing the exact full name of the institution without changing CAPITALIZATION.
[ "DVD: Dynamic Contrastive Decoding for Knowledge Amplification in Multi-Document Question Answering" ]
[]
[]
259a085d-f5b9-5b80-aa31-a9720bad7047
Which paper first proved that wide-enough transformer architectures trained with gradient methods on enough data would learn to solve relational reasoning tasks?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "When can transformers reason with abstract symbols?" ]
[ "iclr2024" ]
25c34c03-3d73-51df-bb4a-ba58f03bab41
On which benchmark does the author evaluate SnapKV against multiple baseline models? Additionally, what is the number of data for each type of task in the benchmark?
Your answer should be a Python list containing two elements: the first element should be a string representing the benchmark name, and the second element should be a list of integers indicating the number of data points for each type of task in the benchmark. For example: ["GLUE", [100, 200, 300]].
[ "SnapKV: LLM Knows What You are Looking for Before Generation" ]
[ "LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding" ]
[]
25c78c8f-a93c-547a-b06a-b46a60ecba87
Is there any paper that improves adversarial training by forming semantic-aware labels without extra pre-training time or data?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Annealing Self-Distillation Rectification Improves Adversarial Training" ]
[ "iclr2024" ]
25e00706-c80c-5169-a5e9-9256c5165a89
What is the method of prefix-tuning, mentioned as the PEFT module, in the method section of this paper?
Your answer should be a python string about the method of prefix-tuning.
[ "Learn it or Leave it: Module Composition and Pruning for Continual Learning" ]
[ "Prefix-Tuning: Optimizing Continuous Prompts for Generation" ]
[]
25fd4dd0-a865-541f-bcdd-246a56ba36ed
Both papers use Model Performance on EditEval to test their models. Which existing models' data do they use in common?
Your answer should be a python list; each element is a string referring to a model name.
[ "Towards an On-device Agent for Text Rewriting", "RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting" ]
[]
[]
26030580-cffa-5664-bd4d-4f9eab957b98
In experiments with the similar two-stage framework as BalSum, are there any other available datasets besides the ones used in this paper?
Your answer should be a python list of elements, each element is the experiment dataset name string, e.g., ["dataset1", "dataset2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "Balancing Lexical and Semantic Quality in Abstractive Summarization" ]
[ "SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization", "BRIO: Bringing Order to Abstractive Summarization", "SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization" ]
[]
26ec953d-2268-577d-a22e-7f8313b800d8
Which testing dataset in the paper has the largest size (determined by the count of instances)? I want to read its source paper; can you give me the title?
Your answer should be a list of two strings, the first element is the name of the testing dataset, and the second element is the title of the source paper.
[ "CoRec: An Easy Approach for Coordination Recognition" ]
[]
[]
2719728b-95f0-5418-a64d-6f6a4b9d8e71
In the two-phase pre-training of this paper, what is the phase after the regular pre-training? And in this phase, how is the sparse contextualized representation obtained?
Your answer should be a list of two strings, the first element is the name (two words) of the phase, and the second element is the formula in latex format providing useful signal during the second phase of pre-training.
[ "Better Together: Jointly Using Masked Latent Semantic Modeling and Masked Language Modeling for Sample Efficient Pre-training" ]
[]
[]
27413ff9-4f7d-5885-a5ea-79e29a534fa9
Which paper first found that multilingual models can infer cross-lingual supervision in MLM training by themselves?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "On-the-fly Cross-lingual Masking for Multilingual Pre-training" ]
[ "acl2023" ]
27873950-73ca-554c-be4b-88fc723841e7
How to calculate $l^{\prime}(c,c^{\prime})$ in the equation under Section 3.2?
Your answer should be a sentence describing how to calculate the equation, including the explanation of relevant terms.
[ "Predicting Text Preference Via Structured Comparative Reasoning" ]
[]
[]
27bd3238-0bb7-540a-8e4f-5acc74fe7b92
In the paper that proposed two existing remote sensing vision-language datasets listed in the VRSBench paper, which method reaches the highest score on area comparison tasks?
Your answer should be a single word, the name of the method.
[ "VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding" ]
[ "RSGPT: A Remote Sensing Vision Language Model and Benchmark", "RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data" ]
[]
27d44cad-3277-5e38-9d8a-87f953efe90f
Which datasets in the reading comprehension domain are used for instruction tuning dataset curation in both FLAN and INTERS?
Your answer should be a Python list of strings, the abbreviation of the datasets.
[ "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning", "Finetuned Language Models Are Zero-Shot Learners" ]
[]
[]
2819ea5c-0598-511e-a95f-ce3e567a1b10
Is there a paper that connects the basic elements of storytelling with biased or imbalanced media reporting?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing" ]
[ "acl2023" ]
282710e9-2b2e-5e43-82c1-58505f4ee11f
Which pre-trained model does the Index Generation Framework of this paper (titled "Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge") use as its backbone? What is the architecture of this pre-trained model compared with GPT and BERT?
Your answer should be a single python list, the first element is a string of the model name, the second element is a string about its special architecture.
[ "Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge" ]
[ "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension" ]
[]
2939967c-6e6d-5ae2-8ff0-d7863eac8ae0
Both being vision-based GUI models that accept high-resolution screenshots, what improvements does MobileFlow make compared to CogAgent?
Your answer should be in fluent English.
[ "MobileFlow: A Multimodal LLM For Mobile GUI Agent", "CogAgent: A Visual Language Model for GUI Agents" ]
[]
[]
299660d7-a57b-5e22-9a6d-95c7bf8923af
What is the difference between the methods Synatra and AgentTrek use to convert GUI usage tutorials into trajectories? Which 7B model performs better on WebArena? By how much success rate does the better one outperform the other?
Your answer should be a list of three elements. The first element should be a string of free-form natural English describing the difference between methods of two works. The second element should be a string from ["Synatra", "AgentTrek"]. The third element should be a float rounded to 2 decimal places in [0, 100] as the difference of the success rates of two models.
[ "Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale", "AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials" ]
[]
[]
2a0aa66e-7f7a-5870-b5a3-935855255b31
Is there any paper that combines causal inference and finetuning for language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference" ]
[ "acl2023" ]
2a25d73b-2f10-5623-8dc5-ff64901b0c82
Which paper first showed that task-specific knowledge embedded in parameters can be extracted from one LLM using seed samples and transferred to another via parameter-efficient fine-tuning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ]
[ "iclr2024" ]
2a448d7b-073e-5d05-b1ed-4368558ab1d5
Which paper first investigates the knowledge preferences of LLMs when there are conflicts between the context and the parametric memory?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ]
[ "iclr2024" ]
2a6abc65-d61b-5a48-b3a8-978d92c55720
What are the Overall Fmacro scores corresponding to the three baselines in the StanceEval 2024 task? How many baselines did the PICT team surpass?
Your answer should be a python list of four numbers. The first three are fractions (between 0 and 100, rounded to 2 decimal places, from largest to smallest), and the last one is an int.
[ "PICT at StanceEval2024: Stance Detection in Arabic using Ensemble of Large Language Models", "StanceEval 2024: The First Arabic Stance Detection Shared Task" ]
[]
[]
2abed84f-df45-53f5-8761-12df8c5f8185
What is the quantity of BLIP-2's Trainable Params? Which function in the VisualWebArena paper uses the BLIP-2-related model?
Your answer should be a python list of two strings. The first string is the Trainable Params; you should use 'M' for million and 'B' for billion. For example, you should answer "1M" instead of "1000000". The second is a function name.
[ "VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks", "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models" ]
[]
[]
2baffe53-b50a-51a0-b88d-bf0bc18e1b00
According to the course plan in this paper, what percentage of the total course time do students spend in lectures?
Your answer should be a Python float rounded to two decimal places, ranging from 0 to 1.
[ "Teaching Natural Language Processing in Law School" ]
[]
[]
2c25d8f9-d09e-547f-bdaf-6bb8a489e458
Summarize the data collection process of the dataset used in the evaluation section of the paper "ATTACKING LLM WATERMARKS BY EXPLOITING THEIR STRENGTHS."
Your answer should be a python string.
[ "Attacking LLM Watermarks by Exploiting Their Strengths" ]
[ "A Watermark for Large Language Models", "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ]
[]
2cd0cc5e-defb-51aa-b04d-1cfead682bda
For handling hallucinations with auxiliary models, what is the model they use, and what are the metrics or measures to evaluate semantic similarity of two sentences?
Your answer should be a Python list of two elements, the first element is the model name string, and the second element is a list of metric names, e.g., ["model_name", ["metric1", "metric2", ...]].
[ "Detecting and Mitigating Hallucinations in Machine Translation: Model Internal Workings Alone Do Well, Sentence Similarity Even Better" ]
[]
[]
2e491092-7531-5cee-972f-fc7afb092e9f
What is the difference between formula (1) and formula (2) in the paper?
Your answer should be a python string concisely describing the difference between the two formulas.
[ "Improving Multi-Speaker ASR With Overlap-Aware Encoding And Monotonic Attention" ]
[]
[]
2e8bd79d-01b0-5ee1-accf-eed43dc316da
Which paper in human motion generation can control the spatial location of any joints of the human with either dense or sparse 3D points?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "OmniControl: Control Any Joint at Any Time for Human Motion Generation" ]
[ "iclr2024" ]
2ee66dfa-7715-5103-8a58-1b372665df07
Is there any generalizable NeRF paper that disentangles texture and shape?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "TUVF: Learning Generalizable Texture UV Radiance Fields" ]
[ "iclr2024" ]
2f4dc6e0-c001-55c8-ba36-c72fc509e506
What are the conditions under which the zero generalization error can be achieved?
Your answer should be a Python string listing all the conditions in detail.
[ "Transformers as Multi-Task Feature Selectors: Generalization Analysis of In-Context Learning" ]
[]
[]
2f769184-b5d0-5b61-952d-3ac813a55275
What assumption does Deja Vu make to accelerate LLM inference? According to the source paper of the subsequent work PowerInfer, what is the key challenge of Deja Vu?
Your answer should be a python list of two strings
[ "Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time", "PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU" ]
[]
[]
2f7da671-2337-5c7b-9a25-35c1b996fe80
In Figure 1 of the paper "When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards", which model has the largest difference in ranking between Fixed Answer and Cloze Prompt? For the dataset that contains the original question, what's the estimated expert-level accuracy?
Your answer should be a Python list of two strings, the first is the name of the model, as proposed in the figure, the second is the estimated expert-level accuracy, rounded to 1 decimal place, like "12.3%".
[ "When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards" ]
[ "Measuring Massive Multitask Language Understanding", "Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge" ]
[]
2fc10cbc-1818-5cfd-962d-4c15b87f9865
Where can I find the datasets published in 2021 used in the experiments in the paper "Question-Instructed Visual Descriptions for Zero-Shot Video Question Answering"?
Your answer should be a python list of several strings, the website of that dataset as given in the paper that proposes it.
[ "Question-Instructed Visual Descriptions for Zero-Shot Video Answering" ]
[ "STAR: A Benchmark for Situated Reasoning in Real-World Videos", "NExT-QA:Next Phase of Question-Answering to Explaining Temporal Actions" ]
[]
2fee39d3-e6a7-50d1-918c-3f8a140a47bb
In Figure 1, the presence of what operation divides the discretization process of continuous speech into two categories?
Your answer should be a python string.
[ "Towards Universal Speech Discrete Tokens: A Case Study for ASR and TTS" ]
[]
[]
302c67ba-c324-5ae2-9757-0e05956f17cc
Which paper first explored in-context learning in a cross-lingual setup and made use of alignment to better its performance?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment" ]
[ "acl2023" ]
30335449-a618-5e66-8c7e-dc0eb81bfaae
Which neural theorem proving paper first attempted to prove theorems in a block-by-block manner?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "LEGO-Prover: Neural Theorem Proving with Growing Libraries" ]
[ "iclr2024" ]
31c0e826-57b0-5445-a16d-0e3d4adc46ab
Of the following three combinations, which reaches the highest pass@1 accuracy on HumanEval and what's the exact accuracy value: Codestral+MGDebugger, Reflexion+LDB(GPT-4), MetaGPT.
Your answer should be a Python list of 2 elements; the first is the combination and the second is the exact accuracy value, rounded to one decimal place. Note that you should use the same names as in the question.
[ "Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step by Step", "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework", "From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging" ]
[]
[]
3272942c-5db5-5122-b612-f09332a27a5a
How are the data in Table 5 in the paper "Overview of the 9th Social Media Mining for Health Applications" obtained, from the valid dataset or test dataset?
Your answer should be a python string, either "valid" or "test".
[ "Overview of the 9th Social Media Mining for Health Applications (#SMM4H) Shared Tasks at ACL 2024 – Large Language Models and Generalizability for Social Media NLP" ]
[ "SMM4H 2024: 5 Fold Cross Validation for Classification of tweets reporting children’s disorders" ]
[]
32b0c214-afd6-59b1-8e6e-690caf288104
In the overview figure of KG-FIT, which component is pointed to by a red arrow at the bottom-left? In the paper that proposes this component, what is the algorithm with the theoretically highest worst-case complexity? What is its advantage over the other two faster algorithms, considering the update scheme?
Your answer should be a Python string, including the answers to the three questions.
[ "KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge" ]
[ "Modern hierarchical, agglomerative clustering algorithms" ]
[]
330ec130-4467-529e-a5e3-83d9391863e7
What dataset does this paper (titled "Semi-Structured Object Sequence Encoders") use for Anomaly Detection? How many datasets does this dataset originally consist of?
Your answer should be a single python list, the first element is a string of the dataset name, the second element is an integer number.
[ "Semi-Structured Object Sequence Encoders" ]
[ "Loghub: A Large Collection of System Log Datasets for AI-driven Log Analytics" ]
[]
3337061c-d350-5522-9c68-f810e017a567
Can we reduce visual tokens in vision transformers right from the beginning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "SparseFormer: Sparse Visual Recognition via Limited Latent Tokens" ]
[ "iclr2024" ]
333e0fbf-b322-5998-939c-cada7786f47a
Which dataset supports narration generation and temporal localization tasks in Chinese movies?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Movie101: A New Movie Understanding Benchmark" ]
[ "acl2023" ]
33c4b17e-84de-57a7-b3b9-fa4c55bb3e60
How does "Analog Computing for AI Sometimes Needs Correction by Digital Computing: Why and When" estimate the confidence of analog computing?
Your answer should be in fluent English.
[ "Analog Computing for AI Sometimes Needs Correction by Digital Computing: Why and When" ]
[]
[]
33f77112-8775-5066-8bb6-e74f93379410
In the paper named "MBIAS: Mitigating Bias in Large Language Models While Retaining Context", which PEFT (Parameter-Efficient Fine-tuning) technique is used to finetune the model to develop MBIAS? In the paper where this technique is proposed, what are the innovations introduced to save memory?
Your answer should be a single python list containing two strings. The first string is the name (abbreviation) of the PEFT technique. The second string is the innovations introduced to save memory in the relevant paper.
[ "MBIAS: Mitigating Bias in Large Language Models While Retaining Context" ]
[ "QLoRA: Efficient Finetuning of Quantized LLMs" ]
[]
34031849-a464-5cf5-a3f4-c70b6dfb37e8
Among the papers that proposed PopQA, KBP and ASQA, which one evaluates the most language models? What question does it want to answer by evaluating so many models?
Your answer should be a Python list of two strings, the first string is the name of the dataset, that evaluates the most models in its paper, and the second string is the question that it wants to answer.
[ "When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories", "Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogues", "ASQA: Factoid Questions Meet Long-Form Answers" ]
[]
[]
343baca6-bb8b-55a6-8bb4-8aaa548dc66d
In the paper that proposes the second method to verify the LLMs' outputs introduced in the paper "I am a Strange Dataset: Metalinguistic Tests for Language Models", the method was mainly evaluated on which dataset?
Your answer should be a string, the name of the main dataset.
[ "I am a Strange Dataset: Metalinguistic Tests for Language Models" ]
[ "Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification", "Large Language Models are Better Reasoners with Self-Verification" ]
[]
349514c1-4e39-545c-b647-6c413a9a683e
In the paper that proposed a CASH algorithm for finetuning, which two objective functions are essential for learning the estimator and the predictor?
Your answer should be a Python list of 2 strings, each string is a formula for the two objective functions in the LaTeX format.
[]
[ "Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How" ]
[ "iclr2024" ]
34fe12fd-640c-506e-86a2-5ab70a15c11a
Is there any paper that leverages graph neural network by integrating label information for multi-label low-resource intent classification?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Dual Class Knowledge Propagation Network for Multi-label Few-shot Intent Detection" ]
[ "acl2023" ]
34fed469-2cc2-531c-8d93-4e318d5de7c0
Which datasets are used for Multi-Document QA in this paper?
Your answer should be a python list; each element of the list is a string of the name of a dataset.
[ "LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding" ]
[]
[]
354583f4-8367-5e41-b3a6-9b63d9e05e69
I want to download this paper from the internet. Can you give me a link?
Your answer should be a single string of the link.
[ "Attribute First, then Generate: Locally-attributable Grounded Text Generation" ]
[]
[]
35843fb0-6f14-51a5-a205-2acf0faa83a5
Which tool is used as the basic verification tool in the paper "Leveraging Large Language Models for Automated Proof Synthesis in Rust"? Can the tool call executable functions in proof mode?
Your answer should be a python list of two strings. The first string is the name of the tool, and the second string is "yes" or "no".
[ "Leveraging Large Language Models for Automated Proof Synthesis in Rust" ]
[ "Verus: Verifying Rust Programs using Linear Ghost Types (extended version)" ]
[]
359cb240-d14a-55b3-a0d9-2652c02ac278
In the dataset listed in the paper "A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions" that has the number of boxes closest to EHPT-XC, what are the two largest categories under daytime roadscene?
Your answer should be a Python list of the names of the two categories.
[ "A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions" ]
[ "Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection" ]
[]
35b4110d-486c-562f-b488-c8a8b417ef82
Which paper first applied the mixture-of-experts idea to large language models for domain adaptation?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models’ Memories" ]
[ "acl2023" ]
366510d9-f1e1-51a7-987d-cb6e47c79812
For the biggest dataset used in the paper titled "HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of Legal Documents", what are its rhetorical role labels?
Your answer should be a single string about the labels.
[ "HiCuLR: Hierarchical Curriculum Learning for Rhetorical Role Labeling of Legal Documents" ]
[ "Corpus for Automatic Structuring of Legal Documents" ]
[]
37877f34-e27f-5de2-a0ee-ffa5a543a374
Is there a paper that supports the use of automated coherence metrics in topic model evaluations?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Large-Scale Correlation Analysis of Automated Metrics for Topic Models" ]
[ "acl2023" ]
37d41534-3614-5483-904d-213f07860a88
Where can I find the dataset, from which the paper "Is Programming by Example solved by LLMs?" seeds LOGO problems?
Your answer should be a string, the website of that dataset as given in the paper that proposes it.
[ "Is Programming by Example solved by LLMs?" ]
[ "Leveraging Language to Learn Program Abstractions and Search Heuristics" ]
[]
37e98c25-68ba-54c4-9068-596ed64b546d
Is there an evaluation metric for natural language generation that predicts the factual consistency score through a mean-max aggregation method?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "AlignScore: Evaluating Factual Consistency with A Unified Alignment Function" ]
[ "acl2023" ]
383909ad-dc1d-5f60-ade6-46bea6e7c62b
In the paper "Mastering Task Arithmetic: $\tau$Jp as a Key Indicator for Weight Disentanglement", what are the names of the datasets used for task addition on vision tasks? Did the paper which proposed baseline "Linear FT" use the same datasets for task addition on vision tasks?
Your answer should be a Python dictionary, containing the names of datasets for the first question and a boolean value for the second question, e.g., {"datasets": ["dataset 1", "dataset 2", ...], "same_datasets": true}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "Mastering Task Arithmetic: $\\tau$Jp as a Key Indicator for Weight Disentanglement" ]
[ "Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models", "Editing models with task arithmetic", "TIES-Merging: Resolving Interference When Merging Models" ]
[]
38965ab2-4bc0-562a-98bf-805f7a9fc3ee
Which pre-trained model is specifically designed for low-resource dialogue summarization tasks?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization" ]
[ "acl2023" ]
38a692e8-8566-539a-aecd-b3e7df04dbcf
According to the methods proposed by this paper, how are the bias scores calculated when aggregating attributions for tokens, instances, and instructions, respectively?
Your answer should be a python list of three elements; each element is a formula string in latex format.
[ "Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination" ]
[]
[]
396c566e-ead8-50a4-b00a-d5ca4c432275
What paper first extends rotary positional encoding (RoPE) for camera-geometry encoding in multi-view transformers?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers" ]
[ "iclr2024" ]
398ee3a7-26c8-5967-8b5b-196b5d7641b3
According to Figure 1 in the "Shoulders of Giants: A Look at the Degree and Utility of Openness in NLP Research" paper, for TACL papers based on Spanish, where do their LMs mainly come from?
Your answer should be a phrase indicating the category DIRECTLY FROM THE PDF WITHOUT ANY MODIFICATION OR EXPLANATION.
[ "Shoulders of Giants: A Look at the Degree and Utility of Openness in NLP Research" ]
[ "The State and Fate of Linguistic Diversity and Inclusion in the NLP World" ]
[]
39fb54be-7c67-59c2-9179-8cd66ce19bc2
Considering the performance of ChatDev agent on DSEval-LeetCode benchmark, what is the most common cause of the errors?
Your answer should be a python list of two elements, the first element is the string of the main verdict, the second element is the string of the sub-verdict, e.g., ["verdict_name", "sub-verdict_name"].
[ "Benchmarking Data Science Agents" ]
[]
[]
3a357488-48e9-58d5-ab3f-fdb931ab1db1
What work proposes to combine video foundation models with vision language models to effective high dimensional robot planning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Video Language Planning" ]
[ "iclr2024" ]
3a86e8ba-3a3c-5cd4-a799-b76cfc9b643f
In the dataset used in the experiment of the paper "Soft-Label Integration for Robust Toxicity Classification" containing 3 classes, which two explainability-based metrics are applied?
Your answer should be a Python list of 2 strings, the names of the metrics.
[ "Soft-Label Integration for Robust Toxicity Classification" ]
[ "HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection" ]
[]
3ae37796-7491-5c6f-9d5c-c6f3e358a888
What work attempts to explore multi-hop reasoning by densifying commonsense knowledge graphs?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Dense-ATOMIC: Towards Densely-connected ATOMIC with High Knowledge Coverage and Massive Multi-hop Paths" ]
[ "acl2023" ]
3b007244-9a68-5972-b0a9-04691a2dd6d2
Which language model distillation paper first identified the capacity gap in distillation and used an MoE student model to counter the curse of the capacity gap?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Lifting the Curse of Capacity Gap in Distilling Language Models" ]
[ "acl2023" ]
3b42e1f2-e150-5216-aeab-44e976e28900
Which operation on task vectors is employed in the paper named "Towards Safer Large Language Models through Machine Unlearning"? What are the other operations that can be performed on task vectors, and their functions, according to the paper where this technique was proposed?
Your answer should be a single string about the operations used in the two papers.
[ "Towards Safer Large Language Models through Machine Unlearning" ]
[ "Editing Models with Task Arithmetic" ]
[]
3b83e010-75b3-5fa9-a5c1-7f786db8d957
Which paper proposes an alignment framework that steers language models to preferences of individual groups in a few-shot manner through augmenting the LLM with a transformer module?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Group Preference Optimization: Few-Shot Alignment of Large Language Models" ]
[ "iclr2024" ]
3bec1f83-7dfa-5650-81b5-f70d0aaf5232
Among AlpacaEval, MT-Bench and MMLU, which ones collect open-ended questions across different domains without providing concrete reference answers?
Your answer should be a python list of 1-3 strings, and the strings should be AlpacaEval, MT-Bench or MMLU.
[ "AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback", "AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback", "Measuring Massive Multitask Language Understanding" ]
[]
[]
3c0fcf08-0c65-5387-855f-d2fcb7a81379
What's the difference between the supported OS platforms of the two works, OSWorld and Spider2-V?
Your answer should be concise text string highlighting the differences.
[ "OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments", "Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?" ]
[]
[]
3c3f8ba1-26de-54c9-84d9-6d66dc664a8d
In the paper that SaulLM-141B paper follows the most in data cleaning, how much higher is the balanced accuracy of the final checkpoint of the proposed model than that of the initial checkpoint?
Your answer should be a float, rounding to 2 decimal places.
[ "SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain" ]
[ "SaulLM-7B: A pioneering Large Language Model for Law" ]
[]
3c712282-2534-5627-84f0-ce1e39212d20
Is there a paper that uses evolutionary algorithms and neural MT metrics to produce translations?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Breeding Machine Translations: Evolutionary approach to survive and thrive in the world of automated evaluation" ]
[ "acl2023" ]
3c770698-2830-5eea-9b03-3984091527a3
How many more LLMs are evaluated in the ConvBench paper than in the MINT paper?
Your answer should be an integer.
[ "ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Ablation Capability for Large Vision-Language Models", "MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback" ]
[]
[]
3caa2b4d-e1e4-532d-9976-125838093bb8
According to the paper "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets", after the tuning phase shown in the top right of the paradigm overview figure, on which dataset does MPT outperform the other models with an average F1 score of around 0.87? In that dataset, which entity class accounts for the largest proportion?
Your answer should be a Python list of 2 elements, the first is the abbreviation of the dataset, and the second is the entity class.
[ "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets" ]
[ "Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts" ]
[]
3cc4cb1e-ec2f-53ca-a69b-029e013b2d6a
In Theorem 4.7 of this paper, a basic learning algorithm is applied. According to the paper that proposed that algorithm, what are the objective to be maximized for the actor and the loss function to be minimized for the critic?
Your answer should be a Python list of 2 strings, the formulas in LaTeX format. Remember that order matters.
[ "Conservative Offline Policy Adaptation in Multi-Agent Games" ]
[ "The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games" ]
[]
3cc9e70a-bd6b-525c-af61-4b66f9ef8a77
Who is the corresponding author of this paper?
Your answer should be a python string about the name of the corresponding author.
[ "Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast" ]
[]
[]
3cdb56d6-cdfc-57d0-8b3d-736aee6fa4c7
How is $h_{i,l-1}^S$ in Equation (11) initialized?
Your answer should be a paragraph describing the initialization procedure as given in the paper.
[ "AoM: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis" ]
[]
[]
3d204779-e506-5fbd-8a25-5172d94a1b6c
Which paper did the anchor PDF reference for the method to address codebook collapse? In fact, which paper originally proposed this method?
Your answer should be a Python list where each item is the full name of a paper (a string).
[ "Neural Audio Codec for Latent Music Representations" ]
[ "High-Fidelity Audio Compression with Improved RVQGAN" ]
[]
3d3d6314-7069-5382-b942-830f22b0b94c
Which conference was the paper 'Fact-Checking Complex Claims with Program-Guided Reasoning' published in? Is it a long paper, a short paper or findings?
Your answer should be a Python list of two elements, the first element is the abbreviation of the conference name (including the year), e.g. EMNLP 2022, and the second element is the type of this paper, i.e. long paper, short paper or findings.
[ "Fact-Checking Complex Claims with Program-Guided Reasoning" ]
[]
[]
3dc8318e-3f56-5ba9-8542-f845aad5e8a8
What are some methods for solving class-incremental continual learning problems?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Rehearsal-free Continual Language Learning via Efficient Parameter Isolation" ]
[ "acl2023" ]
3e1ea082-261d-549f-88fe-2bfe6e9d7b4c
Which meta learning-based baseline is used in the paper named "Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing"? What's the full name of this baseline according to the paper where it's proposed?
Your answer should be a single python list containing two strings, the first element of the list is the abbreviation of the baseline, the second element of the list is the full name of this baseline, e.g. ["MAML", "Model-Agnostic Meta-Learning"].
[ "Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing" ]
[ "Fast Model Editing at Scale" ]
[]
3f69a7de-fe99-531a-8399-d4cbbb1b8da0
In the paper "Puzzle Solving using Reasoning of Large Language Models: A Survey", which methods mentioned in the paper could be used to help solve the puzzle in figure 1?
Your answer should be a python list containing names of several methods mentioned in the paper. Each element in the list should only contain the name of ONE method.
[ "Puzzle Solving using Reasoning of Large Language Models: A Survey" ]
[]
[]
3fa1c7fc-1e4b-57f1-8aea-efac33cefb54
How much higher is Octopus with resolution 336 than Kosmos-2 on the RefCOCOg test set?
Your answer should be a float between 0 and 100, rounding to 2 decimal places.
[ "Grounding Multimodal Large Language Models to the World", "Octopus: A Multi-modal LLM with Parallel Recognition and Sequential Understanding" ]
[]
[]
3fd4d805-ae6c-527d-b6d2-30a18fb0ab12
On the dataset proposed by this work, how much does the GPT-3.5-turbo model improve its GPT4score after using Graph-CoT?
Your answer should be a single float number ranging from 0 to 100, rounded to 2 decimal places, representing the subtraction result.
[ "Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs" ]
[]
[]
3fe5526c-b647-51b0-9abb-6edd43c20f79
Which paper is the first to model the helpfulness and harmlessness alignment of LLMs as a Constrained MDP problem?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Safe RLHF: Safe Reinforcement Learning from Human Feedback" ]
[ "iclr2024" ]
4098e496-9c0b-53b7-acf1-5cde707b8f91
Which paper proposed decomposing the logit update of each of the attention blocks' inputs to analyze how the context influences the prediction?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Explaining How Transformers Use Context to Build Predictions" ]
[ "acl2023" ]
40d036d6-78d8-5a2d-b692-9cb3fb24b3a6
In Table 2 of the paper "Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning", which method performs better between TAALM and Rho-1? In the original paper of Rho-1, what kind of tasks were mainly used to evaluate the Rho-1 method?
Your answer should be a single python list of two strings, the first element is the name of the method, the second element is the type of tasks
[ "Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning", "Not All Tokens Are What You Need for Pretraining" ]
[]
[]
412f7530-b194-5aca-8508-22318575e1b2
According to the expression and physical meaning of formula (2), if I want the weight to be 0.5 right at the middle of the training process, what is the value of parameter s?
Your answer should be a python float of the exact value of parameter s.
[ "HuCurl: Human-induced Curriculum Discovery" ]
[]
[]