uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
a4ce44e5-a7d3-5043-981c-99695dd766e5
Which paper first proposed extracting the pair of target and stance from sentences?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "A New Direction in Stance Detection: Target-Stance Extraction in the Wild" ]
[ "acl2023" ]
a5387805-be6f-5199-b97a-d01ece58dd35
In the baseline construction of the experiment of TrojText, which models are employed? Where can I get the code or data of these models?
Your answer should be a single python list like [["model_name1","model_name2",...],["https://github.com/a/b","https://github.com/c/d",...]]. Note that you should choose the most concise way to express the name of the model.
[ "TrojText: Test-time Invisible Textual Trojan Insertion" ]
[ "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger", "TBT: Targeted Neural Network Attack with Bit Trojan" ]
[]
a629b08b-2d1e-5a0e-a39a-007749de7759
Is there a paper that uses the tree structure of math equations in autoregressive language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Tree-Based Representation and Generation of Natural and Mathematical Language" ]
[ "acl2023" ]
a64654b4-b4c5-5167-b58b-529530c1be68
Is there any paper about style transfer for stories?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing" ]
[ "acl2023" ]
a69b7df9-1ecd-579e-85ae-17de9f0dfbba
Are there any examples of using dense phrase retrieval systems in the automatic curation of entity dictionaries?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Automatic Creation of Named Entity Recognition Datasets by Querying Phrase Representations" ]
[ "acl2023" ]
a6e178bd-06c1-58c3-b6f1-e72b0cab6a03
Which paper first studied differential privacy for in-context learning to prevent prompt leakage attacks?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Privacy-Preserving In-Context Learning for Large Language Models" ]
[ "iclr2024" ]
a743e85d-4b2c-5671-a371-578b2f0af908
Is there any paper that explores and annotates the effectiveness of using testimonials or anecdotes in discussions?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "StoryARG: a corpus of narratives and personal experiences in argumentative texts" ]
[ "acl2023" ]
a762550d-b54a-5e5f-8fcf-d3be3058cd28
Could you recommend a contemporary research paper that has advanced natural language watermarking quality through algorithmic methods?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Robust Multi-bit Natural Language Watermarking through Invariant Features" ]
[ "acl2023" ]
a7707667-5eee-5708-b4d6-3a5368b417f8
What's the framework proposed in this paper("Global Learning with Triplet Relations in Abstractive Summarization")? Which framework is it similar to? In the source papers of these two frameworks, how many same datasets are they experimented on?
Your answer should be a python list like ["string1", "string2", integer]. The first element should be a string, representing the name (abbreviation) of the framework proposed in this paper. The second element should be a string, representing the name (abbreviation) of the similar framework. The third element should be an integer, representing the number of common datasets experimented on.
[ "Global Learning with Triplet Relations in Abstractive Summarization" ]
[ "GSum: A General Framework for Guided Neural Abstractive Summarization" ]
[]
a7d785c5-bcc8-5dae-aabd-bfe6a5f61174
What is the difference between using two open-source LLMs in the experiments of the paper "ARE LARGE LANGUAGE MODELS BAYESIAN? A MARTINGALE PERSPECTIVE ON IN-CONTEXT LEARNING"?
Your answer should be a Python string.
[ "Are Large Language Models Bayesian? A Martingale Perspective on In-Context Learning" ]
[ "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Mistral 7B" ]
[]
a803c5e9-ad61-5580-8819-66875022e19b
In order to improve Parrot's abilities, which method proposed in the paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" is used to train the model? Which method is compared with it under distribution shifts?
Your answer should be a Python list of two strings, answering the two questions respectively. You must use abbreviations as given in the papers.
[ "Direct Preference Optimization: Your Language Model is Secretly a Reward Model", "Parrot: Enhancing Multi-Turn Instruction Following for Large Language Models" ]
[]
[]
a903623b-95ca-5dc2-a8a8-3c9851d02779
Is there any paper that employs code LLMs to iteratively generate and refine code with execution results to improve the performance?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Self-Edit: Fault-Aware Code Editor for Code Generation" ]
[ "acl2023" ]
a911834c-2700-51fa-8a37-ab3649fdd8d7
In section 4, what research questions do the authors aim to answer?
Your answer should be plain text DIRECTLY FROM THE PDF.
[ "DocLens: Multi-aspect Fine-grained Medical Text Evaluation" ]
[]
[]
a9275d5c-ec5c-5fd2-b8de-0866aaee4fb8
Which paper combines the advantages of different frameworks for grammar error correction (GEC) and achieves good performance?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "TemplateGEC: Improving Grammatical Error Correction with Detection Template" ]
[ "acl2023" ]
a927fcee-f0a7-50cd-b5a7-ca8ab5c8eb23
What are the institutions of the first author and corresponding author of this paper?
Your answer should be a Python list of length 2 containing the institution names respectively, e.g., ["first_author_institute", "corresponding_author_institute"]. If there are multiple first authors or corresponding authors, please replace the corresponding institution name with a name list, e.g., [["first_author1_institute", "first_author2_institute", ...], ["corresponding_author1_institute", "corresponding_author2_institute", ...]].
[ "Revisiting Demonstration Selection Strategies in In-Context Learning" ]
[]
[]
a93430e0-ae3b-585d-8622-ed9b5844da8c
In Experiment Section of the paper, what is the overall framework of the baseline model achieving the second best BERTScore on the dataset LOCOMO?
Your answer should be a Python string describing the detailed overall framework of the baseline model.
[ "SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents" ]
[ "MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation", "Personalized Large Language Model Assistant with Evolving Conditional Memory" ]
[]
a96944de-c900-55f0-9b3b-cccb597a8b71
Which category of website takes up the most proportion in the dataset MC2?
Your answer should be a phrase indicating the category DIRECTLY FROM THE PDF WITHOUT ANY MODIFICATION OR EXPLANATION.
[ "MC2: Towards Transparent and Culturally-Aware NLP for Minority Languages in China" ]
[]
[]
a98997f3-d4ad-5739-91dd-4dc08fb626a6
According to the paper that proposes the pixel-level similarity metric that the LG-VQ paper employs, what's the metric's value, given a grayscale image of 8 bits with MSE=1?
Your answer should be a float, rounding to 2 decimal places.
[ "LG-VQ: Language-Guided Codebook Learning" ]
[ "A Formal Evaluation of PSNR as Quality Measurement Parameter for Image Segmentation Algorithms" ]
[]
a9aba86b-c608-5d87-9039-cc130911a03d
What molecular representation learning paper introduced a benchmark that focuses on learning over thermodynamically-accessible conformer ensembles across diverse molecular properties and chemical reactions?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "70940d87-34ac-5a74-b1c6-707c361fc017" ]
[ "iclr2024" ]
aa4ec90c-b162-5319-9a00-ca47101c24f8
Which paper showed that social relationships were helpful for identifying inappropriate messages?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Your spouse needs professional help: Determining the Contextual Appropriateness of Messages through Modeling Social Relationships" ]
[ "acl2023" ]
aa5598d0-e570-5f39-afd6-159fd696bdc6
What paper mitigates language model sampling errors due to the softmax bottleneck?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Closing the Curious Case of Neural Text Degeneration" ]
[ "iclr2024" ]
aa73c1ee-05cb-5570-bfac-bf86eb94caeb
What is the reason why Soft MoE cannot be applied to autoregressive models currently?
Your answer should be a Python string.
[ "From Sparse to Soft Mixtures of Experts" ]
[]
[]
aadd5754-71f6-5b8d-ad9e-d7d8e24975ce
Who is the first author of WebArena? How many papers of his/hers are cited in the paper? Which are they?
Your answer should be a Python list of 3 elements. The first one is a string serving as the first author name of WebArena. The second one is an integer indicating the number of self-referenced papers. The third one is a string list storing the titles of self-referenced papers.
[ "WebArena: A Realistic Web Environment for Building Autonomous Agents" ]
[]
[]
aaf4f321-f2c3-5cd3-9924-d22ed02ed43c
Calculate the increase in throughput when the batch size increases from 24 to 64 for H2O (20%) at a sequence length of 2048+2048 on A100 GPU.
Your answer should be a Python float number rounded to 1 decimal place. e.g. 20.3
[ "H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models" ]
[]
[]
ab369ade-a399-5f3a-82ba-13c02f4a91a7
Are the code and data of this paper publicly available?
Your answer should be a simple "yes" or "no" WITHOUT PUNCTUATION OR EXPLANATION.
[ "COKE: A Cognitive Knowledge Graph for Machine Theory of Mind" ]
[]
[]
ab9138b7-f6f2-5fd0-9430-1d0664ceb5c3
Which paper first studied the efficiency robustness of multi-exit language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Dynamic Transformers Provide a False Sense of Efficiency" ]
[ "acl2023" ]
ab9cd1cf-213f-5551-b4fb-104a3ba51266
Which paper shows that in instruction tuning, the instructions can be compressed to small supporting sets of words that provide useful information?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learning" ]
[ "acl2023" ]
ac041b6a-467d-53ce-8419-f283f3e0d7aa
Is there any paper that reveals annotation problems in cross-lingual summarization caused by decomposing the task into translation and summarization?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New Benchmark with Improved Annotation" ]
[ "acl2023" ]
ac2f076d-19cf-5703-b728-dc3077dd410e
The experiment section of this paper introduces a new metric, S^2MATCH, what is the main difference between it and the metric used in Appendix D? Answer with one formula.
You only need to provide one definition formula of S2MATCH as a Python string. You don't need to explain the formula or variables.
[ "Incorporating Graph Information in Transformer-based AMR Parsing" ]
[ "AMR Similarity Metrics from Principles" ]
[]
ac4cbd2d-98ea-5717-acc3-00a4084623ab
What baselines (excluding MLP) are used in the paper proposing the model zoo in the paper?
Your answer should be a python list about the exact names of the baselines, e.g., ['baseline1', 'baseline2', ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "Set-based Neural Network Encoding Without Weight Tying" ]
[ "Equivariant Architectures for Learning in Deep Weight Spaces" ]
[]
ad341e2b-cb59-5695-b41a-9912be57ea77
What is the shape of $W$ in Equation (3)? And what about $W_l$ and $W_v$ in Equation (5)?
Your answer should be a sentence describing the shapes of $W$, $W_l$ and $W_v$ in detail.
[ "You Only Look at Screens: Multimodal Chain-of-Action Agents" ]
[]
[]
ad6b9fa5-cac1-531f-8b8c-c82fe6665863
For the VQA DOC task, what are the optimal values of alpha and beta?
Your answer should be a Python list of two numbers.
[ "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" ]
[]
[]
ae713f72-a3ae-5bdd-8704-f849359fe19b
Which model did both anchor_pdfs use for experiments?
Your answer should be a python string, and it should be the model name.
[ "Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference", "Learning Job Title Representation from Job Description Aggregation Network" ]
[]
[]
af63f4bc-4bf0-5521-aa2d-c032a1b947c8
Is there any paper that addresses attacks on code models by leveraging the semantic information of the source code through attention scores, while also guaranteeing that the generated adversarial examples can always be compiled successfully?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DIP: Dead code Insertion based Black-box Attack for Programming Language Model" ]
[ "acl2023" ]
afe1dc15-35c0-54fb-8934-356aa8803efe
In the paper that proposes GLPFT, which training dataset is larger? In that dataset, what's the format of the data?
Your answer should be a Python list of 2 elements, the first is the name of the dataset, the second is the format of the data in "A-B pairs" format, as given in the paper. e.g. ["MMLU", "question-answer pairs"].
[ "Small LLMs Are Weak Tool Learners: A Multi-LLM Agent" ]
[ "ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs", "ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases" ]
[]
b0b2b9a1-fa76-5027-9ba7-84a9876c07ac
Is there any paper that uses token-level loss to enhance sentence-level embedding learning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Dual-Alignment Pre-training for Cross-lingual Sentence Embedding" ]
[ "acl2023" ]
b10c0e3a-48e4-5878-bbff-1611969ca685
What is the first paper that uses the generalized linear model to analyze multi-neural spike train data?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "One-hot Generalized Linear Model for Switching Brain State Discovery" ]
[ "iclr2024" ]
b11ab881-9ac0-5b77-9b27-394744cf06e1
What are the most important optimizations of transformer network in this paper?
Your answer should be a Python list of text strings, with each element being one important optimization that this paper proposes, e.g., ["optimization 1", "optimization 2", ...].
[ "EIT: Enhanced Interactive Transformer" ]
[]
[]
b123fcb5-e4ab-5ed9-b8f2-6f7fa2b6880d
Which model uses Llama2-7B as the LLM base model in Table 1 in paper 'DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs' and in this model's original paper, how many models are compared in Table 4 in total?
Your answer should be a Python list of 2 strings, the name of the model, and the number of compared models.
[ "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs" ]
[ "Improved Baselines with Visual Instruction Tuning" ]
[]
b14a34e0-6226-5a98-aeec-2ade7fe35d70
Regarding the dataset ROCKS used in the anchor paper, it contains ratings assessed by 20 annotators for each of the 12 pictures of a given rock type. How does its experimental setup ensure the objectivity and fairness of the ratings, specifically how do subjects use consistent scale values?
Your answer should be a python string that explains the detailed experimental setup.
[ "Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies" ]
[]
[]
b15e2f1e-a31c-58b9-8d53-1910e0d28391
On what devices is StreamVoice trained?
Your answer should be plain text directly from the PDF without explanation.
[ "StreamVoice: Streamable Context-Aware Language Modeling for Real-time Zero-Shot Voice Conversion" ]
[]
[]
b1724696-f143-5f5a-a58d-2f4086212016
Is there any paper that utilizes graph structure to model conversation history?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "History Semantic Graph Enhanced Conversational KBQA with Temporal Information Modeling" ]
[ "acl2023" ]
b1ee7930-cebf-5b6d-8ebc-bbc0a6246aca
In which comparisons of models did the two papers reach similar conclusions?
Your answer should be a python list of several strings. The string should be language model name.
[ "Exploring Ordinality in Text Classification: A Comparative Study of Explicit and Implicit Techniques", "On Robustness of Finetuned Transformer-based NLP Models" ]
[]
[]
b2711e57-f28a-5955-9413-35717769b3c1
For retrieval evaluation, what metrics applied by MTEB are not used by LocalRQA?
Your answer should be a python list of strings, each string is the name of a metric, as given in the MTEB paper.
[ "LocalRQA: From Generating Data to Locally Training, Testing, and Deploying Retrieval-Augmented QA Systems", "MTEB: Massive Text Embedding Benchmark" ]
[]
[]
b2b59368-db51-520e-b292-c2293ef13fd4
In the paper proposing SG-USM for task-oriented dialogues, which baseline method performs the best across all datasets, excluding SG-USM itself?
Your answer should be a python strings. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER. DO NOT USE FULL NAMES.
[]
[ "Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues" ]
[ "acl2023" ]
b300fae4-e575-5062-9f11-2c8f320463cb
How was the data for the latest text classification tasks used in this paper collected?
Your answer should be a python strings.
[ "Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?" ]
[ "Automatically Constructing a Corpus of Sentential Paraphrases", "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference" ]
[]
b33b2cf3-a27a-5b2a-a1ca-5f08d8b1e75e
Which paper makes sure that the questions used in the paper are all from real users that are genuinely curious about a specific topic or concept?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "CREPE: Open-Domain Question Answering with False Presuppositions" ]
[ "acl2023" ]
b384c73f-b916-5d13-809c-473938369a69
To evaluate the AdaLoRA algorithm, which model is used for natural language understanding and question answering? Can you give me the relevant github link of this model?
Your answer should be a single python list of two strings, like ["model_name","https://github.com/a/b"]. Note that in the model name, you should use "-" between the series name and the size, for example, "modelx-small".
[ "Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning" ]
[ "DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing" ]
[]
b3918d42-c1a0-5c09-97ef-f9182fe40a5c
Regarding the construction of the Edit Matrix, which paper's findings does this study reference and follow? What are the affiliations of the authors of the cited work?
Your answer should be a brief text containing the cited paper's name and the authors' affiliations.
[ "Multi-Granularity Information Interaction Framework for Incomplete Utterance Rewriting" ]
[ "Incomplete Utterance Rewriting as Semantic Segmentation" ]
[]
b39cbbdd-8489-53f0-a9ca-d0dbc46c8ead
What limitations do large language models have in evaluating information-seeking question answering?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Evaluating Open-Domain Question Answering in the Era of Large Language Models" ]
[ "acl2023" ]
b3a5fb63-2a87-5e0c-bd8d-29f25772319c
What paper first associated the modeling frequency with input human skeletons under the NeRF framework?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Pose Modulated Avatars from Video" ]
[ "iclr2024" ]
b3bdd115-e25d-57c2-8931-40fb33a5f9a0
Among the SR algorithms chosen in the paper "Expression Sampler as a Dynamic Benchmark for Symbolic Regression", which one performs the best on SRBench, considering the R-squared test?
Your answer should be a string, the abbreviation of the algorithm as given in the paper.
[ "Expression Sampler as a Dynamic Benchmark for Symbolic Regression", "Contemporary Symbolic Regression Methods and their Relative Performance" ]
[]
[]
b4dcc93d-635a-54c4-be8f-c5ec443d08db
The training dataset used in the paper "Semiparametric Token-Sequence Co-Supervision" is filtered to 42932 instances, then what's the original size of this dataset?
Your answer should be a single integer.
[ "Semiparametric Token-Sequence Co-Supervision" ]
[ "Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection" ]
[]
b509eb3e-12e2-51cc-a02d-ed22d0c8a8b3
How does Multi-DYLE combine the three different losses as the objective of training?
Your answer should be single formula in latex format, extracted from the specified pdf.
[ "ExplainMeetSum: A Dataset for Explainable Meeting Summarization Aligned with Human Intent" ]
[]
[]
b50d066a-9ed9-5aac-b79c-a32e3bef9734
Which dataset performs better on the LLaMA model, PRM800K or Math-Shepherd? In the source paper of PRM800K, which methods are compared with PRM?
Your answer should be a python list of two items. The first item is a python string. The second item is a python list of strings.
[ "Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations", "Let's Verify Step by Step" ]
[]
[]
b5307d05-348e-50df-8932-95ccf83020f0
Which paper investigates the influence of the diversity of source tasks on the performance of target tasks in prompt tuning using CrossFit?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?" ]
[ "acl2023" ]
b551c2aa-7d01-5fbf-af59-ae4645fcba85
Which paper first proposed a cross-domain language model to automatically generate a large amount of labeled data for an unlabeled target domain?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Cross-Domain Data Augmentation with Domain-Adaptive Language Modeling for Aspect-Based Sentiment Analysis" ]
[ "acl2023" ]
b5dfad94-c5ef-5f7e-a5f4-1c1a479acbe5
What does the special symbol $\overline{\mathcal{V}}$ mean in the proposed CFIC decoding strategy?
Your answer should be concise text string highlighting the meaning of the symbol.
[ "Grounding Language Model with Chunking-Free In-Context Retrieval" ]
[]
[]
b5f5b2f4-9e71-5a20-afcb-392406123af3
In the algorithm applied to find the minimum norm interpolant, when does the first step stop?
Your answer should be a Python string, the formula of the end condition in LaTeX format.
[ "Minimum norm interpolation by perceptra: Explicit regularization and implicit bias" ]
[ "Optimal bump functions for shallow ReLU networks: Weight decay, depth separation and the curse of dimensionality" ]
[]
b7327d6a-9ab2-5fd7-966d-4250ce72ae00
Which family of models generally performs best on the event conceptualization task?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "CAT: A Contextualized Conceptualization and Instantiation Framework for Commonsense Reasoning" ]
[ "acl2023" ]
b78a0d2a-e972-522e-97ad-e2e5795d8f64
In the event extraction section of the paper "Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives", how were the dependency parses of the stories obtained?
Your answer should be a Python string.
[ "Is It Safe to Tell Your Story? Towards Achieving Privacy for Sensitive Narratives" ]
[ "Stanza: A Python Natural Language Processing Toolkit for Many Human Languages" ]
[]
b8034b03-c46a-5b8c-8bdd-09e67ad45f9f
What is the composition of the training dataset in the paper?
Your answer should be a Python string describing the dataset.
[ "Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model" ]
[ "Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval" ]
[]
b91a8c27-fa71-5ff7-a867-b58985276991
Among three single-agent baselines in table one, which performs best on Damped Spring?
Your answer should be a python string. YOU MUST USE THE EXACT NAME FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "TANGO: Time-reversal Latent GraphODE for Multi-Agent Dynamical Systems" ]
[]
[]
b939dfd5-150e-5b54-9ef3-b9b5497d688d
In the CLAP paper, which three challenges for patchwork learning are mentioned?
Your answer should be a Python list of 3 strings, each string is a challenge.
[]
[ "CLAP: Collaborative Adaptation for Patchwork Learning" ]
[ "iclr2024" ]
b943f9ec-685a-5bbf-b82e-65bd00415e6d
In "MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic", what is the main advantage of "MetaGPT" that the authors claim to have over "AdaMerging"? Also, what is the most significant difference between the experiment settings of the papers which proposed these two methods?
Your answer should be brief text answering the 2 questions with separate sentences.
[ "MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic", "AdaMerging: Adaptive Model Merging for Multi-Task Learning" ]
[]
[]
b9a2794c-8387-5693-b200-d80db3f9eb0f
In the paper with Zhengyuan Liu as the first author, published at ACL 2023 and related to stance detection, which corpus dataset was only used in evaluation, not in training?
Your answer should be a string of the corpus's name without any explanation or any other word.
[]
[ "Guiding Computational Stance Detection with Expanded Stance Triangle Framework" ]
[ "acl2023" ]
ba07c4e4-443b-557f-87d1-ce383cd772ef
In the EGraFFBench paper, Equiformer's hyperparameter setting resembles that in the original Equiformer paper on MD17 dataset. Specifically, in the setting in the original paper that is the closest to the EGraFFBench setting, what's the value of L_{max}?
Your answer should be an integer.
[ "EGraFFBench: Evaluation of Equivariant Graph Neural Network Force Fields for Atomistic Simulations", "Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs" ]
[]
[]
ba924c39-78a1-5236-8861-cd718dfc4c9a
Which model, among GPT3.5, GPT-4, Llama-7B, and Mistral-7B, experiences the largest drop in overall accuracy from the Conversation History Task to MMLU AA, relative to zero-shot performance?
Your answer should be a single model name, without any other text. The answer should be one of the following: GPT3.5, GPT-4, Llama-7B, or Mistral-7B.
[ "LLM Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History" ]
[]
[]
baab0bc5-e83e-54ec-933b-6edb1b9d47d3
Which subtask of Task 3 of SemEval-2023 does the paper (titled "BERTastic at SemEval-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers- Does Order Matter?") perform the experiment on?
Your answer should be a string describing this subtask. Note that you should not include other subtasks of Task 3 of SemEval-2023.
[ "BERTastic at SemEval-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers Does Order Matter?" ]
[ "SemEval-2023 Task 3: Detecting the Category, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup" ]
[]
bafaee02-a31b-55d6-bb62-6d382ae3bcb6
Both as hybrid digital twins, what's the advantage of HDTwinGen over PINN-based Med-Real2Sim?
Your answer should be a well-formed item list.
[ "Automatically Learning Hybrid Digital Twins of Dynamical Systems", "Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning" ]
[]
[]
bb5aedbf-7683-56e7-a348-0ee986fe0fd2
Is there any work that explores how to achieve a balance between representativeness and diversity in chosen samples for few-shot data selection?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Cold-Start Data Selection for Better Few-shot Language Model Fine-tuning: A Prompt-based Uncertainty Propagation Approach" ]
[ "acl2023" ]
bb6a0f0e-0c0c-5038-b340-3044e9ffefd6
What paper evaluated the ability of visual few-shot learning models to do in-context learning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "1256d979-2f84-5a16-85a5-8f88126363a8" ]
[ "iclr2024" ]
bb6ffb6d-2235-58cc-b04d-291818f74b05
In the dataset proposed by the authors, how many states are there per game?
Your answer should be a floating-point number with one decimal place.
[ "Can Language Models Serve as Text-Based World Simulators?" ]
[]
[]
bb7c8889-1582-5409-95bc-74cb179506a1
What's the original annotation process drawn on by the paper named "Where Do People Tell Stories Online? Story Detection Across Online Communities"?
Your answer should be a single string indicating the original annotation process.
[ "Where Do People Tell Stories Online? Story Detection Across Online Communities" ]
[ "Literary Event Detection" ]
[]
bbc522d2-649a-5660-8180-7f67728376bf
Which paper first focuses on addressing the over-smoothing issue for sentence embedding?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Alleviating Over-smoothing for Unsupervised Sentence Representation" ]
[ "acl2023" ]
bbe726ca-1f0d-564d-b553-7bc625404d15
What is the original dataset size (including all data splits) for each shared dataset used in both the works ABEX and MinPrompt according to its original papers?
Your answer should be a Python dictionary with the keys being the names (case-sensitive) of the shared datasets and the integer values being the corresponding dataset sizes.
[ "ABEX: Data Augmentation for Low-Resource NLU via Expanding Abstract Descriptions", "MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering" ]
[ "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "NewsQA: A Machine Comprehension Dataset" ]
[]
bc5c4cf7-21ed-5298-9c2c-81386204608e
In the paper "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture", according to Figure 1, which two dimensions are mixed using Monarch matrices? In the source paper of Monarch matrices, what training settings can Monarch matrices be used for?
Your answer should be a python list containing two items. The first item is a python list with two strings. The second item is a python list with an indefinite number of strings.
[ "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture", "Monarch: Expressive Structured Matrices for Efficient and Accurate Training" ]
[]
[]
bd3d1dd5-7f10-5e09-aa76-486685c77180
What do formula (2) to formula (4) mean?
Your answer should be a brief summarization of the meaning of these formulas, and you do not need to introduce these formulas one-by-one.
[ "Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation" ]
[]
[]
bd576276-efd9-5168-86f6-42937141fea4
Which dataset(s) is dataset SAFECONV derived from? Give me their github link(s).
Your answer should be a single python list like [["dataset1","dataset2",...],["https://github.com/a/b","https://github.com/c/d",...]]. Note that you should retain the size in the dataset name if available.
[ "SafeConv: Explaining and Correcting Conversational Unsafe Behavior" ]
[ "Pchatbot: A Large-Scale Dataset for Personalized Chatbot", "A Large-Scale Chinese Short-Text Conversation Dataset" ]
[]
bdb390b0-30bb-5dc9-bd58-a832c6689bcf
Which speech encoder does the paper ("Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations") choose to extract universal paralinguistic and prosody embeddings? In the proposal of this encoder, which datasets are used in both pre-training and downstream tasks?
Your answer should be a list like ["encoder_name", ["dataset1", "dataset2",...]].
[ "Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations" ]
[ "emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation" ]
[]
bdcc2f81-9b12-56e3-90cc-23e6513985d4
In the paper "Training Trajectories of Language Models Across Scales" (anchor_pdf), Figures 17 and 18 can be used to provide supportive reasoning for the conclusion. What is it? And does it give a similar conclusion to the paper "Are Emergent Abilities of Large Language Models a Mirage?"?
Your answer should be a Python list of two elements, where the first element is the supportive reasoning for the conclusion provided in Figures 17 and 18, and the second is a boolean value indicating whether it gives a similar conclusion to the paper "Are Emergent Abilities of Large Language Models a Mirage?".
[ "Training Trajectories of Language Models Across Scales" ]
[ "Are Emergent Abilities of Large Language Models a Mirage?" ]
[]
be08635a-0dbc-5dab-85d0-40f45c6edfc2
Which paper enables interactive semantic parsing by training an error correction model with simulated human feedback instead of human annotations?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning to Simulate Natural Language Feedback for Interactive Semantic Parsing" ]
[ "acl2023" ]
be178eef-403f-5633-8cbd-5b876059fce4
For each test split in Figure 5, provide the name and type of the website with the highest step success rate.
Your answer should be a Python dictionary. e.g. {"split1": ["web1", "type1"], "split2": ["web2", "type2"], ...}. YOU MUST USE THE EXACT WORDS FROM PDF WITHOUT CHANGING CAPITALIZATION.
[ "On the Multi-turn Instruction Following for Conversational Web Agents" ]
[]
[]
bec9b106-831a-5e17-97b6-8af2636194d3
Which paper proposed a learning-based data augmentation method for improving compositional generalization of language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning to Substitute Spans towards Improving Compositional Generalization" ]
[ "acl2023" ]
befbacf1-d163-5021-bb6c-2ba79257c81c
Which was the first paper to explore the online adaptation of neural MT metrics for use during the inference stage?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Test-time Adaptation for Machine Translation Evaluation by Uncertainty Minimization" ]
[ "acl2023" ]
bf391f7a-d33b-5b00-9001-ee92284a15ec
According to Figure 1, as the number of few-shot training samples increases, which setting keeps achieving a better score?
Your answer should be the name of the setting appearing in the legend of the image.
[ "Does GPT-3 Grasp Metaphors? Identifying Metaphor Mappings with Generative Language Models" ]
[]
[]
bf7fe85f-b409-5a58-ac9e-ba738e5390c7
In FewRel's 10-way 5-shot setting, what is the maximum decrease of AOD+ROD between ConPL and AGCKD across all task indexes?
Your answer should be a positive floating-point number with two decimal places.
[ "Continual Few-shot Relation Extraction via Adaptive Gradient Correction and Knowledge Decomposition" ]
[]
[]
bfa70a42-daa5-52db-aa3f-8ceb0960739a
Which vision-language model paper in 2023 developed techniques that reduce input tokens to improve model inference speed?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "PuMer: Pruning and Merging Tokens for Efficient Vision Language Models" ]
[ "acl2023" ]
bfb209f1-da03-5d97-a7e8-aa3bd63e257d
How to better attract readers to news articles by generating personalized headlines?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Generating User-Engaging News Headlines" ]
[ "acl2023" ]
c0e96750-91fe-5f24-aee3-74ea8706654a
For each category of PQA in terms of the form of provided answers, from what aspects does the author analyze it?
Your answer should be a Python list, where each element is a string representing an aspect DIRECTLY FROM THE PDF. Note that the aspects are the same for each category. e.g. ["aspect1", "aspect2", ...]
[ "Product Question Answering in E-Commerce: A Survey" ]
[]
[]
c100db0f-bb91-514e-af99-6c6efcf22cd3
How much data, in millions, does the author use in total for the main experiment conducted on WMT17 ZhEn?
Your answer should be a Python float, rounded to 1 decimal place.
[ "Text Style Transfer Back-Translation" ]
[]
[]
c1027cf8-184a-5c77-8c53-6247abe0160d
On which model does the paper (titled "Making Large Language Models Better Reasoners with Orchestrated Streaming Experiences") conduct the most analysis experiments? Is there any other parameter size for this model?
Your answer should be a single python list formatted like ["model_name", ["10B","20B",...]]. The first element of the list is a string representing the name of the model; the second element is a list representing the other parameter sizes (note that you should not include the size already employed in the paper).
[ "Making Large Language Models Better Reasoners with Orchestrated Streaming Experiences" ]
[ "Llama 2: Open Foundation and Fine-Tuned Chat Models" ]
[]
c17c03e2-ea11-5472-be57-c7ead3b8605f
Which paper employs a two-stage approach in generative models to tackle ABSA tasks across various domains?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis" ]
[ "acl2023" ]
c181315e-0268-53f9-a982-60eb5747f0e5
Which paper first attempts to take potential dependencies among same-level labels into account in Hierarchical Text Classification?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Peer-Label Assisted Hierarchical Text Classification" ]
[ "acl2023" ]
c1acd5a0-7a76-5605-996d-0191bda04f6c
Which is the first multimodal model combining text and speech transformers trained without labelled text-speech pairs?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Introducing Semantics into Speech Encoders" ]
[ "acl2023" ]
c1cbcf5c-632c-5424-a1ef-d9add6094746
What are the low-resource languages in INDICGENBENCH?
Your answer should be a Python list, where each element is a string representing a language. e.g. ["language1", "language2", ...]
[ "IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages" ]
[]
[]
c1f769f6-eaff-5441-9f1c-d62445efe58d
What's the total number of augmented training samples across all datasets used in the MINPROMPT work?
Your answer should be a single integer number.
[ "MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering" ]
[]
[]
c2089236-8909-5a22-9eaa-35644720a87b
According to the author, how does Cross Entropy contribute to miscalibration?
Your answer should be a string.
[ "FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation" ]
[]
[]
c21e6d8e-865c-5544-8177-49b48d723934
Is there any paper that applies symbolic distillation on black-box generalist language models to harvest high-quality counterfactual data for out-of-distribution generalization?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DISCO: Distilling Counterfactuals with Large Language Models" ]
[ "acl2023" ]
c2412c63-0fda-5e8c-95c0-615c415d5ff9
What is the accuracy of the base model used in the experiment in the paper "Protecting Privacy in Classifiers by Token Manipulation" on the RACE test set?
Your answer should be a python float with one decimal place.
[ "Protecting Privacy in Classifiers by Token Manipulation" ]
[ "RoBERTa: A Robustly Optimized BERT Pretraining Approach" ]
[]