uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
66c0154e-799f-5095-9de0-36d41967dfe9
What are the advantages of the transformer-based architecture proposed in the paper?
Your answer should be a Python string about the obvious advantages.
[ "Rough Transformers for Continuous and Efficient Time-Series Modelling" ]
[]
[]
66c5fd15-e82b-5a02-bce6-bb0aab05184f
Among the datasets in the benchmark that includes CHIP-CDN, what evaluation metrics are applied?
Your answer should be a Python list of strings, containing the names of the evaluation metrics.
[ "RRNorm: A Novel Framework for Chinese Disease Diagnoses Normalization via LLM-Driven Terminology Component Recognition and Reconstruction" ]
[ "CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark" ]
[]
66e99a9b-0660-5574-b8b6-1a05b76c7396
What do the two loss functions in Equation (8) mean?
Your answer should be a list with two items, representing the meanings of the first and second loss functions respectively.
[ "A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation" ]
[]
[]
6746c386-b889-59ad-abed-144cd56101d3
What's the baseline used in the experiment?
Your answer should be plain text DIRECTLY FROM THE PDF.
[ "Towards Fewer Hallucinations in Knowledge-Grounded Dialogue Generation via Augmentative and Contrastive Knowledge-Dialogue" ]
[]
[]
67f33e3f-646d-5bac-8a18-5080e6a2563e
How is the final loss function (Loss) calculated in this paper?
Your answer should be a Python string, which is a formula in LaTeX format for calculating a parameter.
[ "Event-Arguments Extraction Corpus and Modeling using BERT for Arabic" ]
[]
[]
681c2bb8-b4cf-5f2a-bd56-ae26e0bb51b6
I would like to reproduce the experiments of KnowGPT, could you please provide me with the websites of the datasets applied in the experiment?
Your answer should be a Python list of 3 strings, the websites. Note that you should provide the original URL as given in the papers that proposed the datasets.
[ "KnowGPT: Knowledge Graph based Prompting for Large Language Models" ]
[ "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering", "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge", "What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams" ]
[]
68855b4d-dd5d-5b33-8ddd-61b13b1b6c51
On cmudog, which of the linguistic operators appears most frequently, and what is its distribution?
Your answer should be a Python list of 2 elements. The first element is the linguistic operator's name, and the second element is its distribution in percent in string format, e.g. ["answer", "5%"].
[ "On the Compositional Generalization in Versatile Open-domain Dialogue" ]
[]
[]
68baa0b9-8e5e-5436-94d5-6dd0b3bbfff0
On which datasets does this study surpass the SOTA?
Your answer should be a Python list of dataset names, e.g., ["dataset1", "dataset2", ...]. YOU MUST USE THE EXACT TEXT FROM THE PAPER.
[ "On the Compositional Generalization in Versatile Open-domain Dialogue" ]
[]
[]
699bf716-c3d2-526a-b3fc-2c1c10f5aa09
What does each record in the dataset used by the event-based knowledge editing benchmark in the paper "EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation" include?
Your answer should be a python list of strings. YOU MUST USE THE EXACT NAMES FROM THE PAPER, RATHER THAN MATHEMATICAL SYMBOLS OR ABBREVIATION.
[ "EVEDIT: Event-based Knowledge Editing for Deterministic Knowledge Propagation" ]
[ "Locating and Editing Factual Associations in GPT" ]
[]
69fb7412-8288-562a-8de7-d7727f689fcf
What manipulation operations does the manipulation network in this paper allow? For the add network, how is the training loss of relation defined?
Your answer should be a single python list, the first element of the list is a list of the strings of manipulation operations, the second element of the list is a string of the formula in latex format, e.g. [["operation1", "operation2"], "l_{\text {relation }}=..."]
[ "Image Manipulation via Multi-Hop Instructions - A New Dataset and Weakly-Supervised Neuro-Symbolic Approach" ]
[]
[]
6a72002b-7dcf-55df-8a3c-3cc49ee326a3
What dataset does this paper(titled "Identifying Conspiracy Theories News based on Event Relation Graph") use for training? How many event coreference chains does this dataset contain?
Your answer should be a Python list of two elements: the first element is the dataset name (abbreviation) as a string, and the second element is an integer.
[ "Identifying Conspiracy Theories News based on Event Relation Graph" ]
[ "MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction" ]
[]
6aefdbec-8411-50e8-a9f3-b26afe188083
In the paper that proposes the only comparable interactive theorem prover applied as a baseline by AIPS, where are the evaluation samples chosen from?
Your answer should be raw text from the paper.
[ "Proving Olympiad Algebraic Inequalities without Human Demonstrations" ]
[ "Towards Large Language Models as Copilots for Theorem Proving in Lean", "Llemma: An Open Language Model for Mathematics" ]
[]
6af99fe6-5e33-5632-8553-aa9d1daaad86
Find the NLP paper that focuses on dialogue generation and introduces advancements in the augmentation of one-to-many or one-to-one dialogue data by conducting augmentation within the semantic space.
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DialoGPS: Dialogue Path Sampling in Continuous Semantic Space for Data Augmentation in Multi-Turn Conversations" ]
[ "acl2023" ]
6b2a09ee-5f74-57c7-a863-ebd390f89150
In figure 3, there are three types of losses. What is the function for the first loss?
Your answer should be a formula in latex format extracted from the paper.
[ "Matryoshka-Adaptor: Unsupervised and Supervised Tuning for Smaller Embedding Dimensions" ]
[]
[]
6b4e8ab7-8482-55dc-ba84-ebc606c19f27
Please give me the GitHub link of this work.
Your answer should be a single string of the GitHub link.
[ "Offline Imitation Learning with Variational Counterfactual Reasoning" ]
[]
[]
6b660c4a-c2a0-538f-b42a-bfe6337add99
Could you recommend a dataset paper which presents relation extraction performance on translated data and compare it to English data?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset" ]
[ "acl2023" ]
6bb32702-f9f0-53a5-a534-be38bfc75b3f
In Figure 3, what can we infer from comparing the performance with training data generated by self-training (ST) versus without it?
Your answer should be a Python string stating the conclusion drawn from comparing performance with and without training data generated by self-training (ST).
[ "Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation" ]
[]
[]
6bdd99b0-3976-5029-9b64-d82b7bfb4276
What does formula (1) mean in the Methodology section?
Your answer should be a Python string giving a detailed explanation of the formula.
[ "DecompX: Explaining Transformers Decisions by Propagating Token Decomposition" ]
[]
[]
6beb7fc3-96ff-587f-8362-bcd0f709a2e9
How does the system efficiently adapt to completely unfamiliar opponent policies during deployment, while still maintaining performance with known policies?
Your answer should be a sentence.
[ "Towards Offline Opponent Modeling with In-context Learning" ]
[]
[]
6c290495-01d3-5b69-adda-7d26b92f0da1
Which institution funded the model with worse debiasing ability in Post-hoc in "A Parameter-Efficient Multi-Objective Approach to Mitigate Stereotypical Bias in Language Models"?
Your answer should be a Python string of the institution name. You should use the full name.
[ "A Parameter-Efficient Multi-Objective Approach to Mitigate Stereotypical Bias in Language Models" ]
[ "Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection", "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP" ]
[]
6dc50c47-0782-5277-b10a-e5e427a10223
What is the first paper that theoretically studies training neural networks under small initialization?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization" ]
[ "iclr2024" ]
6de72b3a-ac37-5d2d-b870-a61dac353bdb
Is there any paper that attempts to evaluate the similarity of meaning representations without using annotated data?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Evaluate AMR Graph Similarity via Self-supervised Learning" ]
[ "acl2023" ]
6df645eb-d78e-56b7-be7a-8712b3ed7a75
In which four directions can the author's model be trained?
Your answer should be a Python list, each element of which is a string like 'A-to-A', 'A-to-B'.
[ "Cher at KSAA-CAD 2024: Compressing Words and Definitions into the Same Space for Arabic Reverse Dictionary" ]
[]
[]
6ec30967-a7aa-5ecb-8819-15e738ad4b50
In the two papers that updated ToMi according to the SimTom paper, what are the other datasets used to evaluate the models, besides ToMi?
Your answer should be a Python list of strings, containing the name of the datasets.
[ "Think Twice: Perspective-Taking Improves Large Language Models’ Theory-of-Mind Capabilities" ]
[ "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs", "Textual Time Travel: A Temporally Informed Approach to Theory of Mind" ]
[]
6ee75006-72d3-5d81-b85d-ec25b99ed502
Which vision-language model can demonstrate that visual grounding could facilitate efficient language acquisition? (OctoBERT)
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models" ]
[ "acl2023" ]
6f0ece87-9055-5ad9-9b89-f88c7a19d08f
For the strongest baseline mentioned in "TIES-Merging: Resolving Interference When Merging Models", which benchmark and what tasks were used for NLP in the paper which proposed it?
Your answer should be a python dictionary with the keys "benchmark" and "tasks". The value for "benchmark" should be a string and the value for "tasks" should be a list of strings.
[ "TIES-Merging: Resolving Interference When Merging Models" ]
[ "Editing models with task arithmetic" ]
[]
6f2ff186-5ec6-5234-8936-b3ee47c23059
According to the paper that proposes ExpressivityArena, what's the notable example that uses human feedback to manually evaluate the model? In that arena, which model has a 0.72 win-rate against llama-2-7b-chat at the time when that paper was written?
Your answer should be a Python list of 2 strings, the name of the example, and the name of the model as given in the paper.
[ "ExpressivityArena: Can LLMs Express Information Implicitly?" ]
[ "Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference" ]
[]
6f3b4d9d-f033-5e1b-bd31-eec4aef51ad2
In the paper "Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning", which method performs the best on ID states with full-chain setting? In the paper that proposed that method, where does the -inf in Fig. 3 come from?
Your answer should be a Python list of 2 strings, the first is the abbreviation of the method, the second is the reason why -inf appears in Fig. 3.
[ "Quantized Local Independence Discovery for Fine-Grained Causal Dynamics Learning in Reinforcement Learning" ]
[ "Causal Dynamics Learning for Task-Independent State Abstraction" ]
[]
6f68d5a6-8a34-55e9-8212-64c01d072d68
What are the datasets used in the experiments and what are their respective durations? Where can I get these datasets?
Your answer should be a Python list of 2 elements. The first element is a Python dictionary containing dataset names and respective durations (in hours, rounded to 1 decimal place), and the second element is a Python string containing the answer to the last question, e.g. [{"dataset1": 2.1, "dataset2": 10.8, ...}, "answer"]
[ "Multi-Speaker Expressive Speech Synthesis via Multiple Factors Decoupling" ]
[]
[]
7055fe3b-222a-5001-8d37-827c97dba1e4
How much higher is the best EX score in the paper "Synthetic SQL Column Descriptions and Their Impact on Text-to-SQL Performance" than that in the original dataset applied? Assume that both experiments are conducted under the no-knowledge setting.
Your answer should be a float between 0 and 1, rounded to 4 decimal places.
[ "Synthetic SQL Column Descriptions and Their Impact on Text-to-SQL Performance" ]
[ "Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs" ]
[]
7126cbfb-136e-5a9a-950f-8fc57feda734
How does the latest Wellness Descriptions dataset used in the paper address the ambiguity issue in the task of annotating text for wellness dimensions?
Your answer should be a Python string concisely summarizing the method.
[ "WellDunn: On the Robustness and Explainability of Language Models and Large Language Models in Identifying Wellness Dimensions" ]
[ "WellXplain: Wellness Concept Extraction and Classification in Reddit Posts for Mental Health Analysis" ]
[]
71456d85-6af1-5d17-ae3f-324516ab0853
In the experiments presented in the paper "Fast and Efficient Speech Enhancement with Variational Autoencoders" evaluating the performance for speech enhancement, which method demonstrates the best performance apart from the proposed framework itself?
Your answer should be a Python string. YOU MUST USE THE EXACT ABBREVIATION AS IN THE PAPER.
[ "Fast and Efficient Speech Enhancement with Variational Autoencoders" ]
[]
[]
7156d9cc-5b02-50d7-bb20-bdcc414b76e4
Among the diverse interactive domains used to test SOFT-SC, which one is the first parallel interactive text-based and embodied environment?
Your answer should be a Python string, the name of the interactive domain.
[ "Soft Self-Consistency Improves Language Models Agents" ]
[ "WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents", "ALFWorld: Aligning Text and Embodied Environments for Interactive Learning", "InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback" ]
[]
71570654-c808-539b-9149-5ede4b64d39b
What linguistic property does this paper investigate?
Your answer should be an English string.
[ "The Hidden Folk: Linguistic Properties Encoded in Multilingual Contextual Character Representations" ]
[]
[]
7175414d-1ddc-5d5a-b4a6-8a25ba6f2078
Is there a decoder-only language model that does not use a tokenizer and operates on raw text bytes?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models" ]
[ "acl2023" ]
71fd543c-b0f5-5631-b97d-0c9f7a996a86
What paper first used the technique of prompt engineering to generate adversarial prompts that can fool LLMs into making wrong predictions in prompt-based learning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "An LLM can Fool Itself: A Prompt-Based Adversarial Attack" ]
[ "iclr2024" ]
7231e809-3ffe-5fb5-84b6-633ba6c788f5
Are there datasets and benchmarks available for measuring LLM graph reasoning abilities?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "b7c43a2c-11c4-5c63-8986-49097ff6e18d" ]
[ "iclr2024" ]
7236429c-2845-556e-98c9-886d6a05c384
How much higher ASRs do user cases with high content freedom yield, compared to those with low content freedom?
Your answer should be a floating point number with one decimal place.
[ "InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents" ]
[]
[]
72c5b793-458d-5af4-86eb-542f839c023a
What research has been conducted on incorporating visual data into the text summarization process?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization" ]
[ "acl2023" ]
7350bb9b-a510-5614-a994-1d99a6368e57
In the paper that DRAGIN follows the most in term of template prompt, which models are utilized as the CoT generator of the proposed retriever?
Your answer should be a string, giving the detailed names of the models, as proposed in the reference paper.
[ "DRAGIN: Dynamic Retrieval Augmented Generation based on the Real-time Information Needs of Large Language Models" ]
[ "Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions", "Active Retrieval Augmented Generation", "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" ]
[]
7369f690-c9b9-52d7-8698-3b38d8c2baf1
For dataset MultiDialog, what's the number of dialogues, the number of turns, total length in hours, number of speakers, and the name of source dataset of dialogue scripts? Please adopt the statistics the most accurate you can find in the paper.
Your answer should be a Python list whose elements are, in order: number of dialogues (int), number of turns (int), total length in hours (float, rounded to 2 decimal places), number of speakers (int), and the name of the source dataset of dialogue scripts (str).
[ "Let’s Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation" ]
[]
[]
73b027fd-971d-5dd0-893e-8e6cc5b0d885
According to the DeepKKT paper, which method performs the best on CIFAR10 with 1 generated image per class under 50-shot setting? Additionally, in the paper that proposes that method, what's the latest dataset evaluated?
Your answer should be a Python list of 2 elements: the abbreviation of the method and the name of the dataset.
[ "Deep Support Vectors" ]
[ "Dataset Condensation with Differentiable Siamese Augmentation" ]
[]
74124f30-2365-53f1-9b81-fccaa9a4d5e0
According to Figure 1, which net has the best generalization ability?
Your answer should be a single string of the net's name.
[ "Benchmark of Machine Learning Force Fields for Semiconductor Simulations: Datasets, Metrics, and Comparative Analysis" ]
[]
[]
748c93fa-539a-5f0f-887d-746da0323e23
Which paper first proposed to only update some original weights of self-attention layers in parameter-efficient fine-tuning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation" ]
[ "acl2023" ]
74d535a9-5796-57d9-81c8-0c68e3c7188d
How many programming languages in The Stack are selected in the code dataset used for hypernetwork training in the paper?
Your answer should be a single integer.
[ "Zero-Shot Tokenizer Transfer" ]
[ "StarCoder: may the source be with you!" ]
[]
75b9dc7d-abbe-5627-ac3d-649055da6df9
Which paper proposes to use rewriting based approaches to defending against adversarial attacks in text classification?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Don’t Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text" ]
[ "acl2023" ]
75c1fd66-8271-5ae8-b45f-c188ae9ccf84
Which evaluation metric demonstrates the greatest improvement in the finetuned model proposed in this paper compared to GPT baseline?
Your answer should be a Python string, which is the name of the evaluation metric DIRECTLY FROM THE PDF.
[ "BIPED: Pedagogically Informed Tutoring System for ESL Education" ]
[]
[]
75cd4886-f858-506d-ad37-85cc7c605b3f
Which paper first introduced document content as an intermediate generation target and utilized textual document identifiers in generative retrieval?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "TOME: A Two-stage Approach for Model-based Retrieval" ]
[ "acl2023" ]
765fc890-3100-5b7f-9068-9460147a99cd
Which article first proposed shuffled-group-whitening to solve the problem of sentence representation learning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "WhitenedCSE: Whitening-based Contrastive Learning of Sentence Embeddings" ]
[ "acl2023" ]
7696934c-fc83-504d-83d9-3716e13dfd89
How much does the average performance of the model improve on WMT'19 test sets by replacing one of example-specific prompts with a task-level prompt?
Your answer should be a single float number ranging from 0 to 100 and rounded to 2 decimal places, representing the subtraction result.
[ "In-context Examples Selection for Machine Translation" ]
[]
[]
76aee9c9-711d-5c33-9edd-68f80d3dc1ca
Are there any papers that build dense retrievers with mixture-of-experts architecture where each expert is responsible for different types of queries?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Chain-of-Skills: A Configurable Model for Open-Domain Question Answering" ]
[ "acl2023" ]
76dc78aa-daa0-5e3a-8377-96072b98e408
Which PLM method achieves the best bias score in the experiment?
Your answer should be a single string representing the PLM method.
[ "Uncovering and Categorizing Social Biases in Text-to-SQL" ]
[]
[]
77318114-59c2-51e6-9719-990770d4e50c
According to the paper "Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback", both "Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation" and "Designing Toxic Content Classification for a Diversity of Perspectives" adopted standard analysis methods. Which variable's impact on the experimental data is considered in all three papers?
Your answer should be a Python string.
[ "Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback" ]
[ "Designing Toxic Content Classification for a Diversity of Perspectives", "Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation" ]
[]
775ac142-b55e-5cbb-9dc2-ebfb7aa64260
What shortcoming does REV overcome, and how?
Your answer should be a paragraph describing the shortcoming that REV overcomes and how it overcomes it, based on the content of the paper.
[ "REV: Information-Theoretic Evaluation of Free-Text Rationales" ]
[]
[]
780f0147-be99-5d8d-ab82-daec0d471510
What's the Type of the Pattern "Character Role Play" in jailbreak prompts? How can we make role-playing models more responsible in the RoleLLM paper?
Your answer should be a Python list of two elements. The first is a Python string of a type name, and the second is a list of several measures.
[ "Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study", "RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models" ]
[]
[]
7870d38f-1d3b-57d0-b0a0-bdcf9c1cd381
What are the advantages and disadvantages of MUX-PLMs mentioned in the paper?
Your answer should be a string list of advantages and disadvantages.
[ "MUX-PLMs: Pre-training Language Models with Data Multiplexing" ]
[]
[]
78feef9e-1c36-5824-9c47-544c65f73c86
Which domain in the GRBench dataset does not have any hard questions?
Your answer should be a single string representing the domain name.
[ "Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs" ]
[]
[]
7942e599-6cc3-59c7-89ec-2be7f578f002
How many samples are there in total in the dataset used by MIDGARD for Task 2 evaluation?
Your answer should be an integer, the number of samples.
[ "MIDGARD: Self-Consistency Using Minimum Description Length for Structured Commonsense Reasoning" ]
[ "ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning", "proScript: Partially Ordered Scripts Generation" ]
[]
79b25301-76c6-5594-9f59-76e6ea48246c
In section 3, the author provides an exemplary event description. List the features in the example that correspond to the semantic roles discussed in the following paragraph.
Your answer should be a Python dictionary where the keys are the semantic roles and the values are the features that correspond to the roles. e.g. {"semantic_role1": "feature1", "semantic_role2": "feature2", ...}
[ "An Ordinal Latent Variable Model of Conflict Intensity" ]
[]
[]
79cc66b6-2a03-523d-a878-6d87d876a9c5
Which national project supports both the BeamAggR and SpikeVoice paper?
Your answer should be a Python string of the project name. You should use the full name as given in the papers, and do not add "the" before the project name.
[ "BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering", "SpikeVoice: High-Quality Text-to-Speech Via Efficient Spiking Neural Network" ]
[]
[]
79d00a52-e8f8-5cc0-af9f-385ac4139377
Which languages are included in the evaluation dataset used in the paper?
Your answer should be a list of languages, e.g., ["Language1", "Language2"].
[ "Scope-enhanced Compositional Semantic Parsing for DRT" ]
[ "The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations" ]
[]
79e15976-b650-5e63-847a-8a6ed4c1de02
If one would like to train (or evaluate) a helpful assistant agent that can converse with humans while the humans traverse an environment, which work has the most suitable resource?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams" ]
[ "acl2023" ]
79fa440a-5fef-5f90-a8a2-fec7a7b0c6b8
What is the adversarial dataset used for the PI task in this paper, and where are the source sentences of this dataset from?
Your answer should be a brief text.
[ "Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning" ]
[ "PAWS: Paraphrase Adversaries from Word Scrambling" ]
[]
7a1887ea-4b59-53c5-a860-d6dbd87f0d83
Can you find a dataset that shows LLM-based evaluation may not be reliable enough?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "1257ec72-a61f-5579-8f41-cb486b3af9a0" ]
[ "iclr2024" ]
7a638d3c-59b9-5ab8-9a50-26cd189c15c0
Among the two baselines introduced in the experiment setting of the paper "Fine-tuning Language Models for Factuality", which one performs better on Medical QA? That baseline was evaluated on which dataset in the paper that proposed it?
Your answer should be a Python list of 2 elements, the first is the abbreviation of the baseline, and the second is the name of the dataset.
[ "Fine-tuning Language Models for Factuality" ]
[ "Inference-Time Intervention: Eliciting Truthful Answers from a Language Model" ]
[]
7a9a3252-14a2-5e7c-a143-fda677deccf5
What is the lowest accuracy score achieved by RawNet2 on a dataset used in the paper "Reliability Estimation for Synthetic Speech Detection" but not in "SAMO: Speaker Attractor Multi-Center One-Class Learning For Voice Anti-Spoofing"?
Your answer should be a Python float, rounded to 2 decimal places.
[ "Reliability Estimation for Synthetic Speech Detection", "SAMO: Speaker Attractor Multi-Center One-Class Learning For Voice Anti-Spoofing" ]
[]
[]
7acf83f6-e04d-5b39-9b7e-c5867793f00a
Which model achieves the highest accuracy in Table 1 on the XNLI dataset in the paper 'InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training', and in the original paper of this model, how many layers does the author design in the base size?
Your answer should be a Python list of 2 strings, the name of the model, and the number of layers of this model's base size.
[ "Struct-XLM: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning" ]
[ "InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training" ]
[]
7b4842aa-2e95-51b9-afd2-1f5e70174b3c
What is the original form of the metric formula used in the anchor paper for the test split of the BabyLM shared task dataset?
Your answer should be one formula in LaTeX format without explanation.
[ "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" ]
[ "A Better Way to Do Masked Language Model Scoring" ]
[]
7bd66a0c-2558-572f-8e9c-51c2422a7d1d
Could you suggest a dataset with legally or ethically contentious content, and labels for acceptable and non-acceptable questions?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created through Human-Machine Collaboration" ]
[ "acl2023" ]
7bf73cbd-fff0-5e06-b902-5a3d89669232
According to the paper that proposed the acceptance rate and relative performance ratio, what is the most direct metric for model evaluation?
Your answer should be a string, the formula of the metric in LaTeX format.
[]
[ "SEAL: A Framework for Systematic Evaluation of Real-World Super-Resolution" ]
[ "iclr2024" ]
7c5afdfd-0983-59be-b714-636d275bf7ad
Which paper used both automatically generated and manual templates with word tuples to adapt language models from one timestamp to another?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Learning Dynamic Contextualised Word Embeddings via Template-based Temporal Adaptation" ]
[ "acl2023" ]
7ca5b284-3586-51a2-b05f-e6adacb7e072
Is there any paper that previously proposed to control a risk using prediction sets, based on the literature in conformal prediction?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Conformal Risk Control" ]
[ "iclr2024" ]
7d208f5a-edd2-5b68-82bc-2feab767d620
In formula (3) of the paper, how is I_{container} calculated?
Your answer should be a Python string about the calculation approach and formula of I_{container}.
[ "DITW: A High-Performance Deep-Independent Template-Based Watermarking" ]
[]
[]
7d231de8-b8f7-588f-87b0-4fe7b4be0863
Which knowledge graph completion method focuses on reducing memory usage by pruning features?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "GreenKGC: A Lightweight Knowledge Graph Completion Method" ]
[ "acl2023" ]
7d856467-aba1-5b39-8ebc-d533b61dc86b
Which paper highlights the need for leveraging all available resources, including dictionaries, machine translation systems, and language learners, to construct NLP data in low-resource languages?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Rethinking Annotation: Can Language Learners Contribute?" ]
[ "acl2023" ]
7dbe525f-0e5e-5e2d-a9d5-06dac6643dff
What is the main difference between the paper retrieval methods of these two papers?
Your answer should be a brief text.
[ "SciPIP: An LLM-based Scientific Paper Idea Proposer", "Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents" ]
[]
[]
7dbe882d-0adf-5c1b-86f8-71b1a7508bca
Which paper trains on linear regression to hypothesize how fine-tuning affects language models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "063fae3a-7661-594c-ba88-ef87c051c4da" ]
[ "iclr2024" ]
7dee922c-95de-5d8e-8f03-4c27b84c7919
Which paper formally defines the problem of model selection in llm agent for multi-modal reasoning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Towards Robust Multi-Modal Reasoning via Model Selection" ]
[ "iclr2024" ]
7e784da6-8f04-575f-b944-93cb8f4e65a3
According to Figure 1, compared with the CoT prompt, what is the advantage of QAP?
Your answer should be a sentence that clearly mentions the advantage of QAP.
[ "Question-Analysis Prompting Improves LLM Performance in Reasoning Tasks" ]
[]
[]
7e8122b4-a93c-553d-917a-3f049456c2cb
What is the average length of essays used in this study?
Your answer should be a float number with 1 decimal place.
[ "PMAES: Prompt-mapping Contrastive Learning for Cross-prompt Automated Essay Scoring" ]
[]
[]
7e872b44-e211-5a40-9e99-0a4c361283a6
In the TARA dataset, which tool is evaluated the most in the test set?
Your answer should be a Python string, the name of the tool.
[]
[ "Tool-Augmented Reward Modeling" ]
[ "iclr2024" ]
7e8b3e8b-6834-5662-bb05-c05c9b0d38d6
Is there any research paper that can extract attributes from both a predefined label set and the surrounding context?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "AtTGen: Attribute Tree Generation for Real-World Attribute Joint Extraction" ]
[ "acl2023" ]
7f058c83-bd50-525a-9643-68140cf0b6da
In the dataset that the GeMQuAD paper used for Spanish labeled examples, which language has the longest paragraphs on average, measured in tokens?
Your answer should be the full name of the language, e.g. English
[ "GeMQuAD : Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning" ]
[ "On the Cross-lingual Transferability of Monolingual Representations" ]
[]
7f6dafa1-72c9-5c9b-a4bc-dbddaf15f4de
Is there a paper illustrating that pre-trained transformers from LLMs can be used to encode visual information in a wide range of scenarios?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Frozen Transformers in Language Models Are Effective Visual Encoder Layers" ]
[ "iclr2024" ]
8060edd0-d5a8-5671-8c02-a83f6e9a43a1
How much faster is OFU-MLogB compared to MNL-UCB in the multinomial logistic bandit experiment?
Your answer should be a Python string.
[ "Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost" ]
[]
[]
807fdd37-864e-584c-b556-cf63ef4b428e
For the two most recently selected ML datasets of the vision modality in Croissant, where did the raw images come from?
Your answer should be a single word, the name of the website where the images came from.
[ "Croissant: A Metadata Format for ML-Ready Datasets" ]
[ "Microsoft COCO: Common Objects in Context", "Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations" ]
[]
818501f3-3983-598b-903a-9bfc0ec268d6
Is there any paper that utilizes masked language modeling to defend against word-level adversarial attacks?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "RMLM: A Flexible Defense Framework for Proactively Mitigating Word-level Adversarial Attacks" ]
[ "acl2023" ]
81947076-ac46-5c93-a2a5-aab2896f3d36
What motivates the author to propose this paper?
Your answer should be a brief text.
[ "Gacs-Korner Common Information Variational Autoencoder" ]
[]
[]
81a62980-3225-5149-b703-e7c4bc4d48ea
Is there a reading comprehension dataset about understanding a snippet from a long story book, where fully understanding the snippet requires integrating the necessary long history text that precedes it?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Personality Understanding of Fictional Characters during Book Reading" ]
[ "acl2023" ]
81ddab00-acc0-571a-a579-739957afc345
Among the 12 datasets examined with code-davinci-002, 2 datasets show a large accuracy gap. What is the average performance difference on these two datasets when using the instruction fine-tuned model?
Your answer should be a float number with 4 decimal places between 0 and 1.
[ "Exploring the Curious Case of Code Prompts" ]
[]
[]
8205ffe4-abc4-54fc-bf39-2e0f3b375848
According to the paper that proposed BooookScore, another paper analysed the disadvantage of the only existing public dataset for book-length summarization. In that analysis paper, which book published in the 21st century is among the books where GPT-4 performs the best on name cloze?
Your answer should be a Python string, the title of the book.
[]
[ "BooookScore: A systematic exploration of book-length summarization in the era of LLMs", "Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4" ]
[ "iclr2024" ]
82a90ead-45dc-5914-9db5-5a9242a056a9
Explain the reasoning behind formula (3) in the paper.
Your answer should be a Python string giving the concise reasoning behind formula (3) in the paper.
[ "Semi-Supervised Sound Event Detection with Local and Global Consistency Regularization" ]
[]
[]
82bdaa47-a2cb-5fbd-a827-83d981f4bb52
According to Table 2, what is the difference in the percentage exceeding the 80% agreement threshold, for both concessive and causal relations, between the first and second iterations on the English dataset?
Your answer should be a percentage with two decimal places, indicating the difference in proportion.
[ "Unpacking Ambiguous Structure: A Dataset for Ambiguous Implicit Discourse Relations for English and Egyptian Arabic" ]
[]
[]
82d3067f-1da3-5800-b7ef-a4571d85ccde
Which paper is the first to prove that a fine-tuned LLM can be a reliable judge?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization" ]
[ "iclr2024" ]
8323e9b0-52be-5d8e-8c68-3975a4e1ecfe
What new efficient pre-training method was used in the pre-training process of the model used to compute embedding representations in the Document Similarity section of the paper?
Your answer should be a Python string concisely summarizing the method.
[ "De-Identification of Sensitive Personal Data in Datasets Derived from IIT-CDIP" ]
[ "Learning Transferable Visual Models From Natural Language Supervision" ]
[]
835eda31-c9b0-53a8-bc45-bb5418334a61
What datasets are used in the paper proposing the baseline for the experiments in this paper?
Your answer should be a Python list, where each element is the name of a dataset used, e.g., ["dataset1", "dataset2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION.
[ "Don’t Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span Selection" ]
[ "Fast and Accurate Neural CRF Constituency Parsing" ]
[]
837e7da7-5e5c-5cb9-bcb0-c1dc60d97569
Which paper first used structural information for coherence modeling?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Modeling Structural Similarities between Documents for Coherence Assessment with Graph Convolutional Networks" ]
[ "acl2023" ]
838bd7f8-6475-577a-801b-90c69d9b04f4
Which model achieves the best performance in the experimental results of MQUAKE-T in Figure 2? Considering the GPU VRAM consumption of that model, which of ENN and KE has similar consumption?
Your answer should be a single Python list containing two strings: the first element is the name of the best-performing model, and the second element is the name of the model (ENN or KE) with similar GPU VRAM consumption.
[ "MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions" ]
[ "Fast Model Editing at Scale" ]
[]
83ec97fc-091f-54b0-a627-cf693204090f
What are the two key technologies used in the process of augmenting trajectory-level data proposed in this paper?
Your answer should be plain text.
[ "GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning" ]
[]
[]
84834991-f063-5ab4-ad5c-bccb3de208d0
Which two independent sources of variance do the models performing sentiment classification have to cope with?
Your answer should be a Python string naming the two independent sources of variance.
[ "Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark" ]
[]
[]