| uuid (string) | question (string) | answer_format (string) | anchor_pdf (list) | reference_pdf (list) | conference (list) |
|---|---|---|---|---|---|
c2a0f81b-e98d-50ed-b809-cc95eb952082 | The methods proposed in the two anchor PDFs have similarities. For the following statements in VoiceFlow: ["Duration adapter", "y", "\|u_\theta(x_t, y, t) - (x_1 - x_0)\|^2"], which statements in Reflow-TTS correspond closely to them respectively? | Your answer should be a Python list containing three strings, arranged in the same order as in the question. | [
"VoiceFlow: Efficient Text-To-Speech with Rectified Flow Matching",
"Reflow-TTS: A Rectified Flow Model for High-Fidelity Text-to-Speech"
] | [] | [] |
c4048cbf-71e6-55ec-a0e9-ba082c5a2954 | In the PPTC benchmark paper, among the works that focus on LLMs' tool-use ability to generate APIs for solving user instructions, which one doesn't apply AST accuracy? | Your answer should be a string, the name of the method or model. | [
"PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion"
] | [
"Toolformer: Language Models Can Teach Themselves to Use Tools",
"ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs",
"Gorilla: Large Language Model Connected with Massive APIs"
] | [] |
c40f9463-e7de-5f3e-b3ec-8f64b3289541 | Are there any papers that study whether you can identify if an LLM has been instructed to hide some information? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions"
] | [
"iclr2024"
] |
c4461086-1920-5037-8eb9-f7d8e00aa31b | In the paper "MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering", dataset SPIDER was used in the training process. What were the inputs and outputs in the original design of SPIDER, and how did the authors of this work adapt the dataset for their task? | Your answer should be brief text regarding the inputs and outputs of SPIDER in the two works. | [
"MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering",
"Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task"
] | [] | [] |
c451a039-cf16-5c3b-803c-3c2b7be1d355 | What is the exact performance drop when the diffusion model is removed in the MMWHS MR-to-CT UDA setting? | Your answer should be a float number rounded to 1 decimal place. | [
"Towards Generic Semi-Supervised Framework for Volumetric Medical Image Segmentation"
] | [] | [] |
c4db8fe1-fe59-5b60-af98-b3e8edd5ef16 | Which language performs better on old sense IDs compared to new sense IDs during experiments? | Your answer should be a string of a language name. | [
"Presence or Absence: Are Unknown Word Usages in Dictionaries?"
] | [] | [] |
c516145b-51ad-5146-a4f2-88773ff98293 | In the S4WM paper's 3D environment, what's the episode length for the largest setting? | Your answer should be an integer. | [
"Facing Off World Model Backbones: RNNs, Transformers, and S4"
] | [
"Evaluating Long-Term Memory in 3D Mazes"
] | [] |
c53cba22-704b-51db-ae71-0166a727b747 | How is the Word-pair Representation matrix in Figure 4 calculated? | Your answer should be a list of formulas, representing the calculation of the Word-pair Representation matrix. | [
"USSA: A Unified Table Filling Scheme for Structured Sentiment Analysis"
] | [] | [] |
c63f7ad7-00de-56f0-8b74-2fdf420ceaa2 | How many models are evaluated in WebArena? What's the success rate of the most powerful model? | Your answer should be a Python list of 2 elements. The first one is an integer. The second one is a float rounded to 2 decimal places. | [
"WebArena: A Realistic Web Environment for Building Autonomous Agents"
] | [] | [] |
c67e3e4c-245b-509d-92b0-3ff41e82f9d4 | What research advances are incorporated into the generative language model that is used to generate associations in different languages in the SeeGULL Multilingual paper? | Your answer should be a python list of several strings. | [
"SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes"
] | [
"PaLM 2 Technical Report"
] | [] |
c69c648f-bde4-5a8b-a82e-67fe2cdefe9f | What is the limitation of this work as stated by the authors themselves? | Your answer should be plain text. | [
"Hyper-CL: Conditioning Sentence Representations with Hypernetworks"
] | [] | [] |
c6be1785-b1b0-56c7-8133-3dca86c62222 | According to the paper, how are the two losses presented in Figure 2 combined? | Your answer should be a single formula in LaTeX format extracted from the paper. | [
"Aspect-Category Enhanced Learning with a Neural Coherence Model for Implicit Sentiment Analysis"
] | [] | [] |
c6d527e4-0a3f-5c85-86a7-3b9bf155fa0a | Is there any paper that investigates backdoor attacks across various types of tasks, not limited to classification, in language models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Multi-target Backdoor Attacks for Code Pre-trained Models"
] | [
"acl2023"
] |
c7525517-c527-563a-bc60-33adfb8309a2 | In the setting with 2 agents and 2 arms, how many unique matching pairs will incur linear regret with non-incentive-aware learning algorithms? | Your answer should be a single number. | [
"Incentivized Exploration in Two-sided Matching Markets"
] | [] | [] |
c7fd4be1-7261-5b8c-bdb8-0da621536182 | Can we learn to represent an image with arbitrary numbers of tokens? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"SparseFormer: Sparse Visual Recognition via Limited Latent Tokens"
] | [
"iclr2024"
] |
c820fec0-1295-5d70-b300-6feb9bc66d5a | When the number of retrieved pairs is chosen empirically to be in the range of 3 to 5 for this data, which caption group of the testing set performs the best overall? | Your answer should be a Python string, the name of the caption group. YOU MUST USE THE EXACT NAME FROM THE PAPER. | [
"Retrieval-Augmented Text-to-Audio Generation"
] | [] | [] |
c8457d49-b7c2-5ed3-880e-be91d064d1d8 | What baselines are used in this paper? Note that a baseline is counted only once, even if different variants are provided based on it. | Your answer should be a python list of strings, every element of the list is the name of a baseline directly mentioned in this paper. | [
"Explicating the Implicit: Argument Detection Beyond Sentence Boundaries"
] | [] | [] |
c86eef72-3e3e-5fb2-b008-c97b3d33433e | How many LLMs do the authors test in the experiment part? And how many of the LLMs are openly accessible? | Your answer should be a python list of two integers. The first one is the number of LLMs, and the second one is the number of openly accessible LLMs among them. | [
"Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models"
] | [] | [] |
c912d1d0-dced-53f0-9a89-5c982701fbb5 | Is there any paper exploring real speakers and thus performing multimodal emotion recognition task? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations"
] | [
"acl2023"
] |
c95b6bb1-3445-5378-b465-2b4d4da30a17 | What are the core parameters L, H, A, and the total number of parameters (params) of the base model of the classifier in this paper? | Your answer should be a python dictionary with keys 'L', 'H', 'A', and 'params', and the corresponding value should be a number, e.g., {'L': 1, 'H': 1, 'A': 1, 'params': 1}. | [
"TartuNLP @ AXOLOTL-24: Leveraging Classifier Output for New Sense Detection in Lexical Semantics"
] | [
"Unsupervised Cross-lingual Representation Learning at Scale"
] | [] |
ca95aa28-b131-5cea-880a-63b9357ba912 | Is there any paper that utilizes Gaussian processes to analyze the vulnerability of text-conditioned generative models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Query-Efficient Black-Box Red Teaming via Bayesian Optimization"
] | [
"acl2023"
] |
cb2ee6d9-c891-53d8-92de-c5ba08404ab4 | Considering all the methods tested in the experiment section of the paper, which LLM performs the worst on the Jailbreak Success Rate metric? | Your answer should be a Python string, the name of the LLM model. YOU MUST USE THE EXACT NAME FROM THE PAPER. | [
"GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models"
] | [] | [] |
cb5327e3-022f-5eb5-98fe-a84c26dd68ad | How many tools does the proposed CodeAgent integrate into its framework, and which one is the most useful based on its ablation study? | Your answer should be a Python list of length 2. The first one is an integer number, and the second one is a text string of the tool name. | [
"CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges"
] | [] | [] |
cb721bd4-b219-50b7-99f9-1f0a5f5da438 | Which paper found that mutual learning benefits multilingual models? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Towards Higher Pareto Frontier in Multilingual Machine Translation"
] | [
"acl2023"
] |
cb9cb4ee-c76a-5b00-bb19-9f238ac88b5f | When discussing "Engagingness" in the paper, what is the definition of engagingness with interestingness from prior research? | Your answer should be a Python string. | [
"MEEP: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings"
] | [
"Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation",
"G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment",
"Towards a Unified Multi-Dimensional Evaluator for Text Generation"
] | [] |
cc8b6743-e4fd-5365-b027-f6a70a30187e | Name a paper which proposes a probabilistic formulation of retrosynthesis. | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"fa40de30-fa96-53b9-b422-29fc2a233e3a"
] | [
"iclr2024"
] |
cc9a3391-3e28-5f15-933b-1fca191d7c30 | Which dataset used in this paper consists of 14K open-domain English conversations with a total of 80K question-answer pairs? I want to use this dataset for my research. Can you provide me with the GitHub link of this dataset? | Your answer should be a Python list of 2 strings, the name of the dataset, and the GitHub link of this dataset. | [
"Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting"
] | [
"Open-Domain Question Answering Goes Conversational via Question Rewriting"
] | [] |
cd182de6-a2ef-52fd-bc07-73a990855005 | What stages does the training of Med-Real2Sim comprise? | Your answer should be a string list consisting of the training stages in order. | [
"Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning"
] | [] | [] |
cd235027-4032-5403-964a-b2c7e7550966 | By what percentage does VerifiNER improve the F1 score of the three baseline models on average on GENIA? | Your answer should be a Python float rounded to two decimal places WITHOUT ANY PUNCTUATION OR EXPLANATION, e.g. 21.30 | [
"VerifiNER: Verification-augmented NER via Knowledge-grounded Reasoning with Large Language Models"
] | [] | [] |
cd63b251-d7ef-58a6-83be-75d95099d550 | Is there a Chinese hate speech paper that constructs an insulting lexicon while building the dataset? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Facilitating Fine-grained Detection of Chinese Toxic Language: Hierarchical Taxonomy, Resources, and Benchmarks"
] | [
"acl2023"
] |
cd837558-b900-5448-9c36-9a0c0f29924d | Which paper proposes a memory-efficient optimizer considering the confidence of each update during the optimization? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"CAME: Confidence-guided Adaptive Memory Efficient Optimization"
] | [
"acl2023"
] |
cd837c4f-07d1-5db8-84c1-f258aa7985ea | Considering both benefits and costs, what is the best size of the generation pool for the proposed method? | Your answer should be a single integer number. | [
"SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created through Human-Machine Collaboration"
] | [] | [] |
cd981e15-df19-5c78-8ed1-13b16d0ff91f | Where can I find the exact dataset utilized by the LEGO-Prover paper? | Your answer should be a string, the URL as given in the paper without "https://", "http://" or "www.", e.g. "google.com" | [] | [
"LEGO-Prover: Neural Theorem Proving with Growing Libraries",
"Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs"
] | [
"iclr2024"
] |
cdf4e053-3112-54ba-a582-6fbb58c15a20 | When it comes to Empirical and Certified Robustness, on which dataset and at which poison rate is the accuracy on Benign Samples of the method proposed by the paper closest to the accuracy on Benign Samples in the no-defence situation? | Your answer should be a single python list, the first element is the dataset name, the second element is a float number rounded to 1 decimal place. | [
"CROWD: Certified Robustness via Weight Distribution for Smoothed Classifiers against Backdoor Attack"
] | [] | [] |
cdfcefb3-e2be-515b-aa68-6baf717b17a2 | Which paper first showed that one can build a fully differentiable mixture-of-experts layer with no increase in time complexity? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"From Sparse to Soft Mixtures of Experts"
] | [
"iclr2024"
] |
ce722b7c-281b-5f8e-bf5e-06117f832f54 | What is the iAA (inter-annotator agreement with AttentionXML) value of the correctly predicted results? | Your answer should be a floating-point number with two decimal places. | [
"Financial Numeric Extreme Labelling: A dataset and benchmarking"
] | [] | [] |
ce769caf-b9cd-58c7-9b38-ee23d0d17f9b | How many more turns per dialogue are there in MMDU Benchmark than in MMDialog? | Your answer should be an integer, the difference of average turns rounded to integer. | [
"MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs",
"MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation"
] | [] | [] |
ce95db65-95c3-55d5-8eda-3e80ef6d0775 | Using only task-level prompts or using only example-specific prompts, which is better on the Multi-Domain test set? | Your answer should be a single string, either "task-level" or "example-specific". | [
"In-context Examples Selection for Machine Translation"
] | [] | [] |
cecfb20f-ebba-5f01-98c1-5259abb28f74 | Which model did the authors use to fine-tune the pre-detector $g_{\phi}$ and conflict disambiguator $g_{\psi}$? How many times more params does ELMo have than this model? | Your answer should be a python list of two elements. The first element is a string of a name of a model, and the second element is a python float with two decimal places. | [
"PokeMQA: Programmable knowledge editing for Multi-hop Question Answering"
] | [
"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"
] | [] |
cf038bcd-0053-5e50-8f6c-b52b103387c3 | Which Dataset has the most classes according to Table 1? | Your answer should be a single string of the Dataset's name. | [
"RankAug: Augmented data ranking for text classification"
] | [] | [] |
cff35edd-a526-59ea-a003-787ebabcd2d7 | Which paper first applied the chain-of-thought technique in the text summarization field? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method"
] | [
"acl2023"
] |
d011e781-2e29-52ea-8281-e7bc25c68622 | What paper first proposed a robust perceptual similarity metric with certificates? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"LipSim: A Provably Robust Perceptual Similarity Metric"
] | [
"iclr2024"
] |
d0157667-a921-5e91-8948-4e0f31b3010c | Is there a paper that uses an app for a popular tabletop game to gather real transcripts of gameplay with concrete values for players' and monsters' health? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information"
] | [
"acl2023"
] |
d017f05c-7062-526f-9b1c-8ec63bbda641 | What are the sources of the forecasting questions in the datasets used in the experimental section of the paper "AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval"? | Your answer should be a python list of strings, e.g., ["source1", "source2"]. | [
"AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval"
] | [
"Forecasting Future World Events with Neural Networks"
] | [] |
d01dc0bf-ace9-5117-bd6f-8c943ddf494c | Among the several biases presented in the paper named "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena", which one is considered and discussed in the paper named "Humans or LLMs as the Judge? A Study on Judgement Bias"? After the discussion, is this bias the main influential factor in the latter paper? | Your answer should be a single list, the first element is a string, the bias name, and the second element is a boolean, e.g., ["Verbosity bias", false] | [
"Humans or LLMs as the Judge? A Study on Judgement Bias",
"Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena"
] | [] | [] |
d0f2ce9d-5b5b-5920-a747-48a3ad34cdfe | I believe introducing synthetic text-to-SQL data into model fine-tuning will benefit the final accuracy of the text-to-SQL task. But will this also help the model to generalize better on other general and broad tasks? | Your answer should be a single boolean value (`True` or `False`) indicating whether the synthetic data will improve the performance on universal tasks, not only text-to-SQL. | [
"Synthesizing Text-to-SQL Data from Weak and Strong LLMs"
] | [] | [] |
d170af87-1580-52a1-b6d1-814f2ddbfac4 | What's the evaluation baseline used in the paper titled "Generative Adversarial Training with Perturbed Token Detection for Model Robustness"? What are the contributions that make this baseline different from existing adversarial datasets? | Your answer should be a python list of two strings, the first element is the baseline name (the abbreviation is enough), and the second element is the contributions. | [
"Generative Adversarial Training with Perturbed Token Detection for Model Robustness"
] | [
"Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models"
] | [] |
d1df78e0-b32e-5878-b302-ae1d5408e8a7 | What is a paper studying data being collected in bundles in reinforcement learning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity"
] | [
"iclr2024"
] |
d1f34ae4-023d-5913-bcaa-0f58087bbe36 | According to the paper that proposed the second smallest baseline applied in the MetaGPT paper, what's the difference in pass rate between the best and the worst method, under the single-line infilling setting? | Your answer should be a float between 0 and 100, rounded to one decimal place. | [] | [
"MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework",
"InCoder: A Generative Model for Code Infilling and Synthesis"
] | [
"iclr2024"
] |
d2223321-8fa3-5adc-a616-0b5d794941f6 | What is the architecture of the transformer-based classifier used in the paper "A Two-Model Approach for Humour Style Recognition"? | Your answer should be a python strings. | [
"A Two-Model Approach for Humour Style Recognition"
] | [
"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"
] | [] |
d2369c7a-ea10-5299-818e-78c80de60a82 | Is there a single GNN model that can inductively generalize to any knowledge graph?;What is the method to generalize knowledge graph reasoning to graphs with new entities and relations?;Is there a foundation model for knowledge graphs that does not learn embeddings for each node and relation type? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"ec1f567e-30b2-5fb3-a626-b478de9f79ba"
] | [
"iclr2024"
] |
d28d742d-3c54-5729-9aec-ff098cd5f44f | Which large model series were used to evaluate the prototype of the BBH dataset (the benchmark used in the paper "Tree of Problems: Improving structured problem solving with compositionality") when that prototype was first proposed? | Your answer should be a python list of strings, e.g., ['model1', 'model2']. YOU MUST USE THE ABBREVIATIONS PROVIDED IN THE PAPER. | [
"Tree of Problems: Improving structured problem solving with compositionality"
] | [
"Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models"
] | [] |
d2ce712c-a887-538c-a0fd-4cf01de110d4 | Is there any paper that leverages syntactic rules to explicitly guide text generation? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Explicit Syntactic Guidance for Neural Text Generation"
] | [
"acl2023"
] |
d2f3c57b-d05d-522b-b8bf-21651c72b837 | What paper mitigates the vocabulary size limitation when pretraining multilingual masked language models using a contrastive loss? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Headless Language Models: Learning without Predicting with Contrastive Weight Tying"
] | [
"iclr2024"
] |
d35568c3-eed9-5383-a49a-c363470c175d | In this paper's main evaluation results, two baselines use CodeLLaMA as the base model; which one performs better? In the paper introducing that better baseline, aside from the datasets mentioned in this paper, what other in-domain datasets are used? | Your answer should be a python list, the first element is the name of the baseline model, and the following elements are the in-domain datasets used in the paper, e.g., ["baseline_model_name", "dataset1", "dataset2", ...]. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving"
] | [
"MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning",
"Code Llama: Open Foundation Models for Code"
] | [] |
d4046885-386a-5ea9-a53e-44a4d33ab4b4 | Is there commonsense reasoning dataset which generates diverse sentences to describe the relation between concepts? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"DimonGen: Diversified Generative Commonsense Reasoning for Explaining Concept Relationships"
] | [
"acl2023"
] |
d4098c02-cef5-5a6e-b9ec-12e068658af6 | Which datasets used in the experiments of the paper proposing the previous best RGB-based method are not used in this paper? | Your answer should be a python list about the names of the datasets, e.g., ['dataset1', 'dataset2']. YOU MUST USE THE EXACT NAMES FROM THE PAPER. | [
"Slowfast Network for Continuous Sign Language Recognition"
] | [
"Continuous Sign Language Recognition with Correlation Network"
] | [] |
d49c4e91-ace9-5ba1-a728-6083ffc72194 | According to Table 3, on which single task and on which metric can no multi-task model outperform the corresponding single-task model? | Your answer should be a Python list of two strings. The first string is the name of the task, and the second string is the name of the metric. | [
"VoxtLM: Unified Decoder-Only Models for Consolidating Speech Recognition, Synthesis and Speech, Text Continuation Tasks"
] | [] | [] |
d4cca186-6fd8-5b84-a38a-6145ceaec283 | What's the second method discussed in the "Related Work" of the paper that proposes CoMeDi? In the work that proposed that method, how many fewer sabotages per game on average did the method proposed by the authors make than their baseline, under the "Non-repulser" condition? | Your answer should be a Python list of 2 elements, the first is a string, the name of the method, and the second is a float rounded to 2 decimal places, the average difference in sabotages per game. | [
"Diverse Conventions for Human-AI Collaboration"
] | [
"Adversarial Diversity in Hanabi"
] | [] |
d53db8ce-5380-58fc-be67-409a729fb21f | Provide a brief introduction to the task in the SuperGLUE benchmark that was not used in the paper "Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning". | Your answer should be a Python string. | [
"Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning"
] | [
"SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems"
] | [] |
d575f608-c3fc-5a6a-97ea-01443d949f57 | How many more examples per model are used in the experiment of "LLM Evaluators Recognize and Favor Their Own Generations" than in "Benchmarking Cognitive Biases in Large Language Models as Evaluators"? | Your answer should be an integer. | [
"Benchmarking Cognitive Biases in Large Language Models as Evaluators",
"LLM Evaluators Recognize and Favor Their Own Generations"
] | [] | [] |
d5e3a89b-4ef9-5ce5-b80b-76cba7c02e76 | What are the detailed hyperparameters of the sentence-level scorer in Fine-grained Evaluation System of this paper? | Your answer should be a python dictionary about the hyperparameters, e.g. {"hyperparameter1": "value1", "hyperparameter2": "value2", ...}. YOU MUST USE THE EXACT NAMES FROM THE PDF WITHOUT CHANGING THE CAPITALIZATION. | [
"PSST: A Benchmark for Evaluation-driven Text Public-Speaking Style Transfer"
] | [
"TinyLlama: An Open-Source Small Language Model"
] | [] |
d5ea5e23-0a82-5621-9932-ff0f19a68885 | The paper "BLM-s/lE: A structured dataset of English spray-load verb alternations for testing generalization in LLMs" uses two pre-trained models for its experiments. For the newer one, what is its name and on what task is it pre-trained? | Your answer should be a single python list, the first element is the name of the model, the second element is the name of the task it is pre-trained on. | [
"BLM-s/lE: A structured dataset of English spray-load verb alternations for testing generalization in LLMs"
] | [
"ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators"
] | [] |
d60762bb-75c6-57d0-918b-9df1525d7269 | In the introduction of this paper, five tracks are mentioned. What are the detailed definitions of Track 2 and Track 3? | Your answer should be a Python string giving the detailed definitions of Track 2 and Track 3. | [
"YNU-HPCC at WASSA-2023 Shared Task 1: Large-scale Language Model with LoRA Fine-Tuning for Empathy Detection and Emotion Classification"
] | [
"Findings of WASSA 2023 Shared Task on Empathy, Emotion and Personality Detection in Conversation and Reactions to News Articles"
] | [] |
d68e9387-5bbd-5a7d-9091-bc67f849d296 | Which paper first constructed a structured knowledge base to interconnect different human social roles and attributes? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives"
] | [
"acl2023"
] |
d6e0672c-e059-597e-9686-8e77be00fc2c | According to the DreamLLM paper, how many of the evaluated Text2Image Specialists did it fail to beat on MS-COCO? | Your answer should be an integer. | [] | [
"DreamLLM: Synergistic Multimodal Comprehension and Creation"
] | [
"iclr2024"
] |
d74c128e-d9ec-545c-b10b-d3d3116d9ec9 | How many samples are there in SeeClick's general vision-language instruction-following data? | Your answer should be a single number rounded to the nearest thousand, e.g. 15000. | [
"SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents"
] | [
"Visual Instruction Tuning"
] | [] |
d7aa4317-7e09-53d4-9b5d-61b51995b83f | Which paper first applied chain-of-thought concepts to the 3D localization problem? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"CoT3DRef: Chain-of-Thoughts Data-Efficient 3D Visual Grounding"
] | [
"iclr2024"
] |
d7d4bc83-37ab-5ab9-8693-b4c2e6e38781 | What do you think is the biggest advantage of Variator compared to baseline models as shown in Table 1 in the article 'Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules'? Please briefly describe the working principle of LTP, which is one of the baseline models. | Your answer should be a brief text. | [
"Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules"
] | [
"Learned Token Pruning for Transformers"
] | [] |
d82a4438-587e-5405-9351-319110cd89de | What is the average proportion of papers in the ACL Anthology in the recent ten years which mention the words speech, spoken or audio in the title? | Your answer should be a Python float number rounded to 3 decimal places, e.g., 0.001. | [
"Putting Natural in Natural Language Processing"
] | [] | [] |
d8bac6a0-2cb4-5620-ac2c-1b2b67c25d0b | Which two prompting methods in the two papers have similar principles? | Your answer should be a python list of two prompting method. You must use abbreviations as the papers given. | [
"From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting",
"ReAct: Synergizing Reasoning and Acting in Language Models"
] | [] | [] |
da0e52b5-63f3-5fac-a046-063ecb48cf5a | When conducting experiments, which kind of GPU device is used in this paper? | Your answer should be a brief text. | [
"Re-weighting Tokens: A Simple and Effective Active Learning Strategy for Named Entity Recognition"
] | [] | [] |
da0eec1f-57e5-5fd8-aff2-cd21493eb60c | Has any study explored the zero-shot extraction of persona characteristics within conversational dialogues? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"PAED: Zero-Shot Persona Attribute Extraction in Dialogues"
] | [
"acl2023"
] |
da8f996b-f289-5718-ac05-36ba34285a28 | Which paper first tried to fine-tune LLMs with chain-of-thoughts and program-of-thoughts for math reasoning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning"
] | [
"iclr2024"
] |
da9752f5-e86d-577d-a99b-88a399197e6a | How many multi-modal baselines, excluding the method this paper proposed, do the authors use? Among these baselines, which reaches the highest F1 score on the Twitter2015 dataset? And what about the Twitter2017 dataset? | Your answer should be a python list, the first item is the number of multi-modal baselines the paper used, and the second and third items are the names of the methods reaching the highest F1 score on the Twitter2015 and Twitter2017 datasets respectively. | [
"AoM: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis"
] | [] | [] |
db1901ae-ae9a-5f74-9479-c1846458d265 | Is there a paper which applies Bayesian optimization to modular continual learning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"A Probabilistic Framework for Modular Continual Learning"
] | [
"iclr2024"
] |
db606413-3034-5687-a6ec-535a4244e8a1 | For zero-shot performance on unseen languages, which model in the experiment of the paper (titled "Zero-Shot Cross-Lingual NER Using Phonemic Representations for Low-Resource Languages") gets the highest F1 score? In the source paper of this model, which languages is it evaluated on? | Your answer should be a python list like ["string1", ["string2", "string3", ...]]. The first element should be a string, representing the name of the model. The second element should be a list of strings, representing the languages. NOTE that the languages should be in the format of ISO 639-3 codes. | [
"Zero-Shot Cross-Lingual NER Using Phonemic Representations for Low-Resource Languages"
] | [
"XPhoneBERT: A Pre-trained Multilingual Model for Phoneme Representations for Text-to-Speech"
] | [] |
db9b0fe4-a8e1-5344-8fab-77bbea36c1f1 | Which paper first constructed a large-scale corpus to improve in-context learning of large language models in the pre-training stage? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Pre-Training to Learn in Context"
] | [
"acl2023"
] |
dbe56b05-6b5d-5b95-8b02-6d455e8f0c75 | How many more Transformer layers should the new method compute compared with the standard LLM, for an LLM with 13B parameters? | Your answer should be an integer. | [
"Dialogue Summarization with Mixture of Experts based on Large Language Models"
] | [] | [] |
dc151869-421a-5d8e-b56b-af7266c08585 | What training acceleration methods are compared in the original paper that describes the methods used in training the FLM-101B model? | Your answer should be a python list of strings. | [
"FLM-101B: An Open LLM and How to Train It with $100K Budget",
"Masked Structural Growth for 2x Faster Language Model Pre-training"
] | [] | [] |
dc634e00-e936-527b-b3e1-93565fa0178b | What are some data-efficient ways to learn text embeddings through contrastive learning? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Composition-contrastive Learning for Sentence Embeddings"
] | [
"acl2023"
] |
dcd9e737-6a5f-519f-8357-0a0e6d002c1e | From which two subsets is the benchmark used as the evaluation set for the BLOOM and BLOOMZ models in the paper merged? (The paper is named "An Empirical Study of In-context Learning in LLMs for Machine Translation".) | Your answer should be a single python list, every element of the list is a string of the abbreviation name of the subset, e.g., ["TAT-Conv", "TAT-Web"]. | [
"An Empirical Study of In-context Learning in LLMs for Machine Translation"
] | [
"IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages"
] | [] |
ddd6cf56-5026-5482-b637-f8dd9a20acf6 | Is there a theory paper that explains why sometimes tuning momentum does not boost performance for training a neural network? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"The Marginal Value of Momentum for Small Learning Rate SGD"
] | [
"iclr2024"
] |
de8d9b7f-4117-53d1-988d-77036e001339 | Which approach was first proposed to solve the text-conditioned image retrieval task according to the SDA paper? In this approach, how is the gating connection computed? | Your answer should be a python list of two strings, the first is an approach name and you must use the abbreviation as given in the papers. The second is a formula in LaTeX format. | [
"SDA: Semantic Discrepancy Alignment for Text-conditioned Image Retrieval"
] | [
"Composing Text and Image for Image Retrieval - An Empirical Odyssey"
] | [] |
dea6a700-d8b4-5269-851e-d3a99f3961f5 | In the PRL paper, besides the proposed method, which baseline performs best? In the paper that proposes that baseline, what's the loss function for the controller? | Your answer should be a Python list of 2 elements, the first is the abbreviation of the baseline, and the second is a string, the formula in LaTeX format. | [
"Work-in-Progress: Using Symbolic Planning with Deep RL to Improve Learning"
] | [
"Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation"
] | [] |
df2d7dce-b86a-5805-b524-3f453268240f | According to Figure 1, in which year has the highest proportion of NLP papers that explicitly mention speech-related terms in their title? | Your answer should be the year number with the highest proportion, e.g. 2000. | [
"Putting Natural in Natural Language Processing"
] | [] | [] |
df46a4db-9a21-55a7-b84a-7764604b47c5 | Is there any paper that uses Lipschitz continuity in learning a dynamics model? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"CCIL: Continuity-Based Data Augmentation for Corrective Imitation Learning"
] | [
"iclr2024"
] |
df8afde5-4e93-5e03-86a6-a98bcccdc1e7 | What paper provides generalization bounds for self-supervised learning models, e.g., CLIP? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Understanding prompt engineering may not require rethinking generalization"
] | [
"iclr2024"
] |
e010a084-060b-5edb-8ff5-9be8bc82f010 | Which baselines are chosen for the study of Separate Training-based methods in the paper named "Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation"? In the source paper of the baseline 'Prior', what control framework is proposed? | Your answer should be a single python list, the first element is a list of strings of the baselines, the second element is a string about the control framework, e.g., [["Prior", "Baseline2"], "The paper proposes a novel control framework that introduces..."]. | [
"Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation"
] | [
"Controllable Text Generation via Probability Density Estimation in the Latent Space"
] | [] |
e03ad6fe-d951-5f49-b4b8-6415e0c8203e | Which paper produces a dataset for text simplification in over 12 languages and evaluates both finetuning and in context learning approaches to text simplification in those languages? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Revisiting non-English Text Simplification: A Unified Multilingual Benchmark"
] | [
"acl2023"
] |
e0411ac6-86d2-52ff-bcc1-4e9dba8177c5 | Is there any paper that automatically creates a dataset for summarizing text from one language to another for a large collection of languages? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"CrossSum: Beyond English-Centric Cross-Lingual Summarization for 1,500+ Language Pairs"
] | [
"acl2023"
] |
e1180112-dc52-5a5c-9907-6d007f17b729 | I am a beginner in the field of NLP, and I wonder roughly how many papers on average one paper should cite, given the provided paper list. | Your answer should be a single integer number, rounded from the average reference number. | [
"XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts",
"SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding",
"MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering"
] | [] | [] |
e1184bfe-06e5-5294-9f96-fa353008ba83 | Who are the authors of this paper? What are their institutions? | Your answer should be a Python dictionary, e.g. {"Amy": ["Massachusetts Institute of Technology", "Carnegie Mellon University"], "Bob": ["Shanghai Jiaotong University"]}. YOU MUST USE THE FULL AND EXACT WORDS FROM THE PDF. | [
"Leveraging Collection-Wide Similarities for Unsupervised Document Structure Extraction"
] | [] | [] |
e1464930-9482-58cc-8245-b84ab34841e9 | In the paper that proposed the information filtering hypothesis, under the non-semantic task setting, how much do the frozen LLM transformers improve VectorNet, considering the miss rate? | Your answer should be a positive float, rounded to 1 decimal place. | [] | [
"Frozen Transformers in Language Models Are Effective Visual Encoder Layers"
] | [
"iclr2024"
] |
e24d9741-a47e-5f69-8bde-ead5219761be | In video diffusion models, is there any paper that tried decomposing a video instruction into sub-instructions for different time steps? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Seer: Language Instructed Video Prediction with Latent Diffusion Models"
] | [
"iclr2024"
] |
e26691f8-389b-5939-917d-f0be16cec850 | What are the differences between the two GNeRF settings in the experiments of the paper? | Your answer should be a Python string about the obvious differences. | [
"InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules"
] | [] | [] |
e26e0a8e-7e6b-55b0-b658-af94309cd496 | According to the experimental results, if we remove the document fact attention module and use mean pooling to fuse all document semantic representation vectors, by how much does the F1 score of FINEGRAINFACT decline in summaries generated by pre-trained language models published in or after 2020? | Your answer should be a single python float, rounded to 2 decimal places. | [
"Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization"
] | [] | [] |
e26fafda-6b5f-5a6c-8fe6-57647a29c7e7 | Is there any paper trying to improve MLE for auto-regressive language modeling through the lens of optimal transport? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"EMO: EARTH MOVER DISTANCE OPTIMIZATION FOR AUTO-REGRESSIVE LANGUAGE MODELING"
] | [
"iclr2024"
] |
e2e3bd05-d47f-5602-83a5-06b21a463035 | Which machine learning paper proposed certified robustness in the malware detection domain? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness"
] | [
"iclr2024"
] |
e30796f1-1fa4-516f-8295-ba45725d32de | Is there a dataset that allows to perform aspect-based sentiment classification on French news? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"MAD-TSC: A Multilingual Aligned News Dataset for Target-dependent Sentiment Classification"
] | [
"acl2023"
] |
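
Each row above follows the six-column schema in the header: a `uuid` key, a natural-language `question`, a free-text `answer_format` specification, and three list-valued columns (`anchor_pdf`, `reference_pdf`, `conference`). Below is a minimal sketch of how a row could be represented and sanity-checked in Python; the `QARecord` class and its `validate` helper are illustrative assumptions, not part of any released loader for this dataset.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QARecord:
    """One row of the table: a question grounded in anchor and/or reference papers."""
    uuid: str
    question: str
    answer_format: str
    anchor_pdf: List[str] = field(default_factory=list)
    reference_pdf: List[str] = field(default_factory=list)
    conference: List[str] = field(default_factory=list)

    def validate(self) -> None:
        # The scalar columns are always populated in the rows above.
        assert self.uuid and self.question and self.answer_format
        # Every row above names at least one paper, either as an anchor
        # (a paper the question is asked about) or as a reference
        # (a paper that constitutes the answer).
        assert self.anchor_pdf or self.reference_pdf, "row cites no paper"

# Example built from the first row of the table.
row = QARecord(
    uuid="c2a0f81b-e98d-50ed-b809-cc95eb952082",
    question="Which statements in Reflow-TTS correspond closely to the "
             "given statements in VoiceFlow?",
    answer_format="Your answer should be a Python list containing three "
                  "strings, arranged in the same order as in the question.",
    anchor_pdf=[
        "VoiceFlow: Efficient Text-To-Speech with Rectified Flow Matching",
        "Reflow-TTS: A Rectified Flow Model for High-Fidelity Text-to-Speech",
    ],
)
row.validate()
```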