uuid
string
question
string
answer_format
string
anchor_pdf
list
reference_pdf
list
conference
list
0418ab60-0b47-578f-9af4-5fee333dcaa3
In ICLR 2024 Oral papers, which paper proposes a novel unsupervised RL objective, which the authors call Metric-Aware Abstraction (METRA)?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "METRA: Scalable Unsupervised RL with Metric-Aware Abstraction" ]
[ "iclr2024" ]
04c66ea8-6710-58d1-9cda-ce351f28fd4d
In the paper that integrates Foundation Models, Federated Learning, and Blockchain into a unified framework for smart cities, what is the institution of the first author of this paper?
Your answer should be a Python string of the name of the institution.
[]
[ "The SynapticCity Phenomenon: When All Foundation Models Marry Federated Learning and Blockchain" ]
[ "neurips2024" ]
04f045de-d5e0-591e-b877-8222a1a360a7
A recent paper introduces BrainMD, a large-scale multimodal dataset comprising 2,453 3D MRI brain scans paired with radiology reports and longitudinal health records, designed to evaluate vision-language models (VLMs) on medical imaging tasks. Based on the experimental results presented in this work regarding BrainMD, could you please tell me which model among [Flamingo, Med-Flamingo, Med-PaLM-2] demonstrates the highest performance in identifying cancer type?
Your answer must be one of the following: ['Flamingo', 'Med-Flamingo', 'Med-PaLM-2']
[ "Enhancing vision-language models for medical imaging: bridging the 3D gap with innovative slice selection" ]
[]
[]
057cc726-950b-578f-a8af-a6a2cb3f40ad
In the paper that proposes I-Frame Domain Adaptation in Neural Video Compression, the authors modify \gamma_d and \gamma_u to trade off parameter efficiency against performance improvement. Which pair of \gamma_d and \gamma_u adds the fewest training parameters compared to the base model?
Your answer should be a Python list of two integers indicating \gamma_d and \gamma_u respectively.
[]
[ "An image to tailor: I-Frame Domain Adaptation in Neural Video Compression" ]
[ "neurips2024" ]
06b9e050-ff53-55a6-8619-ba98e5e360e5
In ICLR 2024 Spotlight papers, a paper attempts to solve how to train a general agent in Reinforcement Learning (RL) that can thoroughly explore the environment and learn new and diverse skills. What is the affiliation of the second author?
Your answer should be a Python string.
[]
[ "Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning" ]
[ "iclr2024" ]
072b05e0-08ee-5fba-a27f-da78002f5b67
There is a recent paper introducing a large-scale, long-term, semantically annotated outdoor dataset collected across the USC campus using a mobile robot equipped with multi-camera and LiDAR sensors. It features 10 million images and 1.4 million point clouds annotated with 267 semantic classes using GPT-4 and Grounded-SAM, enabling fine-grained 3D scene understanding. Please inform me which semantic label has the highest point frequency in this dataset.
Your answer should be a name of a semantic label.
[]
[ "USCILab3D: A Large-scale, Long-term, Semantically Annotated Outdoor Dataset" ]
[ "neurips2024" ]
082caa5e-c57d-5fe5-a0dd-226b0d55fc09
In NeurIPS 2024 Poster papers, a paper proposes a new problem in offline MBRL called "The Edge-of-Reach Problem". In Figure 5, which two kinds of statistical visualization methods are used?
Your answer should be plain text.
[]
[ "The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning" ]
[ "neurips2024" ]
08f0f56d-a98e-5065-8cec-b4c4a78f830d
A paper focuses on generating 3D molecular conformers conditioned on molecular graphs in a multiscale manner, studying how diffusion models process 3D geometries in a coarse-to-fine fashion. In their analysis of the spectral domain of Equivariant Blurring Diffusion (EBD), do smaller or higher eigenvalues correspond to finer details?
Your answer should be chosen between "smaller" and "higher".
[ "Equivariant Blurring Diffusion for Hierarchical Molecular Conformer Generation" ]
[]
[]
0afdfb80-125c-5cb1-a814-e410ad15d56e
In the paper that proposed the LMSYS-Chat-1M dataset, according to the sampled conversations, which category contains the most clusters?
Your answer should be a string, the category tag as shown in the corresponding figure.
[]
[ "LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset" ]
[ "iclr2024" ]
0b025b6f-17f4-5481-bd38-a6d95dd8ecee
There is a paper proposing Hybrid-LITE, a memory-efficient hybrid retriever that combines BM25 with a novel dense retriever called LITE. LITE is trained using joint contrastive learning and knowledge distillation from DrBoost, a boosting-based dense retriever. May I ask which lab collaborated with Arizona State University on this work?
Your answer should be a name of a lab.
[ "A Study on the Efficiency and Generalization of Light Hybrid Retrievers" ]
[]
[]
0bed04d1-ac96-52ff-becb-4840138e3c34
In ICLR 2024 Spotlight papers, a paper unifies reinforcement learning and imitation learning methods under a dual framework. How many pages are there in this paper?
Your answer should be a Python integer.
[ "Dual RL: Unification and New Methods for Reinforcement and Imitation Learning" ]
[]
[]
0d5b262f-8137-58d6-b2f8-65610ac2130a
What's the forward process of HydraLoRA?
Your answer should be a Python string, the formula in LaTeX format.
[]
[ "HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning" ]
[ "neurips2024" ]
0e331b4a-bcfd-5aa1-a638-ceccedd419b5
In which mathematical subject does the ablation model trained on 512K instances achieve the highest accuracy?
Your answer should be a string, which indicates a subject.
[ "OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset" ]
[]
[]
0e989547-e157-5bb4-b961-314772eac86c
In ICLR 2024 Poster papers, a paper proposes a novel framework named PARL (Policy Alignment in Reinforcement Learning), aiming to address the policy alignment problem in Reinforcement Learning (RL). Tell me the affiliated university of the first author.
Your answer should be a Python string.
[]
[ "PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback" ]
[ "iclr2024" ]
0f29ac1d-7066-52ff-8967-9eb6e02d47e2
Which paper published in ICLR 2024 formulates the reference-free MT evaluation into a pairwise ranking problem?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "MT-Ranker: Reference-free machine translation evaluation by inter-system ranking" ]
[ "iclr2024" ]
0f9dacbc-f690-5e1c-ab56-30e96ae31265
Which paper published in ICLR 2024 proposes EControl, a novel mechanism that can regulate error compensation by controlling the strength of the feedback signal?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "EControl: Fast Distributed Optimization with Compression and Error Control" ]
[ "iclr2024" ]
0fa85fce-1bca-5a8f-9b37-f3fe84777222
What is the role of mSDF and how does G-SHELL's extraction algorithm integrate SDF and mSDF?
Your answer should be a sentence answering the two questions.
[]
[ "Ghost on the Shell: An Expressive Representation of General 3D Shapes" ]
[ "iclr2024" ]
0fe413e3-5931-522a-aade-0d0436b9f160
Among the papers at ACL 2023 focusing on cross-lingual transfer learning, what is the average improvement in cross-lingual classification accuracy achieved by the X-InSTA method compared to random prompt selection?
Your answer should be a Python float value between 0 and 100, representing the percentage point improvement, rounded to 1 decimal place.
[]
[ "Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment" ]
[ "acl2023" ]
104e822b-ccda-58e7-a162-888e4eb2bd68
In ICLR 2024 Poster papers, a paper proposes a reward smoothing method called "DreamSmooth". How many different tasks are illustrated in Figure 5?
Your answer should be a Python integer.
[]
[ "DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing" ]
[ "iclr2024" ]
108f6d70-3eec-5cb9-99ed-828c2b243731
In the paper titled "Causal Confusion and Reward Misidentification in Preference-Based Reward Learning", how is the difference between reward functions measured? Please give me the relevant GitHub link.
Your answer should be a single python string like "https://github.com/a/b", the link should be the full link of the github repository.
[ "Causal Confusion and Reward Misidentification in Preference-Based Reward Learning" ]
[ "Quantifying Differences in Reward Functions" ]
[]
10b8b656-3230-5398-be80-502883a46a2e
In the paper that proposes MULAN, which autoregressive-type model has the highest likelihood in bits per dimension on the test set of ImageNet?
Your answer should be a Python string indicating the name of the model.
[]
[ "Diffusion Models With Learned Adaptive Noise" ]
[ "neurips2024" ]
10ba9268-333c-555c-a52f-2b3f9473f5c8
In ICLR 2024 Poster papers, a paper proposes a new Adversarial Imitation Learning (AIL) algorithm, aiming to address the sample efficiency and scalability issues of existing AIL methods when dealing with off-policy data. Tell me the title of this paper.
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Adversarial Imitation Learning via Boosting" ]
[ "iclr2024" ]
10e16a45-bcf5-5bc4-93f7-b1a0d851b14b
A recent paper introduces a publicly available multi-granularity dataset for job skill demand forecasting, compiled from millions of online job advertisements. It uniquely supports forecasting at the occupation, company, and regional levels, and includes comprehensive benchmarks across statistical, deep learning, and graph-based models. Please let me know which institution the first author of this work is affiliated with.
Your answer should be a name of an institution.
[]
[ "Job-SDF: A Multi-Granularity Dataset for Job Skill Demand Forecasting and Benchmarking" ]
[ "neurips2024" ]
15618a03-0eec-514a-a04b-df78b61ef229
A paper introduces an entirely data-driven relighting method, where intrinsics and lighting are each represented as latent variables. In the experiments on sensitivity to light changes, which model is sensitive to lighting changes?
Your answer should be a string, the name of a model.
[]
[ "Latent Intrinsics Emerge from Training to Relight" ]
[ "neurips2024" ]
15b0adb9-3ba0-51d0-b8f3-a533ef7a2291
In ICLR 2024 Poster papers, a paper proposes a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. How many baselines are compared in Figure 2?
Your answer should be a Python integer.
[]
[ "Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning" ]
[ "iclr2024" ]
15c7a8ad-f85f-57da-8cf1-f4d219fe2a79
In ICLR 2024 Poster papers, a paper mainly studies how to more effectively utilize data augmentation techniques to improve sample efficiency and generalization ability in image-based deep reinforcement learning (DRL). Tell me the definition of Q-invariance in this paper.
Your answer should be the formula in LaTeX format.
[]
[ "Revisiting Data Augmentation in Deep Reinforcement Learning" ]
[ "iclr2024" ]
16aef45c-711c-538c-97a0-65a8e0a2ce8e
What percentage of the MEPS dataset is in the test set?
Your answer should be a float indicating the percentage, rounded to 1 decimal place.
[ "Unprocessing Seven Years of Algorithmic Fairness" ]
[]
[]
1741c5e3-8551-5ec1-b7d1-2fefb81ef527
Among the papers at ACL 2023 researching grammatical error correction, what $\text{F}_{0.5}$ score does GEC-DePenD (SUNDAE) achieve on the CoNLL-2014 test set?
Your answer should be a Python float value representing the $\text{F}_{0.5}$ score, rounded to 1 decimal place.
[]
[ "GEC-DePenD: Non-Autoregressive Grammatical Error Correction with Decoupled Permutation and Decoding" ]
[ "acl2023" ]
17fcb7a7-45ca-530a-8f4f-12a93fbb9a16
What are the three knowledge selectors shown in Figure 1, and where are they positioned in the workflow? How does the Factuality Selector combine two types of scores to filter knowledge documents?
Your answer should be a sentence answer the two questions.
[ "Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models" ]
[]
[]
1a3ab1fb-9deb-5f78-999e-808ae10f4c94
What is the key energy decomposition formula used in LED-GFN?
Your answer should be a formula.
[]
[ "Learning Energy Decompositions for Partial Inference in GFlowNets" ]
[ "iclr2024" ]
1abeaace-aa0f-54ec-9e35-6adc252e6267
In the experiments of the paper that introduces the MTOB dataset, how many models managed to achieve a chrF score of 30 on the kgv-to-eng task under the W+S+G^S setting?
Your answer should be an integer, the number of models.
[]
[ "A Benchmark for Learning to Translate a New Language from One Grammar Book" ]
[ "iclr2024" ]
1b21571b-e7e6-521d-a6b2-13b65325a958
What's the base model of MERU? Which institute is this base model from?
Your answer should be a single python list like this: ["model_name", "institute_name"]. Note that for both of these names, the abbreviation is required. For example, if the model name is "OpenAI GPT-3" and the institute name is "OpenAI", then your answer should be ["GPT-3", "OpenAI"].
[ "Hyperbolic Image-Text Representations" ]
[ "Learning Transferable Visual Models From Natural Language Supervision" ]
[]
1b21a337-581d-5f5d-88d7-73d39d9c4a92
There is a paper that introduces TRAC, a benchmark suite comprising four fundamental reasoning tasks (Projection, Executability, Plan Verification, Goal Recognition) aimed at the textual understanding of preconditions and effects in dynamic environments. In its experiments, the performance of RoBERTa-base on the four sub-tasks shows a stable increase with the growing number of training samples. The question is: which sub-task of TRAC does the RoBERTa-base model achieve the highest performance with 10,000 training samples?
Your answer should be the exact name of the sub-task, must be one of ['Projection', 'Executability', 'Plan Verification', 'Goal Recognition']
[]
[ "Exploring the Capacity of Pretrained Language Models for Reasoning about Actions and Change" ]
[ "acl2023" ]
1b488e45-fa85-52b3-81d7-282e1a443383
In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). How many different affiliations do the authors have?
Your answer should be a Python integer.
[ "ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update" ]
[]
[]
1b94e7ef-ccfd-5ba0-a268-d053dbc1bd34
Is there any oral work related to applying LoRA to long-context fine-tuning?
Your answer should be a text string representing the title of the work.
[]
[ "LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models" ]
[ "iclr2024" ]
1c97caca-ee1f-5ec2-8215-88b97c600f76
The lack of task-specific knowledge, or reliance on ground truth as few-shot samples, is one of the causes of the poor performance of traditional learning-based approaches. Therefore, a paper proposes a novel approach called Progressive Retrieval Augmented Generation (P-RAG). Does this model's success rate saturate more quickly, i.e., with fewer rounds of iteration, on the ALFRED Valid Unseen dataset or the ALFRED Train 100 dataset?
Your answer should be chosen between "ALFRED Valid Unseen dataset" and "ALFRED Train 100 dataset"
[]
[ "Exploratory Retrieval-Augmented Planning For Continual Embodied Instruction Following" ]
[ "neurips2024" ]
1cfb75b2-ff4f-5e64-9b17-49e2a4efcba1
In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Score Regularized Policy Optimization through Diffusion Behavior" ]
[ "iclr2024" ]
1da02b38-bb41-56ae-b3cb-2101f2ea9095
In NeurIPS 2024 Poster papers, a paper proposes a method called "BECAUSE". What is the affiliation of the first author?
Your answer should be a Python string.
[ "BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning" ]
[]
[]
1dd42f4f-0844-5b5b-95f7-97f7c795f0e2
A recent paper introduces a Transformer variant that enhances computational efficiency and performance by dynamically pooling variable-length token segments during intermediate layers. The paper conducts experiments on the memory consumption of a training step for different shortening factors using the English text8 dataset. With a shortening factor of 3, how much GPU memory in GB is required?
Your answer should be a python float, rounded to 1 decimal place.
[]
[ "Efficient Transformers with Dynamic Token Pooling" ]
[ "acl2023" ]
1e13d99f-d4ad-5f07-8a11-0c3a41db5e36
A recent paper introduces ProGraph, a benchmark designed to evaluate large language models (LLMs) on graph analysis tasks using external APIs, aligning their behavior with that of human experts. Among the four question types in ProGraph (True/False, Drawing, Calculation, and Hybrid), which category exhibits the highest average difficulty according to the experimental results under the RAG7 settings?
Your answer must be one of the following: ['True/False', 'Drawing', 'Calculation', 'Hybrid']
[]
[ "Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models" ]
[ "neurips2024" ]
1ecd973a-fa86-53b0-96e5-4781f4fdedd9
A paper studies a realistic Continual Learning (CL) setting and applies it to large-scale semi-supervised continual learning scenarios with a sparse label rate. The authors show models' performance on 1% labeled ImageNet10k with varying computational steps and conclude that DietCL alleviates overfitting. How is this advantage reflected in the figure?
Your answer should be a phrase explaining the phenomenon shown in the figure.
[]
[ "Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation" ]
[ "iclr2024" ]
20795612-be12-57ec-ab55-b15fca7b1358
In the paper that proves a phase transition in attention mechanisms from positional to semantic in language models and shows it emerges only beyond a critical data threshold, tell me the number of the authors.
Your answer should be an integer indicating the number of the authors.
[]
[ "A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention" ]
[ "neurips2024" ]
20c140bf-259b-5d6c-b6e8-6681491c2bc3
What are the two main integration approaches proposed in Knowledge Card? How does one differ from the other in terms of knowledge card activation?
Your answer should be a sentence answer the two questions.
[]
[ "Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models" ]
[ "iclr2024" ]
211cf320-483f-5ba7-bb4b-4b6935a3f6f2
What is the common core conclusion that both "Which Layer is Learning Faster? A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks" and "On the Spectral Bias of Neural Networks" draw?
Your answer should be a string.
[ "Which Layer is Learning Faster? A Systematic Exploration of Layer-wise Convergence Rate for Deep Neural Networks", "On the Spectral Bias of Neural Networks" ]
[]
[]
211fc5cb-a1a6-5e48-b48b-67790d8c4eab
In the paper at NeurIPS 2024 that introduces LLM landscape for safety, what is the minimum number of adversarial examples required to compromise the safety alignment of GPT-3.5 Turbo through fine-tuning?
Your answer should be a Python integer representing the minimum number of adversarial examples required.
[]
[ "Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models" ]
[ "neurips2024" ]
2122c495-bf24-5030-ab80-49644d33bc9d
A recent paper introduces PertEval, a toolkit designed to assess the real knowledge capacity of large language models (LLMs) through knowledge-invariant perturbations. Could you please retrieve the article and provide the corresponding GitHub repository link for this work?
Your answer should be a link only without any additional prefixes or suffixes.
[]
[ "PertEval: Unveiling Real Knowledge Capacity of LLMs with Knowledge-Invariant Perturbations" ]
[ "neurips2024" ]
219dbc8a-cd90-58e1-b95d-2c419634648c
In the paper that proposes DFA-GNN, what is the use of the first and the second term in formula (7)?
Your answer should be a Python string indicating the use of the two terms.
[]
[ "DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment" ]
[ "neurips2024" ]
22671b31-7e6c-5ed7-b649-52e762bd3e30
A paper introduces Self-Calibrating Conformal Prediction to recognize the complementary roles of point and interval predictions. In their experiment comparing SC-CP with baselines, which baseline under-adapts due to insufficient bins?
Your answer should be a string, the name of a baseline.
[]
[ "Self-Calibrating Conformal Prediction" ]
[ "neurips2024" ]
232fd1d1-def0-5e09-aed3-612a5dd25337
A recent paper proposes a novel reinforcement learning-based approach, DPPO-PR2, for UAV local path planning, demonstrating superior convergence speed and planning performance across six simulated environments. The proposed algorithm is an optimized variant of a classical reinforcement learning algorithm. Which algorithm does DPPO-PR2 build upon?
Your answer should be a string.
[ "UAV Local Path Planning Based on Improved Proximal Policy Optimization Algorithm" ]
[]
[]
2376766c-1a1e-55c6-aff5-27ee9edda7dc
Among the Findings papers at ACL 2023 researching text summarization, what is the average improvement in ROUGE scores achieved by the proposed framework in the paper "Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation"?
Your answer should be a Python integer representing the average improvement in ROUGE scores in percentage points.
[ "Do You Hear The People Sing? Key Point Analysis via Iterative Clustering and Abstractive Summarisation" ]
[]
[]
24f6ae04-4f02-5799-8109-e25b5de33d07
Many complex high-dimensional physical systems have recently been modeled by graph neural network (GNN) models. In the simulation of the stress on the falling ball and plate after a collision, does MGN or GT have better performance?
Your answer should be chosen between "MGN" and "GT".
[]
[ "Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer" ]
[ "iclr2024" ]
25802786-3944-5e7e-8f5e-3b57c592436a
What is the overall accuracy achieved by GPT-4V on the MathVista benchmark, and how does it compare to the second-best performing model?
Your answer should be a Python tuple of two float values: (gpt4v_accuracy, difference_from_second_best), both between 0 and 100, rounded to 1 decimal place and represented as percentages.
[]
[ "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts" ]
[ "iclr2024" ]
266ef0cd-be78-556d-b8e0-9921909775dd
In the SDXL paper, which problem does the c_crop parameter proposed in Figure 5 aim to solve?
Your answer should be a paragraph, indicating the problem.
[]
[ "SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis" ]
[ "iclr2024" ]
296811ae-cfe9-52ba-908a-f9debaa29cfb
A paper proposes a learning paradigm that directly establishes causation between events over the course of time. According to its diabetes simulator, would SI1 and GS1 probably have a common effect or an opposite effect?
Your answer should be chosen between "common" and "opposite".
[]
[ "A Dynamical View of the Question of Why" ]
[ "iclr2024" ]
29782c60-c850-5407-850a-0e2527b59b95
There is a paper that introduces a lightweight, model-agnostic alignment framework called Aligner, which corrects residuals between preferred and dispreferred responses using a small plug-and-play model. In this paper, the authors investigate the effects of different identity mapping proportions on the model's helpfulness and harmlessness. Which model consistently achieved the highest scores in the experimental results?
Your answer should be a name of the model.
[]
[ "Aligner: Efficient Alignment by Learning to Correct" ]
[ "neurips2024" ]
29d6f506-65f4-5d11-94b7-dd2673df32cf
In ICLR 2024 Oral papers, a paper presents PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. Tell me the overall objective for training the high-level policy, which is to maximize the expected return.
Your answer should be a formula in LaTeX format.
[]
[ "Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning" ]
[ "iclr2024" ]
29db626a-b8ee-5ba3-b431-a01257170a2f
In YOCO, what are the detailed formulas for the self-decoder?
Your answer should be a Python list of two strings, the formulas in LaTeX format.
[]
[ "You Only Cache Once: Decoder-Decoder Architectures for Language Models" ]
[ "neurips2024" ]
2a126f74-2895-580d-bde5-525a57fe306e
A recent paper introduces a large-scale synthetic dataset of muscle activations derived from biomechanical simulations using OpenSim, encompassing 227 subjects and 402 muscle strands. By enriching motion capture data with simulated muscle activations, the authors bridge the gap between observable motion and internal biomechanics. In this dataset, which action within Dynamic Actions has the highest prevalence?
Your answer should be a name of an action.
[]
[ "Muscles in Time: Learning to Understand Human Motion In-Depth by Simulating Muscle Activations" ]
[ "neurips2024" ]
2b1e878a-91d0-5910-815f-db0608a4c7e7
Which paper designs a world model for continuous control with "SimNorm"?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "TD-MPC2: Scalable, Robust World Models for Continuous Control" ]
[ "iclr2024" ]
2b2d5721-b503-5c7d-b3e0-65fadcee243c
A paper reports the phenomenon that a small number of attention heads transport a compact representation of the demonstrated task, which they call a function vector (FV). From the results across layers for the zero-shot case, after adding the function vectors, do the models achieve significantly higher accuracy in the last layers compared with models without FV?
Your answer should be "Yes" or "No".
[]
[ "Function Vectors in Large Language Models" ]
[ "iclr2024" ]
2bd19f2e-3eef-535b-8ce2-fcbdffe80c34
Can you recommend me a paper published in ICLR 2024 that proposes a novel model, CONSISGAD, which is tailored for Graph Anomaly Detection in scenarios characterized by limited supervision?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Consistency Training with Learnable Data Augmentation for Graph Anomaly Detection with Limited Supervision" ]
[ "iclr2024" ]
2c3325e6-72dd-50d7-89df-92018ab5305c
In ICLR 2024 Poster papers, a paper attempts to solve the credit assignment problem in Preference-based Reinforcement Learning (PbRL). How many different methods are compared in Figure 2?
Your answer should be a Python integer.
[]
[ "Hindsight PRIORs for Reward Learning from Human Preferences" ]
[ "iclr2024" ]
2c80a9fe-9507-5538-bdd0-77a38580b6da
Does the paper include any diagrams illustrating the relationship between projected-gradient norms and exploitability? If so, what key insight does the diagram convey about bounding exploitability using Lemma 2?
Your answer should be a sentence answering the two questions.
[ "Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization" ]
[]
[]
2c961e82-6918-51c9-9aa6-0e3527e4d000
In ICLR 2024 Poster papers, a paper attempts to address the challenges faced when learning from pixel-level inputs in multi-object manipulation tasks. How many different tasks are considered in Figure 3?
Your answer should be a Python integer.
[]
[ "Entity-Centric Reinforcement Learning for Object Manipulation from Pixels" ]
[ "iclr2024" ]
2cadec06-d31b-56ea-ab4b-a83eae802af4
How does LED-GFN modify the Detailed Balance loss function?
Your answer should be a formula.
[]
[ "Learning Energy Decompositions for Partial Inference in GFlowNets" ]
[ "iclr2024" ]
2d8e65eb-0b65-50d6-83c8-b69666e39699
In ICLR 2024 Oral papers, a paper attempts to solve the problem of how to accelerate the learning process and avoid getting trapped in locally optimal solutions in Cooperative Multi-Agent Reinforcement Learning (MARL). This paper develops a deterministic conditional autoencoder; tell me the corresponding loss function.
Your answer should be a formula in LaTeX format.
[]
[ "Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning" ]
[ "iclr2024" ]
2d9d3a83-0d60-527a-924d-b8245c882db5
In ICLR 2024 Poster papers, a paper attempts to solve the credit assignment problem in Preference-based Reinforcement Learning (PbRL). Tell me the number of pages of the appendix of this paper.
Your answer should be a Python integer.
[]
[ "Hindsight PRIORs for Reward Learning from Human Preferences" ]
[ "iclr2024" ]
2e1c39a1-2886-58e0-aa1e-6087995c9e50
In MetaLA, what's the main improvement for the hidden state $S_t^h$?
Your answer should be a string, the formula in LaTeX format, indicating the main improvement.
[]
[ "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" ]
[ "neurips2024" ]
2e337fa6-897f-579b-9b1d-3d87279e705a
There is a paper that introduces a novel framework for evaluating language model (LM) agency through structured negotiation games, addressing the limitations of static benchmarks. The work provides an open-source library (LAMEN) and negotiation transcripts to facilitate reproducible research on LM agent capabilities. The authors discuss four types of negotiation issues, among which is the issue where agents value each issue differently, creating opportunities for trade-offs. Which type of negotiation issue is this?
Your answer should be one of the four types of negotiation issues: ['Distributive', 'Compatible', 'Mixture', 'Integrative Distributive'].
[]
[ "Evaluating Language Model Agency Through Negotiations" ]
[ "iclr2024" ]
2e4c876d-4a7f-5fb4-ad13-9334a6eaf533
Among the oral presentations at ICLR 2024 researching image generation, what Frechet Inception Distance (FID) score does Würstchen Stage C achieve on the COCO30K dataset at 256x256 resolution?
Your answer should be a Python float value representing the FID score, rounded to 1 decimal place.
[]
[ "Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models" ]
[ "iclr2024" ]
2f6302ea-9662-57be-9b5c-e1ecbeee1159
In ICLR 2024 Poster papers, which paper proposes sub-trajectory mining to extract potentially valuable sub-trajectories from offline data, and diversify the behaviors within those sub-trajectories by varying coverage of the state-action space?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "On Trajectory Augmentations for Off-Policy Evaluation" ]
[ "iclr2024" ]
2f9efcac-82a4-5227-8400-29809958fa20
What are the three steps of the iterative hypothesis refinement process shown in Figure 1? What is the role of the first step, Hypotheses Generation?
Your answer should be a sentence answering the two questions.
[ "Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement" ]
[]
[]
3076e893-19aa-5491-8e94-4b52ac5c3096
At what memory reduction percentage can M-SMoE retain performance on which datasets?
Your answer should be a Python list of two elements, the first is an integer that indicates the percentage, and the second is a list of strings that indicates the dataset names.
[]
[ "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ]
[ "iclr2024" ]
30b13d11-904d-59fb-ac41-ab155831cada
A paper proposes the Factorized Fourier Neural Operator (F-FNO), which bridges the performance gap between pure machine learning approaches and the best numerical or hybrid solvers. In the example where both F-FNO and DNS use the same spatial resolution, which model is visually closer to the ground truth?
Your answer should be chosen between "F-FNO" and "DNS".
[ "Factorized Fourier Neural Operators" ]
[]
[]
30d0eed5-ce79-5caa-9aea-1be15551193f
There is a paper that proposes a novel uncertainty-based gradient matching approach for model merging, linking the inaccuracy of parameter averaging to gradient mismatches between individual models and the target model. It employs a new second-order approximation method using Hessian estimates to reduce mismatch. When exploring the effectiveness of removing data, the authors tested the model's toxicity on Detoxify, considering generations with a score exceeding a certain threshold as toxic. What is this specific threshold?
Your answer should be a python float number, rounded to 1 decimal place.
[]
[ "Model Merging by Uncertainty-Based Gradient Matching" ]
[ "iclr2024" ]
3104ef8c-e942-5372-a842-4480d7f5d68e
A paper studies the generalized linear contextual bandit problem within the constraints of limited adaptivity and proposes two algorithms, B-GLinCB and RS-GLinCB. In the comparison of RS-GLinCB against ECOLog (Faury et al., 2022) and GLOC (Jun et al., 2017), which algorithm showed the smallest regret after 17500 rounds?
Your answer should be a string, the name of an algorithm.
[]
[ "Generalized Linear Bandits with Limited Adaptivity" ]
[ "neurips2024" ]
310c3b00-9568-544b-a5af-40efc5e3dee7
Which Tiny Paper published in ICLR 2024 applies an image-to-image refinement of each image in the InstructPix2Pix dataset with the help of the text-to-image diffusion model SDXL?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Improving Image Editing Models with Generative Data Refinement" ]
[ "iclr2024" ]
312971a0-c394-5aec-a1d6-14507a83d70c
In all experiments, on average, how many minutes did each model take to train and evaluate?
Your answer should be a float, rounded to 1 decimal place, indicating the time taken for training and evaluation in minutes.
[ "Unprocessing Seven Years of Algorithmic Fairness" ]
[]
[]
31ad8166-f23d-56fc-baa4-fff230889e5f
In the paper "HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling," the authors evaluate HLLM on a dataset called PixelRec. Specifically, how many samples within the 200K subset of this dataset have a Session Length in the range of [10, 20]? Please provide the exact number.
Your answer should be a Python int.
[ "HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling" ]
[ "An Image Dataset for Benchmarking Recommender Systems with Raw Pixels" ]
[]
31e4f1fa-dd92-58ea-82cf-e7939ef8038d
In the paper that introduces MuMA-ToM, how many multi-modal social interactions between two agents does the MuMA-ToM benchmark consist of, and how many multiple-choice questions are based on these social interactions?
Your answer should be a Python list of two integers, the first indicating the number of multi-modal social interactions and the second the number of multiple-choice questions.
[ "In-Context Learning by Linear Attention: Exact Asymptotics and Experiments" ]
[]
[]
3245ec6e-0cce-5157-9258-8447a4b41348
A paper introduces a novel Contrastive Signal Generative Framework for Accurate Graph Learning to avoid the impact of inappropriate contrastive signals. In their model, the hyperparameter $\gamma$ adjusts the weight of the contrastive loss in the overall loss function. For which value of $\gamma$ does the model achieve the best average accuracy on the three datasets?
Your answer should be a float rounded to 1 decimal place.
[]
[ "Unified Graph Augmentations for Generalized Contrastive Learning on Graphs" ]
[ "neurips2024" ]
332e9eed-a895-5f23-ae24-73b9ee936b21
How is $Q_{EC}(f_{\phi}(s_t), \bm{a}_t)$ defined in terms of the immediate reward $r_t$ and the highest return $H$ from episodic memory? How does the loss function $L^{EC}_{\theta}$ combine the TD error and the episodic memory error?
Your answer should be two formulas in LaTeX format.
[ "Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning" ]
[]
[]
335aac7c-4401-5a6d-ab89-271141803f05
In the paper proposing RAGraph, what measures were adopted to emphasize nodes in the long-tail part?
Your answer should be a Python string.
[]
[ "RAGraph: A General Retrieval-Augmented Graph Learning Framework" ]
[ "neurips2024" ]
33a15d70-5c4d-5a69-93f6-693d95e9028f
Recently, an enhanced version of the MMLU dataset, called MMLU-Pro, has been proposed as a benchmark for evaluating large language models (LLMs) by addressing the limitations of the original MMLU dataset. Could you please tell me which discipline in MMLU-Pro has the highest proportion of questions?
Your answer should be a name of a discipline.
[]
[ "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" ]
[ "neurips2024" ]
33ad083d-6865-52fa-aa99-32bd951248a3
What is the main suggestion of the paper that inspired the research of "A Kernel-Based View of Language Model Fine-Tuning"?
Your answer should be a short text about the main suggestion of the paper.
[ "A Kernel-Based View of Language Model Fine-Tuning" ]
[ "More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize" ]
[]
3467612f-a2e6-5d1c-b331-6e3a3621f1e6
A paper studies a realistic Continual Learning (CL) setting and applies it to large-scale semi-supervised continual learning scenarios with a sparse label rate. Between the pseudo-supervised class-wise contrastive learning and the instance-wise contrastive learning, which representation separates the samples into small, separated clusters but confuses categories?
Your answer should be a string, the name of a representation.
[]
[ "Robust Representation Learning with Reliable Pseudo-labels Generation via Self-Adaptive Optimal Transport for Short Text Clustering" ]
[ "acl2023" ]
34cb700b-b21a-5e22-9734-9c4141f46a2b
A paper introduces DIAMOND (DIffusion As a Model Of eNvironment Dreams) for image generation with visual details. In the comparison with IRIS, IRIS generates trajectories that contain visual inconsistencies between frames; what example is given to illustrate the probable consequences of this behavior for reinforcement learning?
Your answer should be a phrase summarizing the example.
[]
[ "Diffusion for World Modeling: Visual Details Matter in Atari" ]
[ "neurips2024" ]
34d6ee8a-a394-55b0-be3d-26ac927d96cb
In Figure 1, what does the autoregressive model predict for the query Canada in the country-capital example? How does the autoregressive prediction mechanism for the Boolean function task mirror the country-capital example, despite differing data types?
Your answer should be a sentence answering the two questions.
[ "Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions" ]
[]
[]
3514253a-3c30-5f1c-8990-d7a16bb1d7d6
In the paper that proposes L-GATr, where can I find the authors' implementation of L-GATr?
Your answer should be a Python string of the website URL, starting with "https://", as given in the paper.
[ "Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics" ]
[]
[]
37721d94-5214-5aa8-894a-a8797fa046e4
In ICLR 2024 Spotlight papers, which paper is motivated by the idea of "sensing scaffold"?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Privileged Sensing Scaffolds Reinforcement Learning" ]
[ "iclr2024" ]
38141558-19e8-57f5-9482-0fec8c121683
In the paper that proposes Trap-MID, what is the Recall percentage of the reconstructed images from PLG-MI, and what does this statistic mean?
Your answer should be a Python list of two elements. The first is a float number of the percentage (between 0 and 100, rounded to 2 decimal places) and the second is a string explaining what the statistic indicates.
[]
[ "Trap-MID: Trapdoor-based Defense against Model Inversion Attacks" ]
[ "neurips2024" ]
383135b1-4179-58d6-8040-0a34f2d7e0cf
In the paper that presents PANORAMIA, which variant of PANORAMIA on the ResNet-101 model has the highest precision on the CIFAR10 dataset in Figure 3 at different levels of recall?
Your answer should be a Python string indicating the variant name of PANORAMIA.
[]
[ "PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining" ]
[ "neurips2024" ]
3a8cfcd8-82e4-5a08-ab13-659624281122
There is a paper that proposes a multimodal Emotion Recognition and Classification (ERC) framework that leverages bidirectional multi-head cross-attention to model complex correlations across textual, audio, and visual modalities. It introduces a novel visual feature extractor (VisExtNet) for capturing emotion-rich facial expressions and a Sample-Weighted Focal Contrastive (SWFC) loss to address class imbalance and semantic similarity between emotions. The paper employs a dataset called IEMOCAP for emotion classification. Could you please tell me which emotion label category has the second largest sample proportion in this dataset?
Your answer should be an exact match of the emotion label category name.
[]
[ "MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations" ]
[ "acl2023" ]
3ad795ab-4381-5810-a9a1-c3c80a40f1c3
A recent paper proposes an embodied social intelligence benchmark focusing on accessibility and inclusivity. The benchmark evaluates agents on their ability to infer human intentions and constraints through egocentric observations and to cooperatively plan actions. Please retrieve the paper and provide me with the link to the corresponding code repository for this work.
Your answer should be a link only without any additional prefixes or suffixes.
[ "Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge" ]
[]
[]
3bab9b77-ac37-5784-949d-6e90f0e588d7
Can you recommend me a paper published in ICLR 2024 that introduced Hierarchical cOntext MERging (HOMER), a novel method that efficiently addresses the context limit issue inherent in large language models?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs" ]
[ "iclr2024" ]
3bbffbd6-a376-5f1d-835b-e4dfea9d9eb0
A recent paper introduces a proposal-based framework for natural language video localization that enables efficient and effective moment-to-moment interaction through learnable templates and dynamic anchors. By incorporating a multi-scale visual-linguistic encoder and an anchor-guided moment decoder with anchor highlight attention, the proposed method transcends the locality assumption and achieves state-of-the-art performance across several benchmarks. All the research institutions involved in this study are from the same country. Please indicate the name of this country.
Your answer should be a name of a country.
[ "MS-DETR: Natural Language Video Localization with Sampling Moment-Moment Interaction" ]
[]
[]
3bc6a3a6-cc31-58b6-bf5d-fb2719b1ad6f
In the paper "TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding," a strategy known as StreamingLLM is employed, which utilizes a KV Cache Eviction Strategy by retaining critical attention sink tokens alongside recent KV pairs to enhance long-context capabilities. The question arises: how many attention sink tokens do the authors of the original StreamingLLM paper consider to be sufficient?
Your answer should be a Python int.
[ "TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding" ]
[ "Efficient Streaming Language Models with Attention Sinks" ]
[]
3c67b92e-2a4f-5a99-a380-3d245268a98f
In ICLR 2024 Spotlight papers, a paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance. What is the formula of the $\textbf{A}^+$ advantage function?
Your answer should be the formula in LaTeX format.
[]
[ "Blending Imitation and Reinforcement Learning for Robust Policy Improvement" ]
[ "iclr2024" ]
3c6a8a71-eb8c-501e-98b6-00564244c97d
In the paraphrase category of the experiments in the paper "Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models", which task experienced a decline in accuracy after adding any type of knowledge? What is the significance of this task?
Your answer should be a list containing two elements: the first is the name of the task in question, and the second explains the significance of the task.
[ "PAWS: Paraphrase Adversaries from Word Scrambling" ]
[]
[]
3c6cb9af-cfc9-5515-97c7-d40805b2ec55
In Figure 1, what are the two accuracy metrics plotted against training steps? How does the relationship between these metrics change during training, and what does this imply about the learning phases?
Your answer should be a sentence answering the two questions.
[ "The mechanistic basis of data dependence and abrupt learning in an in-context classification task" ]
[]
[]