uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
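The records below follow this six-field schema, one field per line per record. As a minimal illustration only, here is a hedged Python sketch of how a single record could be represented and populated from the first entry below; the class name, type hints, comments, and the interpretation of anchor_pdf versus reference_pdf are assumptions for illustration, not part of the dataset release.

from dataclasses import dataclass, field
from typing import List

# Hypothetical container mirroring the six-field schema above; names and
# field comments are illustrative assumptions, not part of the dataset.
@dataclass
class PaperQARecord:
    uuid: str                 # unique identifier of the question
    question: str             # natural-language question about one or more papers
    answer_format: str        # constraints on the expected answer
    anchor_pdf: List[str] = field(default_factory=list)     # paper titles named in the question (assumed meaning)
    reference_pdf: List[str] = field(default_factory=list)  # paper titles containing the answer (assumed meaning)
    conference: List[str] = field(default_factory=list)     # venue tags such as ["neurips2024"]

# Populated from the first record below (uuid 3d37bcb5-...).
example = PaperQARecord(
    uuid="3d37bcb5-ade4-5109-ac37-87fab0d4ace9",
    question=("In the paper that proposes P-RLHF, where can I find the "
              "fine-tuned GPT-J model used as the SFT model?"),
    answer_format=('Your answer should be a Python string containing the website URL, '
                   'starting with "https://", as given in the paper.'),
    anchor_pdf=[],
    reference_pdf=["Personalized Language Modeling from Personalized Human Feedback"],
    conference=["neurips2024"],
)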
3d37bcb5-ade4-5109-ac37-87fab0d4ace9
In the paper that proposes P-RLHF, where can I find the fine-tuned GPT-J model used as the SFT model?
Your answer should be a Python string containing the website URL, starting with "https://", as given in the paper.
[]
[ "Personalized Language Modeling from Personalized Human Feedback" ]
[ "neurips2024" ]
3d64d18d-26f9-5baf-ac94-ea28b01bc658
A recent paper proposes a novel generative method named TP-EGG for constructing typed entailment graphs (EGs). Unlike prior extractive approaches that rely on large corpora, TP-EGG generates new predicates and entailment relations using pre-trained language models, effectively addressing both predicate and edge sparsity. Please indicate the name of the university affiliated with this research.
Your answer should be the name of the university.
[ "From the One, Judge of the Whole: Typed Entailment Graph Construction with Predicate Generation" ]
[]
[]
3d77ba99-9b4b-581c-9b45-443d64b37ed9
A recent paper introduces Human-Aware Vision-and-Language Navigation (HA-VLN), extending traditional VLN by incorporating dynamic human activities and relaxing key assumptions (e.g., egocentric action space, sub-optimal expert supervision). Could you please provide the name of the corresponding author of this paper?
Your answer should be a name of a person.
[]
[ "Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions" ]
[ "neurips2024" ]
3e2edeb1-10ae-5dc4-94e6-1be457a9985e
In ICLR 2024 Spotlight papers, a paper proposes a method named "Heuristic Blending". In this paper, how many theorems are proposed?
Your answer should be a Python integer.
[ "Improving Offline RL by Blending Heuristics" ]
[]
[]
3e5a7f65-8cc1-5f62-85cd-cf906a2997d1
Facing the failure of existing methods to address the compound degradations present in source images, a paper proposes a novel interactive multi-modal image fusion framework based on a text-modulated diffusion model, called Text-DiFuse. In the comparison of the basic version of their Text-DiFuse with current state-of-the-art fusion methods, what is their model's advantage?
Your answer should be a sentence describing the model's advantage in detail.
[]
[ "Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model" ]
[ "neurips2024" ]
3efd0502-7123-5b57-8b54-8c42097d8be7
Which paper published in ICLR 2024 proposes a novel method, namely Sorting Krylov Recycling (SKR), to accelerate data generation for neural operator training?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Accelerating Data Generation for Neural Operators via Krylov Subspace Recycling" ]
[ "iclr2024" ]
3fe086fb-83fd-5861-ab9d-a1075873b79e
Tell me the core contribution of "Hybrid RSSM".
Your answer should be plain text.
[ "Learning Latent Dynamic Robust Representations for World Models" ]
[]
[]
40aa6ad8-bae3-583f-afff-ed15c0224fc8
In the paper that characterizes the exact privacy-utility tradeoff for locally private sampling under f-divergences, proposing universally optimal mechanisms for both discrete and continuous spaces, where can I find the code of the paper if I want to reproduce the experiment?
Your answer should be a Python string containing the website URL, starting with "https://", as given in the paper.
[]
[ "Exactly Minimax-Optimal Locally Differentially Private Sampling" ]
[ "neurips2024" ]
4105ba8c-8517-5976-8bde-6cdcfa676223
Give a brief introduction to the innovative points of the paper.
Your answer should be a sentence.
[ "SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams" ]
[]
[]
4114f9ce-4c1b-5d4c-8955-a2272732d6f4
Which two main challenges does GeoBFN address when dealing with molecular geometry data? What is a specific manifestation of the first challenge (multimodality)?
Your answer should be two sentences, each answering one of the two questions.
[]
[ "Unified Generative Modeling of 3D Molecules with Bayesian Flow Networks" ]
[ "iclr2024" ]
419db7dd-495c-5130-a911-2ed5438af06d
In this paper, what is the theoretical basis of "Dynamic Path Feedback"?
Your answer should be plain text.
[ "Walk Wisely on Graph: Knowledge Graph Reasoning with Dual Agents via Efficient Guidance-Exploration" ]
[]
[]
425ca788-b18c-5413-817b-e267ea270882
How much training time does the new model save when training a 1B-parameter Stage C text-conditional diffusion model, compared to the amount SD 2.1 used for training?
Your answer should be a float, rounded to 1 decimal place, measured in GPU hours.
[ "Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models" ]
[]
[]
42944afa-fed6-563d-9457-7b64e738f305
There is a paper that introduces a large-scale, multilingual, multi-technique singing corpus designed to address key limitations in existing datasets for singing voice synthesis (SVS) and related tasks. It features 80.59 hours of high-quality recordings from 20 professional singers across 9 languages. Could you please tell me the most frequently occurring segment duration (s) in the proposed dataset?
Your answer should be a Python int
[]
[ "GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks" ]
[ "neurips2024" ]
4365278e-ba3d-5b04-8881-85fd4f456748
Is there a paper published in ICLR 2024 that proposes a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo" ]
[ "iclr2024" ]
43999904-d634-5255-95b3-f5cc16993486
To overcome the limitation that Causal Temporal Representation Learning (Ctrl) methods cannot be applied in real-world scenarios without prior knowledge of the domain variables, a paper proposes CtrlNS. In their model, there are two kinds of variables: domain variables and hidden variables. Which variable remains relatively unchanged in phase two during training?
Your answer should be chosen between "domain variables" and "hidden variables".
[]
[ "Causal Temporal Representation Learning with Nonstationary Sparse Transition" ]
[ "neurips2024" ]
44ec0a10-6bba-59fe-836f-1079442d6e3b
In ICLR 2024 Poster papers, a paper proposes a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. Tell me the number of authors of this paper.
Your answer should be a Python integer.
[ "Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning" ]
[]
[]
457bbc52-0425-51cd-8573-1a4feda62b88
A recent paper introduces the first large-scale benchmark specifically designed to evaluate large multimodal models (LMMs) on scientific figure interpretation. It consists of 2,000 curated multiple-choice questions across two tasks, figure-to-caption and caption-to-figure, sourced from arXiv figures using adversarial filtering and human verification. Could you please tell me which subcategory within the general question set has the highest sample proportion?
Your answer should be a name of a subcategory.
[]
[ "SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation" ]
[ "neurips2024" ]
462fdea7-d178-5611-8ccf-878f27584346
Among the oral presentations at NeurIPS 2024 studying causal discovery, how does the proposed method in the paper "Do Finetti: On Causal Effects for Exchangeable Data" perform in estimating causal effects compared to traditional i.i.d.-based methods?
Your answer should be a Python string comparing the performance in terms of estimation error, specifically mentioning the mean squared error (MSE) values.
[ "Do Finetti: On Causal Effects for Exchangeable Data" ]
[]
[]
46711729-bbed-5a8e-8490-5908d79fb267
Can you recommend me a paper published in ICLR 2024 that studies the required model size when considering average-case and worst-case error scenarios, showing how the model size needs to change based on accuracy, data size and data dimensionality?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Towards Establishing Guaranteed Error for Learned Database Operations" ]
[ "iclr2024" ]
46f3e5ad-c092-5a63-b011-2df6131506d5
In ICLR 2024 Poster papers, a paper tries to ensemble the reward models to mitigate the over-optimization problem. In Section 5 (Results), how many main findings are reported?
Your answer should be a Python integer.
[ "Reward Model Ensembles Help Mitigate Overoptimization" ]
[]
[]
4718a830-33ab-5ae7-a544-50b47dada242
How does ClimODE handle local and global weather effects? List two key points and give me a formula.
Your answer should be a list of 2 strings, the first is a sentence containing the two key points, and the second is a formula in LaTeX format.
[]
[ "ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs" ]
[ "iclr2024" ]
4ab46af5-2c59-5e09-a7fe-a519c4768271
How does the Kantorovich-Rubinstein duality provide a tractable objective for optimizing the Wasserstein dependency measure Iw?
Your answer should be a formula
[]
[ "METRA: Scalable Unsupervised RL with Metric-Aware Abstraction" ]
[ "iclr2024" ]
4b5b243d-975a-5ba1-956b-75f2cad06a76
In the paper that proposes RETR, what is the use of projimage in formula (2)?
Your answer should be a Python string describing the use of the projimage function.
[]
[ "RETR: Multi-View Radar Detection Transformer for Indoor Perception" ]
[ "neurips2024" ]
4c14462b-e17f-50fd-8983-1b739caca76f
In the paper that proposes a temperature-controlled differentiable mapping from free vectors to orthogonal matrices that asymptotically converges to permutation matrices, enabling both gradient-based and stochastic optimization over permutations, what hinders gradient-based optimization for \theta in formula (15) and how to solve it?
Your answer should be a Python string explaining the obstacle and how to solve it.
[]
[ "OT4P: Unlocking Effective Orthogonal Group Path for Permutation Relaxation" ]
[ "neurips2024" ]
4c1b1fd1-83b6-5b32-92d2-daa564cde987
What parameterization of f(s,z) is used to simplify the optimization, and how does it constrain the Lipschitz continuity?
Your answer should be a formula
[]
[ "METRA: Scalable Unsupervised RL with Metric-Aware Abstraction" ]
[ "iclr2024" ]
4cb116ee-e0db-510d-af46-69e4c9d4160d
In Figure 2, describe the global change of the angles $\Phi$ and $\Psi$ during the transition.
Your answer should be a Python string.
[ "Stochastic Optimal Control for Collective Variable Free Sampling of Molecular Transition Paths" ]
[]
[]
4cb82403-dc05-5b19-a04c-7591b4329f1a
A paper studies behavior policies for data-efficient General Value Function (GVF) learning. In their study, which loss function is used to measure the model's performance?
Your answer should be a string giving the name of the loss function.
[]
[ "Adaptive Exploration for Data-Efficient General Value Function Evaluations" ]
[ "neurips2024" ]
4ecb6954-9276-558c-a9b1-6dff787b8784
In Figure 1, what are the three types of datasets used to fine-tune GPT-3.5 Turbo in the experiments? How does the harmfulness score change across the 11 safety categories after fine-tuning?
Your answer should be a sentence answering the two questions.
[ "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!" ]
[]
[]
4f0fa7d7-8024-5627-aae9-c8fc79e1aad3
A recent paper presents a token reduction framework for Vision-Language (VL) models that integrates text-informed image token pruning and modality-aware token merging into cross-modal Transformer layers. By progressively removing text-irrelevant visual tokens and merging semantically similar tokens within each modality, the proposed method achieves up to 2x inference speedup and over 50% memory reduction on models such as ViLT and METER. Which university's researchers proposed this work?
Your answer should be a name of a university.
[]
[ "PuMer: Pruning and Merging Tokens for Efficient Vision Language Models" ]
[ "acl2023" ]
4f50e9ab-20d1-5218-a0e7-a877905b3ec8
In the paper that proposed Exclusively Penalized Q-learning, a new penalty is introduced to address unnecessary estimation bias. What is the formula for this penalty?
Your answer should be a string, the formula in LaTeX format.
[]
[ "Exclusively Penalized Q-learning for Offline Reinforcement Learning" ]
[ "neurips2024" ]
4fc8d452-9f0b-5d60-b379-e85e4b417ee2
By how much does PromptNER outperform the state-of-the-art model on average in the cross-domain few-shot setting?
Your answer should be a Python float between 0 and 100 rounded to 1 decimal place, stating the percentage improvement.
[]
[ "PromptNER: Prompt Locating and Typing for Named Entity Recognition" ]
[ "acl2023" ]
4fd37435-f841-5084-ad30-6543f4e7fb11
A paper proposes a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). When changing the threshold $\theta$, a hyper-parameter, to a lower value, would the GAVA-Score of GAVA-SDA models trained on MS-MARCO likely become lower or higher?
Your answer should be a string chosen between "lower" and "higher".
[]
[ "Learning Answer Generation using Supervision from Automatic Question Answering Evaluators" ]
[ "acl2023" ]
4ff37823-b7b7-549e-8314-ffbe82ffa99e
A paper reports that a small number of attention heads transport a compact representation of the demonstrated task. The authors study the AIE per attention head in GPT-J over many tasks and show it in a figure; which head index has the highest AIE in the middle layer?
Your answer should be an int between 0 and 15.
[]
[ "Function Vectors in Large Language Models" ]
[ "iclr2024" ]
5043c2c9-d82d-5612-89a1-ba3560626005
A paper introduces BAdam, which offers a memory-efficient approach to full-parameter finetuning of large language models. Besides its high efficiency, does this method have better convergence behavior compared with other methods?
Your answer should be "Yes" or "No".
[]
[ "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models" ]
[ "neurips2024" ]
50596b7e-10e2-5027-9d65-5408a69c5c48
What is the maximum number of nodes that SDF-Sim can scale to before mesh-based approaches run out of memory?
Your answer should be a Python float value representing the number of nodes in millions, rounded to 1 decimal place.
[]
[ "Learning rigid-body simulators over implicit shapes for large-scale scenes and vision" ]
[ "neurips2024" ]
508113a9-27ac-5e9a-8c9d-2d595c6af1d4
Among the papers at NeurIPS 2024 researching fair machine learning, how many real-world datasets did the authors of "Unprocessing Seven Years of Algorithmic Fairness" empirically evaluate?
Your answer should be a Python integer representing the number of real-world datasets empirically evaluated.
[ "Unprocessing Seven Years of Algorithmic Fairness" ]
[]
[]
50aa1f58-4c83-5b31-b4b8-455d192003f0
What two known instabilities are reproduced in the paper?
Your answer should be a sentence.
[ "Small-scale proxies for large-scale Transformer training instabilities" ]
[]
[]
50c45430-63ff-5b92-9cc0-46392205944f
What is the Bayesian framework equation that decomposes predictive uncertainty (PU) into epistemic (EU) and aleatoric (AU) components?
Your answer should be a formula
[]
[ "ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation" ]
[ "iclr2024" ]
50e4855e-e346-5f60-9959-31a8f93f9cb2
Among the papers at ACL 2023 researching text style transfer, what is the main innovation of the "Text Style Transfer Back-Translation" method proposed in the paper?
Your answer should be a Python string describing the key innovation of the Text Style Transfer Back-Translation method.
[ "Text Style Transfer Back-Translation" ]
[]
[]
51477120-f5f4-594e-b6dd-d931ac57d853
What is the main idea of InfoBatch?
Your answer should be a sentence briefly introducing the method.
[]
[ "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning" ]
[ "iclr2024" ]
515c44a2-c4ae-5a3a-a5c3-621a24a765d7
According to the paper that introduces the first adaptive decision-making framework for LLMs that mirrors real-world MDM processes via dynamic collaboration among AI agents based on task complexity, considering the impact of agent number, if we remove one agent from the peak accuracy setting, how much does the accuracy drop?
Your answer should be a float rounded to 1 decimal place.
[]
[ "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making" ]
[ "neurips2024" ]
517fca53-68de-5aae-af94-f7b76b79f50c
In ICLR 2024 Poster papers, which paper introduces Uni-RLHF, a comprehensive system implementation tailored for RLHF?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" ]
[ "iclr2024" ]
5181b3b8-7afc-5e59-b113-694fc2df7935
In ICLR 2024 Poster papers, which paper tackles the Offline Opponent Modeling problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Towards Offline Opponent Modeling with In-context Learning" ]
[ "iclr2024" ]
518991c7-c643-53a7-aa7b-2d6e4303c92c
In RobustFill's "Switch-Concept-Order" task, how much more accurate is ExeDec compared to sub-target-free ablation experiments (Ablation)?
Your answer should be a float, rounded to 1 decimal place, measured in percentage.
[]
[ "ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis" ]
[ "iclr2024" ]
524a738e-efac-5f13-95df-9cee64c8ff97
I wonder whether any dataset and benchmark papers were published as orals at ICLR 2024. Also tell me their respective dataset sizes (including all data splits).
Your answer should be a Python list of tuples (List[Tuple[str, int]]). For each tuple in the list, the first element is the paper title string and the second element is an integer representing the dataset size.
[]
[ "BooookScore: A systematic exploration of book-length summarization in the era of LLMs", "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts", "SWE-bench: Can Language Models Resolve Real-world Github Issues?", "How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?" ]
[ "iclr2024" ]
5260af05-70c0-556c-89a3-5f478e8ac526
A paper proposes SummAttacker, an efficient generator of diverse cases, to bring more variations to the hidden states. When $\bar{E}$, the average Euclidean distance between paired original and attacked states, decreases, do the hidden states in the latent space show smaller or larger diversity?
Your answer should be chosen between "smaller" and "larger".
[]
[ "Improving the Robustness of Summarization Systems with Dual Augmentation" ]
[ "acl2023" ]
532503fe-48be-57f3-9994-ff83eda8cc36
According to the ablation study in the paper that proposes Variance Reduced Meta-CL, what improvement in final accuracy (Acc) does VR-MCL_2 achieve compared to MCL on the Seq-CIFAR10 benchmark?
Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100 and rounded to 2 decimal places.
[]
[ "Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction" ]
[ "iclr2024" ]
536d809c-344a-5ef8-9afe-e1c0f46fb58b
In the learning framework for state embedding in semantic memory embedding, what is the loss function, and how can it be expressed using the highest return and a scale factor denoted $\lambda_{recon}$?
Your answer should be a list of two formulas.
[]
[ "Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning" ]
[ "iclr2024" ]
54230565-ee38-52aa-a4d3-ce3b49ab196b
In ICLR 2024 Poster papers, which paper proposes to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Score Regularized Policy Optimization through Diffusion Behavior" ]
[ "iclr2024" ]
5569d36d-49b2-5ec4-821b-a5ce29ae50d2
In ICLR 2024 Poster papers, a paper first constructs a dynamics model from the expert demonstration, enforcing local Lipschitz continuity while skipping the discontinuous regions. How many different aspects are illustrated in Figure 1?
Your answer should be a Python integer.
[]
[ "CCIL: Continuity-Based Data Augmentation for Corrective Imitation Learning" ]
[ "iclr2024" ]
55ca34ac-d006-5563-bac8-ee915cfcd8da
How does the adversarial accuracy of ATINTER improve compared to the best existing defense on the SST-2 dataset?
Your answer should be a Python string specifying the percentage improvement in adversarial accuracy compared to the best existing defense.
[]
[ "Don’t Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text" ]
[ "acl2023" ]
55cec0ea-d0f4-5c6a-9dc1-e1549494eac8
In ICLR 2024 Spotlight papers, a paper attempts to solve how to train a general agent in Reinforcement Learning (RL) that can thoroughly explore the environment and learn new and diverse skills. In Figure 6, how many tasks are considered?
Your answer should be a Python integer.
[]
[ "Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning" ]
[ "iclr2024" ]
5636ee62-c100-56a3-817d-9994cf6c666d
Can you recommend me a paper published in ICLR 2024 that proposes a data-driven, physically-informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries based on the extraction of topologically invariant features?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Let's do the time-warp-attend: Learning topological invariants of dynamical systems" ]
[ "iclr2024" ]
565161cc-5824-54df-8cc8-ca00e58fb564
In the paper that introduces a new family of non-Gaussian distributions for deep neural networks, derived via Hermite polynomials and providing significantly more accurate scaling laws than classical Gaussian approximations, what is the value of the para-Gaussian correction term if \phi(x) = \sqrt{2}(x)_+ and the input z is a standard N(0, 1) Gaussian?
Your answer should be an integer of the value.
[]
[ "Improving the Gaussian Approximation in Neural Networks: Para-Gaussians and Edgeworth Expansions" ]
[ "neurips2024" ]
5675cf2e-b2ae-5b9a-bd60-9a46469bcc88
In ICLR 2024 Spotlight papers, a paper explores the scalability problem of large-scale Inverse Reinforcement Learning (IRL) in practical applications. In this paper, how do the authors frame inverse reinforcement learning algorithms as a two-player zero-sum game?
Your answer should be the formula in LaTeX format.
[]
[ "Massively Scalable Inverse Reinforcement Learning in Google Maps" ]
[ "iclr2024" ]
56aae038-a30f-514b-b7aa-8964219fb39e
What is the size of the Large Reconstruction Model, and how quickly can it reconstruct a 3D object from a single image?
Your answer should be a sentence describing the model size and reconstruction time.
[]
[ "LRM: Large Reconstruction Model for Single Image to 3D" ]
[ "iclr2024" ]
573a0bdf-5322-5f92-8ddd-7d4192bf28ba
A paper proposes a learning paradigm that directly establishes causation between events over the course of time. According to its diabetes simulator, will varying the dose of SI1 impact GS1 (Glu. sto. 1)?
Your answer should be "Yes" or "No".
[]
[ "A Dynamical View of the Question of Why" ]
[ "iclr2024" ]
57a13138-216d-5220-86eb-1ec344c54299
Which benchmark is used in the main experiment of the paper? In this benchmark, how is the 'probe' defined?
Your answer should be a single python list like this: ["benchmark_name", "probe_definition"]. Note that for the benchmark name, the abbreviation is required. For the probe definition, you should give a short string to describe the definition.
[ "Relational Attention: Generalizing Transformers for Graph-Structured Tasks" ]
[ "The CLRS Algorithmic Reasoning Benchmark" ]
[]
5819ecba-eea7-5660-a568-22c40087fc6b
Among the poster presentations at NeurIPS 2024 on reinforcement learning, what is the sample complexity bound for weakly communicating average-reward MDPs established by the paper "Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs"?
Your answer should be a Python string containing the mathematical expression for the sample complexity bound in LaTeX-like format.
[ "Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs" ]
[]
[]
59673e0f-b951-5566-b908-9cc36ff4d9f0
In NeurIPS 2024 Poster papers, which paper builds a Python program based on interaction with the environment to represent the agent's understanding of the world?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment" ]
[ "neurips2024" ]
59cc500d-6a09-5c63-bbc8-85c57adbc598
Why is a cascaded pipeline used?
Your answer should be a sentence.
[ "Diffusion Model for Dense Matching" ]
[]
[]
5a15998a-846c-500d-8953-6e97fa7d745d
In the joint context transfer module of the paper, which attention mechanism is used? Please give me the pdf url to the paper that proposed this attention mechanism.
Your answer should be a single link like this: https://arxiv.org/abs/xxxx.xxxxx.
[ "LDMIC: Learning-based Distributed Multi-view Image Coding" ]
[ "Efficient Attention: Attention with Linear Complexities" ]
[]
5a6dc0a7-10b1-513c-9372-81e6f897b088
Given Figure 1's timeline illustration, suppose a new clip v_3' is inserted into the video stream V at timestamp t_3, while the text stream T remains unchanged. What type of multi-granularity noisy correspondence (MNC) does this introduce?
Your answer should be a sentence.
[ "Multi-granularity Correspondence Learning from Long-term Noisy Videos" ]
[]
[]
5b2eae07-e5d0-54e5-8a3a-ee7ebf3f62ce
Which paper published in ICLR 2024 proposes a synchronized multiview diffusion model that models the joint probability distribution of multiview images, enabling the generation of multiview-consistent images in a single reverse process?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "SyncDreamer: Generating Multiview-consistent Images from a Single-view Image" ]
[ "iclr2024" ]
5ccbc57c-2bf3-54ef-bbee-da61e0696153
A paper introduces a framework called GRaded Generative Retrieval to address the challenges of standard generative retrieval. To simulate the low-resource retrieval setting, the authors randomly sample different fixed, limited numbers of queries from the training set. How does the GR method's performance compare with BM25 under the zero-resource setting?
Your answer should be chosen between "better" and "worse".
[ "Generative Retrieval Meets Multi-Graded Relevance" ]
[]
[]
5d0f6bf1-1ddd-5d32-958e-07eee0636eb9
Among the papers at ACL 2023 that introduced new dialogue datasets, what is the scale of the LiveChat dataset presented in the paper "LiveChat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming"?
Your answer should be a Python dictionary with keys 'total_dialogues', 'personas', and 'sessions_per_persona', containing the numerical values for each statistic.
[ "LiveChat: A Large-Scale Personalized Dialogue Dataset Automatically Constructed from Live Streaming" ]
[]
[]
5d21381e-e16b-5771-866d-0dd99f014aa0
In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). How many pages are there in this paper's appendix?
Your answer should be an integer representing the number of pages in the appendix of the paper.
[]
[ "ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update" ]
[ "iclr2024" ]
5dad1a7d-2995-56fe-bb82-56392a86fc90
Which paper introduced the DevBench dataset? This dataset is a multimodal developmental benchmark consisting of seven tasks across lexical, syntactic, and semantic domains, and it incorporates behavioral data from both children and adults.
Your answer should be a string.
[]
[ "DevBench: A multimodal developmental benchmark for language learning" ]
[ "neurips2024" ]
5db51b6d-f10c-5769-8193-f3757dbb0580
A paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for speculative decoding. In the comparison of the wall-clock time speedup of Sequoia trees of various sizes, at which size does the tree on L40 GPUs maximize the speedup value?
Your answer should be an int.
[]
[ "Sequoia: Scalable and Robust Speculative Decoding" ]
[ "neurips2024" ]
5e1bd23b-46c9-5209-9b58-b633ff39e392
In the paper at ICLR 2024 that employs submodular mechanism for model interpretability, what performance improvements does the proposed method achieve over HSIC-Attribution on the CUB-200-2011 dataset for incorrectly predicted samples?
Your answer should be a Python dictionary with two keys: 'confidence_gain' and 'insertion_score_gain', with values as float percentages between 0 and 100 rounded to 1 decimal place.
[]
[ "Less is More: Fewer Interpretable Region via Submodular Subset Selection" ]
[ "iclr2024" ]
5e1cb0a7-4a3c-5e70-8455-dd39567a8e45
How does MC-SMoE identify the dominant experts in each expert group?
Your answer should be a short text describing the method used by the paper.
[]
[ "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ]
[ "iclr2024" ]
5e6bfd36-657b-5b68-9ca0-e3996a37ee7b
There is a theoretical paper that establishes the tight parallel complexity of boosting algorithms within the weak-to-strong learning framework. It closes a longstanding gap between the upper and lower bounds on the trade-off between the number of parallel rounds and the total work per round. Could you please tell me which university the authors of this paper are affiliated with?
Your answer should be a name of a university.
[ "Optimal Parallelization of Boosting" ]
[]
[]
5f5dc605-fa71-5678-a21d-05907900081f
What novel metric does the paper "Improving Environment Novelty Quantification for Effective Unsupervised Environment Design" introduce to enhance Unsupervised Environment Design (UED)?
Your answer should be a Python string containing the full name of the metric introduced in the paper.
[ "Improving Environment Novelty Quantification for Effective Unsupervised Environment Design" ]
[]
[]
5f87ab57-6a62-55d3-8b7e-6a4edad48d76
Which Tiny Paper published in ICLR 2024 builds an efficient pipeline for automated evaluation of priming attacks against open-source LLMs?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Bypassing the Safety Training of Open-Source LLMs with Priming Attacks" ]
[ "iclr2024" ]
61e2ac03-e333-514e-8ad2-bd972463571b
In the paper that develops a model of an agent that navigates using noisy egocentric visual and self-motion signals, in Figure 3F, which model consistently achieves a smaller DKL between its place field orientation distribution and the animal data?
Your answer should be a Python string indicating the name of the model.
[]
[ "Predictive Learning Induces Probabilistic Cognitive Maps" ]
[ "neurips2024" ]
62dbd4bc-7e04-5d57-8952-f028224dfd82
In ICLR 2024 Poster papers, a paper attempts to propose a meta-reinforcement learning algorithm that is improved in multiple aspects, especially in terms of sample efficiency, generalization ability, and handling of high-dimensional task distributions, by combining the latest model-based RL techniques and meta-RL techniques. Tell me the number of authors.
Your answer should be a Python integer.
[]
[ "MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning" ]
[ "iclr2024" ]
6317e865-1855-596f-9554-0fe6b9484efa
What innovative technologies are included in the benchmark specifically designed for practical TTA settings used in the paper "Persistent Test-time Adaptation in Recurring Testing Scenarios"?
Your answer should be a Python string.
[ "Persistent Test-time Adaptation in Recurring Testing Scenarios" ]
[ "Robust Test-Time Adaptation in Dynamic Scenarios" ]
[]
63b17ccf-dacf-53ca-be1e-66ef2a23c354
What are the three key architectural improvements of the state-space models (SSMs) introduced in R2I's world model over traditional RNNs? Among these improvements, how does the "Parallel Scan" mode of SSMs address the efficiency of training long sequences?
Your answer should be a list of two strings (sentences), each answering one of the two questions.
[]
[ "Mastering Memory Tasks with World Models" ]
[ "iclr2024" ]
6444d8d4-f8ea-54f3-b514-aa8db3c3b530
A paper uses the variational framework to study models' ability to plan and shows that planning corresponds exactly to a different set of weights. In the experiment, the authors use 6 different domains from IPPC 2011, each with 10 instances (factored MDPs) of increasing difficulty. For the game-of-life domain instances, which model performs the worst?
Your answer should be a string, a name of model.
[]
[ "What type of inference is planning?" ]
[ "neurips2024" ]
64ac92a0-9c09-513c-8e47-553c63189198
In this paper, what is the number of the authors?
Your answer should be Python integer.
[ "Model-based Reinforcement Learning for Parameterized Action Spaces" ]
[]
[]
64af3261-c658-5908-9970-e42dae242237
In NeurIPS 2024 Poster papers, a paper proposes a new problem in offline MBRL called "The Edge-of-Reach Problem". In this paper, how many papers are cited?
Your answer should be a Python integer.
[]
[ "The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning" ]
[ "neurips2024" ]
65aeb135-fa7a-5593-a369-ce0f68e0b849
In ICLR 2024 Poster papers, which paper proposes AlignDiff, a novel framework that leverages RLHF to quantify human preferences, covering abstractness, and utilizes them to guide diffusion planning for zero-shot behavior customizing, covering mutability?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model" ]
[ "iclr2024" ]
674871a6-8283-5b95-8b97-6882363f0a61
In the paper introducing Sequence Parallelism, a system-level approach that facilitates Transformer training on arbitrarily long sequences by distributing input sequences across multiple GPUs, the authors present Ring Self-Attention (RSA) as a mechanism for computing attention across devices without requiring the full sequence to be stored on any single GPU. My question is: How many times longer sequence length does Sequence Parallelism achieve compared to the state-of-the-art method of tensor parallelism, under the given experimental conditions?
Your answer should be a Python int.
[]
[ "Sequence Parallelism: Long Sequence Training from System Perspective" ]
[ "acl2023" ]
6a035d82-c4fe-5b8f-b216-a9ecff97f77a
In ICLR 2024 Poster papers, a paper attempts to address the hidden confounding problem in Offline Reinforcement Learning (RL). How many authors are there in this paper?
Your answer should be a Python integer.
[]
[ "DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations" ]
[ "iclr2024" ]
6ae83c2d-6541-584f-b600-526be5540b31
How is the convolution operation on an image represented in the paper that first provides competitive performance on LRA with Transformers and diagonal linear RNNs?
Your answer should be a string, the formula in LaTeX format.
[]
[ "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors" ]
[ "iclr2024" ]
6b987048-a00f-5715-a227-57a5fd56be94
In ICLR 2024 Poster papers, a paper proposes a reward smoothing method called "DreamSmooth". What is the formula of "DreamSmooth"?
Your answer should be the formula in LaTeX format.
[]
[ "DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing" ]
[ "iclr2024" ]
6bcfdec0-206c-5e1a-8279-94a855673e3b
There is a paper that introduces NDCR, the first end-to-end framework for image retrieval from linguistically complex text by integrating analogical and logical reasoning. The research institutions involved in this study are all from the same country. Please provide the name of this country.
Your answer should be a name of a country.
[]
[ "A Neural Divide-and-Conquer Reasoning Framework for Image Retrieval from Linguistically Complex Text" ]
[ "acl2023" ]
6be7a678-0b17-5686-81b5-c81a03db7baf
Why does MINDER consume relatively more memory than other models?
Your answer should be a phrase which concludes the reason.
[ "Multiview Identifiers Enhanced Generative Retrieval" ]
[]
[]
6c49cb88-0fc5-562d-bcf3-46efa71b4666
There is a recent paper conducting a comprehensive empirical evaluation of eight model selection methods for unsupervised domain adaptation (UDA), revealing their vulnerability to worst-case selections across diverse scenarios. The authors utilized a pool of 28 models and examined the performance of different selection methods as additional models were incrementally included. Which algorithm among SND, Ensemble, and EnsV demonstrates the best performance when 15 models are added?
Your answer should be one of the following: ['SND', 'Ensemble', 'EnsV'].
[]
[ "Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline" ]
[ "neurips2024" ]
6d135c7c-312f-588d-b338-dbc7ccdf9c1e
There is a paper that introduces the first large-scale multimodal dataset specifically tailored for autonomous trucking, addressing challenges unique to heavy-duty vehicles such as dynamic trailer occlusion, sensor placement, and terminal environments. It features 747 diverse 20-second scenes annotated with high-quality 3D bounding boxes across 27 object classes. I would like to know which subcategory within the area category has the highest proportion in the distribution of scene tags for all 747 dataset scenes.
Your answer must be one of the following: ['city', 'highway', 'terminal', 'rural']
[ "MAN TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions" ]
[]
[]
6d76ebba-7326-58d7-80c1-654e4cb9f71d
Among the spotlight papers at ICLR 2024 focusing on neural architecture search, how many pretrained models and hyperparameter configurations did Quick-Tune evaluate to generate its large-scale meta-dataset?
Your answer should be a sentence.
[]
[ "Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How" ]
[ "iclr2024" ]
6d7a8ff8-89e1-598c-9d2a-07ecac85df3e
Is there a paper published in ICLR 2024 which studies the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model" ]
[ "iclr2024" ]
6dd90293-c42c-5736-9e44-9e5bb3d194f4
According to the paper at ICLR 2024 that proposed SPT, on which supervised task did incorporating data-driven pretraining improve the best reported results of state space models by 20 absolute points?
Your answer should be a Python string specifying the name of the supervised task that saw a 20-point improvement.
[]
[ "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors" ]
[ "iclr2024" ]
6e5c93f9-7a5b-575e-bbba-5bcb489b0da1
In the ray resolution ablation, by how much does the Rotation Accuracy increase as the number of camera rays grows from 2*2 to 16*16?
Your answer should be a float, rounded to 1 decimal place.
[ "Cameras as Rays: Pose Estimation via Ray Diffusion" ]
[]
[]
6f1de746-8671-501d-a48d-bb16ebffc84d
Which paper proposes the new task open-world video instance segmentation and captioning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning" ]
[ "neurips2024" ]
6fd27acf-04d7-50fe-95c8-c8221d2ad6ca
In the paper that demonstrates that the dispersion of self-attention scores underlies Transformers' working memory limits in N-back tasks, mirroring human attention-based memory constraints, how many independent models do the authors use for each N in the N-back tasks, and for how many epochs do they train each model?
Your answer should be a Python list of two integers, the first is the number of independent models and the second is the number of epochs.
[ "Self-Attention Limits Working Memory Capacity of Transformer-Based Models" ]
[]
[]
70403087-25d0-581e-b2ec-7fc20c1efc12
What is the maximum accuracy improvement achieved by DapperFL over other federated learning frameworks?
Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100, rounded to 2 decimal places.
[]
[ "DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices" ]
[ "neurips2024" ]
7090ca92-bff7-5c7b-987e-b51fa6db7db8
Is there a paper published in ICLR 2024 which provides the first instantiation of the white-box design paradigm that can be applied to large-scale unsupervised representation learning?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Masked Completion via Structured Diffusion with White-Box Transformers" ]
[ "iclr2024" ]
723220c7-5ffd-52a2-bce5-56153427135c
Which paper published in ICLR 2024 proposes the first LMaaS-compatible approach for leveraging LLMs to enhance representation learning on text-attributed graphs?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning" ]
[ "iclr2024" ]
7299b622-f460-5902-acca-ff09c4b13ad0
A paper shows that language models' difficulty in learning novel factual knowledge effectively from finetuning on limited textual demonstrations is due to their bias toward learning word co-occurrence statistics instead of true factual associations. In the study of the effect of layer-wise ablation on the models' performance on simple QA and multiple-choice tasks, which part of the layers shows high responsibility for the models' performance?
Your answer should be a string indicating the part of the layers, e.g. "upper 1/6 layers".
[]
[ "Co-occurrence is not Factual Association in Language Models" ]
[ "neurips2024" ]