uuid (string)
question (string)
answer_format (string)
anchor_pdf (list)
reference_pdf (list)
conference (list)
aa065f51-e562-5ffb-887c-7968237cf9a8
How does ClimODE simulate weather and climate physics? Give me the formula of its equation.
Your answer should be a Python list containing a sentence and a formula, each answering one of the two questions.
[]
[ "ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs" ]
[ "iclr2024" ]
aafe62b4-8305-5906-b110-b30c5f3bb0f4
A paper uses a mean-based decomposition method to extend the context window and achieve length extrapolation of transformer-based large language models (LLMs). In their study of the relationship between positional vectors and length extrapolation ability, besides TL-Window-RoPE, which model maintains stable PPL across longer texts?
Your answer should be a string indicating the name of a model.
[]
[ "Exploring Context Window of Large Language Models via Decomposed Positional Vectors" ]
[ "neurips2024" ]
ab723b23-7907-5fa0-aea1-e34ee72082a8
What two key components of MetaGPT are depicted in Figure 2 and how does the right panel emphasize the improvement in code quality?
Your answer should be a sentence answering the two questions.
[ "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework" ]
[]
[]
ac2bee50-307a-5f17-9320-d2779770360a
In ICLR 2024 Oral papers, a paper attempts to solve the problem of how to accelerate the learning process and avoid getting trapped in local optimal solutions in Cooperative Multi-Agent Reinforcement Learning (MARL). Tell me the number of authors of this paper.
Your answer should be a Python integer.
[ "Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning" ]
[]
[]
ac7aaf31-1bf1-54b4-a501-363b12bd2380
In the paper that proves gradient-based algorithms achieve polynomial smoothed complexity for solving zero-sum games, eliminating exponential dependence on condition numbers through a novel perturbation-stability analysis, what is the main conclusion of formula (3) and Definition 1.3 regarding \kappa?
Your answer should be a Python string indicating the main conclusion.
[]
[ "Convergence of $\\text{log}(1/\\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis" ]
[ "neurips2024" ]
ace8225f-ccd4-5401-a1ca-da1e00bab7a8
In the paper at ACL 2023 that first analyzed instance-level pretraining data to interpret in-context learning (ICL), what is the maximum improvement in ICL ability achieved by continued pretraining on a supportive subset of data?
Your answer should be an integer between 0 and 100, stating the maximum percentage improvement in ICL ability achieved by continued pretraining.
[]
[ "Understanding In-Context Learning via Supportive Pretraining Data" ]
[ "acl2023" ]
acf17a7b-4264-520f-9ca1-3d1a2792c1d5
In the paper that proposes Normalize-and-Project (NaP), the references include the paper in which the C4 dataset is released. In which journal was that paper published?
Your answer should only be a Python string of the name of the journal.
[ "Normalization and effective learning rates in reinforcement learning" ]
[]
[]
ae551879-1d8c-55e5-ab67-6eea37acde80
In ICLR 2024 Poster papers, which paper introduces the Hierarchical Diffuser, a simple, fast, yet effective planning method combining the advantages of hierarchical and diffusion-based planning?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Simple Hierarchical Planning with Diffusion" ]
[ "iclr2024" ]
af016855-321f-5d05-9997-6c81104f8db3
In ICLR 2024 Spotlight papers, a paper attempts to alleviate the over-optimization problem that occurs when LLMs are optimized by reward models through human feedback. What is the formula of the results in mixed advantages which are a convex combination of the task and constraint advantages?
Your answer should be the formula in LaTeX format.
[]
[ "Confronting Reward Model Overoptimization with Constrained RLHF" ]
[ "iclr2024" ]
aff903e6-93b8-5b4b-aacf-e84ee6e6d1e2
How is mutual information (MI) used to represent epistemic uncertainty (EU) in the Bayesian framework?
Your answer should be a formula
[]
[ "ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation" ]
[ "iclr2024" ]
b055054a-f026-51e0-9afb-452f89bd8ea3
In ICLR 2024 Spotlight papers, a paper introduces a new concept named "Effective Horizon". How many figures are in this paper?
Your answer should be a Python integer.
[ "The Effective Horizon Explains Deep RL Performance in Stochastic Environments" ]
[]
[]
b08e1a60-c67e-585f-95e4-716d76c4a58b
In the experiment where they isolate the influence of the LLM in an LLM-based forecaster and propose three ablations, do simple ablations of LLM-based methods cause worse performance?
Your answer should be "Yes" or "No".
[ "Are Language Models Actually Useful for Time Series Forecasting?" ]
[]
[]
b0a36a4d-62e5-5e4b-9814-9855543d47cb
Investigating the correlation of sharpness, curvatures, and validation loss on MNIST, Fashion-MNIST, and CIFAR-10, which has a strong positive correlation with validation loss?
Your answer should be a string, a noun.
[ "Improving Convergence and Generalization Using Parameter Symmetries" ]
[]
[]
b135917d-8cc0-5a31-a7ca-8007d58870db
In this paper, how many different benchmarks are used?
Your answer should be a Python integer.
[ "Learn A Flexible Exploration Model for Parameterized Action Markov Decision Processes" ]
[]
[]
b206bc2a-fdbf-5b37-be28-7ac7fc5a3cc6
A recent paper proposes a scalable, low-cost captioning engine, Perceptual Fusion, that integrates specialized vision experts and a multimodal model to generate hyper-detailed image descriptions. Please inform me which subcategory within the Infographics category of the proposed dataset has the largest data volume.
Your answer should be a name of a subcategory.
[]
[ "DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception" ]
[ "neurips2024" ]
b211f910-cd54-5e5b-b135-877454e99463
A recent paper presents MVCN for multimodal sentiment detection, addressing the challenge of modality heterogeneity in text-image pairs. It introduces three novel modules: (1) a Text-Guided Fusion module with Sparse Attention, (2) a Sentiment-based Congruity Constraint, and (3) an Adaptive Loss Calibration strategy. MVCN achieves state-of-the-art results on the MVSA and HFM benchmarks. The research institutions involved in this work are all from the same country. Please provide the name of this country.
Your answer should be a name of a country.
[ "Tackling Modality Heterogeneity with Multi-View Calibration Network for Multimodal Sentiment Detection" ]
[]
[]
b2f66570-8dd6-560e-be69-454ffbff412f
A paper proposes the Hierarchical Contact Mesh Transformer (HCMT) for modeling complex high-dimensional physical systems. In the experiment on sensitivity to the number of levels, with which number of levels does the model generally achieve a relatively low RMSE?
Your answer should be an int between 1 and 6.
[]
[ "Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer" ]
[ "iclr2024" ]
b3826ac7-3685-50c5-af82-1c5c9c5e4ba5
According to the paper at ACL 2023 that studies the projection of retrievers on vocabulary space, by how many percentage points did the strong MPNet model's performance on the BEIR benchmark improve after applying the proposed lexical enrichment at inference time?
Your answer should be a Python list of three floats (between 0 and 100, rounded to 1 decimal place), specifying the percentage point improvement and the before/after values, e.g., [improvement, before, after].
[]
[ "What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary" ]
[ "acl2023" ]
b4a91fbd-d054-5434-a5c0-8a3c61d25c1b
In the experiment of MULAN with different noise schedule parameterizations, which parameterization performs the best?
Your answer should be a string, the name of a parameterization.
[ "Diffusion Models With Learned Adaptive Noise" ]
[]
[]
b6544444-9718-50a9-861f-3d07342bc864
In the paper that identifies and mitigates "conditional image leakage" in I2V-DMs via Analytic-Init and TimeNoise, which significantly enhance motion generation, in which domain does the method proposed by the authors achieve significant progress compared to the baseline in Figure 8? What is the user preference percentage for the SVD model when Analytic-Init is used?
Your answer should be a Python list, where the first element is a string naming the domain and the second is a float rounded to 2 decimal places.
[]
[ "Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model" ]
[ "neurips2024" ]
b73f57d6-9658-57d5-9f81-779c6a540f85
In ICLR 2024 Spotlight papers, a paper proposes a method named "Heuristic Blending". In this paper, the regret in Theorem 1 is bounded by which formula?
Your answer should be the formula in LaTeX format.
[]
[ "Improving Offline RL by Blending Heuristics" ]
[ "iclr2024" ]
b7650e20-d5e4-5612-b6ed-8176dc16650d
What is the core technological breakthrough that enabled R2I to achieve superhuman performance in the Memory Maze task? How does its "acyclic representation model" design enhance long-term memory?
Your answer should be two sentences, each answering one of the two questions.
[]
[ "Mastering Memory Tasks with World Models" ]
[ "iclr2024" ]
b7a4f87f-192f-583f-908c-571865359ae9
Among the papers at ICLR 2024 researching causal inference, what is the key theoretical result proposed in "Robust agents learn causal world models" regarding agents' ability to generalize under distributional shifts?
Your answer should be a Python string describing the key theoretical finding about the relationship between robust agents and causal models.
[ "Robust agents learn causal world models" ]
[]
[]
b8efc959-7538-53b0-abbf-50b696558b49
A recent paper proposes Conditional Mutual Information for Disentanglement (CMID), a method for learning disentangled representations in reinforcement learning with correlated features by minimizing conditional mutual information, guided by the causal structure of the Markov Decision Process. This approach improves generalization under correlation shifts and outperforms existing methods on continuous control tasks. Question: On which tasks or datasets was the effectiveness of this method evaluated?
Your answer should be the name of a dataset.
[ "Conditional Mutual Information for Disentangled Representations in Reinforcement Learning" ]
[]
[]
b916d6a2-0836-5375-8c68-1eefcd026e31
The UNet-FNO trained on a small dataset (1000 data pairs) achieves better performance than other neural operator architectures trained on a significantly larger dataset (10000 data pairs). What does this suggest?
Your answer should be a phrase
[ "Enhancing Fourier Neural Operators with Local Spatial Features" ]
[]
[]
b937c489-15c1-5980-9a62-dcd2e240dc31
What is the core idea of G-SHELL and what is its application?
Your answer should be a sentence answering the question.
[]
[ "Ghost on the Shell: An Expressive Representation of General 3D Shapes" ]
[ "iclr2024" ]
ba8dc980-d280-54dc-a7c8-7ff6261d161c
What certified accuracy does GNNCert achieve on the MUTAG dataset when an attacker arbitrarily adds or deletes one edge?
Your answer should be a Python integer representing the percentage of certified accuracy achieved on the MUTAG dataset.
[]
[ "GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations" ]
[ "iclr2024" ]
baf7e175-22b4-57ed-b29e-76a65f250f7f
In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. Tell me the affiliation of the first author.
Your answer should be a Python string.
[]
[ "Skill or Luck? Return Decomposition via Advantage Functions" ]
[ "iclr2024" ]
bb47845b-137f-52b9-95a6-27b7b37f9a31
There is a recent paper that proposes a novel contrastive learning framework for multimodal sentiment analysis, which uniquely combines intra-sample feature decomposition and inter-sample contrastive learning. Each modality (text, vision, audio) is decomposed into similarity and dissimilarity features, with text-based similarity features used as anchors for contrastive alignment. I would like to know, on which dataset was the primary experiment of this work conducted?
Your answer should include only the single most significant dataset.
[]
[ "ConFEDE: Contrastive Feature Decomposition for Multimodal Sentiment Analysis" ]
[ "acl2023" ]
bb6258b6-939a-577e-9a61-77723526f0df
What is the size and annotation detail of the AbdomenAtlas 1.1 dataset?
Your answer should be a sentence describing the size and annotation detail of the AbdomenAtlas 1.1 dataset, including the number of CT volumes and the types of annotations provided.
[]
[ "How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?" ]
[ "iclr2024" ]
bba8b818-f1f0-544b-aa58-4749b9144879
In the paper that proposes a theoretically grounded plug-and-play module enabling efficient and accurate structure learning with minimal data requirements, what is the use and the characterization of the function h(B) in formula (4)?
Your answer should be a Python string indicating the use and characterization of h(B).
[]
[ "Differentiable Structure Learning with Partial Orders" ]
[ "neurips2024" ]
bbf242ad-cc6f-5771-bd20-e7b25531590b
In ICLR 2024 Spotlight papers, a paper attempts to alleviate the over-optimization problem that occurs when LLMs are optimized by reward models through human feedback. What is the affiliation of the corresponding author?
Your answer should be a Python string.
[]
[ "Confronting Reward Model Overoptimization with Constrained RLHF" ]
[ "iclr2024" ]
bd128925-adea-588e-aa7a-846e6053c7f4
In ICLR 2024 Spotlight papers, which paper introduces a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Blending Imitation and Reinforcement Learning for Robust Policy Improvement" ]
[ "iclr2024" ]
bd1fbe22-f98c-595d-92db-7a32b47feda4
A recent paper proposes a unified benchmark framework for evaluating task-agnostic decoupling methods in privacy-preserving machine learning using synthetic image generation. It systematically integrates adversarial representation learning techniques into a synthetic data pipeline based on latent diffusion models (LDMs) and introduces standardized evaluation protocols for both privacy and utility. Could you please tell me which university the first author of this work is affiliated with?
Your answer should be a name of a university.
[ "DECO-Bench: Unified Benchmark for Decoupled Task-Agnostic Synthetic Data Release" ]
[]
[]
bd253538-fce4-5e7d-9d2a-fd27f4c6199b
In the paper "ZeroStance: Leveraging ChatGPT for Open-Domain Stance Detection via Dataset Generation," how many baseline human-annotated datasets are incorporated?
Your answer should be a Python int
[ "ZeroStance: Leveraging ChatGPT for Open-Domain Stance Detection via Dataset Generation" ]
[]
[]
bd3f2de4-c4a0-5ded-9bc2-b946724a04bf
In the paper introducing program-based reasoning, which benchmarks were used to evaluate performance on mathematical problems?
Your answer should be a Python list of strings.
[ "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" ]
[ "PAL: Program-aided Language Models" ]
[]
bd90400d-f8bf-5257-a64b-906a477992a8
In ICLR 2024 Spotlight papers, which paper unifies reinforcement learning and imitation learning methods under a dual framework?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Dual RL: Unification and New Methods for Reinforcement and Imitation Learning" ]
[ "iclr2024" ]
bdd210a1-1d6a-508d-af66-2f6a3efa8887
According to the paper that proposes Wanda, in what aspects does this pruning approach differ from SparseGPT?
Your answer should be a plain text that describes the differences between Wanda and SparseGPT.
[]
[ "A Simple and Effective Pruning Approach for Large Language Models" ]
[ "iclr2024" ]
bdfcfbaf-4835-51b2-a599-2b7f017936fa
In ICLR 2024 Spotlight papers, a paper introduces a new concept named "Effective Horizon". In this paper, what is the formula of the SQIRL sample complexity?
Your answer should be the formula in LaTeX format.
[]
[ "The Effective Horizon Explains Deep RL Performance in Stochastic Environments" ]
[ "iclr2024" ]
be030ae0-db01-500c-8e06-7cc2ecac0bee
There is a recent paper that proposes a method called CCPA, a two-stage debiasing framework that combines continuous prompt augmentation and contrastive learning to mitigate social biases, particularly gender bias, in pre-trained language models. The authors validated the effectiveness of CCPA on the Bias-in-Bios dataset. Please answer: By how many percentage points does the Accuracy (all) metric of the model using CCPA improve compared to the original BERT model in absolute terms?
Your answer should be a Python float, rounded to two decimal places.
[]
[ "Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases" ]
[ "acl2023" ]
be0e3f75-f2a3-5bf4-a9c2-65153177fea7
Why did the authors use newly-published books as the database of BooookScore?
Your answer should be a sentence
[]
[ "BooookScore: A systematic exploration of book-length summarization in the era of LLMs" ]
[ "iclr2024" ]
be4c59d2-3527-5048-a0e6-771dc091a486
A paper proposes deleting edges to simultaneously address over-squashing and over-smoothing in Message Passing Graph Neural Networks. In their examination of the effectiveness of the proposed edge modification algorithms on spectral gap expansion, what makes their ideal baseline computationally too expensive?
Your answer should be a phrase giving the reason.
[]
[ "Spectral Graph Pruning Against Over-Squashing and Over-Smoothing" ]
[ "neurips2024" ]
be4edd9f-0d2e-58db-b118-bd36b8549763
Which theorem is used to find the solution of formula (1)? Why can the linear approximator not be applied directly when non-negativity constraints are imposed on the target function f \geq 0?
Your answer should be a list of two elements: the first is the name of the theorem and the second is the reason why the linear approximator cannot be applied. Note that you should output any formula in LaTeX format.
[]
[ "Inverse M-Kernels for Linear Universal Approximators of Non-Negative Functions" ]
[ "neurips2024" ]
bf32600d-d8cd-5f62-b080-9f0003b6d875
In the paper that combines within-lifetime extrinsic learning and cross-lifetime intrinsic motivation in a single framework, three different behaviours are evaluated in Figure 1. For which behaviour is the intrinsic motivation of the most frequent choice always lower than the sum of that of the other choices?
Your answer should be a Python string describing the behaviour.
[]
[ "Using adaptive intrinsic motivation in RL to model learning across development" ]
[ "neurips2024" ]
bff9b330-bcd6-547f-8a07-2af88d99540d
Among the text-to-SQL papers in ACL 2023, which one achieves the best test-suite accuracy on the SPIDER dataset? Tell me the paper title and corresponding test accuracy.
Your answer should be a Python list of length two, with the first one being the title string and the second one being a float, the accuracy rounded to 3 decimals.
[]
[ "G3R: A Graph-Guided Generate-and-Rerank Framework for Complex and Cross-domain Text-to-SQL Generation" ]
[ "acl2023" ]
c06ce855-9266-5151-8ae1-2b227afb5a92
In ICLR 2024 Poster papers, a paper tackles the Offline Opponent Modeling problem by harnessing the full potential of the supervised pre-trained Transformers' in-context learning capabilities. Tell me the affiliation of the corresponding author.
Your answer should be a Python string.
[]
[ "Towards Offline Opponent Modeling with In-context Learning" ]
[ "iclr2024" ]
c072b272-90d2-500b-b8e0-d28da9918512
In the paper proposing MC-DiT, why does Gaussian noise affect the optimization process of L_{asym}?
Your answer should be a Python string with specific formula terms.
[]
[ "MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models" ]
[ "neurips2024" ]
c0fa5ff0-16dd-5b2c-85c5-e17306e8a097
In ICLR 2024 Spotlight papers, which paper shows that the solution of an entropy-regularized problem corresponds to a Quantal Response Equilibrium (QRE), a generalization of Nash equilibria that accounts for bounded rationality?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula" ]
[ "iclr2024" ]
c105bdc9-6f87-5a7a-9f2c-cf8608923544
In the paper that introduces PEACE dataset, where can I find the dataset? In which conference was this paper published?
Your answer should be a Python list of two strings. The first is a website URL starting with "https://", as given in the paper, the second is only the conference name string.
[ "PEACE: A Dataset of Pharmaceutical Care for Cancer Pain Analgesia Evaluation and Medication Decision" ]
[]
[]
c1380b1a-a444-57b5-bff9-1f983381bb01
There is a novel framework for end-to-end task-oriented dialog systems that decouples knowledge retrieval from response generation, named MAKER (Multi-grAined Knowledge Retriever). The implemented Knowledge Retriever consists of two selectors, one of which is the 'Entity Selector.' What is the name of the other selector?
Your answer should be the exact name of the other selector.
[]
[ "Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog" ]
[ "acl2023" ]
c19b4886-36b0-5461-b66d-9241125e0cb7
On the LSUN bedroom dataset in the paper "Generalization in Diffusion Models Arises from Geometry-Adaptive Harmonic Representations", when N=100, above what cosine similarity between a generated sample and its nearest neighbor in the training set is the sample considered memorized (overfit)?
Your answer should be a float, rounded to 1 decimal place, expressed as a percentage.
[]
[ "Generalization in diffusion models arises from geometry-adaptive harmonic representations" ]
[ "iclr2024" ]
c34f09ae-cbe7-5bb3-afc0-6d97c16f8b9a
According to the paper at ACL 2023 that introduces MisGendered, what is the average accuracy of these models in predicting gender-neutral pronouns without fine-tuning?
Your answer should be a Python float value representing the percentage accuracy, between 0 and 100, rounded to 1 decimal place.
[]
[ "MISGENDERED: Limits of Large Language Models in Understanding Pronouns" ]
[ "acl2023" ]
c575a9be-3775-50ba-85ce-37978368d7db
How does the $\epsilon$ parameter of AdamW affect training?
Your answer should be a sentence, discussing how the relationship between $\epsilon$ and grad RMS impacts training.
[ "Small-scale proxies for large-scale Transformer training instabilities" ]
[]
[]
c6d978d6-9f92-506a-9536-fc88ede19568
A paper challenges the assumption that Cross-Validation (CV) invariably outperforms the simple "plug-in" approach in terms of both asymptotic bias and coverage accuracy. In their study, 2- and 5-fold CVs suffer from larger biases than plug-in; is this phenomenon more evident under small or large sample sizes?
Your answer should be chosen between "small" and "large".
[]
[ "Is Cross-validation the Gold Standard to Estimate Out-of-sample Model Performance?" ]
[ "neurips2024" ]
c76b81ad-fe54-5179-adaf-131c54f13ee8
In ICLR 2024 Poster papers, a paper proposes a framework named LaMo (Language Models for Motion Control). How many baselines are compared in Figure 1?
Your answer should be a Python integer.
[]
[ "Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning" ]
[ "iclr2024" ]
c77d97f1-a74c-5768-b6db-1470144e545f
A recent paper introduces MassSpecGym, the first comprehensive benchmark for molecule discovery and identification from tandem mass spectrometry (MS/MS) data. Could you please retrieve the article and specify the exact number of dataset entries that contain normalized collision energies?
Your answer should be a Python int.
[]
[ "MassSpecGym: A benchmark for the discovery and identification of molecules" ]
[ "neurips2024" ]
c7afaa8a-8311-5dd1-9e5e-5250a4b5011f
According to this paper, which dataset (except the dataset proposed by this paper) also includes controls for unary vs binary predicates among the existing datasets for the formal analysis of reasoning ability? Can you describe the structure of each example in it in detail?
Your answer should be a single Python list like this: ["dataset_name", "structure_description"]. Note that for the dataset name, the abbreviation is required. For the structure description, you should give a short string describing the structure.
[ "Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought" ]
[ "On the Paradox of Learning to Reason from Data" ]
[]
c84e680f-65c0-503d-8e96-550cc16236e7
In the paper that proposes VATT, along which dimension are E_lm and E_a^M concatenated to obtain the fused features \[E_{mm} = \text{Concat}(\left[ E_{lm}, E_{a}^{M} \right])\]?
Your answer should be a Python string specifying the dimension.
[]
[ "Tell What You Hear From What You See - Video to Audio Generation Through Text" ]
[ "neurips2024" ]
c9554ac4-8d41-5c2c-9811-4418260c0b89
In ICLR 2024 Oral papers, a paper presents PTGM, a novel method that pre-trains goal-based models to augment RL by providing temporal abstractions and behavior regularization. How many different tasks are considered in Figure 2?
Your answer should be a Python integer.
[]
[ "Pre-Training Goal-based Models for Sample-Efficient Reinforcement Learning" ]
[ "iclr2024" ]
c9e10791-b93c-5047-8b3e-08b5bba1962c
On the DeepCoder dataset, how much did ExeDec improve the average accuracy of the combined generalization task compared to the Transformer baseline?
Your answer should be a float, rounded to 1 decimal place, evaluating in percentage.
[ "ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis" ]
[]
[]
cbcc28d8-65cd-5208-8ef1-c4381f8a936c
Which dataset(s) is ViewCo trained on? I want to get the newer dataset, can you give me the link to it?
Your answer should be a single Python list like this: [["dataset_name1", "dataset_name2"], "https://github.com/a/b"]. Note that for the dataset names, the abbreviations are required.
[ "ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View Semantic Consistency" ]
[ "Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts" ]
[]
cc8f0960-ae43-5917-a76b-f32bfce222d0
There is a paper that introduces Codable Text Watermarking for Large Language Models (CTWL), a novel framework for embedding multi-bit customizable information into LLM-generated texts. Its key contribution is a method called Balance-Marking, which leverages a proxy language model to partition the vocabulary into probability-balanced subsets. In the watermarking process of its experiments, the total time expenditure can be divided into Encoding Time and Decoding Time. Which part consumes more time?
Your answer should be one of ['Encoding Time', 'Decoding Time']
[ "Towards Codable Watermarking for Injecting Multi-Bits Information to LLMs" ]
[]
[]
cca56ecf-46d7-5d38-8e3b-c49d0bf84c7e
In the paper that demonstrates LLM-generated difficulty labels can outperform human labels in curriculum learning for fine-tuning, by how many percentage points do the accuracy gains of the learning strategy with LLM-defined difficulty exceed those with human-defined difficulty on average across datasets, compared to the Random Shuffle baseline?
Your answer should be a float number, rounded to 2 decimal places.
[]
[ "Evaluating Fine-Tuning Efficiency of Human-Inspired Learning Strategies in Medical Question Answering" ]
[ "neurips2024" ]
cce305e1-083a-5a2f-8646-994bd490eaae
In the paper that introduces PPDPP, a plug-and-play dialogue policy planner for LLM-powered proactive dialogue agents, the author proposed a self-play framework utilizing LLM-based user simulation and reward modeling for interactive training. In the experiments related to training episodes, which model consistently maintains the highest Success Rate on the CraigslistBargain dataset?
Your answer should be the exact name of a model.
[]
[ "Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents" ]
[ "iclr2024" ]
cdc2f046-8924-57b9-ab44-cd8c96edca2b
In the paper that models novelty emergence in science as an evolutionary game and reveals that agents with selfish strategies maximise the diversity of novel ideas, which generation has the highest Average Novelty Score? What does the plot of the Average Novelty Score's time evolution suggest?
Your answer should be a Python list. The first element is an integer giving the generation number and the second is a Python string of the result suggested by the plot.
[]
[ "Creativity Has Entered the Chat, With a Stranger: Novelty is a Nash Equilibrium" ]
[ "neurips2024" ]
ce0e09e3-1e20-594f-8dff-786c9df521dd
In the paper titled "Can Whisper Perform Speech-Based In-Context Learning?" the experimental section utilized two types of Chinese dialects, one being Chongqing. What is the other one?
Your answer should be the specific name of the dialect without any additional prefixes or suffixes.
[ "Can Whisper Perform Speech-Based In-Context Learning?" ]
[]
[]
cf056656-2dd7-5085-b451-2e71e2c1000a
Recent studies decode DNA by borrowing methods for decoding linguistic intricacies. A paper proposes replacing k-mer tokenization with Byte Pair Encoding (BPE). They show model performance averaged over tasks (macro) and over individual datasets (micro); in which kind of performance, macro or micro, does the model lose about 2.5 percentage points when the vocabulary changes from a middle size to a considerable size?
Your answer should be chosen between "macro" and "micro".
[]
[ "DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes" ]
[ "iclr2024" ]
cf121e7a-e188-5d69-aa44-e699bc62ea6f
In the paper that introduces GRIT, whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions, which generative model in Figure 1 performs best on embedding?
Your answer MUST be ONE single string, without explanation, giving the method's name in abbreviation and its number of parameters in billions (B). For example, your answer could be 'LLaMA 2 70B'.
[]
[ "Generative Representational Instruction Tuning" ]
[ "iclr2024" ]
cf7fc88e-f139-5d73-a000-bc96995f9276
In the experiment of real-world data in the paper of "CALIBRATION MATTERS: TACKLING MAXIMIZATION BIAS IN LARGE-SCALE ADVERTISING RECOMMENDATION SYSTEMS", which base model was used? What's the architecture of this base model?
Your answer should be a single Python list like this: ["model_name", "model_architecture"]. Note that for the model name, the abbreviation is required. For the model architecture, you should give a string describing the architecture.
[ "Calibration Matters: Tackling Maximization Bias in Large-scale Advertising Recommendation Systems" ]
[ "Deep Learning Recommendation Model for Personalization and Recommendation Systems" ]
[]
cfb8f392-68a9-5438-9ecb-8ca68992bc83
In ICLR 2024 Poster papers, a paper attempts to solve how to effectively learn from off-policy data in reinforcement learning. The authors claim that their components can be learned jointly by maximizing the conditional evidence lower bound (ELBO); what is its formula?
Your answer should be the formula in LaTeX format.
[]
[ "Skill or Luck? Return Decomposition via Advantage Functions" ]
[ "iclr2024" ]
d0c50c53-8701-5219-ae18-70b1ed9c5ea9
In ICLR 2024 Poster papers, a paper attempts to conduct long-range dynamic modeling in an interactive environment. In this paper, why do the authors introduce the Koopman operator?
Your answer should be plain text.
[ "Efficient Dynamics Modeling in Interactive Environments with Koopman Theory" ]
[]
[]
d1ef774b-713a-597c-8dff-a698c09f9c3b
A recent paper introduces ReactZyme, a large-scale benchmark dataset and retrieval-based framework that directly models enzyme functions through their catalyzed reactions rather than traditional annotations such as EC or GO. Could you please tell me how many #Molecule/Reaction are included in the proposed dataset?
Your answer should be a Python int
[]
[ "ReactZyme: A Benchmark for Enzyme-Reaction Prediction" ]
[ "neurips2024" ]
d222e6c3-03eb-5b9f-8ad7-d47ed389079e
In this paper, how many different inference frameworks for the predictor are proposed?
Your answer should be a Python integer.
[ "Model-based Reinforcement Learning for Parameterized Action Spaces" ]
[]
[]
d331c7ef-1b86-5754-8ee5-4123b95704b6
This paper proposes a method called "DLPA"; which kind of planning does it use?
Your answer should be plain text.
[ "Model-based Reinforcement Learning for Parameterized Action Spaces" ]
[]
[]
d39792c8-ee30-5a3e-b119-6340d47292b1
Is there a paper published in ICLR 2024 which establishes a unified framework for Riemannian Batch Normalization (RBN) techniques on Lie groups?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "A Lie Group Approach to Riemannian Batch Normalization" ]
[ "iclr2024" ]
d3d238de-2ff5-53b7-85a3-72cb840f59b8
According to the paper's experimental results, which category of model performed best in the book-length summarization task?
Your answer should be a sentence comparing the models according to categories.
[ "BooookScore: A systematic exploration of book-length summarization in the era of LLMs" ]
[]
[]
d45ea041-a00c-57b6-bf37-ec663aebaedd
A recent paper introduces SIMSR, a novel Smart Reply framework that leverages model-based simulation to optimize response set selection by directly maximizing the relevance of at least one reply through a learned Matching model acting as a world simulator. This work is a collaboration between a certain university and Nokia Bell Labs. Please provide the name of this university.
Your answer should be the name of the university.
[ "Model-Based Simulation for Optimising Smart Reply" ]
[]
[]
d49a681f-b91f-501e-a25a-b9a4bad14986
A recent paper introduces a novel temporal knowledge graph embedding model that uniquely maps relation-time pairs onto an Archimedean spiral in complex space. This design transforms the temporal link prediction task into a third-order tensor completion problem, enabling precise modeling of relation dynamics while maintaining time-invariant entity representations. The experiments are conducted on three TKGE datasets. Which of these datasets has the largest number of training samples?
Your answer should be one of the following: ICEWS14, ICEWS05-15, GDELT.
[]
[ "TeAST: Temporal Knowledge Graph Embedding via Archimedean Spiral Timeline" ]
[ "acl2023" ]
d5117812-39cd-5661-b208-2c249cba0320
What precision does I2D2 achieve in identifying true commonsense statements compared to GPT-3?
Your answer should be a Python dictionary with keys 'I2D2_precision' and 'GPT3_precision', with values as float numbers rounded to 2 decimal places.
[]
[ "I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation" ]
[ "acl2023" ]
d52aa9c2-d5a4-51f6-be83-0d6d7f218169
In the paper proposing UNIT, what does the reconstruction loss mean in figure 1?
Your answer should be a Python string.
[]
[ "UNIT: Unifying Image and Text Recognition in One Vision Encoder" ]
[ "neurips2024" ]
d53cc386-bbaa-51a3-b3ab-315ce4b05fb5
In the paper that proposes a statistically robust and multi-metric comparison framework for classifiers, in which library can I find the data sets and the performance evaluation of the 80 datasets that the author used to compare to the SVM?
Your answer should be a Python string of the name of the library.
[]
[ "Statistical Multicriteria Benchmarking via the GSD-Front" ]
[ "neurips2024" ]
d6ebe0a6-1565-5447-83a6-d3d8712c7990
Among the papers in ICLR 2024, which paper proposes the concept called "Policy Rehearsing"? How do the authors define the "Optimal Policy Gap"?
Your answer should be the formula in LaTeX format.
[]
[ "Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning" ]
[ "iclr2024" ]
d70f731b-f8e9-50c9-9f67-59fed7d45749
What type of input-output pairs are shown in the prompt for in-context learning of Boolean functions? How does the structure of the prompt in the Boolean function task differ from the in-context learning example with country-capital pairs?
Your answer should be a sentence answering the two questions.
[]
[ "Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions" ]
[ "iclr2024" ]
d74a2ccf-fd25-5523-839c-94dc3a8a156f
Which categories of KV cache compression techniques within PoD does WindowKV belong to?
Your answer should be a string that represents categories of KV cache compression techniques in PoD.
[ "WindowKV: Task-Adaptive Group-Wise KV Cache Window Selection for Efficient LLM Inference", "Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity" ]
[]
[]
d82fa28a-ef22-588b-ae60-9493dca6a641
What is the formula of the cosine similarity distribution of random vectors in a high-dimensional space (in Proposition 3.1)?
Your answer should be a formula.
[ "One-shot Empirical Privacy Estimation for Federated Learning" ]
[]
[]
d8ab5195-6509-552f-beaf-0b51e65d9f76
There is a recent paper that investigates the phenomenon of 'overthinking' in pretrained language models (PTMs), specifically within open-world scenarios for out-of-domain (OOD) intent classification tasks. The authors propose a dynamic early-exiting inference method that utilizes ensemble-based internal classifiers to determine whether sufficient confidence has been reached for classifying OOD intents before completing inference. Who is the corresponding author of this work?
Your answer should be a name of a person.
[]
[ "Two Birds One Stone: Dynamic Ensemble for OOD Intent Classification" ]
[ "acl2023" ]
d94c7957-d423-521b-aa68-693d72ac8052
By how many percentage points did BREAK improve the joint goal accuracy on the MultiWOZ 2.1 dataset compared to the previous best-performing models?
Your answer should be a Python float value representing the percentage point improvement in joint goal accuracy, between 0 and 100, rounded to 1 decimal place.
[]
[ "BREAK: Breaking the Dialogue State Tracking Barrier with Beam Search and Re-ranking" ]
[ "acl2023" ]
da8849a2-6880-5061-8b5e-ce162376c043
A recent paper introduces the first large-scale UAV-based dataset explicitly designed for multi-object tracking (MOT) and re-identification (Re-ID) of wild animals, focusing on lekking blackbuck antelopes. The dataset includes over 1.2 million MOT annotations across 12 high-resolution drone videos and 730 Re-ID tracks captured from synchronized UAVs. Could you please provide the email address of one of the corresponding authors?
Your answer should be a mail address.
[]
[ "BuckTales: A multi-UAV dataset for multi-object tracking and re-identification of wild antelopes" ]
[ "neurips2024" ]
db5d82c1-3413-582b-9ffe-7ec212161a9e
A paper introduces a novel principled strategy for building an iterative learning algorithm via the optimisation of a sequence of surrogate training objectives. In the experiments, does their model SuPAC-CE converge faster than gradient descent?
Your answer should be "Yes" or "No".
[]
[ "Learning via Surrogate PAC-Bayes" ]
[ "neurips2024" ]
ddab353d-5af6-5505-9ba5-255878ab1aa8
In the paper that introduces asymptotically faster and memory-efficient ASQ, a paper in its references tackles the ASQ problem with a dynamic programming approach that allows one to optimize Q in polynomial time. In which conference was this paper published?
Your answer should be a Python string of the name of the conference.
[]
[ "Optimal and Approximate Adaptive Stochastic Quantization" ]
[ "neurips2024" ]
dfbebfcc-9d45-5873-bfa8-008a12c22c03
A paper studies the stability of neuronal embeddings with respect to changes in model architecture and initialization. In their observation of a strong dependency of the neuronal embeddings on the type of readout mechanism, what is probably a cause of the phenomenon that the structure that emerges for the Gaussian readout arises trivially from very aggressively forcing weights to be zero?
Your answer should be a phrase that reveals the reason.
[]
[ "Reproducibility of predictive networks for mouse visual cortex" ]
[ "neurips2024" ]
dfca728d-dfc6-57f6-b16c-4db878a7ef1c
In the paper that proposes ICTM, an algorithm to approximate the MAP solution to a variety of linear inverse problems using a flow prior, what does the x_t in formula (9) represent and how are x_1 and the prior term approximated computationally?
Your answer should be a Python string indicating the representation of x_t and the method of the approximation.
[]
[ "Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching" ]
[ "neurips2024" ]
e096905a-759d-5ca6-86c5-c390753f6add
In the paper that proposes QIM-compatibility as a unified framework that extends graphical models to hypergraphs, enabling new capabilities for representing functional dependencies and cyclic causality, what does H_\mu (T_a \mid S_a) in formula (2) represent?
Your answer should be a Python string of the representation of H_\mu (T_a \mid S_a), with the formula in LaTeX format.
[]
[ "Learning from Children: Improving Image-Caption Pretraining via Curriculum" ]
[ "acl2023" ]
e0b47070-1dd3-5bcf-a7e6-9afa1a715ae6
According to the MedCalc-Bench paper, which dataset for LLMs involves all five categories: Medical, Knowledge, Qual. Reasoning, Comput., Non-MCQ?
Your answer should be a string, the name of a dataset.
[ "MedCalc-Bench: Evaluating Large Language Models for Medical Calculations" ]
[]
[]
e0eecb4d-ab89-5dd1-b2e7-bbd6320576ed
In the single-object editing results on OBJect unseen object subset, which model gets the highest LPIPS in translation task?
Your answer should be a string, the name of a model.
[ "Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models" ]
[]
[]
e12c7cd6-4da2-5972-a489-a6b58eaaa37f
A recent paper introduces COFE, a novel benchmark derived from COGS, specifically designed to systematically study in-context compositional generalization in large language models. In the experiments investigating model performance with different levels of structural similarity, at what structural similarity level did code-davinci-002 achieve optimal performance on the PhraReco dataset?
Your answer should be one of the following options: ['Without Structural Similarity', 'Rough Structural Similarity', 'Precise Structural Similarity'].
[]
[ "How Do In-Context Examples Affect Compositional Generalization?" ]
[ "acl2023" ]
e2434b7a-cece-58fd-bf9b-12455fa94dca
A recent paper introduces a novel parameter-efficient fine-tuning method for pre-trained language models that selectively fine-tunes only the most informative and correlated attention heads. Specifically, the paper models head relationships through a graph that combines information richness (via SVD) and inter-head correlation, ranking them with the PageRank algorithm. In the experiments investigating the effect of the number of selected heads, on which dataset did the performance show the greatest absolute increase?
Your answer should be one of: ['MRPC', 'MNLI', 'RTE', 'CoLA']
[]
[ "HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation" ]
[ "acl2023" ]
e24ac1d8-6d4d-5a01-b811-4d207cad6cac
Can you recommend me a paper published in ICLR 2024 that introduces a lightweight schema for enabling machine learning over electronic health record data?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Medical Event Data Standard (MEDS): Facilitating Machine Learning for Health" ]
[ "iclr2024" ]
e2c49966-ea04-5043-8b09-494b33ad7e13
A recent paper introduces a novel large-scale dataset composed of over 159 billion tokens extracted from publicly available business disclosures (e.g., SEC EDGAR filings). It is uniquely characterized by its domain specificity (business and finance), high factuality, low toxicity, and rich temporal metadata. Please provide me with the email address of the corresponding author of this paper.
Your answer should be a mail address.
[]
[ "BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text" ]
[ "neurips2024" ]
e2e2ce2e-db49-517f-b457-9e54882054f5
How is the forward diffusion process defined in the paper?
Your answer should be a formula
[ "Lipschitz Singularities in Diffusion Models" ]
[]
[]