uuid: string
question: string
answer_format: string
anchor_pdf: list
reference_pdf: list
conference: list
7349de43-d9af-54c9-8642-90b17c2e8879
How well does DoRA, pretrained on a single Walking Tours video, perform compared to DINO, pretrained on ImageNet-1k, in terms of semantic segmentation?
Your answer should be a sentence specifying the performance difference between DoRA and DINO in terms of semantic segmentation metrics, including the percentage improvement in mIoU.
[]
[ "Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video" ]
[ "iclr2024" ]
75ac3579-007f-52d8-937c-3dc9ca3a6bfc
Is there a paper that introduces an image dataset consisting of over 20,000 images specifically designed for the task of emotion recognition? The dataset should feature complex scenes depicting multiple individuals in various naturalistic social settings.
Your answer should be the name of the paper.
[]
[ "FindingEmo: An Image Dataset for Emotion Recognition in the Wild" ]
[ "neurips2024" ]
75e818de-8a3c-5ce4-b618-a2c0cb57853f
In the paper that proposed Soft MoE, among the models trained for hundreds or thousands of TPU days, the ImageNet performance of the B/16 model with the proposed method is close to that of another model with ViT. What is the difference in training TPU days between those two models?
Your answer should be a float, rounded to 1 decimal place.
[]
[ "From Sparse to Soft Mixtures of Experts" ]
[ "iclr2024" ]
7743611d-f271-53e7-ac1e-4d5a5c4de91a
In ICLR 2024 Poster papers, which paper attempts to mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations" ]
[ "iclr2024" ]
78545fcb-227f-5413-af1e-f2bda755e2fc
A paper proposes ConvGQR to reduce the re-training cost. In its experiments, after normalizing the generated answer by the length of its corresponding relevant passage, how does the PCC value of co-occurrence perform?
Your answer should be a phrase summarizing the overall tendency of the mentioned data.
[]
[ "ConvGQR: Generative Query Reformulation for Conversational Search" ]
[ "acl2023" ]
785e0003-44c5-5fe6-aa93-ed9ead2e8677
In the paper proposing Self Pretraining (SPT), experiments demonstrate that self-pretraining on downstream task data using standard denoising objectives significantly enhances the performance of long-sequence models. This finding indicates that vanilla Transformers can achieve or exceed the performance of state-of-the-art models, such as S4, on the Long Range Arena without requiring architectural modifications. Please answer the following question: Which method, Masked SPT or Causal SPT, demonstrates higher average performance on the Long Range Arena in the paper?
Your answer should be one of [Masked SPT, Causal SPT]
[]
[ "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors" ]
[ "iclr2024" ]
796c006c-7a40-5d3f-9d46-31d4ace0697f
What specific graph structures can r-loopy Weisfeiler-Leman (r-ℓWL) test count that the classical 1-WL cannot?
Your answer should be a Python string specifying the type of graph structures that r-ℓWL can count but 1-WL cannot.
[]
[ "Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning" ]
[ "neurips2024" ]
79c50d06-b156-51e7-9c0e-999cb921df93
According to this paper, what is the rate at which the asymptotic covariance of the optimization iterate errors decreases with respect to the self-repellence parameter $\alpha$?
Your answer should be a Python string containing the rate of decrease in big O notation, including the exact mathematical expression.
[ "Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks" ]
[]
[]
7a086cc1-ac1e-55c6-9317-2b2b46051c0b
In the paper that proposes the RSTC model, for the heavily imbalanced dataset GoogleNews-T, at which value of the proper hyper-parameter $\epsilon_2$ does the accuracy reach its peak?
Your answer should be a float rounded to 3 decimal places.
[]
[ "Robust Representation Learning with Reliable Pseudo-labels Generation via Self-Adaptive Optimal Transport for Short Text Clustering" ]
[ "acl2023" ]
7a2c0c3e-0da5-540f-8737-41085c0afe45
In ICLR 2024 Spotlight papers, a paper proposes a method called "SimNorm" for normalizing the latent representation. What is the core advantage of "SimNorm"?
Your answer should be plain text.
[]
[ "TD-MPC2: Scalable, Robust World Models for Continuous Control" ]
[ "iclr2024" ]
7ae5178b-1fd8-5dd9-a756-3d35216557b8
Which model is used as the language component in PaLI? Besides the sizes used in PaLI, how many sizes of this model are available in the source paper?
Your answer should be a single Python list like this: ["model_name", integer]. Note that the model name should use the abbreviation and not include the size information. For the integer, you should give the number of sizes not used in the paper of PaLI, not the total number of sizes.
[ "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ]
[ "mT5: A massively multilingual pre-trained text-to-text transformer" ]
[]
7b686de5-958c-5d2e-bab4-2308b04cae13
In "Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations", which paper inspired the authors to calculate the expert-level token contribution?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[ "Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations" ]
[ "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect" ]
[]
7b8419f8-dd8d-58ae-a6b5-dc0751797910
Among the datasets and benchmarks at NeurIPS 2024 evaluating large language models' capabilities, what is the name of the benchmark introduced to evaluate AI models' understanding of humorous contradictions in comics?
Your answer should be a Python string containing the name of the benchmark.
[]
[ "7786376a-6fb2-5bde-a1be-2b8ab3bf4162" ]
[ "neurips2024" ]
7b92e525-dc4a-514f-be25-448e5973b488
A paper proposes Bayesian red teaming (BRT) to reduce the model's potential harmful responses. In the experiment, do the authors maintain all the offensive test cases during the generation?
Your answer should be "Yes" or "No".
[]
[ "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" ]
[ "acl2023" ]
7c2602a5-e639-5eb2-9950-0636ba43d719
In the paper that establishes one of the first low-degree polynomial lower bounds for tree broadcasting below the Kesten-Stigum threshold in a non-product-measure setting, what is the critical condition in the proof of formula (7)?
Your answer should be a Python string indicating the critical condition of the formula.
[]
[ "Low Degree Hardness for Broadcasting on Trees" ]
[ "neurips2024" ]
7d07a700-2ec1-550d-9f48-e2b86e019057
Among the long papers at ACL 2023 focusing on emotion recognition in conversations, which modeling strategies (e.g., graph-based, representation learning, or fusion networks) are most commonly employed for improving fine-grained emotion recognition, and which approaches show the most notable performance gains on benchmarks like IEMOCAP or MELD?
Your answer should be a Python string describing the most common modeling strategies and which specific approaches (with their names) showed the best performance gains on IEMOCAP or MELD benchmarks.
[]
[ "DualGATs: Dual Graph Attention Networks for Emotion Recognition in Conversations", "A Cross-Modality Context Fusion and Semantic Refinement Network for Emotion Recognition in Conversation" ]
[ "acl2023" ]
7e3e9f76-0834-5dea-972d-fe08b308f2d4
According to the DP-OPT paper, while varying privacy parameters, which base model trades off the least?
Your answer should be a string, the name of the base model along with its size, as originally presented in the paper.
[]
[ "DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer" ]
[ "iclr2024" ]
7f5573c8-5564-5b3a-898c-b63007390006
A recent paper introduces a universal framework for dataset characterization that integrates 23 types of model-driven meta-information, encompassing static measures, training dynamics, model uncertainty, and pre-trained knowledge, into a unified multidimensional feature space. The paper tests 10 selected samples using different selection methods on the QNLI dataset, employing their log determinant as a proxy measure for set informativeness. Which selection method corresponds to the lowest log determinant?
Your answer should be one of: ['Ambig', 'Hard', 'CoreSet', 'InfoVerse (DPP)']
[]
[ "infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-information" ]
[ "acl2023" ]
7f7c3dd4-9ea4-5bed-84f1-06ecf15e7775
In the paper that proposed the R2I method, which dataset covered in the experiment is the newest? In that dataset, how many environments are there for each tag?
Your answer should be a Python list of 2 elements, the first is a string, the name of the dataset, the second is a Python dict, the key is the tag and the value is the number of environments.
[]
[ "Mastering Memory Tasks with World Models", "POPGym: Benchmarking Partially Observable Reinforcement Learning" ]
[ "iclr2024" ]
7fbc249e-9708-5514-9faf-b0cc82b0014b
Among the spotlight papers at NeurIPS 2024 researching contrastive learning, what is the key mechanism identified in the paper "Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering" that enhances the performance of Graph Contrastive Learning (GCL) methods?
Your answer should be a Python string describing the key mechanism and how it enhances performance of Graph Contrastive Learning methods.
[ "Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering" ]
[]
[]
7fcdeddf-e29d-5e81-9a58-16ba70576c61
Which paper published in ICLR 2024 proposes LLM-grounded Video Diffusion (LVD), the first training-free pipeline that leverages LLM-generated dynamic scene layouts for enhanced ability to generate videos from intricate text prompts?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "LLM-grounded Video Diffusion Models" ]
[ "iclr2024" ]
808b6754-65e3-55a5-93ee-fb88d7449af7
What is the main motivation for eliminating the homogeneous distractors in images or videos?
Your answer should be plain text.
[ "AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors" ]
[]
[]
80b37387-c0f7-5b90-9ba8-ba92bcb5ef9b
Look at Figure 1 in the paper. What two main components does the diagram show as inputs to the Language Model block? Based on the figure, what is the output of the model, and how is it evaluated?
Your answer should be a sentence answering the two questions.
[ "SWE-bench: Can Language Models Resolve Real-world Github Issues?" ]
[]
[]
82692ec9-8ee0-59c0-950d-460dc8fcc820
A recent paper introduces the first standardized benchmark for video-language continual learning, designed to evaluate models on three novel query-incremental tasks: Moment Query (MQ), Natural Language Query (NLQ), and Visual Query (VQ). Could you please specify which egocentric video-language dataset the proposed data collection is derived from?
Your answer should be a name of a dataset.
[ "ViLCo-Bench: VIdeo Language COntinual learning Benchmark" ]
[]
[]
830f1c03-4a94-5c36-863e-cbf523ec9785
In the experiment section of "InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews", where does the character data come from, and how many characters are there in each character source dataset?
Your answer should be a Python dictionary of one or more key-value pairs, where each key is the name of the source character dataset and each value is an integer indicating the number of characters in that dataset. e.g. {"dataset1": 3, "dataset2": 5}
[ "InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews" ]
[ "ChatHaruhi: Reviving Anime Character in Reality via Large Language Model", "RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models" ]
[]
83534573-34f6-5b77-8c88-c1267d7fdb3d
In the paper where a novel approach named SLAN is proposed, what is the role of \delta_{i}^{k} and P_k in formula (3)?
Your answer should be a Python string indicating the role of \delta_{i}^{k} and P_k.
[]
[ "Multi-Label Open Set Recognition" ]
[ "neurips2024" ]
86148388-e8c0-524c-bdbf-b2d156a88151
How many vision encoders did Cambrian-1 evaluate to study different visual representation choices?
Your answer should be an integer indicating the number of vision encoders evaluated.
[]
[ "Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs" ]
[ "neurips2024" ]
861cf9ab-7091-55dc-aef7-9daa027ddf84
In ICLR 2024 Spotlight papers, a paper proposes a new Adversarial Imitation Learning (AIL) algorithm, aiming to address the sample efficiency and scalability issues of existing AIL methods when dealing with off-policy data. How many different tasks are considered in Figure 2?
Your answer should be a Python integer.
[]
[ "Adversarial Imitation Learning via Boosting" ]
[ "iclr2024" ]
86d7d0cb-e4e8-5172-8b8f-af7b783f5236
According to the spotlight paper at ICLR 2024 that applies user-level differentially private algorithms in federated learning, what range of privacy loss $\varepsilon$ did the one-shot empirical estimation method report in the scenario where only the final trained model is released?
Your answer should be a Python list of two float values: [lower_bound, upper_bound], representing the range of privacy loss $\varepsilon$ values, both rounded to 4 decimal places.
[]
[ "One-shot Empirical Privacy Estimation for Federated Learning" ]
[ "iclr2024" ]
8730f946-0a1d-536a-a852-b8633be3458f
What is the main difference between formula (5) and (6) in the paper "Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale"?
Your answer should be a Python string.
[ "Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale" ]
[]
[]
8857443d-9d39-562c-a956-aafe363ebc15
There is a paper that introduces a curriculum learning framework that leverages prior knowledge about sample difficulty, measured through annotation entropy and loss, to discover effective, often non-monotonic curricula tailored to NLP models and datasets. Which university's researchers proposed this work?
Your answer should be a name of a university.
[]
[ "HuCurl: Human-induced Curriculum Discovery" ]
[ "acl2023" ]
891967d0-ebf1-59c1-8b92-ee454038df8b
How much more accurate is InfoBatch compared to the static pruning method EL2N-2 on CIFAR-100 with 50% prune ratio?
Your answer should be a number indicating the percentage difference in accuracy, rounded to 1 decimal place.
[]
[ "InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning" ]
[ "iclr2024" ]
8a1191ff-943e-5138-bf7f-13a2e7a3e492
For the second-best method shown in Figure 2, where can I find their GitHub repository to reproduce the results?
Please provide the GitHub repository URL for this method in the format: 'https://github.com/xxx'.
[ "WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning" ]
[ "DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning" ]
[]
8a3024f4-668e-5422-afbb-1000295ae11e
According to the first comprehensive study for LLM attribution at ACL 2023, what is the accuracy of the best attribution method in tracing fine-tuned models back to their pre-trained base models?
Your answer should be a Python string in the format 'X out of Y models', where X is the number of correctly attributed models and Y is the total number of models tested.
[]
[ "Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models" ]
[ "acl2023" ]
8a66a0bc-0a88-5981-a094-8e2c26b8da3a
In ICLR 2024 Poster papers, a paper first constructs a dynamics model from the expert demonstration, enforcing local Lipschitz continuity while skipping the discontinuous regions. How many pages does this paper have?
Your answer should be a Python integer.
[ "CCIL: Continuity-Based Data Augmentation for Corrective Imitation Learning" ]
[]
[]
8a875858-2688-5948-a4f9-3ec0cd39f550
In the paper that reveals that retrieved information helps retrieval-augmented language models' (RALMs) performance when it is relevant, for Llama-2-13B few-shot prompted on five QA tasks, how many kinds of benchmarks benefit from a strong retrieval?
Your answer should be an int.
[]
[ "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ]
[ "iclr2024" ]
8a9f72db-685e-5f91-9446-f0072cec953d
In ICLR 2024 Poster papers, which paper introduces a generalized attack framework that has the flexibility to model to what extent the adversary is able to control the agent, and allows the attacker to regulate the state distribution shift and produce stealthier adversarial policies?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL" ]
[ "iclr2024" ]
8ba79cd4-f36c-5812-a05d-fef0816ca504
In the paper that proposed CutSSL, how much does the CutSSL method outperform the state-of-the-art on the CIFAR-10 dataset when only a single sample from each class is provided and when in the large label-rate regime, respectively?
Your answer should be a Python list of two strings representing the outperformance as percentages, rounded to 1 decimal place, e.g. "10.0%". Recall that order matters.
[]
[ "Continuous Partitioning for Graph-Based Semi-Supervised Learning" ]
[ "neurips2024" ]
8bd87a19-3c5f-5e92-9af1-30d76c34e4c4
A paper introduces Induced Model Matching (IMM) to train a full-featured (often larger) model with the help of a very accurate (often small) predictive model using restricted features. In the comparison of the MDP trained without and with IMM incorporating the POMDP, which model shows higher stability during training: with IMM or no IMM?
Your answer should be chosen between "with IMM" and "no IMM".
[ "Induced Model Matching: Restricted Models Help Train Full-Featured Models" ]
[]
[]
8c147d46-63fc-5c50-beae-64bf4cc920ec
A paper introduces FlexLoRA, a simple yet effective aggregation scheme for Large Language Models' fine-tuning. In their experiment of FlexLoRA's performance in a controlled environment using homogeneous LoRA ranks, does FlexLoRA's aggregation negatively impact model performance?
Your answer should be "Yes" or "No".
[ "Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources" ]
[]
[]
8cc9f4de-9e7a-55a9-9cb6-b588251ee91d
Which method aims to recover camera poses and scene geometry from a large set of unordered or ordered images, in particular, by optimizing photometric errors?
Your answer should be a string, the name of the method
[ "Cameras as Rays: Pose Estimation via Ray Diffusion" ]
[]
[]
8d291d6e-f6ad-5d4c-984f-8be126ce33f5
There is a recent paper introducing a large-scale, high-resolution video-text dataset annotated with detailed, script-like captions averaging 145 words per clip, over 10 times longer than existing datasets. It uniquely captures not only scene content but also camera operations (e.g., shot types and movements). Could you please tell me which category within this dataset has a video count that exceeds 10% of the total?
Your answer should be a video category.
[]
[ "Vript: A Video Is Worth Thousands of Words" ]
[ "neurips2024" ]
8d4fdb6b-6638-5e61-9bf7-a62195198c24
A paper extends the notion of a risk controlling prediction set (RCPS) to the sequential setting. In their evaluation of the methods on the ImageNet dataset, among four methods, one method got the highest average rate of safety violations; what is it?
Your answer should be a string, the name of a method corresponding to its label in the figure.
[]
[ "Active, anytime-valid risk controlling prediction sets" ]
[ "neurips2024" ]
8ebbfa63-51d8-5193-9004-40215508e42a
Figure 1 illustrates the cosine value density plots for different noise levels and different numbers of canary repetitions. How do these density plots illustrate the increased privacy-preserving effect of the canary as the noise increases?
Your answer should be a sentence.
[ "One-shot Empirical Privacy Estimation for Federated Learning" ]
[]
[]
8eee2499-a837-5f23-8c91-c820ec6e4d55
What is the affiliation of the first author of the paper?
Your answer should be a Python string.
[ "AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors" ]
[]
[]
8f1a9a7d-041b-5104-8806-b67dde1453e8
What is the current challenge in estimating camera poses?
Your answer should be a string, describing the condition
[ "Cameras as Rays: Pose Estimation via Ray Diffusion" ]
[]
[]
8f64d96a-1439-57ec-a78e-53b96361cd4e
A recent paper introduces GEEP (GEnder Equality Prompt), a novel debiasing method that mitigates gender bias in large pre-trained language models such as RoBERTa without degrading their performance on downstream tasks. Could you please tell me who the first author of this paper is?
Your answer should be a name of a person.
[]
[ "Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting" ]
[ "acl2023" ]
8ff213fb-137b-5c5b-9b05-86c797c01403
In the paper that proposes a practical rephrasing-based method to estimate uncertainty in closed-source LLMs, combining simple memorizable rules with a theoretical framework for calibrated confidence scores, which model, dataset, and rephrasing method are used in Figure 3c to validate the logistic distribution assumption?
Your answer should be a Python list of three strings, the first is the model name, the second is the dataset name and the third is the rephrasing method.
[]
[ "Just rephrase it! Uncertainty estimation in closed-source language models via multiple rephrased queries" ]
[ "neurips2024" ]
9179b2b5-9f88-5999-a900-410b2a7a2e96
In Figure 2, what three filtering stages are depicted for constructing SWE-bench tasks? What specific criteria must a PR meet to pass the Execution Filter stage?
Your answer should be a sentence answering the two questions.
[ "SWE-bench: Can Language Models Resolve Real-world Github Issues?" ]
[]
[]
91c809b3-1e3a-5f12-bf7b-47d90898f6ea
A recent paper presents the first comprehensive benchmark for disentangling aleatoric and epistemic uncertainty in deep learning. It evaluates 19 uncertainty quantification methods across 13 tasks on ImageNet and CIFAR-10, revealing that existing decomposition formulas fail to produce truly disentangled estimators. Could you please provide the email address of the first author of this paper?
Your answer should be a mail address.
[ "Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks" ]
[]
[]
93fac128-b7d3-53c9-b013-e11d9a9a0693
A paper proposes SummAttacker to efficiently generate adversarial samples based on language models. The performance of different models on the Gigaword test set varies when attacked by SummAttacker with different candidate numbers K. Generally, does a larger K have a positive or negative impact on the models?
Your answer should be chosen between "positive" and "negative".
[ "Improving the Robustness of Summarization Systems with Dual Augmentation" ]
[]
[]
94d1244b-9b39-5c21-9c06-157be897e605
I remember there is a paper that develops a language, perhaps MathDL? How does it measure whether the target model generates more concise solutions or not?
Your answer should be a math formula in LaTeX format WITHOUT ANY EXPLANATION.
[]
[ "MathDSL: A Domain-Specific Language for Concise Mathematical Solutions Via Program Synthesis" ]
[ "neurips2024" ]
95a672e8-3cda-5a6b-87a9-e95d0d518338
Among the papers at NeurIPS 2024 researching multi-agent systems, what is the maximum accuracy improvement achieved by MDAgents when incorporating moderator review and external medical knowledge in group collaboration?
Your answer should be a Python float value representing the percentage improvement in accuracy, between 0 and 100, rounded to 1 decimal place.
[]
[ "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making" ]
[ "neurips2024" ]
9611cd50-05a4-58e8-a1cf-25664758d1f8
Which paper at ICLR 2024 proposed a framework for efficient fine-tuning of bidirectional interleaved vision-language models for referring image segmentation, which achieved an average score of 66.5 on three RefCOCO-related benchmarks? In this paper, how many images are there in the group of images related to a kid running?
Your answer should be a list of two strings, the first being the paper title and the second being a number.
[]
[ "BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation" ]
[ "iclr2024" ]
96255ea0-3b77-59e9-8f68-baadb43fffd1
In ICLR 2024 Poster papers, which paper proposes GRAD, a game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations" ]
[ "iclr2024" ]
9642c1b7-0cca-5bd4-8fd8-6784c5433b9c
In ICLR 2024 Poster papers, a paper mainly studies how to more effectively utilize data augmentation techniques to improve sample efficiency and generalization ability in image-based deep reinforcement learning (DRL). Tell me the number of authors in this paper.
Your answer should be a Python integer.
[]
[ "Revisiting Data Augmentation in Deep Reinforcement Learning" ]
[ "iclr2024" ]
96620845-ee4c-5dc2-83bb-50ff72522bcb
Can you recommend me a paper published in ICLR 2024 that proposes a framework that synergistically integrates compiled neural networks (CoNNs) into the standard transformer architecture?
Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks" ]
[ "iclr2024" ]
979a11a6-63e0-5774-9714-3e6b3750c7e2
Among the long papers at ACL 2023 researching text classification, how many benchmark datasets were used in the experiments to validate the effectiveness of GetMTL?
Your answer should be a Python integer representing the number of benchmark datasets used in the experiments.
[]
[ "Improving Gradient Trade-offs between Tasks in Multi-task Text Classification" ]
[ "acl2023" ]
97d91a12-270b-5030-9c04-af9ddf90a0cc
What percentage of experts can be pruned from the NLLB-200 model without further finetuning and with negligible loss in translation quality?
Your answer should be a Python float value representing the percentage of experts that can be pruned, between 0 and 100, rounded to 1 decimal place.
[]
[ "Memory-efficient NLLB-200: Language-specific Expert Pruning of a Massively Multilingual Machine Translation Model" ]
[ "acl2023" ]
988cad30-22af-567f-b03d-57e929c59e30
A recent paper introduces the first document-level relation extraction (RE) dataset in the historical domain, which includes bilingual annotations in both Korean and Hanja. Constructed from the Yeonhaengnok travel records of the Joseon dynasty, the proposed dataset provides annotated entities, relations, and supporting evidence across variable-length textual units. How many documents does this dataset contain?
Your answer should be a Python int
[]
[ "HistRED: A Historical Document-Level Relation Extraction Dataset" ]
[ "acl2023" ]
998e69c0-f3a3-532d-8175-ce6e562f4b2a
In the paper proposing a unified framework to model semantic segmentation and semantic image synthesis as a pair of reverse problems, why can the ODE model model these two problems simultaneously?
Your answer should be a Python string.
[ "SemFlow: Binding Semantic Segmentation and Image Synthesis via Rectified Flow" ]
[]
[]
9a875755-338d-5c6d-a86a-b8e5be3f7742
What performance improvements does LoftQ achieve over QLoRA in 2-bit quantization settings on the LLaMA-2-13B model for the GSM8K dataset?
Your answer should be a sentence, stating the accuracy achieved by LoftQ and the comparison with QLoRA.
[]
[ "LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models" ]
[ "iclr2024" ]
9ab6d86c-f98a-5f42-84e6-69dc0fedf49b
In the paper that reveals that LLMs develop a two-phase abstraction process during training, and gives initial evidence that their brain-like encoding ability stems from compositional learning rather than next-word prediction, why is the choice of k, which controls the neighborhood size during the nonlinear ID estimation, necessary?
Your answer should be a Python string indicating the reason why we should have a scale analysis on k.
[]
[ "Evidence from fMRI Supports a Two-Phase Abstraction Process in Language Models" ]
[ "neurips2024" ]
9ab746c3-4d32-5999-b209-783689738b35
What is the main purpose of Figure 1 and how does it demonstrate the role specialization in MetaGPT?
Your answer should be a sentence answering the two questions.
[ "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework" ]
[]
[]
9b114135-e411-5d35-94bb-815be3d51f41
A paper introduces ROBUSTALPACAEVAL, a new benchmark, to address the sensitivity of large language models (LLMs) to the phrasing of prompts. In their examination of the model-agnostic attributes of the worst prompts, does the Llama family or the Gemma family get the higher overlap rate of the worst-k prompts?
Your answer should be chosen between "Llama" and "Gemma".
[]
[ "On the Worst Prompt Performance of Large Language Models" ]
[ "neurips2024" ]
9b5168ec-96c6-54ca-afba-0b40bbbb8edc
In the paper on offline Q-learning, which state-of-the-art return-conditioned supervised method was mentioned? In which conference was this method published?
Your answer should be a single Python list like ["paper_title", "conference_name_and_year"]; the paper title should be the full title of the state-of-the-art return-conditioned supervised method, and conference_name_and_year should be the abbreviation of the conference and the year, like "ACL2021". Note that arXiv should not be included as the conference name.
[ "Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes" ]
[ "Multi-Game Decision Transformers" ]
[]
9be4e9dc-b3b4-5082-8a85-244be8d32283
The paper titled "Automatic Camera Pose Estimation by Key-Point Matching of Reference Objects" was conducted by researchers from which country?
Your answer should be a name of a country
[ "Automatic Camera Pose Estimation by Key-Point Matching of Reference Objects" ]
[]
[]
9be80cee-f524-5c63-9098-b9b4cd4a4921
In the paper that introduces POLICY-LEARN, a new approach that learns how to select subgraphs in an iterative manner, what is formula 3 used for?
Your answer should be a brief summary of the formula's purpose and it should begin with 'To ...'.
[]
[ "Efficient Subgraph GNNs by Learning Effective Selection Policies" ]
[ "iclr2024" ]
9cc1e97e-16e4-5326-a985-76a6a49380f2
A paper introduces a novel training algorithm, Learn-To-be-Efficient (LTE), to achieve a better trade-off between sparsity and performance. In the 5-shot MMLU accuracy comparison, which model performs the worst across all sparsity levels?
Your answer should be chosen among "Deja Vu", "R-Llama" and "LTE".
[]
[ "Learn To be Efficient: Build Structured Sparsity in Large Language Models" ]
[ "neurips2024" ]
9cdde36c-2819-58f4-8077-129932dc6d20
What is one of the common errors that appears across the models?
Your answer should be a string which mentions a type of error.
[ "LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low Resource and Extinct Languages" ]
[]
[]
9ceda169-3d70-5354-8ce1-70d7d09debd1
How does Code4Struct perform in zero-resource event types when utilizing 10-shot training data from a sibling event type?
Your answer should be an integer between 0 and 100 specifying the absolute F1 improvement over the zero-shot baseline.
[]
[ "Code4Struct: Code Generation for Few-Shot Event Structure Prediction" ]
[ "acl2023" ]
9d1f4f14-3fc0-5649-a566-373eb9690d42
In the paper that proposes a graph rewiring framework that establishes express connections between distant nodes for non-local message passing, overcoming the over-smoothing problem without requiring deep architectures, list the names of the nine graph datasets on which the experiments are carried out.
Your answer should be a Python list of strings indicating the names of the nine datasets.
[]
[ "Non-local Exchange: Introduce Non-locality via Graph Re-wiring to Graph Neural Networks" ]
[ "neurips2024" ]
9db91baa-9382-5fe8-9ea0-7c504f98bd97
What overall accuracy does GPT-4V achieve on the MathVista benchmark?
Your answer should be a Python float value representing the percentage accuracy, between 0 and 100, rounded to 1 decimal place.
[]
[ "MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts" ]
[ "iclr2024" ]
9e2ccce0-6410-5597-afd3-ab72ff7c6310
According to this paper, among Open-weight LLMs, which model has the best memory sub-score?
Your answer should be a string, the name of a model.
[ "AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents" ]
[]
[]
9e393d26-7acd-5e99-a673-d29693b11a0f
A recent paper introduces the first large-scale, open-source, high-fidelity 3D CFD dataset based on 355 geometrical variations of the Windsor car body. Each case is simulated using GPU-native Wall-Modeled Large-Eddy Simulations (WMLES) with over 280 million cells, capturing detailed aerodynamic flow features relevant to real road vehicles. Under what license is this dataset available as open source?
Your answer should be a license name and must adhere precisely to the format presented in the paper without version information.
[]
[ "WindsorML: High-Fidelity Computational Fluid Dynamics Dataset For Automotive Aerodynamics" ]
[ "neurips2024" ]
9ecb7607-8057-5e6c-8019-22d0b252d0cf
What is the main contribution of the proposed method, called Averaged-DQN, in this paper?
Your answer should be plain text
[ "Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning" ]
[]
[]
9f66741c-f110-56ea-8b20-ee1d75f7af0e
What is the training objective (loss function) for the reverse diffusion process in the paper?
Your answer should be a formula
[ "Lipschitz Singularities in Diffusion Models" ]
[]
[]
9fc4f699-2ea4-546f-908f-a0aabe58cada
What is the mathematical expression for the first loss function proposed to measure approximate Nash equilibria?
Your answer should be a formula.
[]
[ "Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization" ]
[ "iclr2024" ]
9ff9a8a5-f7a8-55cb-8640-2aa59b030d9e
A paper proposes a tractable surrogate model of choice (CRCS) that provides a better basis for preference learning. Can this model work on choice sets of variable size?
Your answer should be "Yes" or "No".
[]
[ "Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice" ]
[ "neurips2024" ]
a07a5196-cd2b-53e9-885c-3f59ea6b1ac2
A paper proposes a two-stage Differentially Private (DP) generation method; in the second step, it generates utterances based on the parses. Does the distribution function p_priv(x) describe private user utterances with high accuracy compared with selecting them by active learning?
Your answer should be "Yes" or "No".
[ "Privacy-Preserving Domain Adaptation of Semantic Parsers" ]
[]
[]
a16255a2-cf19-5d82-8acc-244fe50b6581
In the paper that proposes SGRLD, the authors compare their SGRLD method to other MCMC methods. Which optimizer does the second method extend to the SGLD setting? And in the paper that proposes this optimizer, which two methods eventually converge considerably faster in convolutional neural network training cost?
Your answer should be a python list of two elements: the first is a single word, the name of the optimizer, and the second is a python list of two words, the names of the two methods as given in the paper.
[]
[ "Stochastic Gradient MCMC for Gaussian Process Inference on Massive Geostatistical Data", "Adam: A Method for Stochastic Optimization" ]
[ "neurips2024" ]
a16a7b27-48c1-5544-8482-e741487c4129
What is the simplification to S4 called Diagonal Linear RNN, according to the paper that improves the best reported results of SSMs on the PathX-256 task by 20 absolute points?
Your answer should be a list of formulas
[]
[ "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors" ]
[ "iclr2024" ]
a1c19569-3326-5313-a7ee-87e1bb3afe2d
In the paper that proposes MHCD_IFF, list the three models in Figure 3 that have the best performance.
Your answer should be a python list of three strings, each string being the name of a model.
[]
[ "Multi-hypotheses Conditioned Point Cloud Diffusion for 3D Human Reconstruction from Occluded Images" ]
[ "neurips2024" ]
a1cc17f0-fd10-5273-abc3-f291526bf741
In the test of Qwen2-72B-Instruct, Qwen2.5-Turbo, Qwen2-0.5B-Instruct, and Qwen2-57B-A14B-Instruct based on the context length of a given document and the ability of document depth retrieval, what is the difference in the retrieved context length?
Your answer should be a python list of four strings, explaining the retrieved context length of the four models respectively, e.g. "model name: range from 0 to roughly 20k tokens".
[ "Qwen2 Technical Report", "Qwen2.5 Technical Report" ]
[]
[]
a1fc3902-bbcd-5151-ac3b-cfd649bed022
In ICLR 2024 Poster papers, a paper attempts to address the challenges faced when learning from pixel-level inputs in multi-object manipulation tasks. Tell me the affiliation of the first author of this paper.
Your answer should be a Python string.
[ "Entity-Centric Reinforcement Learning for Object Manipulation from Pixels" ]
[]
[]
a3aaf5a0-c018-5c1e-9c35-ef4245b4acb4
A recent paper introduces a scientist-curated benchmark for evaluating language models on real-world scientific coding problems across 16 natural science subfields. Comprising 80 main problems decomposed into 338 subproblems, each annotated and validated by domain experts, the benchmark assesses models' abilities in knowledge recall, reasoning, and code synthesis. Please retrieve the paper and provide me with the link for data, code, and the leaderboard corresponding to this work.
Your answer should be a link only without any additional prefixes or suffixes.
[]
[ "SciCode: A Research Coding Benchmark Curated by Scientists" ]
[ "neurips2024" ]
a433afb8-eaf9-5c66-aee6-ea02f50a000e
What is the sample complexity bound for achieving an $\varepsilon$-optimal estimator in the non-parametric distributional TD learning (NTD) method?
Your answer should be a Python string containing the mathematical expression for the sample complexity bound in LaTeX-like format.
[]
[ "Statistical Efficiency of Distributional Temporal Difference Learning" ]
[ "neurips2024" ]
a4ea769e-c0fa-5b81-8f3e-1c1359672055
There is a paper that introduces a novel constrained decoding algorithm called Prefix-Suffix Guided Decoding (PSGD) for the Translation Suggestion (TS) task in interactive machine translation. Unlike prior methods that require retraining or generate the full sequence, PSGD decodes only the selected incorrect span while maximizing the probability of the entire sentence, conditioned on given prefix and suffix constraints. Question: What is the average BLEU score of PSGD in the experiments conducted on the WMT22-TS test sets?
Your answer should be a Python float rounded to 2 decimal places.
[]
[ "Easy Guided Decoding in Providing Suggestions for Interactive Machine Translation" ]
[ "acl2023" ]
a51711dd-e4e8-52e2-bb9d-d78794ec5930
In ICLR 2024 Poster papers, which paper addresses the curse of dimensionality by learning the inherent structure of action-wise similar MDP to appropriately balance the performance degradation versus sample/computational complexity?
Your answer should be the title of the paper WITHOUT ANY EXPLANATION.
[]
[ "Achieving Sample and Computational Efficient Reinforcement Learning by Action Space Reduction via Grouping" ]
[ "iclr2024" ]
a63ff94c-8cc1-5a5b-a47c-42bac09f4700
What are the 4 main findings in the spotlight paper at NeurIPS 2024 that investigates the relationship between self-recognition and self-preference for LLMs?
Your answer should be a Python string describing the 4 main findings of the paper.
[]
[ "LLM Evaluators Recognize and Favor Their Own Generations" ]
[ "neurips2024" ]
a641a2d3-f953-5481-8362-5b814ab33e72
Recent advancements use knowledge transfer techniques like Score Distillation Sampling (SDS) to overcome the limited availability of comprehensive annotated training data. A paper studies this method in depth. In the figure that shows the comparison of using normal-SDS jointly with RGB-SDS, two methods generate a panda with armor on its body; give one of them.
Your answer should be a string giving the name of one method.
[]
[ "Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping" ]
[ "iclr2024" ]
a6a526c0-a063-52d7-8afd-a83a4baafcc8
In ICLR 2024 Poster papers, a paper attempts to solve the problem of how to enhance the arithmetic reasoning capabilities of large language models (LLMs) through zero-shot prompt optimization. Tell me the codebase URL of the paper.
Your answer should be a Python string.
[]
[ "Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL" ]
[ "iclr2024" ]
a72a94c2-00be-5f74-8c57-2dc88eaeaea9
In the paper that introduces a novel method that employs dynamic and directed weight adjustment techniques to modulate the synthesis process during dataset distillation, by how many percentage points does the DWA method outperform the SRe2L baseline in accuracy on the Tiny-ImageNet, ImageNet-1K, and CIFAR100 datasets, respectively?
Your answer should be a Python list of three float numbers, all rounded to 1 decimal place.
[]
[ "Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment" ]
[ "neurips2024" ]
a77ed2c5-64a9-5771-b2b4-8cc644472720
In ICLR 2024 Poster papers, a paper attempts to propose a meta-reinforcement learning algorithm that is improved in multiple aspects, especially in terms of sample efficiency, generalization ability, and handling of high-dimensional task distributions, by combining the latest model-based RL techniques and meta-RL techniques. What is the formula of "General Regret Bounds"?
Your answer should be the formula in LaTeX format.
[]
[ "MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning" ]
[ "iclr2024" ]
a7dcfbd0-c65d-540f-8e5c-46204d6b9d4c
Among the Model-based Reinforcement Learning papers in ICLR 2024, which one proposes the model called "Skipper"? Tell me what $\pi$ means in Figure 2.
Your answer should be a python string about the meaning of the math expression in the paper. You'd better use the names as they are referred to in the paper.
[]
[ "Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning" ]
[ "neurips2024" ]
a81c9bfa-0fc6-5522-b0bb-343981682cd4
In ICLR 2024 Poster papers, a paper proposes a novel framework named PARL (Policy Alignment in Reinforcement Learning), aiming to address the policy alignment problem in Reinforcement Learning (RL). What is the formula of the standard finite horizon policy optimization problem in this paper?
Your answer should be a Python string.
[]
[ "PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback" ]
[ "iclr2024" ]
a82d2e00-6e65-59e2-ba90-9315c62caa7c
Compared to the baseline called "DreamerV3", how much improvement does "Hybrid RSSM" achieve on "Lift Cube" on average?
Your answer should be a Python float number rounded to 1 decimal place. e.g. 20.3
[ "Learning Latent Dynamic Robust Representations for World Models" ]
[]
[]
a86de892-c1e5-5271-b36a-ba531a214c64
Among the papers in ICLR 2024, which paper proposes the concept called "Policy Rehearsing"? Explain the concept of "Policy Rehearsing" in the paper.
Your answer should be plain text.
[ "Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning" ]
[]
[]
a977af97-4915-586c-bdf5-ab7a46479951
In the paper that proposed MDAgents, for image+text queries, how much higher is the accuracy of the Adaptive setting, compared to the High setting?
Your answer should be a float rounded to 1 decimal place.
[]
[ "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making" ]
[ "neurips2024" ]
a9eea92a-c4a0-54f5-967d-854d7ae8bf32
What visual elements in Figure 1 distinguish the three fine-tuning scenarios (a, b, c)? How does the initial harmfulness score differ between subfigures (a), (b), and (c)?
Your answer should be a sentence answering the two questions.
[ "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!" ]
[]
[]