| uuid (string) | question (string) | answer_format (string) | anchor_pdf (list) | reference_pdf (list) | conference (list) |
|---|---|---|---|---|---|
e2e9d734-5433-56b6-9aeb-d4ec4c99ae65 | There is a paper that introduces a unified and effective sequence tagging framework for relational structure extraction tasks, such as event argument extraction, relation extraction, and task-oriented semantic parsing. By appending verbalized representations of conditions and relationships to the input text, a method termed priming, the framework leverages pre-trained language models to generate condition- and relation-aware contextual embeddings. The main experiments in this work were conducted on the MTOP dataset. Specifically, how many languages from MTOP were used in the study? | Your answer should be a Python int | [] | [
"TAGPRIME: A Unified Framework for Relational Structure Extraction"
] | [
"acl2023"
] |
e4672532-ce79-54e9-a1b8-c9d32a32d5e3 | Which paper did Xueyi Liu and Li Yi publish in the ICLR 2024 poster volume? | Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION. | [] | [
"GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion"
] | [
"iclr2024"
] |
e506e04e-5550-5773-9242-6051d65e3ed8 | In the paper that proposes a dynamic agentic framework with task-graph decomposition, introduces domain-specific evaluation metrics, and provides a specialized dataset for analyzing LLM-based autonomous agents, what are the core metrics the authors use to evaluate the accuracy and completeness of tool identification? | Your answer should be a Python list indicating the names of the metrics. | [
"Advancing Agentic Systems: Dynamic Task Decomposition, Tool Integration and Evaluation using Novel Metrics and Dataset"
] | [] | [] |
e523227a-8276-5bd7-9bda-419df56f782d | What percentage of the MGToolBench dataset consists of statement-level and category-level tasks? | Your answer should be an integer between 0 and 100, without a decimal point | [
"ToolPlanner: A Tool Augmented LLM for Multi Granularity Instructions with Path Planning and Feedback"
] | [] | [] |
e5e80c20-0e6c-5e57-ad52-6ee1ca461e1f | In the paper that Seohong Park (the first author) published in the ICLR 2024 oral volume, which method performs second best in the quantitative comparison with unsupervised skill discovery methods? | Your answer MUST be ONE single string of ONE word, the method's name in abbreviation, without explanation. | [] | [
"METRA: Scalable Unsupervised RL with Metric-Aware Abstraction"
] | [
"iclr2024"
] |
e657a6d9-a59b-5c6a-ba56-a4d32b0085e9 | Describe the attention pattern differences in Layer 1 before vs. after the abrupt transition. How does Layer 2's target-specific attention (bottom row) explain the network's post-transition IC accuracy? | Your answer should be a sentence answering the two questions. | [] | [
"The mechanistic basis of data dependence and abrupt learning in an in-context classification task"
] | [
"iclr2024"
] |
e657e45a-358b-5e63-8a99-877f436e7859 | In the paper that releases the Content Behavior Corpus, according to Figure 2, LCBM lags behind on only one perspective. In that perspective, which model has the best performance? | Your answer should be a Python string, the model name as shown in the corresponding figure. | [] | [
"Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior"
] | [
"iclr2024"
] |
e7a8cd80-c71d-5af1-bb2d-2852e904f665 | Can you recommend me a paper published in ICLR 2024 that introduces the Dynamic Signal Distribution (DSD) classification task? | Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs"
] | [
"iclr2024"
] |
e853552b-4597-5a04-9b7c-7f61fbe5b799 | Can you recommend me a paper published in ICLR 2024 that develops the Information-Theoretic Hierarchical Perception (ITHP) model, which utilizes the concept of the information bottleneck? | Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning"
] | [
"iclr2024"
] |
e8581429-a888-5930-b254-adfb225709d2 | Is there a paper published in ICLR 2024 which introduces an end-to-end interpretability framework designed to quantify context usage in language models' generations? | Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Quantifying the Plausibility of Context Reliance in Neural Machine Translation"
] | [
"iclr2024"
] |
e86f282d-1a30-5f7a-8ae8-4aeaf896e254 | What is the core advantage of "Highway policy"? | Your answer should be plain text. | [
"Dr. Strategy: Model-Based Generalist Agents with Strategic Dreaming"
] | [] | [] |
e8c3c5e7-2853-525f-b8f0-9d7395977500 | In NeurIPS 2024 Poster papers, a paper proposes a method called "BECAUSE". In this paper, what is the formula of Theorem 1 (Performance guarantee)? | Your answer should be the formula in LaTeX format. | [] | [
"BECAUSE: Bilinear Causal Representation for Generalizable Offline Model-based Reinforcement Learning"
] | [
"neurips2024"
] |
e992ec75-07e7-5023-9e97-fee5d898eeb7 | What pruning metric is used by Wanda? | Your answer should be the passage that describes the pruning metric used by the paper. | [] | [
"A Simple and Effective Pruning Approach for Large Language Models"
] | [
"iclr2024"
] |
e9bb0c10-8951-5d47-94af-cf65d938607b | In ICLR 2024 Poster papers, a paper proposes a framework named LaMo (Language Models for Motion Control). Tell me the affiliation of the first author. | Your answer should be a Python string. | [
"Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning"
] | [] | [] |
ebfb9b1e-dbe5-5130-8818-fd6158ec1015 | For the MiniARC task examples in Figure 1, what is listed as a Bad Rule? How does this Bad Rule differ from the Good Rule in terms of output when applied to instances? | Your answer should be a sentence answering the two questions. | [
"Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement"
] | [] | [] |
eca8edf3-ca78-583e-87c3-31158780d7fa | A recent paper introduces the SENSORIUM 2023 Benchmark Competition, a large-scale standardized framework for evaluating predictive models of mouse primary visual cortex responses to dynamic visual stimuli. Please retrieve the paper and provide me with the link to the corresponding GitHub code repository for this work. | Your answer should be a link only without any additional prefixes or suffixes. | [
"Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos"
] | [] | [] |
ecea33d3-1013-5514-bfd6-a1c34401a85f | How many papers are published as an oral in ICLR 2024 Workshop on Large Language Model (LLM) Agents? | Your answer should be a single Python integer. | [] | [
"AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning",
"Large Language Models can Strategically Deceive their Users when Put Under Pressure",
"Executable Code Actions Elicit Better LLM Agents",
"AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation",
"Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow",
"Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View"
] | [
"iclr2024"
] |
ecf271ed-7f01-500d-8085-71e59afecf07 | A recent paper analyzes intermediate training checkpoints of OPT language models of varying sizes (ranging from 125M to 175B parameters) across token prediction, sequence-level generation, and downstream tasks. This work is a collaboration between a certain university and Meta AI. Please provide the name of this university. | Your answer should be the name of the university | [
"Training Trajectories of Language Models Across Scales"
] | [] | [] |
edc3a84c-2383-52bf-a6a7-4325c7baaede | In this paper, which example is taken to explain their model's limitation of low model-human similarity? | Your answer should be a string, a phrase which indicates the example and how it works. | [
"DevBench: A multimodal developmental benchmark for language learning"
] | [] | [] |
f07343cd-1974-57a8-bfc9-3bfffe2f5fa7 | According to the oral paper at ICLR 2024 that adopts Pseudo-Huber losses to replace LPIPS, what Frechet Inception Distance (FID) scores does it achieve on CIFAR-10 and ImageNet 64x64? | Your answer should be a Python list of two float values rounded to 2 decimal places, representing the FID scores achieved on CIFAR-10 and ImageNet 64x64 respectively. | [] | [
"Improved Techniques for Training Consistency Models"
] | [
"iclr2024"
] |
f0f5eaf9-df43-58fb-8a87-3deca1ba6463 | In the paper that proposes SOFO, on which website can I find the authors' own flexible implementation of batched JVPs based on OCaml-Torch? | Your answer should be a Python string: the website URL starting with "https://", as given in the paper. | [] | [
"Second-order forward-mode optimization of recurrent neural networks for neuroscience"
] | [
"neurips2024"
] |
f17087c9-23d6-54fd-8a5f-2bc1792cea6b | In the paper that proposes dSTAR, which method performs best on CIFAR10 under the Empire and Little attacks, respectively? | Your answer should be a Python list of two strings giving the names of the methods. | [] | [
"dSTAR: Straggler Tolerant and Byzantine Resilient Distributed SGD"
] | [
"neurips2024"
] |
f1faadd6-0e1c-5d3a-a43e-ed4ce0a2dc6a | In this paper, how many different tasks are illustrated in Figure 2? | Your answer should be a Python integer. | [
"Privileged Sensing Scaffolds Reinforcement Learning"
] | [] | [] |
f308c2ff-b5a4-546a-ac5c-e03404bfe0fb | What are the key innovations of DiffMatch? | Your answer should be a string, a sentence. | [] | [
"Diffusion Model for Dense Matching"
] | [
"iclr2024"
] |
f33e9298-4f06-559a-a78a-d9ccecda69f4 | A paper derives Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions. In the figure showing ablations on CPL's hyperparameters on Drawer Open from State, at which training step does the model's success rate increase sharply because of an event? | Your answer should be an int giving the number of the training step. | [] | [
"Contrastive Preference Learning: Learning from Human Feedback without Reinforcement Learning"
] | [
"iclr2024"
] |
f49efa82-1ee1-5981-b683-da0dedf8f1e4 | In the paper that uniquely demonstrates that all current private adaptation methods for closed LLMs fundamentally leak data, while open LLMs with local training provide superior privacy, performance, and cost efficiency, on which dataset does PromptPATE have the highest Top-1 accuracy in Figure 2? | Your answer should be a Python string indicating the name of the dataset. | [] | [
"Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives"
] | [
"neurips2024"
] |
f4b5990f-2522-5e0a-a383-57e615c01519 | There is a paper that introduces an interactive formal verification environment that leverages large language models (LLMs) by translating code into Isabelle for theorem proving. It also proposes a large-scale dataset called FVELER, which includes 758 Isabelle theories, 29,125 lemmas, and 200,646 proof steps, extracted from seL4 verification. Could you please tell me how many proof steps are included in the test set of the FVELER dataset? | Your answer should be a Python int. | [] | [
"FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving"
] | [
"neurips2024"
] |
f5e0caae-457c-5b6c-bf82-3f251a68f6b9 | In ICLR 2024 Poster papers, a paper attempts to solve the problem of how to enhance the arithmetic reasoning capabilities of large language models (LLMs) through zero-shot prompt optimization. What is the formula of the Cross-Entropy loss given the demonstration data collection process? | Your answer should be the formula in LaTeX format. | [] | [
"Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL"
] | [
"iclr2024"
] |
f6be1126-599e-5913-bbd3-b9a2473d5ec1 | A recent paper introduces the first million-scale multi-modal, multi-turn, open-domain dialogue dataset, containing over 1.08 million dialogues and 1.53 million associated images collected from real-world social media conversations. In the paper, the authors conduct a detailed statistical comparison between their released dataset and another previous multimodal dialogue dataset. Which dataset has a greater "Average Turns per Dialogue"? | Your answer should be the name of the dataset. | [] | [
"MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation"
] | [
"acl2023"
] |
f7296f13-3f99-5cd5-a32a-c2906335396f | A recent paper introduces TAG (Table-to-Graph generation), a novel end-to-end framework for joint document-level entity and relation extraction. TAG unifies coreference resolution and relation extraction using a coarse-to-fine table filling strategy and dynamically constructs latent graphs that encode semantic, relational, and syntactic dependencies. This work is a collaboration between a certain university and TopGraph.AI. Please provide the name of this university. | Your answer should be the name of the university. | [
"A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction"
] | [] | [] |
f7961583-fc21-5c0f-9632-bf9b7d266189 | For the BF-CNN architecture, how many times does the number of parameters grow when the image resolution increases from 40x40 to 80x80? | Your answer should be a float, rounded to 1 decimal place | [
"Generalization in diffusion models arises from geometry-adaptive harmonic representations"
] | [] | [] |
f7a7b8f7-fc8c-5ea5-af89-7710c9276c6f | Tell me the formula of the "harmonious loss" regularization term. | Your answer should be the formula in LaTeX format. | [
"HarmonyDream: Task Harmonization Inside World Models"
] | [] | [] |
f8216c4f-3b92-54ed-bfc8-77d613736eda | In ICLR 2024 Poster papers, a paper attempts to conduct long-range dynamics modeling in an interactive environment. What is the affiliation of the corresponding author? | Your answer should be a Python string. | [] | [
"Efficient Dynamics Modeling in Interactive Environments with Koopman Theory"
] | [
"iclr2024"
] |
fa04d3b0-411d-5e38-91df-adc87e113ba7 | In ICLR 2024 Poster papers, which paper proposes a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations? | Your answer should be the title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures"
] | [
"iclr2024"
] |
fa5f2809-0e1b-558d-8b5c-192c6d967f9a | Can you recommend me a paper published in ICLR 2024 that proposes a novel learning-based method called ElliDock, which predicts an elliptic paraboloid to represent the protein-protein docking interface? | Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION. | [] | [
"Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction"
] | [
"iclr2024"
] |
fb6b7aa4-f2e6-545e-a213-787756b919b4 | Previous NFNs often depend on permutation symmetries in neural networks' weights; a paper designs corresponding equivariant and invariant layers to incorporate scaling/sign-flipping symmetries. In the experiment, when the weights undergo more extensive scaling and permutation, does their model maintain stable performance? | Your answer should be "Yes" or "No". | [
"Monomial Matrix Group Equivariant Neural Functional Networks"
] | [] | [] |
fc15db88-ee0e-506e-9762-6a447219b34d | There is a recent paper introducing SuperCon3D, the first dataset combining 3D crystal structures, including both ordered and disordered materials, with experimentally measured superconducting critical temperatures. Please inform me of the name of the corresponding author of this paper. | Your answer should be a name of a person. | [] | [
"Learning Superconductivity from Ordered and Disordered Material Structures"
] | [
"neurips2024"
] |
fc3738ff-6e9e-56c3-8c4a-746a44364699 | What is the average BLEU score improvement achieved by kNN-TL over strong baselines across four low-resource translation tasks? | Your answer should be a Python float value representing the average BLEU score improvement in points, rounded to 1 decimal place. | [] | [
"kNN-TL: k-Nearest-Neighbor Transfer Learning for Low-Resource Neural Machine Translation"
] | [
"acl2023"
] |
fc776028-9602-57f0-ac6e-8075ce900cae | In the paper "BigVGAN: A Universal Neural Vocoder with Large-Scale Training," the authors utilized a well-known TTS dataset for training purposes. Could you please clarify how this dataset compares to another prominent dataset, LibriSpeech, in terms of the distribution of audio duration per speaker? Specifically, which dataset exhibits a more dispersed and balanced distribution? | Your answer should be a string | [
"BigVGAN: A Universal Neural Vocoder with Large-Scale Training"
] | [
"LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech"
] | [] |
fcd6d3f2-09de-5d8b-bcb1-21ad45eb728c | A paper introduces a methodology called Induced Model Matching (IMM), which uses a very accurate (often small) restricted predictive model to help train a full-featured (often larger) model. In their study, which kind of model is considered probably lacking in features: personalized or general-public? | Your answer should be chosen between "personalized" and "general-public" | [
"Induced Model Matching: Restricted Models Help Train Full-Featured Models"
] | [] | [] |
fcf4de02-61d8-5309-a588-d8e17f3ee7a1 | In ICLR 2024 Spotlight papers, a paper tries to solve the performance issues of the DICE (DIstribution Correction Estimation) method in offline reinforcement learning (RL) and imitation learning (IL). Tell me the formula of the projected backward gradient in this paper. | Your answer should be the formula in LaTeX format. | [] | [
"ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update"
] | [
"iclr2024"
] |
fdc6051a-6c91-5724-8249-d4e503137e52 | Faced with the problem that the exact mechanism of how teleportation improves convergence in optimizing non-convex objectives remains elusive, what does this paper do to address this? | Your answer should be a sentence. | [
"Improving Convergence and Generalization Using Parameter Symmetries"
] | [] | [] |
fec66973-fa80-57aa-a883-e54da1b140b3 | A recent paper provides the first systematic study of benchmark data repositories in machine learning, introducing the concept of a "benchmark repository" as a distinct entity from general-purpose or domain-specific data repositories. Could you please provide the email address of the first author? | Your answer should be an email address. | [] | [
"Benchmark Data Repositories for Better Benchmarking"
] | [
"neurips2024"
] |
fee7cbc2-ec8a-5400-9c63-3322717c5ba0 | Among the Model-based Reinforcement Learning papers in ICLR 2024, which one proposes the model called "Skipper"? Explain the concepts of spatial abstraction and temporal abstraction in the paper, respectively. | Your answer should be plain text | [] | [
"Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning"
] | [
"neurips2024"
] |
fef9b33e-2b5c-56a2-bc98-ed77f0ed37f0 | What is the average improvement in pass rate achieved by LEGO-Prover over previous methods on the miniF2F dataset? | Your answer should be a Python float value representing the percentage improvement in pass rate, between 0 and 100, rounded to 2 decimal places. | [] | [
"LEGO-Prover: Neural Theorem Proving with Growing Libraries"
] | [
"iclr2024"
] |
ffb1bd4b-e55a-56c7-8d55-c7eed3f716c4 | According to this paper, which model performs second best over all test domains? | Your answer should be a string, the name of a model | [
"Improving Domain Generalization with Domain Relations"
] | [] | [] |
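Every row above follows the same six-column schema (uuid, question, answer_format, anchor_pdf, reference_pdf, conference). As a minimal, hypothetical sketch of how such rows could be handled once loaded into Python dicts, here is an example that tallies rows per conference tag; the two sample rows are copied from the table, and the actual loading code is omitted since it depends on how the dataset is distributed:

```python
# Tally rows per conference tag, assuming rows follow the schema above.
# The loading step is omitted; these two sample rows are copied from the table.
from collections import Counter

rows = [
    {
        "uuid": "e4672532-ce79-54e9-a1b8-c9d32a32d5e3",
        "question": "Which paper did Xueyi Liu and Li Yi publish in the ICLR 2024 poster volume?",
        "answer_format": "Your answer MUST be the pure title of the paper WITHOUT ANY EXPLANATION.",
        "anchor_pdf": [],
        "reference_pdf": ["GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion"],
        "conference": ["iclr2024"],
    },
    {
        "uuid": "f4b5990f-2522-5e0a-a383-57e615c01519",
        "question": "How many proof steps are included in the test set of the FVELER dataset?",
        "answer_format": "Your answer should be a Python int.",
        "anchor_pdf": [],
        "reference_pdf": ["FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving"],
        "conference": ["neurips2024"],
    },
]

# Rows with an empty conference list contribute no tags.
conference_counts = Counter(tag for row in rows for tag in row["conference"])
print(conference_counts)
```

Rows with an empty `conference` list (anchor-only questions) simply drop out of the tally, so the same pattern works across the full mix of row types shown above.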