Columns: title, url, detail_url (identical to url in every row), authors, tags, abstract, pdf
Retrieval-based Disentangled Representation Learning with Natural Language Supervision
https://openreview.net/forum?id=ZlQRiFmq7Y
Jiawei Zhou,Xiaoguang Li,Lifeng Shang,Xin Jiang,Qun Liu,Lei Chen
ICLR 2024,Spotlight
Disentangled representation learning remains challenging as the underlying factors of variation in the data do not naturally exist. The inherent complexity of real-world data makes it unfeasible to exhaustively enumerate and encapsulate all its variations within a finite set of factors. However, it is worth noting that...
https://openreview.net/pdf/806a04ba3fc6094730d982164ed4de6b3cf4f351.pdf
On the Markov Property of Neural Algorithmic Reasoning: Analyses and Methods
https://openreview.net/forum?id=Kn7tWhuetn
Montgomery Bohde,Meng Liu,Alexandra Saxton,Shuiwang Ji
ICLR 2024,Spotlight
Neural algorithmic reasoning is an emerging research direction that endows neural networks with the ability to mimic algorithmic executions step-by-step. A common paradigm in existing designs involves the use of historical embeddings in predicting the results of future execution steps. Our observation in this work is t...
https://openreview.net/pdf/46ea9907175ecd6c88621bba3b5478fb9390eea8.pdf
TRAM: Bridging Trust Regions and Sharpness Aware Minimization
https://openreview.net/forum?id=kxebDHZ7b7
Tom Sherborne,Naomi Saphra,Pradeep Dasigi,Hao Peng
ICLR 2024,Spotlight
Sharpness-aware minimization (SAM) reports improving domain generalization by reducing the loss surface curvature in the parameter space. However, generalization during _fine-tuning_ is often more dependent on the transferability of _representations_ in the function space. Trust-region methods (TR) target this goal by ...
https://openreview.net/pdf/15fa46e9fb64654d30da84732fc37543dd3a94ca.pdf
CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images
https://openreview.net/forum?id=rzBskAEmoc
Olga Fourkioti,Matt De Vries,Chris Bakal
ICLR 2024,Spotlight
The visual examination of tissue biopsy sections is fundamental for cancer diagnosis, with pathologists analyzing sections at multiple magnifications to discern tumor cells and their subtypes. However, existing attention-based multiple instance learning (MIL) models used for analyzing Whole Slide Images (WSIs) in cance...
https://openreview.net/pdf/b4d6251d3b1639d170a910826e7643be5d050285.pdf
DyST: Towards Dynamic Neural Scene Representations on Real-World Videos
https://openreview.net/forum?id=MnMWa94t12
Maximilian Seitzer,Sjoerd van Steenkiste,Thomas Kipf,Klaus Greff,Mehdi S. M. Sajjadi
ICLR 2024,Spotlight
Visual understanding of the world goes beyond the semantics and flat structure of individual images. In this work, we aim to capture both the 3D structure and dynamics of real-world scenes from monocular real-world videos. Our Dynamic Scene Transformer (DyST) model leverages recent work in neural scene representation t...
https://openreview.net/pdf/cb1c5f7dc44ea3c18ca42146caaee182fe578c30.pdf
Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis
https://openreview.net/forum?id=LqRGsGWOTX
Jie Hao,Xiaochuan Gong,Mingrui Liu
ICLR 2024,Spotlight
Bilevel optimization is an important formulation for many machine learning problems, such as meta-learning and hyperparameter optimization. Current bilevel optimization algorithms assume that the gradient of the upper-level function is Lipschitz (i.e., the upper-level function has a bounded smoothness parameter). Howev...
https://openreview.net/pdf/1a34c4fa191cbbf4c1a8a8ca78bf84ce2094b701.pdf
Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation
https://openreview.net/forum?id=d3xKPQVjSc
Valentyn Melnychuk,Dennis Frauen,Stefan Feuerriegel
ICLR 2024,Spotlight
State-of-the-art methods for conditional average treatment effect (CATE) estimation make widespread use of representation learning. Here, the idea is to reduce the variance of the low-sample CATE estimation by a (potentially constrained) low-dimensional representation. However, low-dimensional representations can lose ...
https://openreview.net/pdf/d06dd3ea5318958c6924d08f905235b1512fde33.pdf
DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines
https://openreview.net/forum?id=sY5N0zY5Od
Omar Khattab,Arnav Singhvi,Paridhi Maheshwari,Zhiyuan Zhang,Keshav Santhanam,Sri Vardhamanan A,Saiful Haq,Ashutosh Sharma,Thomas T. Joshi,Hanna Moazam,Heather Miller,Matei Zaharia,Christopher Potts
ICLR 2024,Spotlight
The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded “prompt templates”, i.e. lengthy strings discovered via trial and error. Toward a more syste...
https://openreview.net/pdf/41028bc2988c119c4fb5c213ab3919ceae696846.pdf
Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control
https://openreview.net/forum?id=xJEd8PkdNz
Wenhan Cao,Wei Pan
ICLR 2024,Spotlight
Integral reinforcement learning (IntRL) demands the precise computation of the utility function's integral at its policy evaluation (PEV) stage. This is achieved through quadrature rules, which are weighted sums of utility functions evaluated from state samples obtained in discrete time. Our research reveals a critical...
https://openreview.net/pdf/9eca44e1414a070f87a6a21de74fc149bd37de96.pdf
Masks, Signs, And Learning Rate Rewinding
https://openreview.net/forum?id=qODvxQ8TXW
Advait Harshal Gadhikar,Rebekka Burkholz
ICLR 2024,Spotlight
Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to...
https://openreview.net/pdf/8049c38689012fa79be944abb2bec1446b8ed012.pdf
Gradual Domain Adaptation via Gradient Flow
https://openreview.net/forum?id=iTTZFKrlGV
Zhan Zhuang,Yu Zhang,Ying Wei
ICLR 2024,Spotlight
Domain shift degrades classification models on new data distributions. Conventional unsupervised domain adaptation (UDA) aims to learn features that bridge labeled source and unlabeled target domains. In contrast to feature learning, gradual domain adaptation (GDA) leverages extra continuous intermediate domains with p...
https://openreview.net/pdf/ff915349976b783c6976376bdd9392b8a18f7773.pdf
Maximum Entropy Heterogeneous-Agent Reinforcement Learning
https://openreview.net/forum?id=tmqOhBC4a5
Jiarong Liu,Yifan Zhong,Siyi Hu,Haobo Fu,QIANG FU,Xiaojun Chang,Yaodong Yang
ICLR 2024,Spotlight
*Multi-agent reinforcement learning* (MARL) has been shown effective for cooperative games in recent years. However, existing state-of-the-art methods face challenges related to sample complexity, training instability, and the risk of converging to a suboptimal Nash Equilibrium. In this paper, we propose a unified fram...
https://openreview.net/pdf/82bacc9b0a9551bf4922e43270f4c315044f70af.pdf
Hybrid Directional Graph Neural Network for Molecules
https://openreview.net/forum?id=BBD6KXIGJL
Junyi An,Chao Qu,Zhipeng Zhou,Fenglei Cao,Xu Yinghui,Yuan Qi,Furao Shen
ICLR 2024,Spotlight
Equivariant message passing neural networks have emerged as the prevailing approach for predicting chemical properties of molecules due to their ability to leverage translation and rotation symmetries, resulting in a strong inductive bias. However, the equivariant operations in each layer can impose excessive constrain...
https://openreview.net/pdf/fdaba18af51e693376f79fad547fdec1e1913044.pdf
Unbiased Watermark for Large Language Models
https://openreview.net/forum?id=uWVC5FVidc
Zhengmian Hu,Lichang Chen,Xidong Wu,Yihan Wu,Hongyang Zhang,Heng Huang
ICLR 2024,Spotlight
The recent advancements in large language models (LLMs) have sparked a growing apprehension regarding the potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, allowing for the tracking and attribution of model outputs. This study examines a crucial aspect of waterm...
https://openreview.net/pdf/fdb6b7b2517ce71ee9ed99a12175e4a0273d2b3f.pdf
Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control
https://openreview.net/forum?id=EriR6Ec69a
Neehal Tumma,Mathias Lechner,Noel Loo,Ramin Hasani,Daniela Rus
ICLR 2024,Spotlight
Developing autonomous agents that can interact with changing environments is an open challenge in machine learning. Robustness is particularly important in these settings as agents are often fit offline on expert demonstrations but deployed online where they must generalize to the closed feedback loop within the enviro...
https://openreview.net/pdf/480e9c477c5c570d2bb4494763d1237fdf11f122.pdf
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction
https://openreview.net/forum?id=DjzvJCRsVf
Size Wu,Wenwei Zhang,Lumin Xu,Sheng Jin,Xiangtai Li,Wentao Liu,Chen Change Loy
ICLR 2024,Spotlight
Open-vocabulary dense prediction tasks including object detection and image segmentation have been advanced by the success of Contrastive Language-Image Pre-training (CLIP). CLIP models, particularly those incorporating vision transformers (ViTs), have exhibited remarkable generalization ability in zero-shot image clas...
https://openreview.net/pdf/126c5bcbf7072558944cfd391f4b42a43cdd40b1.pdf
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI
https://openreview.net/forum?id=QzTpTRVtrP
Weibang Jiang,Liming Zhao,Bao-liang Lu
ICLR 2024,Spotlight
The current electroencephalogram (EEG) based deep learning models are typically designed for specific datasets and applications in brain-computer interaction (BCI), limiting the scale of the models and thus diminishing their perceptual capabilities and generalizability. Recently, Large Language Models (LLMs) have achie...
https://openreview.net/pdf/ce4dc6959056452394dc1ad7b8f64005d65d2165.pdf
Towards LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark
https://openreview.net/forum?id=vrBVFXwAmi
Yehui Tang,Hao Xiong,Nianzu Yang,Tailong Xiao,Junchi Yan
ICLR 2024,Spotlight
Estimating the properties of quantum systems such as quantum phase has been critical in addressing the essential quantum many-body problems in physics and chemistry. Deep learning models have been recently introduced to property estimation, surpassing conventional statistical approaches. However, these methods are tai...
https://openreview.net/pdf/511ed0e0d3b143e5589b96afaba84da894f71df7.pdf
GTMGC: Using Graph Transformer to Predict Molecule’s Ground-State Conformation
https://openreview.net/forum?id=F7QnIKlC1N
Guikun Xu,Yongquan Jiang,PengChuan Lei,Yan Yang,Jim Chen
ICLR 2024,Spotlight
The ground-state conformation of a molecule is often decisive for its properties. However, experimental or computational methods, such as density functional theory (DFT), are time-consuming and labor-intensive for obtaining this conformation. Deep learning (DL) based molecular representation learning (MRL) has made sig...
https://openreview.net/pdf/c141834e1d331e6055ab503795d49d4e4b8548fb.pdf
Generalization of Scaled Deep ResNets in the Mean-Field Regime
https://openreview.net/forum?id=tMzPZTvz2H
Yihang Chen,Fanghui Liu,Yiping Lu,Grigorios Chrysos,Volkan Cevher
ICLR 2024,Spotlight
Despite the widespread empirical success of ResNet, the generalization properties of deep ResNet are rarely explored beyond the lazy training regime. In this work, we investigate scaled ResNet in the limit of infinitely deep and wide neural networks, of which the gradient flow is described by a partial differential equ...
https://openreview.net/pdf/72b4830ed0321f0098f96447794bfcc965134752.pdf
ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference
https://openreview.net/forum?id=pxI5IPeWgW
Krzysztof Kacprzyk,Samuel Holt,Jeroen Berrevoets,Zhaozhi Qian,Mihaela van der Schaar
ICLR 2024,Spotlight
Inferring unbiased treatment effects has received widespread attention in the machine learning community. In recent years, our community has proposed numerous solutions in standard settings, high-dimensional treatment settings, and even longitudinal settings. While very diverse, the solution has mostly relied on neural...
https://openreview.net/pdf/2e710a9328ce1b12daf4fde40da8165ca071d5db.pdf
Learning Hierarchical World Models with Adaptive Temporal Abstractions from Discrete Latent Dynamics
https://openreview.net/forum?id=TjCDNssXKU
Christian Gumbsch,Noor Sajid,Georg Martius,Martin V. Butz
ICLR 2024,Spotlight
Hierarchical world models can significantly improve model-based reinforcement learning (MBRL) and planning by enabling reasoning across multiple time scales. Nonetheless, the majority of state-of-the-art MBRL methods employ flat, non-hierarchical models. We propose Temporal Hierarchies from Invariant Context Kernels (T...
https://openreview.net/pdf/3e5df2ed6659f21032c8784d5836ef8147d1413a.pdf
Prediction without Preclusion: Recourse Verification with Reachable Sets
https://openreview.net/forum?id=SCQfYpdoGE
Avni Kothari,Bogdan Kulynych,Tsui-Wei Weng,Berk Ustun
ICLR 2024,Spotlight
Machine learning models are often used to decide who receives a loan, a job interview, or a public benefit. Models in such settings use features without considering their *actionability*. As a result, they can assign predictions that are \emph{fixed} -- meaning that individuals who are denied loans and interviews are, ...
https://openreview.net/pdf/08180baa9640c55bee2805e14b90ecc715509ee3.pdf
ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update
https://openreview.net/forum?id=L8UNn7Llt4
Liyuan Mao,Haoran Xu,Weinan Zhang,Xianyuan Zhan
ICLR 2024,Spotlight
In this study, we investigate the DIstribution Correction Estimation (DICE) methods, an important line of work in offline reinforcement learning (RL) and imitation learning (IL). DICE-based methods impose state-action-level behavior constraint, which is an ideal choice for offline learning. However, they typically perf...
https://openreview.net/pdf/833ece7fade579c01692e5603d476db35ce59989.pdf
Improving Non-Transferable Representation Learning by Harnessing Content and Style
https://openreview.net/forum?id=FYKVPOHCpE
Ziming Hong,Zhenyi Wang,Li Shen,Yu Yao,Zhuo Huang,Shiming Chen,Chuanwu Yang,Mingming Gong,Tongliang Liu
ICLR 2024,Spotlight
Non-transferable learning (NTL) aims to restrict the generalization of models toward the target domain(s). To this end, existing works learn non-transferable representations by reducing statistical dependence between the source and target domain. However, such statistical methods essentially neglect to distinguish betw...
https://openreview.net/pdf/4d359626d33d8cb2e10f6d1cf6728b805a2b5316.pdf
ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis
https://openreview.net/forum?id=vpJMJerXHU
Luo donghao,wang xue
ICLR 2024,Spotlight
Recently, Transformer-based and MLP-based models have emerged rapidly and won dominance in time series analysis. In contrast, convolution is losing steam in time series tasks nowadays for inferior performance. This paper studies the open question of how to better use convolution in time series analysis and makes effort...
https://openreview.net/pdf/c0de77eed380b4b2736dfe855ed3cf0d62f7d8c1.pdf
Towards Robust Out-of-Distribution Generalization Bounds via Sharpness
https://openreview.net/forum?id=tPEwSYPtAC
Yingtian Zou,Kenji Kawaguchi,Yingnan Liu,Jiashuo Liu,Mong-Li Lee,Wynne Hsu
ICLR 2024,Spotlight
Generalizing to out-of-distribution (OOD) data or unseen domain, termed OOD generalization, still lacks appropriate theoretical guarantees. Canonical OOD bounds focus on different distance measurements between source and target domains but fail to consider the optimization property of the learned model. As empirically ...
https://openreview.net/pdf/36904eee9458f8a5da9944cdcd92446a053dfa88.pdf
MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding
https://openreview.net/forum?id=itGkF993gz
Lirong Wu,Yijun Tian,Yufei Huang,Siyuan Li,Haitao Lin,Nitesh V Chawla,Stan Z. Li
ICLR 2024,Spotlight
Protein-Protein Interactions (PPIs) are fundamental in various biological processes and play a key role in life activities. The growing demand and cost of experimental PPI assays require computational methods for efficient PPI prediction. While existing methods rely heavily on protein sequence for PPI prediction, it is...
https://openreview.net/pdf/72464b5e34ef4de8f928bfdd6309981dbe271cf6.pdf
Negative Label Guided OOD Detection with Pretrained Vision-Language Models
https://openreview.net/forum?id=xUO1HXz4an
Xue Jiang,Feng Liu,Zhen Fang,Hong Chen,Tongliang Liu,Feng Zheng,Bo Han
ICLR 2024,Spotlight
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes, playing a crucial role in trustworthy models against errors on unexpected inputs. Extensive research has been dedicated to exploring OOD detection in the vision modality. {Vision-language models (VLMs) can leverage both textual and...
https://openreview.net/pdf/b9ad30ff96f366ad87a0053257956ba3b2a4ece6.pdf
OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS
https://openreview.net/forum?id=47hDbAMLbc
Lijia Yu,Xiao-Shan Gao,Lijun Zhang
ICLR 2024,Spotlight
Memorization with neural networks is to study the expressive power of neural networks to interpolate a finite classification data set, which is closely related to the generalizability of deep learning. However, the important problem of robust memorization has not been thoroughly studied. In this paper, several basic pr...
https://openreview.net/pdf/c5be8a576cab367723fcf91c4b950557846a3e1a.pdf
Neural Contractive Dynamical Systems
https://openreview.net/forum?id=iAYIRHOYy8
Hadi Beik Mohammadi,Søren Hauberg,Georgios Arvanitidis,Nadia Figueroa,Gerhard Neumann,Leonel Rozo
ICLR 2024,Spotlight
Stability guarantees are crucial when ensuring that a fully autonomous robot does not take undesirable or potentially harmful actions. Unfortunately, global stability guarantees are hard to provide in dynamical systems learned from data, especially when the learned dynamics are governed by neural networks. We propose a...
https://openreview.net/pdf/a89591eec311a0efbd01f7135555a21d2d682c1c.pdf
Scaling Laws for Associative Memories
https://openreview.net/forum?id=Tzh6xAJSll
Vivien Cabannes,Elvis Dohmatob,Alberto Bietti
ICLR 2024,Spotlight
Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise sc...
https://openreview.net/pdf/ba075a88abc0ad2b7f00577253a950d3264c5f2f.pdf
Text2Reward: Reward Shaping with Language Models for Reinforcement Learning
https://openreview.net/forum?id=tUM39YTRxH
Tianbao Xie,Siheng Zhao,Chen Henry Wu,Yitao Liu,Qian Luo,Victor Zhong,Yanchao Yang,Tao Yu
ICLR 2024,Spotlight
Designing reward functions is a longstanding challenge in reinforcement learning (RL); it requires specialized knowledge or domain data, leading to high costs for development. To address this, we introduce Text2Reward, a data-free framework that automates the generation and shaping of dense reward functions based on la...
https://openreview.net/pdf/a52e7202163a42116fae8ada42123e37f2aef287.pdf
Towards Meta-Pruning via Optimal Transport
https://openreview.net/forum?id=sMoifbuxjB
Alexander Theus,Olin Geimer,Friedrich Wicke,Thomas Hofmann,Sotiris Anagnostidis,Sidak Pal Singh
ICLR 2024,Spotlight
Structural pruning of neural networks conventionally relies on identifying and discarding less important neurons, a practice often resulting in significant accuracy loss that necessitates subsequent fine-tuning efforts. This paper introduces a novel approach named Intra-Fusion, challenging this prevailing pruning parad...
https://openreview.net/pdf/07560e42af2e42df14ac71025723b0b97a0924dd.pdf
InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation
https://openreview.net/forum?id=MLBdiWu4Fw
Yi Wang,Yinan He,Yizhuo Li,Kunchang Li,Jiashuo Yu,Xin Ma,Xinhao Li,Guo Chen,Xinyuan Chen,Yaohui Wang,Ping Luo,Ziwei Liu,Yali Wang,Limin Wang,Yu Qiao
ICLR 2024,Spotlight
This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. InternVid contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed de...
https://openreview.net/pdf/5355ce2fec3ff26dca65a969b767fd7b1102bb05.pdf
Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks
https://openreview.net/forum?id=Gg7cXo3S8l
Suhwan Choi,Myeongho Jeon,Yeonjung Hwang,Jeonglyul Oh,Sungjun Lim,Joonseok Lee,Myungjoo Kang
ICLR 2024,Spotlight
While backpropagation (BP) has achieved widespread success in deep learning, it faces two prominent challenges: computational inefficiency and biological implausibility. In response to these challenges, local supervision, encompassing Local Learning (LL) and Forward Learning (FL), has emerged as a promising research di...
https://openreview.net/pdf/f9734ebbb92e7bdafcdb35c2da50c63e5e5ad16d.pdf
Bounding Box Stability against Feature Dropout Reflects Detector Generalization across Environments
https://openreview.net/forum?id=lmM4Ecm4HJ
Yang Yang,Wenhai Wang,Zhe Chen,Jifeng Dai,Liang Zheng
ICLR 2024,Spotlight
Bounding boxes uniquely characterize object detection, where a good detector gives accurate bounding boxes of categories of interest. However, in the real-world where test ground truths are not provided, it is non-trivial to find out whether bounding boxes are accurate, thus preventing us from assessing the detector ge...
https://openreview.net/pdf/5510c4a1e453a12979e2d2a9f12b836fdc0436c8.pdf
Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data
https://openreview.net/forum?id=PnR1MNen7u
Ce Ju,Reinmar J Kobler,Liyao Tang,Cuntai Guan,Motoaki Kawanabe
ICLR 2024,Spotlight
In human neuroimaging, multi-modal imaging techniques are frequently combined to enhance our comprehension of whole-brain dynamics and improve diagnosis in clinical practice. Modalities like electroencephalography and functional magnetic resonance imaging provide distinct views to the brain dynamics due to diametral sp...
https://openreview.net/pdf/4ccf9cac26244e14e3fd2742852e226018c0e4b8.pdf
SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS
https://openreview.net/forum?id=tveiUXU2aa
Yameng Peng,Andy Song,Haytham M. Fayek,Vic Ciesielski,Xiaojun Chang
ICLR 2024,Spotlight
Training-free metrics (a.k.a. zero-cost proxies) are widely used to avoid resource-intensive neural network training, especially in Neural Architecture Search (NAS). Recent studies show that existing training-free metrics have several limitations, such as limited correlation and poor generalisation across different sea...
https://openreview.net/pdf/37b8588fc5e1d4701d0dd7f69b3af45b36b148e9.pdf
RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches
https://openreview.net/forum?id=F1TKzG8LJO
Jiayuan Gu,Sean Kirmani,Paul Wohlhart,Yao Lu,Montserrat Gonzalez Arenas,Kanishka Rao,Wenhao Yu,Chuyuan Fu,Keerthana Gopalakrishnan,Zhuo Xu,Priya Sundaresan,Peng Xu,Hao Su,Karol Hausman,Chelsea Finn,Quan Vuong,Ted Xiao
ICLR 2024,Spotlight
Generalization remains one of the most important desiderata for robust robot learning systems. While recently proposed approaches show promise in generalization to novel objects, semantic concepts, or visual distribution shifts, generalization to new tasks remains challenging. For example, a language-conditioned policy...
https://openreview.net/pdf/99c49fe414f0c5349b9a1f94d32198a847626df5.pdf
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers
https://openreview.net/forum?id=Rc7dAwVL3v
Kai Shen,Zeqian Ju,Xu Tan,Eric Liu,Yichong Leng,Lei He,Tao Qin,sheng zhao,Jiang Bian
ICLR 2024,Spotlight
Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these t...
https://openreview.net/pdf/509cda476b8eb36072d873b3fb7a5b5868bb7ce7.pdf
Submodular Reinforcement Learning
https://openreview.net/forum?id=loYSzjSaAK
Manish Prajapat,Mojmir Mutny,Melanie Zeilinger,Andreas Krause
ICLR 2024,Spotlight
In reinforcement learning (RL), rewards of states are typically considered additive, and following the Markov assumption, they are independent of states visited previously. In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing retur...
https://openreview.net/pdf/8fc77d8529744661d87719d0416984370812942f.pdf
Making Pre-trained Language Models Great on Tabular Prediction
https://openreview.net/forum?id=anzIzGZuLi
Jiahuan Yan,Bo Zheng,Hongxia Xu,Yiheng Zhu,Danny Chen,Jimeng Sun,Jian Wu,Jintai Chen
ICLR 2024,Spotlight
The transferability of deep neural networks (DNNs) has made significant progress in image and language processing. However, due to the heterogeneity among tables, such DNN bonus is still far from being well exploited on tabular data prediction (e.g., regression or classification tasks). Condensing knowledge from divers...
https://openreview.net/pdf/c4a9c6bae09d696686e4f491b7316a399127722b.pdf
Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency
https://openreview.net/forum?id=j8hdRqOUhN
Bowen Song,Soo Min Kwon,Zecheng Zhang,Xinyu Hu,Qing Qu,Liyue Shen
ICLR 2024,Spotlight
Latent diffusion models have been demonstrated to generate high-quality images, while offering efficiency in model training compared to diffusion models operating in the pixel space. However, incorporating latent diffusion models to solve inverse problems remains a challenging problem due to the nonlinearity of the enc...
https://openreview.net/pdf/da11a915f62958de563c258cf1a15b945a4040f0.pdf
The False Promise of Imitating Proprietary Language Models
https://openreview.net/forum?id=Kz3yckpCN5
Arnav Gudibande,Eric Wallace,Charlie Victor Snell,Xinyang Geng,Hao Liu,Pieter Abbeel,Sergey Levine,Dawn Song
ICLR 2024,Spotlight
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). In this work, we critically analyze this approach of imitating language models. We first finetune a series of LMs that im...
https://openreview.net/pdf/b4739f190faa45d202f9d7847e19ebde7844eb25.pdf
Sample-Efficient Linear Representation Learning from Non-IID Non-Isotropic Data
https://openreview.net/forum?id=Tr3fZocrI6
Thomas TCK Zhang,Leonardo Felipe Toso,James Anderson,Nikolai Matni
ICLR 2024,Spotlight
A powerful concept behind much of the recent progress in machine learning is the extraction of common features across data from heterogeneous sources or tasks. Intuitively, using all of one's data to learn a common representation function benefits both computational effort and statistical generalization by leaving a sm...
https://openreview.net/pdf/e7dccb9a39e6905c3e57dd5f906c31fbd3cab350.pdf
Information Retention via Learning Supplemental Features
https://openreview.net/forum?id=o83eu4H9Mb
Zhipeng Xie,Yahe Li
ICLR 2024,Spotlight
The information bottleneck principle provides an information-theoretic method for learning a good representation as a trade-off between conciseness and predictive ability, which can reduce information redundancy, eliminate irrelevant and superfluous features, and thus enhance the in-domain generalizability. However, in...
https://openreview.net/pdf/7f425cda0816de3fc282c27ce87697f5b5c44077.pdf
Mayfly: a Neural Data Structure for Graph Stream Summarization
https://openreview.net/forum?id=n7Sr8SW4bn
Yuan Feng,Yukun Cao,Wang Hairu,Xike Xie,S Kevin Zhou
ICLR 2024,Spotlight
A graph is a structure made up of vertices and edges used to represent complex relationships between entities, while a graph stream is a continuous flow of graph updates that convey evolving relationships between entities. The massive volume and high dynamism of graph streams promote research on data structures of grap...
https://openreview.net/pdf/e7d88ca4c807b194ba332ff83811bbd8c79934bc.pdf
Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow
https://openreview.net/forum?id=776lhoaulC
Hanyu Zhou,Yi Chang,Haoyue Liu,YAN WENDING,Yuxing Duan,Zhiwei Shi,Luxin Yan
ICLR 2024,Spotlight
We investigate a challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features, thus causing invalid motion feature matching. Typically, existing methods employ domain adaptation to transfer knowledge from auxiliary domain t...
https://openreview.net/pdf/e3328346222dfff5580d6899ec8ffbac04ef6de9.pdf
Graphical Multioutput Gaussian Process with Attention
https://openreview.net/forum?id=6N8TW504aa
Yijue Dai,Wenzhong Yan,Feng Yin
ICLR 2024,Spotlight
Integrating information while recognizing dependence from multiple data sources and enhancing the predictive performance of the multi-output regression are challenging tasks. Multioutput Gaussian Process (MOGP) methods offer outstanding solutions with tractable predictions and uncertainty quantification. However, their...
https://openreview.net/pdf/0b6ac06d4a4184388fc33af01e76741a7603c341.pdf
Soft Contrastive Learning for Time Series
https://openreview.net/forum?id=pAsQSWlDUf
https://openreview.net/forum?id=pAsQSWlDUf
Seunghan Lee,Taeyoung Park,Kibok Lee
ICLR 2024,Spotlight
Contrastive learning has been shown to be effective for learning representations from time series in a self-supervised way. However, contrasting similar time series instances or values from adjacent timestamps within a time series leads to ignoring their inherent correlations, which results in deteriorating the quality of learned...
https://openreview.net/pdf/310a449b3f99f247f4a3f30cd2a2f8806296770d.pdf
Enhancing Group Fairness in Online Settings Using Oblique Decision Forests
https://openreview.net/forum?id=E1NxN5QMOE
https://openreview.net/forum?id=E1NxN5QMOE
Somnath Basu Roy Chowdhury,Nicholas Monath,Ahmad Beirami,Rahul Kidambi,Kumar Avinava Dubey,Amr Ahmed,Snigdha Chaturvedi
ICLR 2024,Spotlight
Fairness, especially group fairness, is an important consideration in the context of machine learning systems. The most commonly adopted group fairness-enhancing techniques are in-processing methods that rely on a mixture of a fairness objective (e.g., demographic parity) and a task-specific objective (e.g., cross-entr...
https://openreview.net/pdf/a8b785960a7be6f38289cdad3923ad1ba27c3a26.pdf
Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns
https://openreview.net/forum?id=CdjnzWsQax
https://openreview.net/forum?id=CdjnzWsQax
Hongbin Huang,Minghua Chen,Xiao Qiao
ICLR 2024,Spotlight
Limited data availability poses a major obstacle in training deep learning models for financial applications. Synthesizing financial time series to augment real-world data is challenging due to the irregular and scale-invariant patterns uniquely associated with financial time series - temporal dynamics that repeat with...
https://openreview.net/pdf/afa4bb323e04cbec65604b1a8df0f2eebc2962f3.pdf
Multiscale Positive-Unlabeled Detection of AI-Generated Texts
https://openreview.net/forum?id=5Lp6qU9hzV
https://openreview.net/forum?id=5Lp6qU9hzV
Yuchuan Tian,Hanting Chen,Xutao Wang,Zheyuan Bai,QINGHUA ZHANG,Ruifeng Li,Chao Xu,Yunhe Wang
ICLR 2024,Spotlight
Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are astonishing at generating human-like texts, but they may impact the authenticity of texts. Previous works proposed methods to detect these AI-generated texts, including simple ML classifiers, pretrained-model-based zero-shot methods, and finetuned langu...
https://openreview.net/pdf/bd6826c79f81e0e0ac6f4c84f2b46d80eb3d130b.pdf
A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging
https://openreview.net/forum?id=ZKEuFKfCKA
https://openreview.net/forum?id=ZKEuFKfCKA
Shiqiang Wang,Mingyue Ji
ICLR 2024,Spotlight
In federated learning (FL), clients usually have diverse participation statistics that are unknown a priori, which can significantly harm the performance of FL if not handled properly. Existing works aiming at addressing this problem are usually based on global variance reduction, which requires a substantial amount of...
https://openreview.net/pdf/deb3da6004c1c25ab01ed64fde43ebe424d7a09c.pdf
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
https://openreview.net/forum?id=GEcwtMk1uA
https://openreview.net/forum?id=GEcwtMk1uA
Yangjun Ruan,Honghua Dong,Andrew Wang,Silviu Pitis,Yongchao Zhou,Jimmy Ba,Yann Dubois,Chris J. Maddison,Tatsunori Hashimoto
ICLR 2024,Spotlight
Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks—such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, set...
https://openreview.net/pdf/d1601f78407737fc216de9e6ec0085038f8c885f.pdf
Coeditor: Leveraging Repo-level Diffs for Code Auto-editing
https://openreview.net/forum?id=ALVwQjZRS8
https://openreview.net/forum?id=ALVwQjZRS8
Jiayi Wei,Greg Durrett,Isil Dillig
ICLR 2024,Spotlight
Developers often dedicate significant time to maintaining and refactoring existing code. However, most prior work on generative models for code focuses solely on creating new code, overlooking the distinctive needs of editing existing code. In this work, we explore a multi-round code auto-editing setting, aiming to pre...
https://openreview.net/pdf/a68ee5b156d07bd4d39e7718b01a1ecdc5b5c3cb.pdf
FITS: Modeling Time Series with $10k$ Parameters
https://openreview.net/forum?id=bWcnvZ3qMb
https://openreview.net/forum?id=bWcnvZ3qMb
Zhijian Xu,Ailing Zeng,Qiang Xu
ICLR 2024,Spotlight
In this paper, we introduce FITS, a lightweight yet powerful model for time series analysis. Unlike existing models that directly process raw time-domain data, FITS operates on the principle that time series can be manipulated through interpolation in the complex frequency domain, achieving performance comparable to st...
https://openreview.net/pdf/b24cfba5a0bb5ddb925050c72614c266f677f9a0.pdf
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
https://openreview.net/forum?id=N8N0hgNDRt
https://openreview.net/forum?id=N8N0hgNDRt
Longhui Yu,Weisen Jiang,Han Shi,Jincheng YU,Zhengying Liu,Yu Zhang,James Kwok,Zhenguo Li,Adrian Weller,Weiyang Liu
ICLR 2024,Spotlight
Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (\eg, LLaMA-2) are still far away from satisfactory for solving mathematical problems due to the complex reasoning procedures. ...
https://openreview.net/pdf/f6e244230affa5173ef87947c86c25bd2891100d.pdf
Query-Policy Misalignment in Preference-Based Reinforcement Learning
https://openreview.net/forum?id=UoBymIwPJR
https://openreview.net/forum?id=UoBymIwPJR
Xiao Hu,Jianxiong Li,Xianyuan Zhan,Qing-Shan Jia,Ya-Qin Zhang
ICLR 2024,Spotlight
Preference-based reinforcement learning (PbRL) provides a natural way to align RL agents’ behavior with human desired outcomes, but is often restrained by costly human feedback. To improve feedback efficiency, most existing PbRL methods focus on selecting queries to maximally improve the overall quality of the reward m...
https://openreview.net/pdf/7731e84686a5dad0613dd42e1b05f91259ca9066.pdf
Feature-aligned N-BEATS with Sinkhorn divergence
https://openreview.net/forum?id=TS8HoIWAPQ
https://openreview.net/forum?id=TS8HoIWAPQ
Joonhun Lee,Myeongho Jeon,Myungjoo Kang,Kyunghyun Park
ICLR 2024,Spotlight
We propose Feature-aligned N-BEATS as a domain-generalized time series forecasting model. It is a nontrivial extension of N-BEATS with doubly residual stacking principle (Oreshkin et al. [45]) into a representation learning framework. In particular, it revolves around marginal feature probability measures induced by th...
https://openreview.net/pdf/47a2e8bbe6cc10d3bff9d06eb5871437eede86e2.pdf
Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions
https://openreview.net/forum?id=LebzzClHYw
https://openreview.net/forum?id=LebzzClHYw
Taehyeon Kim,Joonkee Kim,Gihun Lee,Se-Young Yun
ICLR 2024,Spotlight
While instruction-tuned language models have demonstrated impressive zero-shot generalization, these models often struggle to generate accurate responses when faced with instructions that fall outside their training set. This paper presents Instructive Decoding (ID), a simple yet effective approach that augments the ef...
https://openreview.net/pdf/41130f3ca565e158b2e1217fa3f5da2ba15efd6e.pdf
Consistent Multi-Class Classification from Multiple Unlabeled Datasets
https://openreview.net/forum?id=fW7DOHDQvF
https://openreview.net/forum?id=fW7DOHDQvF
Zixi Wei,Senlin Shu,Yuzhou Cao,Hongxin Wei,Bo An,Lei Feng
ICLR 2024,Spotlight
Weakly supervised learning aims to construct effective predictive models from imperfectly labeled data. The recent trend of weakly supervised learning has focused on how to learn an accurate classifier from completely unlabeled data, given little supervised information such as class priors. In this paper, we consider a...
https://openreview.net/pdf/c4cacd1a99a9f6cd491de8f23cdb492b4906cbce.pdf
SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition
https://openreview.net/forum?id=7etoNfU9uF
https://openreview.net/forum?id=7etoNfU9uF
Hongwei Ren,Yue Zhou,Xiaopeng LIN,Yulong Huang,Haotian FU,Jie Song,Bojun Cheng
ICLR 2024,Spotlight
Event cameras are bio-inspired sensors that respond to local changes in light intensity and feature low latency, high energy efficiency, and high dynamic range. Meanwhile, Spiking Neural Networks (SNNs) have gained significant attention due to their remarkable efficiency and fault tolerance. By synergistically harnessi...
https://openreview.net/pdf/1d9c8889139a212409d5faf9dc557045e96dcc89.pdf
Inverse Approximation Theory for Nonlinear Recurrent Neural Networks
https://openreview.net/forum?id=yC2waD70Vj
https://openreview.net/forum?id=yC2waD70Vj
Shida Wang,Zhong Li,Qianxiao Li
ICLR 2024,Spotlight
We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using recurrent neural networks (RNNs). This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approxi...
https://openreview.net/pdf/a89df38f3e96bab890df4328af64ca3eb34b8df0.pdf
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies
https://openreview.net/forum?id=plebgsdiiV
https://openreview.net/forum?id=plebgsdiiV
Haanvid Lee,Tri Wahyu Guntara,Jongmin Lee,Yung-Kyun Noh,Kee-Eung Kim
ICLR 2024,Spotlight
We consider off-policy evaluation (OPE) of deterministic target policies for reinforcement learning (RL) in environments with continuous action spaces. While it is common to use importance sampling for OPE, it suffers from high variance when the behavior policy deviates significantly from the target policy. In order to...
https://openreview.net/pdf/24aac816499705b0d6a509f4908ba2e27ed10775.pdf
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition
https://openreview.net/forum?id=ceATjGPTUD
https://openreview.net/forum?id=ceATjGPTUD
Yuchen Hu,CHEN CHEN,Chao-Han Huck Yang,Ruizhe Li,Chao Zhang,Pin-Yu Chen,EngSiong Chng
ICLR 2024,Spotlight
Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which leverages the rich linguistic knowledge and powerful reasoning ability of LLMs to improve recognition results. The latest work proposes a GER benchmark with "HyPoradise" dataset ...
https://openreview.net/pdf/2403a00daa1fa0949fa21d4c5bb972bd398f4dea.pdf
H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields
https://openreview.net/forum?id=P1ANzoGg3W
https://openreview.net/forum?id=P1ANzoGg3W
Minyoung Park,Mirae Do,Yeon Jae Shin,Jaeseok Yoo,Jongkwang Hong,Joongrock Kim,Chul Lee
ICLR 2024,Spotlight
Advanced techniques using Neural Radiance Fields (NeRF), Signed Distance Fields (SDF), and Occupancy Fields have recently emerged as solutions for 3D indoor scene reconstruction. We introduce a novel two-phase learning approach, H2O-SDF, that discriminates between object and non-object regions within indoor environmen...
https://openreview.net/pdf/efccbd7a6e50a44711e740d5009616c9e19fb6e9.pdf
Sample-Efficient Quality-Diversity by Cooperative Coevolution
https://openreview.net/forum?id=JDud6zbpFv
https://openreview.net/forum?id=JDud6zbpFv
Ke Xue,Ren-Jian Wang,Pengyi Li,Dong Li,Jianye HAO,Chao Qian
ICLR 2024,Spotlight
Quality-Diversity (QD) algorithms, as a subset of evolutionary algorithms, have emerged as a powerful optimization paradigm with the aim of generating a set of high-quality and diverse solutions. Although QD has demonstrated competitive performance in reinforcement learning, its low sample efficiency remains a signific...
https://openreview.net/pdf/fcc91cb60f0dd347bc02c8beadb05d7d55b9f04f.pdf
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore
https://openreview.net/forum?id=ruk0nyQPec
https://openreview.net/forum?id=ruk0nyQPec
Sewon Min,Suchin Gururangan,Eric Wallace,Weijia Shi,Hannaneh Hajishirzi,Noah A. Smith,Luke Zettlemoyer
ICLR 2024,Spotlight
The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We prese...
https://openreview.net/pdf/34c8fb489d21452c21be4b0037700e7157e99c21.pdf
Dynamic Discounted Counterfactual Regret Minimization
https://openreview.net/forum?id=6PbvbLyqT6
https://openreview.net/forum?id=6PbvbLyqT6
Hang Xu,Kai Li,Haobo Fu,QIANG FU,Junliang Xing,Jian Cheng
ICLR 2024,Spotlight
Counterfactual regret minimization (CFR) is a family of iterative algorithms showing promising results in solving imperfect-information games. Recent novel CFR variants (e.g., CFR+, DCFR) have significantly improved the convergence rate of the vanilla CFR. The key to these CFR variants’ performance is weighting each it...
https://openreview.net/pdf/336422d3878e37b0144f3b3da58f90bde675aa6a.pdf
GIO: Gradient Information Optimization for Training Dataset Selection
https://openreview.net/forum?id=3NnfJnbJT2
https://openreview.net/forum?id=3NnfJnbJT2
Dante Everaert,Christopher Potts
ICLR 2024,Spotlight
It is often advantageous to train models on a subset of the available train examples, because the examples are of variable quality or because one would like to train with fewer examples, without sacrificing performance. We present Gradient Information Optimization (GIO), a scalable, task-agnostic approach to this data ...
https://openreview.net/pdf/5ca46b1a1d6645c98d731e33e243896ae32be3d3.pdf
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training
https://openreview.net/forum?id=KZSEgJGPxu
https://openreview.net/forum?id=KZSEgJGPxu
Kazem Meidani,Parshin Shojaee,Chandan K. Reddy,Amir Barati Farimani
ICLR 2024,Spotlight
In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, exist...
https://openreview.net/pdf/1ba8f83f76d43ebf7625a6e87d0060d6361310f6.pdf
Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model
https://openreview.net/forum?id=m50eKHCttz
https://openreview.net/forum?id=m50eKHCttz
Karsten Roth,Lukas Thede,A. Sophia Koepke,Oriol Vinyals,Olivier J Henaff,Zeynep Akata
ICLR 2024,Spotlight
Training deep networks requires various design decisions regarding for instance their architecture, data augmentation, or optimization. In this work, we find these training variations to result in networks learning unique feature sets from the data. Using public model libraries comprising thousands of models trained on...
https://openreview.net/pdf/122f5389127b21435f80c82696204c736a116976.pdf
Robustifying State-space Models for Long Sequences via Approximate Diagonalization
https://openreview.net/forum?id=DjeQ39QoLQ
https://openreview.net/forum?id=DjeQ39QoLQ
Annan Yu,Arnur Nigmetov,Dmitriy Morozov,Michael W. Mahoney,N. Benjamin Erichson
ICLR 2024,Spotlight
State-space models (SSMs) have recently emerged as a framework for learning long-range sequence tasks. An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework. However, the complicated structure of the S4 layer poses challenges;...
https://openreview.net/pdf/204207dab9f475c4c40ddb4a399f19c5fac72105.pdf
Provable Offline Preference-Based Reinforcement Learning
https://openreview.net/forum?id=tVMPfEGT2w
https://openreview.net/forum?id=tVMPfEGT2w
Wenhao Zhan,Masatoshi Uehara,Nathan Kallus,Jason D. Lee,Wen Sun
ICLR 2024,Spotlight
In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback where feedback is available in the form of preference between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using M...
https://openreview.net/pdf/ef2a33a9b6e9fd7ea7de7dbba6688f49e0e58206.pdf
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
https://openreview.net/forum?id=gmg7t8b4s0
https://openreview.net/forum?id=gmg7t8b4s0
Niloofar Mireshghallah,Hyunwoo Kim,Xuhui Zhou,Yulia Tsvetkov,Maarten Sap,Reza Shokri,Yejin Choi
ICLR 2024,Spotlight
Existing efforts on quantifying privacy implications for large language models (LLMs) solely focus on measuring leakage of training data. In this work, we shed light on the often-overlooked interactive settings where an LLM receives information from multiple sources and generates an output to be shared with other entit...
https://openreview.net/pdf/915e98b16264c3e1d6d3db0a8d69afc76b90ae14.pdf
Provable Reward-Agnostic Preference-Based Reinforcement Learning
https://openreview.net/forum?id=yTBXeXdbMf
https://openreview.net/forum?id=yTBXeXdbMf
Wenhao Zhan,Masatoshi Uehara,Wen Sun,Jason D. Lee
ICLR 2024,Spotlight
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While PbRL has demonstrated practical success in fine-tuning language models, existing theoretical work focuses on...
https://openreview.net/pdf/cc1b1fb83857ac10b10a13b0a2c7d061594bfdd8.pdf
Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND
https://openreview.net/forum?id=wcka3bd7P4
https://openreview.net/forum?id=wcka3bd7P4
Qiyu Kang,Kai Zhao,Qinxu Ding,Feng Ji,Xuhao Li,Wenfei Liang,Yang Song,Wee Peng Tay
ICLR 2024,Spotlight
We introduce the FRactional-Order graph Neural Dynamical network (FROND), a new continuous graph neural network (GNN) framework. Unlike traditional continuous GNNs that rely on integer-order differential equations, FROND employs the Caputo fractional derivative to leverage the non-local properties of fractional calculu...
https://openreview.net/pdf/62b51824b9a914534dd00158380ffd4aa835c48a.pdf
MetaPhysiCa: Improving OOD Robustness in Physics-informed Machine Learning
https://openreview.net/forum?id=KrWuDiW4Qm
https://openreview.net/forum?id=KrWuDiW4Qm
S Chandra Mouli,Muhammad Alam,Bruno Ribeiro
ICLR 2024,Spotlight
A fundamental challenge in physics-informed machine learning (PIML) is the design of robust PIML methods for out-of-distribution (OOD) forecasting tasks. These OOD tasks require learning-to-learn from observations of the same (ODE) dynamical system with different unknown ODE parameters, and demand accurate forecasts ev...
https://openreview.net/pdf/e8efd660b312112ef0fd22c7e460d8a72eb51253.pdf
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation
https://openreview.net/forum?id=mutJBk3ILg
https://openreview.net/forum?id=mutJBk3ILg
Kimia Hamidieh,Haoran Zhang,Swami Sankaranarayanan,Marzyeh Ghassemi
ICLR 2024,Spotlight
Supervised learning methods have been found to exhibit inductive biases favoring simpler features. When such features are spuriously correlated with the label, this can result in suboptimal performance on minority subgroups. Despite the growing popularity of methods which learn from unlabeled data, the extent to which ...
https://openreview.net/pdf/b3e9f812dd9a2de2308f2211b33b7d419ab89fc1.pdf
Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features
https://openreview.net/forum?id=f6CBQYxXvr
https://openreview.net/forum?id=f6CBQYxXvr
Annie S Chen,Yoonho Lee,Amrith Setlur,Sergey Levine,Chelsea Finn
ICLR 2024,Spotlight
Transfer learning with a small amount of target data is an effective and common approach to adapting a pre-trained model to distribution shifts. In some situations, target data labels may be expensive to obtain, so we may only have access to a limited number of target data points. To make the most of a very small targe...
https://openreview.net/pdf/c7ebd7fa822b912a9fa27ca0702572a707ec85e6.pdf
Implicit bias of SGD in $L_2$-regularized linear DNNs: One-way jumps from high to low rank
https://openreview.net/forum?id=P1aobHnjjj
https://openreview.net/forum?id=P1aobHnjjj
Zihan Wang,Arthur Jacot
ICLR 2024,Spotlight
The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layers has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-un...
https://openreview.net/pdf/c809967fdabfa761319f0239253e78d629fa1684.pdf
Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models
https://openreview.net/forum?id=sn7CYWyavh
https://openreview.net/forum?id=sn7CYWyavh
Ziyu Wang,Lejun Min,Gus Xia
ICLR 2024,Spotlight
Recent deep music generation studies have put much emphasis on long-term generation with structures. However, we are yet to see high-quality, well-structured **whole-song** generation. In this paper, we make the first attempt to model a full music piece under the realization of *compositional hierarchy*. With a focus o...
https://openreview.net/pdf/36e2505cb773c92384616d6a2cc198d112c0cfab.pdf
Evaluating the Zero-shot Robustness of Instruction-tuned Language Models
https://openreview.net/forum?id=g9diuvxN6D
https://openreview.net/forum?id=g9diuvxN6D
Jiuding Sun,Chantal Shaib,Byron C Wallace
ICLR 2024,Spotlight
Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model ...
https://openreview.net/pdf/96fd9e94bdfa38510258729a25b2ba3b5aa4064d.pdf
Critical Learning Periods Emerge Even in Deep Linear Networks
https://openreview.net/forum?id=Aq35gl2c1k
https://openreview.net/forum?id=Aq35gl2c1k
Michael Kleinman,Alessandro Achille,Stefano Soatto
ICLR 2024,Spotlight
Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations. Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems. This sug...
https://openreview.net/pdf/62e86f3312ebf0b894a2af8cffc4f37094ff6695.pdf
MOTOR: A Time-to-Event Foundation Model For Structured Medical Records
https://openreview.net/forum?id=NialiwI2V6
https://openreview.net/forum?id=NialiwI2V6
Ethan Steinberg,Jason Alan Fries,Yizhe Xu,Nigam Shah
ICLR 2024,Spotlight
We present a self-supervised, time-to-event (TTE) foundation model called MOTOR (Many Outcome Time Oriented Representations) which is pretrained on timestamped sequences of events in electronic health records (EHR) and health insurance claims. TTE models are used for estimating the probability distribution of the time ...
https://openreview.net/pdf/4183f6aeee58dddcc690f9265adb21cd0dac6757.pdf
GenSim: Generating Robotic Simulation Tasks via Large Language Models
https://openreview.net/forum?id=OI3RoHoWAN
https://openreview.net/forum?id=OI3RoHoWAN
Lirui Wang,Yiyang Ling,Zhecheng Yuan,Mohit Shridhar,Chen Bao,Yuzhe Qin,Bailin Wang,Huazhe Xu,Xiaolong Wang
ICLR 2024,Spotlight
Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data. However, existing methods for data generation have generally focused on scene-level diversity (e.g., object instances and poses) rather than task-level ...
https://openreview.net/pdf/d84b32393144549665a7888268a368b1eb84b7c3.pdf
Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression
https://openreview.net/forum?id=Ax2yRhCQr1
https://openreview.net/forum?id=Ax2yRhCQr1
Runtian Zhai,Bingbin Liu,Andrej Risteski,J Zico Kolter,Pradeep Kumar Ravikumar
ICLR 2024,Spotlight
Data augmentation is critical to the empirical success of modern self-supervised representation learning, such as contrastive learning and masked language modeling. However, a theoretical understanding of the exact role of the augmentation remains limited. Recent work has built the connection between self-supervised le...
https://openreview.net/pdf/2e845de474870e7f97f44a9beeff24e04dd224b6.pdf
Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
https://openreview.net/forum?id=MO5PiKHELW
https://openreview.net/forum?id=MO5PiKHELW
Angelica Chen,Ravid Shwartz-Ziv,Kyunghyun Cho,Matthew L Leavitt,Naomi Saphra
ICLR 2024,Spotlight
Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that...
https://openreview.net/pdf/8d2d2b9084da09d4b41f5ad2da660350019c5412.pdf
SE(3)-Stochastic Flow Matching for Protein Backbone Generation
https://openreview.net/forum?id=kJFIH23hXb
https://openreview.net/forum?id=kJFIH23hXb
Joey Bose,Tara Akhound-Sadegh,Guillaume Huguet,Kilian FATRAS,Jarrid Rector-Brooks,Cheng-Hao Liu,Andrei Cristian Nica,Maksym Korablyov,Michael M. Bronstein,Alexander Tong
ICLR 2024,Spotlight
The computational design of novel protein structures has the potential to impact numerous scientific disciplines greatly. Toward this goal, we introduce \foldflow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions---i.e. the group $\mat...
https://openreview.net/pdf/2ecf8626dae97e88a2770c4d2e119db485d03748.pdf
DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer
https://openreview.net/forum?id=Ifz3IgsEPX
https://openreview.net/forum?id=Ifz3IgsEPX
Junyuan Hong,Jiachen T. Wang,Chenhui Zhang,Zhangheng LI,Bo Li,Zhangyang Wang
ICLR 2024,Spotlight
Large Language Models (LLMs) have emerged as dominant tools for various tasks, particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information. A practical solution is to host a loc...
https://openreview.net/pdf/6dfeb74c7c420594a6132f6bfe094a53dbf73317.pdf
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks
https://openreview.net/forum?id=PudduufFLa
https://openreview.net/forum?id=PudduufFLa
Marc Rußwurm,Konstantin Klemmer,Esther Rolf,Robin Zbinden,Devis Tuia
ICLR 2024,Spotlight
Learning representations of geographical space is vital for any machine learning model that integrates geolocated data, spanning application domains such as remote sensing, ecology, or epidemiology. Recent work embeds coordinates using sine and cosine projections based on Double Fourier Sphere (DFS) features. These emb...
https://openreview.net/pdf/11eead9eb25de1cd772e111da1b931604a8fe49a.pdf
A General Framework for User-Guided Bayesian Optimization
https://openreview.net/forum?id=NjU0jtXcYn
https://openreview.net/forum?id=NjU0jtXcYn
Carl Hvarfner,Frank Hutter,Luigi Nardi
ICLR 2024,Spotlight
The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization t...
https://openreview.net/pdf/9fcb921b69956d08aba85b088ea1c5d6c9b8c037.pdf
Lemur: Harmonizing Natural Language and Code for Language Agents
https://openreview.net/forum?id=hNhwSmtXRh
https://openreview.net/forum?id=hNhwSmtXRh
Yiheng Xu,Hongjin SU,Chen Xing,Boyu Mi,Qian Liu,Weijia Shi,Binyuan Hui,Fan Zhou,Yitao Liu,Tianbao Xie,Zhoujun Cheng,Siheng Zhao,Lingpeng Kong,Bailin Wang,Caiming Xiong,Tao Yu
ICLR 2024,Spotlight
We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents. The evolution from language chat models to functional language agents demands that models not only master human interaction, reasoning, ...
https://openreview.net/pdf/8ef62990871ebf2cac77dc6ea498085f167f070a.pdf
A path-norm toolkit for modern networks: consequences, promises and challenges
https://openreview.net/forum?id=hiHZVUIYik
https://openreview.net/forum?id=hiHZVUIYik
Antoine Gonon,Nicolas Brisebarre,Elisa Riccietti,Rémi Gribonval
ICLR 2024,Spotlight
This work introduces the first toolkit around path-norms that fully encompasses general DAG ReLU networks with biases, skip connections and any operation based on the extraction of order statistics: max pooling, GroupSort etc. This toolkit notably allows us to establish generalization bounds for modern neural networks ...
https://openreview.net/pdf/6dba7f474e1381840f1c444d21ab27a1c1a22129.pdf
Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages
https://openreview.net/forum?id=Kuh5qgCGCp
https://openreview.net/forum?id=Kuh5qgCGCp
Jinyi Hu,Yuan Yao,Chongyi Wang,SHAN WANG,Yinxu Pan,Qianyu Chen,Tianyu Yu,Hanghao Wu,Yue Zhao,Haoye Zhang,Xu Han,Yankai Lin,Jiao Xue,dahai li,Zhiyuan Liu,Maosong Sun
ICLR 2024,Spotlight
Recently there has been a significant surge in multimodal learning in terms of both image-to-text and text-to-image generation. However, the success is typically limited to English, leaving other languages largely behind. Building a competitive counterpart in other languages is highly challenging due to the low-resourc...
https://openreview.net/pdf/07df702c6aa71499ac1bb0cc1988bd883407f9de.pdf
From Sparse to Soft Mixtures of Experts
https://openreview.net/forum?id=jxpsAj7ltE
https://openreview.net/forum?id=jxpsAj7ltE
Joan Puigcerver,Carlos Riquelme Ruiz,Basil Mustafa,Neil Houlsby
ICLR 2024,Spotlight
Sparse mixture of expert architectures (MoEs) scale model capacity without significant increases in training or inference costs. Despite their success, MoEs suffer from a number of issues: training instability, token dropping, inability to scale the number of experts, or ineffective finetuning. In this work, we propose...
https://openreview.net/pdf/fd68ff38ff599fb1021a7e6add08b00e8fec95b9.pdf
Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives
https://openreview.net/forum?id=rxVBKhyfSo
https://openreview.net/forum?id=rxVBKhyfSo
Shrinivas Ramasubramanian,Harsh Rangwani,Sho Takemori,Kunal Samanta,Yuhei Umeda,Venkatesh Babu Radhakrishnan
ICLR 2024,Spotlight
The rise in internet usage has led to the generation of massive amounts of data, resulting in the adoption of various supervised and semi-supervised machine learning algorithms, which can effectively utilize the colossal amount of data to train models. However, before deploying these models in the real world, these mus...
https://openreview.net/pdf/42154f6a78eb07727368d3e4f20969606728ec4b.pdf
NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation
https://openreview.net/forum?id=6O3Q6AFUTu
https://openreview.net/forum?id=6O3Q6AFUTu
PengFei Zheng,Yonggang Zhang,Zhen Fang,Tongliang Liu,Defu Lian,Bo Han
ICLR 2024,Spotlight
Image interpolation based on diffusion models is promising in creating fresh and interesting images. Advanced interpolation methods mainly focus on spherical linear interpolation, where images are encoded into the noise space and then interpolated for denoising to images. However, existing methods face challenges in ...
https://openreview.net/pdf/9bcefae56342d9cecd1e962c0a0c0cab8b325854.pdf