title: string (lengths 14–163)
category: string (2 classes)
authors: string (lengths 7–859)
abstract: string (lengths 177–2.55k)
paper_link: string (lengths 104–117)
bibtex: string (length 54)
supplemental_link: string (lengths 111–124)
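The seven-field schema above can be modeled as a small record type. A minimal sketch in Python, assuming the flat dump format below (seven lines per record, with the literal string "null" marking a missing supplemental link); `PaperRecord` and `parse_records` are illustrative names, not part of the dataset:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PaperRecord:
    """One row of the papers table, following the seven-column schema above."""
    title: str
    category: str                      # one of 2 classes, e.g. "Main Conference Track"
    authors: str                       # comma-separated author list
    abstract: str                      # may be truncated with "..." in the dump
    paper_link: str
    bibtex: str
    supplemental_link: Optional[str]   # None where the raw dump says "null"

def parse_records(lines: List[str]) -> List[PaperRecord]:
    """Group a flat list of field lines into records, seven fields at a time."""
    records = []
    for i in range(0, len(lines) - len(lines) % 7, 7):
        t, c, a, ab, pl, bx, sl = lines[i:i + 7]
        records.append(PaperRecord(t, c, a, ab, pl, bx,
                                   None if sl == "null" else sl))
    return records

# Example using one record copied from the dump:
rows = [
    "Minimax-Optimal Location Estimation",
    "Main Conference Track",
    "Shivam Gupta, Jasper Lee, Eric Price, Paul Valiant",
    "Location estimation is one of the most basic questions in parametric statistics. ...",
    "https://papers.nips.cc/paper_files/paper/2023/file/02a589ef9a4f6f1e2dcc1cfb3b978a51-Paper-Conference.pdf",
    "https://papers.nips.cc/paper_files/paper/22256-/bibtex",
    "null",
]
recs = parse_records(rows)
print(recs[0].title)              # "Minimax-Optimal Location Estimation"
print(recs[0].supplemental_link)  # None
```

Any trailing lines that do not complete a full seven-field record (such as a truncated final entry) are ignored rather than raising an unpacking error.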
Modelling Cellular Perturbations with the Sparse Additive Mechanism Shift Variational Autoencoder
Main Conference Track
Michael Bereket, Theofanis Karaletsos
Generative models of observations under interventions have been a vibrant topic of interest across machine learning and the sciences in recent years. For example, in drug discovery, there is a need to model the effects of diverse interventions on cells in order to characterize unknown biological mechanisms of action. W...
https://papers.nips.cc/paper_files/paper/2023/file/0001ca33ba34ce0351e4612b744b3936-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20165-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0001ca33ba34ce0351e4612b744b3936-Supplemental-Conference.pdf
Cross-Episodic Curriculum for Transformer Agents
Main Conference Track
Lucy Xiaoyang Shi, Yunfan Jiang, Jake Grigsby, Linxi Fan, Yuke Zhu
We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the learning efficiency and generalization of Transformer agents. Central to CEC is the placement of cross-episodic experiences into a Transformer’s context, which forms the basis of a curriculum. By sequentially structuring online learning trials an...
https://papers.nips.cc/paper_files/paper/2023/file/001608167bb652337af5df0129aeaabd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22418-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/001608167bb652337af5df0129aeaabd-Supplemental-Conference.pdf
PaintSeg: Painting Pixels for Training-free Segmentation
Main Conference Track
Xiang Li, Chung-Ching Lin, Yinpeng Chen, Zicheng Liu, Jinglu Wang, Rita Singh, Bhiksha Raj
The paper introduces PaintSeg, a new unsupervised method for segmenting objects without any training. We propose an adversarial masked contrastive painting (AMCP) process, which creates a contrast between the original image and a painted image in which a masked area is painted using off-the-shelf generative models. Dur...
https://papers.nips.cc/paper_files/paper/2023/file/0021c2cb1b9b6a71ac478ea52a93b25a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19673-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0021c2cb1b9b6a71ac478ea52a93b25a-Supplemental-Conference.zip
Bootstrapping Vision-Language Learning with Decoupled Language Pre-training
Main Conference Track
Yiren Jian, Chongyang Gao, Soroush Vosoughi
We present a novel methodology aimed at optimizing the application of frozen large language models (LLMs) for resource-intensive vision-language (VL) pre-training. The current paradigm uses visual features as prompts to guide language models, with a focus on determining the most relevant visual features for correspondi...
https://papers.nips.cc/paper_files/paper/2023/file/002262941c9edfd472a79298b2ac5e17-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20571-/bibtex
null
Path following algorithms for $\ell_2$-regularized $M$-estimation with approximation guarantee
Main Conference Track
Yunzhang Zhu, Renxiong Liu
Many modern machine learning algorithms are formulated as regularized M-estimation problems, in which a regularization (tuning) parameter controls a trade-off between model fit to the training data and model complexity. To select the ``best'' tuning parameter value that achieves a good trade-off, an approximated soluti...
https://papers.nips.cc/paper_files/paper/2023/file/00296c0e10cd24d415c2db63ea2a2c68-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22675-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/00296c0e10cd24d415c2db63ea2a2c68-Supplemental-Conference.pdf
PDF: Point Diffusion Implicit Function for Large-scale Scene Neural Representation
Main Conference Track
Yuhan Ding, Fukun Yin, Jiayuan Fan, Hui Li, Xin Chen, Wen Liu, Chongshan Lu, Gang Yu, Tao Chen
Recent advances in implicit neural representations have achieved impressive results by sampling and fusing individual points along sampling rays in the sampling space. However, due to the explosively growing sampling space, finely representing and synthesizing detailed textures remains a challenge for unbounded large-s...
https://papers.nips.cc/paper_files/paper/2023/file/0073cc73e1873b35345209b50a3dab66-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22964-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0073cc73e1873b35345209b50a3dab66-Supplemental-Conference.pdf
Natural Actor-Critic for Robust Reinforcement Learning with Function Approximation
Main Conference Track
Ruida Zhou, Tao Liu, Min Cheng, Dileep Kalathil, P. R. Kumar, Chao Tian
We study robust reinforcement learning (RL) with the goal of determining a well-performing policy that is robust against model mismatch between the training simulator and the testing environment. Previous policy-based robust RL algorithms mainly focus on the tabular setting under uncertainty sets that facilitate robust...
https://papers.nips.cc/paper_files/paper/2023/file/007f4927e60699392425f267d43f0940-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19552-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/007f4927e60699392425f267d43f0940-Supplemental-Conference.zip
Adaptive Selective Sampling for Online Prediction with Experts
Main Conference Track
Rui Castro, Fredrik Hellström, Tim van Erven
We consider online prediction of a binary sequence with expert advice. For this setting, we devise label-efficient forecasting algorithms, which use a selective sampling scheme that enables collecting much fewer labels than standard procedures. For the general case without a perfect expert, we prove best-of-both-worlds...
https://papers.nips.cc/paper_files/paper/2023/file/00b67df24009747e8bbed4c2c6f9c825-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19791-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/00b67df24009747e8bbed4c2c6f9c825-Supplemental-Conference.zip
Gigastep - One Billion Steps per Second Multi-agent Reinforcement Learning
Datasets and Benchmarks Track
Mathias Lechner, lianhao yin, Tim Seyde, Tsun-Hsuan Johnson Wang, Wei Xiao, Ramin Hasani, Joshua Rountree, Daniela Rus
Multi-agent reinforcement learning (MARL) research is faced with a trade-off: it either uses complex environments requiring large compute resources, which makes it inaccessible to researchers with limited resources, or relies on simpler dynamics for faster execution, which makes the transferability of the results to mo...
https://papers.nips.cc/paper_files/paper/2023/file/00ba06ba5c324efdfb068865ca44cf0b-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22366-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/00ba06ba5c324efdfb068865ca44cf0b-Supplemental-Datasets_and_Benchmarks.pdf
Attentive Transfer Entropy to Exploit Transient Emergence of Coupling Effect
Main Conference Track
Xiaolei Ru, XINYA ZHANG, Zijia Liu, Jack Murdoch Moore, Gang Yan
We consider the problem of reconstructing coupled networks (e.g., biological neural networks) connecting large numbers of variables (e.g., nerve cells), whose state evolution is governed by dissipative dynamics consisting of a strong self-drive (which dominates the evolution) and a weak coupling-drive. The core difficulty is s...
https://papers.nips.cc/paper_files/paper/2023/file/00bb4e415ef117f2dee2fc3b778d806d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21975-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/00bb4e415ef117f2dee2fc3b778d806d-Supplemental-Conference.zip
PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones
Datasets and Benchmarks Track
Thad Starner, Sean Forbes, Matthew So, David Martin, Rohit Sridhar, Gururaj Deshpande, Sam Sepah, Sahir Shahryar, Khushi Bhardwaj, Tyler Kwok, Daksh Sehgal, Saad Hassan, Bill Neubauer, Sofia Vempala, Alec Tan, Jocelyn Heath, Unnathi Kumar, Priyanka Mosur, Tavenner Hall, Rajandeep Singh, Christopher Cui, Glenn Cameron, ...
PopSign is a smartphone-based bubble-shooter game that helps hearing parents of deaf infants learn sign language. To help parents practice their ability to sign, PopSign is integrating sign language recognition as part of its gameplay. For training the recognizer, we introduce the PopSign ASL v1.0 dataset that collects exa...
https://papers.nips.cc/paper_files/paper/2023/file/00dada608b8db212ea7d9d92b24c68de-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22632-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/00dada608b8db212ea7d9d92b24c68de-Supplemental-Datasets_and_Benchmarks.pdf
Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More
Main Conference Track
Jan Schuchardt, Yan Scholten, Stephan Günnemann
A machine learning model is traditionally considered robust if its prediction remains (almost) constant under input perturbations with small norm. However, real-world tasks like molecular property prediction or point cloud segmentation have inherent equivariances, such as rotation or permutation equivariance. In such t...
https://papers.nips.cc/paper_files/paper/2023/file/00db17c36b5435195760520efa96d99c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21081-/bibtex
null
Self-Supervised Motion Magnification by Backpropagating Through Optical Flow
Main Conference Track
Zhaoying Pan, Daniel Geng, Andrew Owens
This paper presents a simple, self-supervised method for magnifying subtle motions in video: given an input video and a magnification factor, we manipulate the video such that its new optical flow is scaled by the desired amount. To train our model, we propose a loss function that estimates the optical flow of the gene...
https://papers.nips.cc/paper_files/paper/2023/file/00ed9ab006311be67879ecef8f80d7c5-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22090-/bibtex
null
TexQ: Zero-shot Network Quantization with Texture Feature Distribution Calibration
Main Conference Track
Xinrui Chen, Yizhi Wang, Renao YAN, Yiqing Liu, Tian Guan, Yonghong He
Quantization is an effective way to compress neural networks. By reducing the bit width of the parameters, the processing efficiency of neural network models at edge devices can be notably improved. Most conventional quantization methods utilize real datasets to optimize quantization parameters and fine-tune. Due to th...
https://papers.nips.cc/paper_files/paper/2023/file/0113ef4642264adc2e6924a3cbbdf532-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20087-/bibtex
null
Ambient Diffusion: Learning Clean Distributions from Corrupted Data
Main Conference Track
Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alex Dimakis, Adam Klivans
We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples. This problem arises in scientific applications where access to uncorrupted samples is impossible or expensive to acquire. Another benefit of our approach is the ability to train generative models t...
https://papers.nips.cc/paper_files/paper/2023/file/012af729c5d14d279581fc8a5db975a1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21484-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/012af729c5d14d279581fc8a5db975a1-Supplemental-Conference.zip
Scalable Membership Inference Attacks via Quantile Regression
Main Conference Track
Martin Bertran, Shuai Tang, Aaron Roth, Michael Kearns, Jamie H. Morgenstern, Steven Z. Wu
Membership inference attacks are designed to determine, using black box access to trained models, whether a particular example was used in training or not. Membership inference can be formalized as a hypothesis testing problem. The most effective existing attacks estimate the distribution of some test statistic (usuall...
https://papers.nips.cc/paper_files/paper/2023/file/01328d0767830e73a612f9073e9ff15f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20306-/bibtex
null
ESSEN: Improving Evolution State Estimation for Temporal Networks using Von Neumann Entropy
Main Conference Track
Qiyao Huang, Yingyue Zhang, Zhihong Zhang, Edwin Hancock
Temporal networks are widely used as abstract graph representations for real-world dynamic systems. Indeed, recognizing the network evolution states is crucial in understanding and analyzing temporal networks. For instance, social networks will generate the clustering and formation of tightly-knit groups or communities...
https://papers.nips.cc/paper_files/paper/2023/file/0147d967a5db3b8dde08d2a327b24568-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19868-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0147d967a5db3b8dde08d2a327b24568-Supplemental-Conference.pdf
Label Correction of Crowdsourced Noisy Annotations with an Instance-Dependent Noise Transition Model
Main Conference Track
Hui GUO, Boyu Wang, Grace Yi
The predictive ability of supervised learning algorithms hinges on the quality of annotated examples, whose labels often come from multiple crowdsourced annotators with diverse expertise. To aggregate noisy crowdsourced annotations, many existing methods employ an annotator-specific instance-independent noise transitio...
https://papers.nips.cc/paper_files/paper/2023/file/015a8c69bedcb0a7b2ed2e1678f34399-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20354-/bibtex
null
Diffused Task-Agnostic Milestone Planner
Main Conference Track
Mineui Hong, Minjae Kang, Songhwai Oh
Addressing decision-making problems using sequence modeling to predict future trajectories shows promising results in recent years. In this paper, we take a step further to leverage the sequence predictive method in wider areas such as long-term planning, vision-based control, and multi-task decision-making. To this end,...
https://papers.nips.cc/paper_files/paper/2023/file/0163ca1c69f848e766cfb0b7bb7e17f4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20632-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0163ca1c69f848e766cfb0b7bb7e17f4-Supplemental-Conference.zip
Task-aware Distributed Source Coding under Dynamic Bandwidth
Main Conference Track
Po-han Li, Sravan Kumar Ankireddy, Ruihan (Philip) Zhao, Hossein Nourkhiz Mahjoub, Ehsan Moradi Pari, Ufuk Topcu, Sandeep Chinchali, Hyeji Kim
Efficient compression of correlated data is essential to minimize communication overload in multi-sensor networks. In such networks, each sensor independently compresses the data and transmits them to a central node. A decoder at the central node decompresses and passes the data to a pre-trained machine learning-based ...
https://papers.nips.cc/paper_files/paper/2023/file/016c63403370d81c24c1ca0123de6cfa-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20137-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/016c63403370d81c24c1ca0123de6cfa-Supplemental-Conference.pdf
BubbleML: A Multiphase Multiphysics Dataset and Benchmarks for Machine Learning
Datasets and Benchmarks Track
Sheikh Md Shakeel Hassan, Arthur Feeney, Akash Dhruv, Jihoon Kim, Youngjoon Suh, Jaiyoung Ryu, Yoonjin Won, Aparna Chandramowlishwaran
In the field of phase change phenomena, the lack of accessible and diverse datasets suitable for machine learning (ML) training poses a significant challenge. Existing experimental datasets are often restricted, with limited availability and sparse ground truth, impeding our understanding of this complex multiphysics p...
https://papers.nips.cc/paper_files/paper/2023/file/01726ae05d72ddba3ac784a5944fa1ef-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/20423-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/01726ae05d72ddba3ac784a5944fa1ef-Supplemental-Datasets_and_Benchmarks.pdf
ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation
Main Conference Track
Zhuo Chen, Laker Newhouse, Eddie Chen, Di Luo, Marin Soljacic
Quantum many-body physics simulation has important impacts on understanding fundamental science and has applications to quantum materials design and quantum technology. However, due to the exponentially growing size of the Hilbert space with respect to the particle number, a direct simulation is intractable. While repr...
https://papers.nips.cc/paper_files/paper/2023/file/01772a8b0420baec00c4d59fe2fbace6-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19514-/bibtex
null
Causal Effect Identification in Uncertain Causal Networks
Main Conference Track
Sina Akbari, Fateme Jamshidi, Ehsan Mokhtarian, Matthew Vowels, Jalal Etesami, Negar Kiyavash
Causal identification is at the core of the causal inference literature, where complete algorithms have been proposed to identify causal queries of interest. The validity of these algorithms hinges on the restrictive assumption of having access to a correctly specified causal structure. In this work, we study the setti...
https://papers.nips.cc/paper_files/paper/2023/file/017c897b4d85a744f345ccbf9d71e501-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21831-/bibtex
null
FAST: a Fused and Accurate Shrinkage Tree for Heterogeneous Treatment Effects Estimation
Main Conference Track
Jia Gu, Caizhi Tang, Han Yan, Qing Cui, Longfei Li, Jun Zhou
This paper proposes a novel strategy for estimating the heterogeneous treatment effect called the Fused and Accurate Shrinkage Tree ($\mathrm{FAST}$). Our approach utilizes both trial and observational data to improve the accuracy and robustness of the estimator. Inspired by the concept of shrinkage estimation in stat...
https://papers.nips.cc/paper_files/paper/2023/file/01830c92c6558179fa6d7fb1edff692c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20615-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/01830c92c6558179fa6d7fb1edff692c-Supplemental-Conference.pdf
Characterizing Graph Datasets for Node Classification: Homophily-Heterophily Dichotomy and Beyond
Main Conference Track
Oleg Platonov, Denis Kuznedelev, Artem Babenko, Liudmila Prokhorenkova
Homophily is a graph property describing the tendency of edges to connect similar nodes; the opposite is called heterophily. It is often believed that heterophilous graphs are challenging for standard message-passing graph neural networks (GNNs), and much effort has been put into developing efficient methods for this s...
https://papers.nips.cc/paper_files/paper/2023/file/01b681025fdbda8e935a66cc5bb6e9de-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20330-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/01b681025fdbda8e935a66cc5bb6e9de-Supplemental-Conference.zip
Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation
Main Conference Track
Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, Wei-Ying Ma
The generation of 3D molecules requires simultaneously deciding the categorical features (atom types) and continuous features (atom coordinates). Deep generative models, especially Diffusion Models (DMs), have demonstrated effectiveness in generating feature-rich geometries. However, existing DMs typically suffer from ...
https://papers.nips.cc/paper_files/paper/2023/file/01d64478381c33e29ed611f1719f5a37-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22270-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/01d64478381c33e29ed611f1719f5a37-Supplemental-Conference.zip
Hyperbolic VAE via Latent Gaussian Distributions
Main Conference Track
Seunghyuk Cho, Juyong Lee, Dongwoo Kim
We propose a Gaussian manifold variational auto-encoder (GM-VAE) whose latent space consists of a set of Gaussian distributions. It is known that the set of the univariate Gaussian distributions with the Fisher information metric form a hyperbolic space, which we call a Gaussian manifold. To learn the VAE endowed with ...
https://papers.nips.cc/paper_files/paper/2023/file/01ecd39ca49ddecc5729ca996304781b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22775-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/01ecd39ca49ddecc5729ca996304781b-Supplemental-Conference.zip
A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories
Main Conference Track
Kai Yan, Alex Schwing, Yu-Xiong Wang
Offline imitation from observations aims to solve MDPs where only task-specific expert states and task-agnostic non-expert state-action pairs are available. Offline imitation is useful in real-world scenarios where arbitrary interactions are costly and expert actions are unavailable. The state-of-the-art ‘DIstribution ...
https://papers.nips.cc/paper_files/paper/2023/file/0203f489345567b4a048c38f507cdbfa-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20292-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0203f489345567b4a048c38f507cdbfa-Supplemental-Conference.zip
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training
Main Conference Track
Zhenyi Wang, Li Shen, Tongliang Liu, Tiehang Duan, Yanjun Zhu, Donglin Zhan, DAVID DOERMANN, Mingchen Gao
Data-Free Model Extraction (DFME) aims to clone a black-box model without knowing its original training data distribution, making it much easier for attackers to steal commercial models. Defense against DFME faces several challenges: (i) effectiveness; (ii) efficiency; (iii) no prior on the attacker's query data distri...
https://papers.nips.cc/paper_files/paper/2023/file/0207c9ea9faf66c6e892c3fa3c167b75-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22026-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0207c9ea9faf66c6e892c3fa3c167b75-Supplemental-Conference.zip
Large language models transition from integrating across position-yoked, exponential windows to structure-yoked, power-law windows
Main Conference Track
David Skrill, Samuel Norman-Haignere
Modern language models excel at integrating across long temporal scales needed to encode linguistic meaning and show non-trivial similarities to biological neural systems. Prior work suggests that human brain responses to language exhibit hierarchically organized "integration windows" that substantially constrain the o...
https://papers.nips.cc/paper_files/paper/2023/file/020ad0ac6a1974e6748e4a5a48110a07-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22176-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/020ad0ac6a1974e6748e4a5a48110a07-Supplemental-Conference.pdf
Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?
Main Conference Track
Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Tingfan Wu, Jay Vakil, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier
We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual ‘foundation models’ for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous, and mobile manipulation. Next, we systematically evaluate e...
https://papers.nips.cc/paper_files/paper/2023/file/022ca1bed6b574b962c48a2856eb207b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20933-/bibtex
null
Belief Projection-Based Reinforcement Learning for Environments with Delayed Feedback
Main Conference Track
Jangwon Kim, Hangyeol Kim, Jiwook Kang, Jongchan Baek, Soohee Han
We present a novel actor-critic algorithm for an environment with delayed feedback, which addresses the state-space explosion problem of conventional approaches. Conventional approaches use an augmented state constructed from the last observed state and actions executed since visiting the last observed state. Using the...
https://papers.nips.cc/paper_files/paper/2023/file/0252a434b18962c94910c07cd9a7fecc-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21787-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0252a434b18962c94910c07cd9a7fecc-Supplemental-Conference.zip
Batchnorm Allows Unsupervised Radial Attacks
Main Conference Track
Amur Ghose, Apurv Gupta, Yaoliang Yu, Pascal Poupart
The construction of adversarial examples usually requires the existence of soft or hard labels for each instance, with respect to which a loss gradient provides the signal for construction of the example. We show that for batch normalized deep image recognition architectures, intermediate latents that are produced afte...
https://papers.nips.cc/paper_files/paper/2023/file/0266d95023740481d22d437aa8aba0e9-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21559-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0266d95023740481d22d437aa8aba0e9-Supplemental-Conference.zip
Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models
Main Conference Track
Yichao Cao, Qingfei Tang, Xiu Su, Song Chen, Shan You, Xiaobo Lu, Chang Xu
Human-object interaction (HOI) detection aims to comprehend the intricate relationships between humans and objects, predicting triplets, and serving as the foundation for numerous computer vision tasks. The complexity and diversity of human-object interactions in the real world, however, pose significant challenges fo...
https://papers.nips.cc/paper_files/paper/2023/file/02687e7b22abc64e651be8da74ec610e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20272-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02687e7b22abc64e651be8da74ec610e-Supplemental-Conference.pdf
Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models
Main Conference Track
Alex Damian, Eshaan Nichani, Rong Ge, Jason D. Lee
We focus on the task of learning a single index model $\sigma(w^\star \cdot x)$ with respect to the isotropic Gaussian distribution in $d$ dimensions. Prior work has shown that the sample complexity of learning $w^\star$ is governed by the information exponent $k^\star$ of the link function $\sigma$, which is defined a...
https://papers.nips.cc/paper_files/paper/2023/file/02763667a5761ff92bb15d8751bcd223-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19842-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02763667a5761ff92bb15d8751bcd223-Supplemental-Conference.zip
A Scale-Invariant Sorting Criterion to Find a Causal Order in Additive Noise Models
Main Conference Track
Alexander Reisach, Myriam Tami, Christof Seiler, Antoine Chambaz, Sebastian Weichwald
Additive Noise Models (ANMs) are a common model class for causal discovery from observational data. Due to a lack of real-world data for which an underlying ANM is known, ANMs with randomly sampled parameters are commonly used to simulate data for the evaluation of causal discovery algorithms. While some parameters may...
https://papers.nips.cc/paper_files/paper/2023/file/027e86facfe7c1ea52ca1fca7bc1402b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19690-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/027e86facfe7c1ea52ca1fca7bc1402b-Supplemental-Conference.zip
PROTES: Probabilistic Optimization with Tensor Sampling
Main Conference Track
Anastasiia Batsheva, Andrei Chertkov, Gleb Ryzhakov, Ivan Oseledets
We developed a new method PROTES for black-box optimization, which is based on the probabilistic sampling from a probability density function given in the low-parametric tensor train format. We tested it on complex multidimensional arrays and discretized multivariable functions taken, among others, from real-world appl...
https://papers.nips.cc/paper_files/paper/2023/file/028957869e560af14243ac37663a471e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21075-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/028957869e560af14243ac37663a471e-Supplemental-Conference.zip
Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability
Main Conference Track
Junqi Gao, Biqing Qi, Yao Li, Zhichang Guo, Dong Li, Yuming Xing, Dazhi Zhang
The transferability of adversarial perturbations provides an effective shortcut for black-box attacks. Targeted perturbations have greater practicality but are more difficult to transfer between models. In this paper, we experimentally and theoretically demonstrated that neural networks trained on the same dataset have...
https://papers.nips.cc/paper_files/paper/2023/file/028fcbcf85435d39a40c4d61b42c99a4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19716-/bibtex
null
AllSim: Simulating and Benchmarking Resource Allocation Policies in Multi-User Systems
Datasets and Benchmarks Track
Jeroen Berrevoets, Daniel Jarrett, Alex Chan, Mihaela van der Schaar
Numerous real-world systems, ranging from healthcare to energy grids, involve users competing for finite and potentially scarce resources. Designing policies for resource allocation in such real-world systems is challenging for many reasons, including the changing nature of user types and their (possibly urgent) need f...
https://papers.nips.cc/paper_files/paper/2023/file/0296e17ec30fc36007edaaa2f96b5f17-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/20181-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0296e17ec30fc36007edaaa2f96b5f17-Supplemental-Datasets_and_Benchmarks.pdf
AVIS: Autonomous Visual Information Seeking with Large Language Model Agent
Main Conference Track
Ziniu Hu, Ahmet Iscen, Chen Sun, Kai-Wei Chang, Yizhou Sun, David Ross, Cordelia Schmid, Alireza Fathi
In this paper, we propose an autonomous information seeking visual question answering framework, AVIS. Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools and to investigate their outputs via tree search, thereby acquiring the indispensable knowledge needed to p...
https://papers.nips.cc/paper_files/paper/2023/file/029df12a9363313c3e41047844ecad94-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19636-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/029df12a9363313c3e41047844ecad94-Supplemental-Conference.pdf
Conformal Prediction Sets for Ordinal Classification
Main Conference Track
Prasenjit Dey, Srujana Merugu, Sivaramakrishnan R Kaveri
Ordinal classification (OC), i.e., labeling instances along classes with a natural ordering, is common in multiple applications such as size or budget based recommendations and disease severity labeling. Often in practical scenarios, it is desirable to obtain a small set of likely classes with a guaranteed high chanc...
https://papers.nips.cc/paper_files/paper/2023/file/029f699912bf3db747fe110948cc6169-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21757-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/029f699912bf3db747fe110948cc6169-Supplemental-Conference.pdf
Minimax-Optimal Location Estimation
Main Conference Track
Shivam Gupta, Jasper Lee, Eric Price, Paul Valiant
Location estimation is one of the most basic questions in parametric statistics. Suppose we have a known distribution density $f$, and we get $n$ i.i.d. samples from $f(x-\mu)$ for some unknown shift $\mu$. The task is to estimate $\mu$ to high accuracy with high probability. The maximum likelihood estimator (MLE) is kno...
https://papers.nips.cc/paper_files/paper/2023/file/02a589ef9a4f6f1e2dcc1cfb3b978a51-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22256-/bibtex
null
Tight Bounds for Volumetric Spanners and Applications
Main Conference Track
Aditya Bhaskara, Sepideh Mahabadi, Ali Vakilian
Given a set of points of interest, a volumetric spanner is a subset of the points using which all the points can be expressed using "small" coefficients (measured in an appropriate norm). Formally, given a set of vectors $X = [v_1, v_2, \dots, v_n]$, the goal is to find $T \subseteq [n]$ such that every $v \in X$ can b...
https://papers.nips.cc/paper_files/paper/2023/file/02a92b52670752daf17b53f04f1ab405-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19739-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02a92b52670752daf17b53f04f1ab405-Supplemental-Conference.pdf
Wyze Rule: Federated Rule Dataset for Rule Recommendation Benchmarking
Datasets and Benchmarks Track
Mohammad Mahdi Kamani, Yuhang Yao, Hanjia Lyu, Zhongwei Cheng, Lin Chen, Liangju Li, Carlee Joe-Wong, Jiebo Luo
In the rapidly evolving landscape of smart home automation, the potential of IoT devices is vast. In this realm, rules are the main tool for automation: predefined conditions or triggers that establish connections between devices, enabling seamless automation of specific processes. However, one ...
https://papers.nips.cc/paper_files/paper/2023/file/02b9d1e6d1b5295a6f883969ddc1bbbd-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/21966-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02b9d1e6d1b5295a6f883969ddc1bbbd-Supplemental-Datasets_and_Benchmarks.zip
Learning better with Dale’s Law: A Spectral Perspective
Main Conference Track
Pingsheng Li, Jonathan Cornford, Arna Ghosh, Blake Richards
Most recurrent neural networks (RNNs) do not include a fundamental constraint of real neural circuits: Dale's Law, which implies that neurons must be excitatory (E) or inhibitory (I). Dale's Law is generally absent from RNNs because simply partitioning a standard network's units into E and I populations impairs learnin...
https://papers.nips.cc/paper_files/paper/2023/file/02dd0db10c40092de3d9ec2508d12f60-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22650-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02dd0db10c40092de3d9ec2508d12f60-Supplemental-Conference.pdf
Dense-Exponential Random Features: Sharp Positive Estimators of the Gaussian Kernel
Main Conference Track
Valerii Likhosherstov, Krzysztof M Choromanski, Kumar Avinava Dubey, Frederick Liu, Tamas Sarlos, Adrian Weller
The problem of efficient approximation of a linear operator induced by the Gaussian or softmax kernel is often addressed using random features (RFs) which yield an unbiased approximation of the operator's result. Such operators emerge in important applications ranging from kernel methods to efficient Transformers. We p...
https://papers.nips.cc/paper_files/paper/2023/file/02dec8877fb7c6aa9a79f81661baca7c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20164-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/02dec8877fb7c6aa9a79f81661baca7c-Supplemental-Conference.zip
Projection-Free Online Convex Optimization via Efficient Newton Iterations
Main Conference Track
Khashayar Gatmiry, Zak Mhammedi
This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$. Classical OCO algorithms (such as Online Gradient Descent) typically need to perform Euclidean projections onto the convex set $\mathcal{K}$ to ensure feasibility of their ite...
https://papers.nips.cc/paper_files/paper/2023/file/03261886741f1f21f52f2a2d570616a2-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22091-/bibtex
null
Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
Main Conference Track
Yue Wu, Yewen Fan, Paul Pu Liang, Amos Azaria, Yuanzhi Li, Tom M. Mitchell
High sample complexity has long been a challenge for RL. On the other hand, humans learn to perform tasks not only from interaction or demonstrations, but also by reading unstructured text documents, e.g., instruction manuals. Instruction manuals and wiki pages are among the most abundant data that could inform agents ...
https://papers.nips.cc/paper_files/paper/2023/file/034d7bfeace2a9a258648b16fc626298-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20787-/bibtex
null
Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization
Main Conference Track
Kaiyue Wen, Zhiyuan Li, Tengyu Ma
Despite extensive studies, the underlying reason as to why overparameterized neural networks can generalize remains elusive. Existing theory shows that common stochastic optimizers prefer flatter minimizers of the training loss, and thus a natural potential explanation is that flatness implies generalization. This work cr...
https://papers.nips.cc/paper_files/paper/2023/file/0354767c6386386be17cabe4fc59711b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21224-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0354767c6386386be17cabe4fc59711b-Supplemental-Conference.zip
Feature-Learning Networks Are Consistent Across Widths At Realistic Scales
Main Conference Track
Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, Cengiz Pehlevan
We study the effect of width on the dynamics of feature-learning neural networks across a variety of architectures and datasets. Early in training, wide neural networks trained on online data have not only identical loss curves but also agree in their point-wise test predictions throughout training. For simple tasks su...
https://papers.nips.cc/paper_files/paper/2023/file/03600ae6c3392fd65ad7c3a90c6f7ce8-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22545-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/03600ae6c3392fd65ad7c3a90c6f7ce8-Supplemental-Conference.zip
Taylor TD-learning
Main Conference Track
Michele Garibbo, Maxime Robeyns, Laurence Aitchison
Many reinforcement learning approaches rely on temporal-difference (TD) learning to learn a critic. However, TD-learning updates can have high variance. Here, we introduce a model-based RL framework, Taylor TD, which reduces this variance in continuous state-action settings. Taylor TD uses a first-order Taylor series expan...
https://papers.nips.cc/paper_files/paper/2023/file/036912a83bdbb1fd792baf6532f102d8-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22597-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/036912a83bdbb1fd792baf6532f102d8-Supplemental-Conference.pdf
Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability
Main Conference Track
Maciej Falkiewicz, Naoya Takeishi, Imahn Shekhzadeh, Antoine Wehenkel, Arnaud Delaunoy, Gilles Louppe, Alexandros Kalousis
Bayesian inference allows expressing the uncertainty of posterior belief under a probabilistic model given prior information and the likelihood of the evidence. Predominantly, the likelihood function is only implicitly established by a simulator posing the need for simulation-based inference (SBI). However, the existin...
https://papers.nips.cc/paper_files/paper/2023/file/03a9a9c1e15850439653bb971a4ad4b3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21076-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/03a9a9c1e15850439653bb971a4ad4b3-Supplemental-Conference.zip
Agnostic Multi-Group Active Learning
Main Conference Track
Nicholas Rittler, Kamalika Chaudhuri
Inspired by the problem of improving classification accuracy on rare or hard subsets of a population, there has been recent interest in models of learning where the goal is to generalize to a collection of distributions, each representing a ``group''. We consider a variant of this problem from the perspective of active...
https://papers.nips.cc/paper_files/paper/2023/file/03b1043052700b1a471996b0baf309d4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22161-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/03b1043052700b1a471996b0baf309d4-Supplemental-Conference.pdf
Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration
Main Conference Track
Jie Xu, Shuo Chen, Yazhou Ren, Xiaoshuang Shi, Hengtao Shen, Gang Niu, Xiaofeng Zhu
Recently, numerous studies have demonstrated the effectiveness of contrastive learning (CL), which learns feature representations by pulling in positive samples while pushing away negative samples. Many successes of CL lie in that there exists semantic consistency between data augmentations of the same instance. In mul...
https://papers.nips.cc/paper_files/paper/2023/file/03b13b0db740b95cb741e007178ef5e5-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20190-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/03b13b0db740b95cb741e007178ef5e5-Supplemental-Conference.pdf
Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features
Main Conference Track
Mingli Zhu, Shaokui Wei, Hongyuan Zha, Baoyuan Wu
Recent studies have demonstrated the susceptibility of deep neural networks to backdoor attacks. Given a backdoored model, its prediction of a poisoned sample with a trigger will be dominated by the trigger information, though trigger information and benign information coexist. Inspired by the mechanism of the optical po...
https://papers.nips.cc/paper_files/paper/2023/file/03df5246cc78af497940338dd3eacbaa-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22465-/bibtex
null
Tools for Verifying Neural Models' Training Data
Main Conference Track
Dami Choi, Yonadav Shavit, David K. Duvenaud
It is important that consumers and regulators can verify the provenance of large neural models to evaluate their capabilities and risks. We introduce the concept of a "Proof-of-Training-Data": any protocol that allows a model trainer to convince a Verifier of the training data that produced a set of model weights. Such...
https://papers.nips.cc/paper_files/paper/2023/file/03e33e1f62e3302b47fe1d38a235921e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22555-/bibtex
null
Towards Higher Ranks via Adversarial Weight Pruning
Main Conference Track
Yuchuan Tian, Hanting Chen, Tianyu Guo, Chao Xu, Yunhe Wang
Convolutional Neural Networks (CNNs) are hard to deploy on edge devices due to their high computation and storage complexities. As a common practice for model compression, network pruning consists of two major categories: unstructured and structured pruning, where unstructured pruning consistently performs better. However,...
https://papers.nips.cc/paper_files/paper/2023/file/040ace837dd270a87055bb10dd7c0392-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21480-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/040ace837dd270a87055bb10dd7c0392-Supplemental-Conference.pdf
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective
Main Conference Track
Zeke Xie, Zhiqiang Xu, Jingzhao Zhang, Issei Sato, Masashi Sugiyama
Weight decay is a simple yet powerful regularization technique that has been very widely used in training of deep neural networks (DNNs). While weight decay has attracted much attention, previous studies have overlooked some pitfalls related to the large gradient norms caused by weight decay. In this paper, we discov...
https://papers.nips.cc/paper_files/paper/2023/file/040d3b6af368bf71f952c18da5713b48-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19692-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/040d3b6af368bf71f952c18da5713b48-Supplemental-Conference.pdf
Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis
Main Conference Track
Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Taesu Kim, Daehyun Ahn, Hyungjun Kim, jae-joon kim
While diffusion models have demonstrated exceptional image generation capabilities, the iterative noise estimation process required for these models is compute-intensive and their practical implementation is limited by slow sampling speeds. In this paper, we propose a novel approach to speed up the noise estimation net...
https://papers.nips.cc/paper_files/paper/2023/file/04261fce1705c4f02f062866717d592a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21515-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/04261fce1705c4f02f062866717d592a-Supplemental-Conference.zip
Adversarial Model for Offline Reinforcement Learning
Main Conference Track
Mohak Bhardwaj, Tengyang Xie, Byron Boots, Nan Jiang, Ching-An Cheng
We propose a novel model-based offline Reinforcement Learning (RL) framework, called Adversarial Model for Offline Reinforcement Learning (ARMOR), which can robustly learn policies to improve upon an arbitrary reference policy regardless of data coverage. ARMOR is designed to optimize policies for the worst-case perfor...
https://papers.nips.cc/paper_files/paper/2023/file/0429ececfb199efc93182990169e73bb-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19741-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0429ececfb199efc93182990169e73bb-Supplemental-Conference.zip
Training Your Image Restoration Network Better with Random Weight Network as Optimization Function
Main Conference Track
man zhou, Naishan Zheng, Yuan Xu, Chun-Le Guo, Chongyi Li
The blooming progress made in deep learning-based image restoration has been largely attributed to the availability of high-quality, large-scale datasets and advanced network structures. However, optimization functions such as L1 and L2 are still the de facto standard. In this study, we propose to investigate new optimization func...
https://papers.nips.cc/paper_files/paper/2023/file/043f0503c4f652c737add3690aa5d12c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20909-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/043f0503c4f652c737add3690aa5d12c-Supplemental-Conference.pdf
Passive learning of active causal strategies in agents and language models
Main Conference Track
Andrew Lampinen, Stephanie Chan, Ishita Dasgupta, Andrew Nam, Jane Wang
What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited. However, we show that purely passive learning can in fact allow an agent to lea...
https://papers.nips.cc/paper_files/paper/2023/file/045c87def0c02e3ad0d3d849766d7f1e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22193-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/045c87def0c02e3ad0d3d849766d7f1e-Supplemental-Conference.pdf
Zero-Regret Performative Prediction Under Inequality Constraints
Main Conference Track
Wenjing YAN, Xuanyu Cao
Performative prediction is a recently proposed framework where predictions guide decision-making and hence influence future data distributions. Such performative phenomena are ubiquitous in various areas, such as transportation, finance, public policy, and recommendation systems. To date, work on performative predictio...
https://papers.nips.cc/paper_files/paper/2023/file/047397849f63b4fcfced4ff720159f3d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20204-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/047397849f63b4fcfced4ff720159f3d-Supplemental-Conference.pdf
Towards Free Data Selection with General-Purpose Models
Main Conference Track
Yichen Xie, Mingyu Ding, Masayoshi TOMIZUKA, Wei Zhan
A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets. However, current approaches, represented by active learning methods, typically follow a cumbersome pipeline that iterates the time-consuming model training and batch data selec...
https://papers.nips.cc/paper_files/paper/2023/file/047682108c3b053c61ad2da5a6057b4e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21335-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/047682108c3b053c61ad2da5a6057b4e-Supplemental-Conference.pdf
Communication-Efficient Federated Bilevel Optimization with Global and Local Lower Level Problems
Main Conference Track
Junyi Li, Feihu Huang, Heng Huang
Bilevel Optimization has witnessed notable progress recently with new emerging efficient algorithms. However, its application in the Federated Learning setting remains relatively underexplored, and the impact of Federated Learning's inherent challenges on the convergence of bilevel algorithms remains obscure. In this wor...
https://papers.nips.cc/paper_files/paper/2023/file/04bd683d5428d91c5fbb5a7d2c27064d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20233-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/04bd683d5428d91c5fbb5a7d2c27064d-Supplemental-Conference.pdf
Partial Multi-Label Learning with Probabilistic Graphical Disambiguation
Main Conference Track
Jun-Yi Hang, Min-Ling Zhang
In partial multi-label learning (PML), each training example is associated with a set of candidate labels, among which only some labels are valid. As a common strategy to tackle PML problem, disambiguation aims to recover the ground-truth labeling information from such inaccurate annotations. However, existing approach...
https://papers.nips.cc/paper_files/paper/2023/file/04e05ba5cbc36044f6499d1edf15247e-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21623-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/04e05ba5cbc36044f6499d1edf15247e-Supplemental-Conference.pdf
Reward Scale Robustness for Proximal Policy Optimization via DreamerV3 Tricks
Main Conference Track
Ryan Sullivan, Akarsh Kumar, Shengyi Huang, John Dickerson, Joseph Suarez
Most reinforcement learning methods rely heavily on dense, well-normalized environment rewards. DreamerV3 recently introduced a model-based method with a number of tricks that mitigate these limitations, achieving state-of-the-art on a wide range of benchmarks with a single set of hyperparameters. This result sparked d...
https://papers.nips.cc/paper_files/paper/2023/file/04f61ec02d1b3a025a59d978269ce437-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19965-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/04f61ec02d1b3a025a59d978269ce437-Supplemental-Conference.pdf
Emergent Correspondence from Image Diffusion
Main Conference Track
Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, Bharath Hariharan
Finding correspondences between images is a fundamental problem in computer vision. In this paper, we show that correspondence emerges in image diffusion models without any explicit supervision. We propose a simple strategy to extract this implicit knowledge out of diffusion networks as image features, namely DIffusion...
https://papers.nips.cc/paper_files/paper/2023/file/0503f5dce343a1d06d16ba103dd52db1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19892-/bibtex
null
Robust Learning with Progressive Data Expansion Against Spurious Correlation
Main Conference Track
Yihe Deng, Yu Yang, Baharan Mirzasoleiman, Quanquan Gu
While deep learning models have shown remarkable performance in various tasks, they are susceptible to learning non-generalizable _spurious features_ rather than the core features that are genuinely correlated to the true label. In this paper, beyond existing analyses of linear models, we theoretically examine the lear...
https://papers.nips.cc/paper_files/paper/2023/file/0506ad3d1bcc8398a920db9340f27fe4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22024-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0506ad3d1bcc8398a920db9340f27fe4-Supplemental-Conference.pdf
Multiclass Boosting: Simple and Intuitive Weak Learning Criteria
Main Conference Track
Nataly Brukhim, Amit Daniely, Yishay Mansour, Shay Moran
We study a generalization of boosting to the multiclass setting. We introduce a weak learning condition for multiclass classification that captures the original notion of weak learnability as being “slightly better than random guessing”. We give a simple and efficient boosting algorithm, that does not require realizabil...
https://papers.nips.cc/paper_files/paper/2023/file/050f8591be3874b52fdac4e1060eeb29-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20243-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/050f8591be3874b52fdac4e1060eeb29-Supplemental-Conference.pdf
Approximate Heavy Tails in Offline (Multi-Pass) Stochastic Gradient Descent
Main Conference Track
Kruno Lehman, Alain Durmus, Umut Simsekli
A recent line of empirical studies has demonstrated that SGD might exhibit a heavy-tailed behavior in practical settings, and the heaviness of the tails might correlate with the overall performance. In this paper, we investigate the emergence of such heavy tails. Previous works on this problem only considered, up to ou...
https://papers.nips.cc/paper_files/paper/2023/file/0525a72df7fb2cd943c780d059b94774-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19638-/bibtex
null
Uncovering Neural Scaling Laws in Molecular Representation Learning
Datasets and Benchmarks Track
Dingshuo Chen, Yanqiao Zhu, Jieyu Zhang, Yuanqi Du, Zhixun Li, Qiang Liu, Shu Wu, Liang Wang
Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design. While there has been a surge of interest in advancing model-centric techniques, the influence of both data quantity and quality on molecular represe...
https://papers.nips.cc/paper_files/paper/2023/file/052e22cfdd344c79634f7ec76fa03e22-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/19545-/bibtex
null
FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow
Main Conference Track
Cameron Smith, Yilun Du, Ayush Tewari, Vincent Sitzmann
Reconstruction of 3D neural fields from posed images has emerged as a promising method for self-supervised representation learning. The key challenge preventing the deployment of these 3D scene learners on large-scale video data is their dependence on precise camera poses from structure-from-motion, which is prohibitiv...
https://papers.nips.cc/paper_files/paper/2023/file/0534abc9e6db91683d82186ef0d68202-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21853-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0534abc9e6db91683d82186ef0d68202-Supplemental-Conference.zip
Minimum Description Length and Generalization Guarantees for Representation Learning
Main Conference Track
Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski
A major challenge in designing efficient statistical supervised learning algorithms is finding representations that perform well not only on available training samples but also on unseen data. While the study of representation learning has spurred much interest, most existing such approaches are heuristic; and very lit...
https://papers.nips.cc/paper_files/paper/2023/file/054e9f9a286671ababa3213d6e59c1c2-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21923-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/054e9f9a286671ababa3213d6e59c1c2-Supplemental-Conference.pdf
From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion
Main Conference Track
Robin San Roman, Yossi Adi, Antoine Deleforge, Romain Serizel, Gabriel Synnaeve, Alexandre Defossez
Deep generative models can generate high-fidelity audio conditioned on various types of representations (e.g., mel-spectrograms, Mel-frequency Cepstral Coefficients (MFCC)). Recently, such models have been used to synthesize audio waveforms conditioned on highly compressed representations. Although such methods produce imp...
https://papers.nips.cc/paper_files/paper/2023/file/054f771d614df12fe8def8ecdbe4e8e1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21274-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/054f771d614df12fe8def8ecdbe4e8e1-Supplemental-Conference.pdf
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Main Conference Track
Rajat Vadiraj Dwaraknath, Tolga Ergen, Mert Pilanci
Recently, theoretical analyses of deep neural networks have broadly focused on two directions: 1) Providing insight into neural network training by SGD in the limit of infinite hidden-layer width and infinitesimally small learning rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2) Globally o...
https://papers.nips.cc/paper_files/paper/2023/file/055fc19a3ce780b96cff15ffe738c1f1-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21678-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/055fc19a3ce780b96cff15ffe738c1f1-Supplemental-Conference.zip
Birth of a Transformer: A Memory Viewpoint
Main Conference Track
Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Herve Jegou, Leon Bottou
Large language models based on transformers have achieved great empirical successes. However, as they are deployed more widely, there is a growing need to better understand their internal mechanisms in order to make them more reliable. These models appear to store vast amounts of knowledge from their training data, and...
https://papers.nips.cc/paper_files/paper/2023/file/0561738a239a995c8cd2ef0e50cfa4fd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19662-/bibtex
null
A Variational Perspective on High-Resolution ODEs
Main Conference Track
Hoomaan Maskan, Konstantinos Zygalakis, Alp Yurtsever
We consider unconstrained minimization of smooth convex functions. We propose a novel variational perspective using a forced Euler-Lagrange equation that allows for studying high-resolution ODEs. Through this, we obtain a faster convergence rate for gradient norm minimization using Nesterov's accelerated gradient method....
https://papers.nips.cc/paper_files/paper/2023/file/0569458210c88d8db2985799da830d27-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20103-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0569458210c88d8db2985799da830d27-Supplemental-Conference.zip
What You See is What You Read? Improving Text-Image Alignment Evaluation
Main Conference Track
Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, Idan Szpektor
Automatically determining whether a text and a corresponding image are semantically aligned is a significant challenge for vision-language models, with applications in generative text-to-image and image-to-text tasks. In this work, we study methods for automatic text-image alignment evaluation. We first introduce SeeTR...
https://papers.nips.cc/paper_files/paper/2023/file/056e8e9c8ca9929cb6cf198952bf1dbb-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22359-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/056e8e9c8ca9929cb6cf198952bf1dbb-Supplemental-Conference.zip
On the Robustness of Mechanism Design under Total Variation Distance
Main Conference Track
Anuran Makur, Marios Mertzanidis, Alexandros Psomas, Athina Terzoglou
We study the problem of designing mechanisms when agents' valuation functions are drawn from unknown and correlated prior distributions. In particular, we are given a prior distribution $D$, and we are interested in designing a (truthful) mechanism that has good performance for all "true distributions" that are close t...
https://papers.nips.cc/paper_files/paper/2023/file/058983528186511a74968e88a6d0ad63-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20388-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/058983528186511a74968e88a6d0ad63-Supplemental-Conference.pdf
$\mathcal{M}^4$: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models
Datasets and Benchmarks Track
Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, Himabindu Lakkaraju, Haoyi Xiong
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for various models and the lack of ground-truth explanations....
https://papers.nips.cc/paper_files/paper/2023/file/05957c194f4c77ac9d91e1374d2def6b-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22790-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05957c194f4c77ac9d91e1374d2def6b-Supplemental-Datasets_and_Benchmarks.zip
A generative model of the hippocampal formation trained with theta driven local learning rules
Main Conference Track
Tom M George, Kimberly L. Stachenfeld, Caswell Barry, Claudia Clopath, Tomoki Fukai
Advances in generative models have recently revolutionised machine learning. Meanwhile, in neuroscience, generative models have long been thought fundamental to animal intelligence. Understanding the biological mechanisms that support these processes promises to shed light on the relationship between biological and art...
https://papers.nips.cc/paper_files/paper/2023/file/05ab457c7b769f01c2973e2a5ab66ad9-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22627-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05ab457c7b769f01c2973e2a5ab66ad9-Supplemental-Conference.pdf
Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning
Main Conference Track
James Queeney, Mouhacine Benosman
Many real-world domains require safe decision making in uncertain environments. In this work, we introduce a deep reinforcement learning framework for approaching this important problem. We consider a distribution over transition models, and apply a risk-averse perspective towards model uncertainty through the use of c...
https://papers.nips.cc/paper_files/paper/2023/file/05b63fa06784b71aab3939004e0f0a0d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19925-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05b63fa06784b71aab3939004e0f0a0d-Supplemental-Conference.pdf
Optimal approximation using complex-valued neural networks
Main Conference Track
Paul Geuchen, Felix Voigtlaender
Complex-valued neural networks (CVNNs) have recently shown promising empirical success, for instance for increasing the stability of recurrent neural networks and for improving the performance in tasks with complex-valued inputs, such as MRI fingerprinting. While the overwhelming success of Deep Learning in the real-va...
https://papers.nips.cc/paper_files/paper/2023/file/05b69cc4c8ff6e24c5de1ecd27223d37-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22175-/bibtex
null
BayesDAG: Gradient-Based Posterior Inference for Causal Discovery
Main Conference Track
Yashas Annadani, Nick Pawlowski, Joel Jennings, Stefan Bauer, Cheng Zhang, Wenbo Gong
Bayesian causal discovery aims to infer the posterior distribution over causal models from observed data, quantifying epistemic uncertainty and benefiting downstream tasks. However, computational challenges arise due to joint inference over the combinatorial space of Directed Acyclic Graphs (DAGs) and nonlinear functions. ...
https://papers.nips.cc/paper_files/paper/2023/file/05cf28e3d3c9a179d789c55270fe6f72-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19548-/bibtex
null
Bounce: Reliable High-Dimensional Bayesian Optimization for Combinatorial and Mixed Spaces
Main Conference Track
Leonard Papenmeier, Luigi Nardi, Matthias Poloczek
Impactful applications such as materials discovery, hardware design, neural architecture search, or portfolio optimization require optimizing high-dimensional black-box functions with mixed and combinatorial input spaces. While Bayesian optimization has recently made significant progress in solving such problems, an in-...
https://papers.nips.cc/paper_files/paper/2023/file/05d2175de7ee637588d1b5ced8b15b32-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20766-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05d2175de7ee637588d1b5ced8b15b32-Supplemental-Conference.zip
Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent
Main Conference Track
Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
Algorithmic stability is an important notion that has proven powerful for deriving generalization bounds for practical algorithms. The last decade has witnessed an increasing number of stability bounds for different algorithms applied on different classes of loss functions. While these bounds have illuminated various p...
https://papers.nips.cc/paper_files/paper/2023/file/05d6b5b6901fb57d2c287e1d3ce6d63c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21554-/bibtex
null
Towards Generic Semi-Supervised Framework for Volumetric Medical Image Segmentation
Main Conference Track
Haonan Wang, Xiaomeng Li
Volume-wise labeling in 3D medical images is a time-consuming task that requires expertise. As a result, there is growing interest in using semi-supervised learning (SSL) techniques to train models with limited labeled data. However, the challenges and practical applications extend beyond SSL to settings such as unsupe...
https://papers.nips.cc/paper_files/paper/2023/file/05dc08730e32441edff52b0fa6caab5f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19625-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05dc08730e32441edff52b0fa6caab5f-Supplemental-Conference.pdf
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Main Conference Track
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes under the popular $\delta$-similarity and $\mu$-strong convexity conditions. We propose two new algorithms, SVRS and AccSVRS, motivated by previous works. The non-accelerated SVRS method combines the techniques of gradie...
https://papers.nips.cc/paper_files/paper/2023/file/05e552739c2629f3324c1063a382b4bd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22682-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05e552739c2629f3324c1063a382b4bd-Supplemental-Conference.pdf
PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion Models
Main Conference Track
Jiacheng Chen, Ruizhi Deng, Yasutaka Furukawa
This paper presents \textit{PolyDiffuse}, a novel structured reconstruction algorithm that transforms visual sensor data into polygonal shapes with Diffusion Models (DM), an emerging machinery amid exploding generative AI, while formulating reconstruction as a generation process conditioned on sensor data. The task of ...
https://papers.nips.cc/paper_files/paper/2023/file/05f0e2fa003602db2d98ca72b79dec51-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21588-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05f0e2fa003602db2d98ca72b79dec51-Supplemental-Conference.pdf
Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data
Main Conference Track
Boris van Breugel, Nabeel Seedat, Fergus Imrie, Mihaela van der Schaar
Evaluating the performance of machine learning models on diverse and underrepresented subgroups is essential for ensuring fairness and reliability in real-world applications. However, accurately assessing model performance becomes challenging due to two main issues: (1) a scarcity of test data, especially for small sub...
https://papers.nips.cc/paper_files/paper/2023/file/05fb0f4e645cad23e0ab59d6b9901428-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20383-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05fb0f4e645cad23e0ab59d6b9901428-Supplemental-Conference.pdf
Rethinking the Backward Propagation for Adversarial Transferability
Main Conference Track
Wang Xiaosen, Kangheng Tong, Kun He
Transfer-based attacks generate adversarial examples on the surrogate model, which can mislead other black-box models without access, making it promising to attack real-world applications. Recently, several works have been proposed to boost adversarial transferability, in which the surrogate model is usually overlooked...
https://papers.nips.cc/paper_files/paper/2023/file/05fe0c633ae41756540dba2a99a36306-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20937-/bibtex
null
Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition
Datasets and Benchmarks Track
Yiting Dong, Yang Li, Dongcheng Zhao, Guobin Shen, Yi Zeng
The prevalence of violence in daily life poses significant threats to individuals' physical and mental well-being. Using surveillance cameras in public spaces has proven effective in proactively deterring and preventing such incidents. However, concerns regarding privacy invasion have emerged due to their widespread de...
https://papers.nips.cc/paper_files/paper/2023/file/05ffe69463062b7f9fb506c8351ffdd7-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22580-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/05ffe69463062b7f9fb506c8351ffdd7-Supplemental-Datasets_and_Benchmarks.pdf
Compression with Bayesian Implicit Neural Representations
Main Conference Track
Zongyu Guo, Gergely Flamich, Jiajun He, Zhibo Chen, José Miguel Hernández-Lobato
Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image. Based on this view, data can be compressed by overfitting a compact neural network to its functional representation and then encoding the network weights. Howev...
https://papers.nips.cc/paper_files/paper/2023/file/060b2af0081a460f7f466f7f174d9052-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21267-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/060b2af0081a460f7f466f7f174d9052-Supplemental-Conference.pdf
Towards Unbounded Machine Unlearning
Main Conference Track
Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, Eleni Triantafillou
Deep machine unlearning is the problem of 'removing' from a trained neural network a subset of its training set. This problem is very timely and has many applications, including the key tasks of removing biases (RB), resolving confusion (RC) (caused by mislabelled data in trained models), as well as allowing users to e...
https://papers.nips.cc/paper_files/paper/2023/file/062d711fb777322e2152435459e6e9d9-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21511-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/062d711fb777322e2152435459e6e9d9-Supplemental-Conference.zip
Collaborative Learning via Prediction Consensus
Main Conference Track
Dongyang Fan, Celestine Mendler-Dünner, Martin Jaggi
We consider a collaborative learning setting where the goal of each agent is to improve their own model by leveraging the expertise of collaborators, in addition to their own training data. To facilitate the exchange of expertise among agents, we propose a distillation-based method leveraging shared unlabeled auxiliary...
https://papers.nips.cc/paper_files/paper/2023/file/065e259a1d2d955e63b99aac6a3a3081-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22332-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/065e259a1d2d955e63b99aac6a3a3081-Supplemental-Conference.pdf
Identification of Nonlinear Latent Hierarchical Models
Main Conference Track
Lingjing Kong, Biwei Huang, Feng Xie, Eric Xing, Yuejie Chi, Kun Zhang
Identifying latent variables and causal structures from observational data is essential to many real-world applications involving biological data, medical data, and unstructured data such as images and languages. However, this task can be highly challenging, especially when observed variables are generated by causally ...
https://papers.nips.cc/paper_files/paper/2023/file/065ef23a944b3995de7dd4a3e203d133-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20693-/bibtex
null
Sample Efficient Reinforcement Learning in Mixed Systems through Augmented Samples and Its Applications to Queueing Networks
Main Conference Track
Honghao Wei, Xin Liu, Weina Wang, Lei Ying
This paper considers a class of reinforcement learning problems, which involve systems with two types of states: stochastic and pseudo-stochastic. In such systems, stochastic states follow a stochastic transition kernel while the transitions of pseudo-stochastic states are deterministic {\em given} the stochastic state...
https://papers.nips.cc/paper_files/paper/2023/file/0663a39baab211328fc865f91abc75ab-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22719-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0663a39baab211328fc865f91abc75ab-Supplemental-Conference.zip
Temporal Graph Benchmark for Machine Learning on Temporal Graphs
Datasets and Benchmarks Track
Shenyang Huang, Farimah Poursafaei, Jacob Danovitch, Matthias Fey, Weihua Hu, Emanuele Rossi, Jure Leskovec, Michael Bronstein, Guillaume Rabusseau, Reihaneh Rabbany
We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are of large scale, spanning years in duration, incorporate both node and edge-level prediction tasks a...
https://papers.nips.cc/paper_files/paper/2023/file/066b98e63313162f6562b35962671288-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22446-/bibtex
null
Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Federated Object Detection
Main Conference Track
Taehyeon Kim, Eric Lin, Junu Lee, Christian Lau, Vaikkunth Mugunthan
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy. Nevertheless, it faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. To address these hurdles, we nav...
https://papers.nips.cc/paper_files/paper/2023/file/066e4dbfeccb5dc2851acd5eca584937-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22736-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/066e4dbfeccb5dc2851acd5eca584937-Supplemental-Conference.pdf