Schema (per-record fields, listed in the order they appear below):
- title: string, length 14–163
- category: string, 2 classes
- authors: string, length 7–859
- abstract: string, length 177–2.55k
- paper_link: string, length 104–117
- bibtex: string, length 54
- supplemental_link: string, length 111–124 ("null" when absent)
DeepfakeBench: A Comprehensive Benchmark of Deepfake Detection
Datasets and Benchmarks Track
Zhiyuan Yan, Yong Zhang, Xinhang Yuan, Siwei Lyu, Baoyuan Wu
A critical yet frequently overlooked challenge in the field of deepfake detection is the lack of a standardized, unified, comprehensive benchmark. This issue leads to unfair performance comparisons and potentially misleading results. Specifically, there is a lack of uniformity in data processing pipelines, resulting in...
https://papers.nips.cc/paper_files/paper/2023/file/0e735e4b4f07de483cbe250130992726-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/20722-/bibtex
null
DreamWaltz: Make a Scene with Complex 3D Animatable Avatars
Main Conference Track
Yukun Huang, Jianan Wang, Ailing Zeng, He CAO, Xianbiao Qi, Yukai Shi, Zheng-Jun Zha, Lei Zhang
We present DreamWaltz, a novel framework for generating and animating complex 3D avatars given text guidance and parametric human body prior. While recent methods have shown encouraging results for text-to-3D generation of common objects, creating high-quality and animatable 3D avatars remains challenging. To create hi...
https://papers.nips.cc/paper_files/paper/2023/file/0e769ec2c2cd99b6ad69c9d75113e386-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20640-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0e769ec2c2cd99b6ad69c9d75113e386-Supplemental-Conference.zip
Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects
Main Conference Track
Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong
Articulated object manipulation is a fundamental yet challenging task in robotics. Due to significant geometric and semantic variations across object categories, previous manipulation models struggle to generalize to novel categories. Few-shot learning is a promising solution for alleviating this issue by allowing robo...
https://papers.nips.cc/paper_files/paper/2023/file/0e7e2af2e5ba822c9ad35a37b31b5dd4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22802-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0e7e2af2e5ba822c9ad35a37b31b5dd4-Supplemental-Conference.zip
OpenProteinSet: Training data for structural biology at scale
Datasets and Benchmarks Track
Gustaf Ahdritz, Nazim Bouatta, Sachin Kadyan, Lukas Jarosch, Dan Berenberg, Ian Fisk, Andrew Watkins, Stephen Ra, Richard Bonneau, Mohammed AlQuraishi
Multiple sequence alignments (MSAs) of proteins encode rich biological information and have been workhorses in bioinformatic methods for tasks like protein design and protein structure prediction for decades. Recent breakthroughs like AlphaFold2 that use transformers to attend directly over large quantities of raw MSAs...
https://papers.nips.cc/paper_files/paper/2023/file/0eb82171240776fe19da498bef3b1abe-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/19704-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0eb82171240776fe19da498bef3b1abe-Supplemental-Datasets_and_Benchmarks.pdf
Counting Distinct Elements in the Turnstile Model with Differential Privacy under Continual Observation
Main Conference Track
Palak Jain, Iden Kalemaj, Sofya Raskhodnikova, Satchit Sivakumar, Adam Smith
Privacy is a central challenge for systems that learn from sensitive data sets, especially when a system's outputs must be continuously updated to reflect changing data. We consider the achievable error for differentially private continual release of a basic statistic---the number of distinct items---in a stream where...
https://papers.nips.cc/paper_files/paper/2023/file/0ef1afa0daa888d695dcd5e9513bafa3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19665-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0ef1afa0daa888d695dcd5e9513bafa3-Supplemental-Conference.pdf
Demystifying Softmax Gating Function in Gaussian Mixture of Experts
Main Conference Track
Huy Nguyen, TrungTin Nguyen, Nhat Ho
Understanding the parameter estimation of softmax gating Gaussian mixture of experts has remained a long-standing open problem in the literature. It is mainly due to three fundamental theoretical challenges associated with the softmax gating function: (i) the identifiability only up to the translation of parameters; (i...
https://papers.nips.cc/paper_files/paper/2023/file/0ef6ffcb85a2d238fc4761860c31ded4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19509-/bibtex
null
Hybrid Policy Optimization from Imperfect Demonstrations
Main Conference Track
Hanlin Yang, Chao Yu, Peng Sun, Siji Chen
Exploration is one of the main challenges in Reinforcement Learning (RL), especially in environments with sparse rewards. Learning from Demonstrations (LfD) is a promising approach to solving this problem by leveraging expert demonstrations. However, expert demonstrations of high quality are usually costly or even imp...
https://papers.nips.cc/paper_files/paper/2023/file/0f0a30c7b46be23a83317c5cb721fc43-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21727-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f0a30c7b46be23a83317c5cb721fc43-Supplemental-Conference.zip
What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization
Main Conference Track
Hao Sun, Boris van Breugel, Jonathan Crabbé, Nabeel Seedat, Mihaela van der Schaar
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning models. Recent years have seen a steep rise in UQ methods that can flag suspicious examples, however, it is often unclear what exactly these methods identify. In this work, we propose a framework for categorizing uncertain examples f...
https://papers.nips.cc/paper_files/paper/2023/file/0f0c4f3d83c58df58380af3b0729354c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21078-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f0c4f3d83c58df58380af3b0729354c-Supplemental-Conference.pdf
Datasets and Benchmarks for Nanophotonic Structure and Parametric Design Simulations
Datasets and Benchmarks Track
Jungtaek Kim, Mingxuan Li, Oliver Hinder, Paul Leu
Nanophotonic structures have versatile applications including solar cells, anti-reflective coatings, electromagnetic interference shielding, optical filters, and light emitting diodes. To design and understand these nanophotonic structures, electrodynamic simulations are essential. These simulations enable us to model ...
https://papers.nips.cc/paper_files/paper/2023/file/0f12c9975ff4f2e44a5a26ef01b0b249-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/20807-/bibtex
null
Efficient Data Subset Selection to Generalize Training Across Models: Transductive and Inductive Networks
Main Conference Track
Eeshaan Jain, Tushar Nandy, Gaurav Aggarwal, Ashish Tendulkar, Rishabh Iyer, Abir De
Existing subset selection methods for efficient learning predominantly employ discrete combinatorial and model-specific approaches, which lack generalizability--- for each new model, the algorithm has to be executed from the beginning. Therefore, for an unseen architecture, one cannot use the subset chosen for a differ...
https://papers.nips.cc/paper_files/paper/2023/file/0f25eb6e9dc26c933a5d7516abf1eb8c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20881-/bibtex
null
NIS3D: A Completely Annotated Benchmark for Dense 3D Nuclei Image Segmentation
Datasets and Benchmarks Track
Wei Zheng, Cheng Peng, Zeyuan Hou, Boyu Lyu, Mengfan Wang, Xuelong Mi, Shuoxuan Qiao, Yinan Wan, Guoqiang Yu
3D segmentation of nuclei images is a fundamental task for many biological studies. Despite the rapid advances of large-volume 3D imaging acquisition methods and the emergence of sophisticated algorithms to segment the nuclei in recent years, a benchmark with all cells completely annotated is still missing, making it h...
https://papers.nips.cc/paper_files/paper/2023/file/0f2cd3d09a132757555b602e2dd43784-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22037-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f2cd3d09a132757555b602e2dd43784-Supplemental-Datasets_and_Benchmarks.pdf
HiBug: On Human-Interpretable Model Debug
Main Conference Track
Muxi Chen, Yu Li, Qiang Xu
Machine learning models can frequently produce systematic errors on critical subsets (or slices) of data that share common attributes. Discovering and explaining such model bugs is crucial for reliable model deployment. However, existing bug discovery and interpretation methods usually involve heavy human intervention ...
https://papers.nips.cc/paper_files/paper/2023/file/0f53ecc0d36a5d5d3d3e94d42c4b23ca-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20019-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f53ecc0d36a5d5d3d3e94d42c4b23ca-Supplemental-Conference.zip
A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression
Main Conference Track
Tin Sum Cheng, Aurelien Lucchi, Anastasis Kratsios, Ivan Dokmanić, David Belius
Existing statistical learning guarantees for general kernel regressors often yield loose bounds when used with finite-rank kernels. Yet, finite-rank kernels naturally appear in a number of machine learning problems, e.g. when fine-tuning a pre-trained deep neural network's last layer to adapt it to a novel task when pe...
https://papers.nips.cc/paper_files/paper/2023/file/0f580c1ace3b857a390575ca42de7938-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21763-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f580c1ace3b857a390575ca42de7938-Supplemental-Conference.zip
Learning Invariant Representations with a Nonparametric Nadaraya-Watson Head
Main Conference Track
Alan Wang, Minh Nguyen, Mert Sabuncu
Machine learning models will often fail when deployed in an environment with a data distribution that is different than the training distribution. When multiple environments are available during training, many methods exist that learn representations which are invariant across the different distributions, with the hope...
https://papers.nips.cc/paper_files/paper/2023/file/0f6931a9e339a012a9909306d7c758b4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21343-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f6931a9e339a012a9909306d7c758b4-Supplemental-Conference.pdf
Conformalized matrix completion
Main Conference Track
Yu Gui, Rina Barber, Cong Ma
Matrix completion aims to estimate missing entries in a data matrix, using the assumption of a low-complexity structure (e.g., low-rankness) so that imputation is possible. While many effective estimation algorithms exist in the literature, uncertainty quantification for this problem has proved to be challenging, and e...
https://papers.nips.cc/paper_files/paper/2023/file/0f7e4bb7a35dd4cb426203c91a4bfa10-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21309-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0f7e4bb7a35dd4cb426203c91a4bfa10-Supplemental-Conference.zip
Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation
Main Conference Track
Yuyang Deng, Ilja Kuzborskij, Mehrdad Mahdavi
We consider a problem of learning a model from multiple sources with the goal to perform well on a new target distribution. Such problem arises in learning with data collected from multiple sources (e.g. crowdsourcing) or learning in distributed systems, where the data can be highly heterogeneous. The goal of learner is t...
https://papers.nips.cc/paper_files/paper/2023/file/0fa81c3f0d57f95b8776de3a248ef0ed-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20339-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0fa81c3f0d57f95b8776de3a248ef0ed-Supplemental-Conference.pdf
CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer
Main Conference Track
Emaad Khwaja, Yun Song, Aaron Agarunov, Bo Huang
We present CELL-E 2, a novel bidirectional transformer that can generate images depicting protein subcellular localization from the amino acid sequences (and vice versa). Protein localization is a challenging problem that requires integrating sequence and image information, which most existing methods ignore. CELL-E 2 ...
https://papers.nips.cc/paper_files/paper/2023/file/0fb7c02d420c993385c7de44c2b5bf01-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21411-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0fb7c02d420c993385c7de44c2b5bf01-Supplemental-Conference.zip
HeadSculpt: Crafting 3D Head Avatars with Text
Main Conference Track
Xiao Han, Yukang Cao, Kai Han, Xiatian Zhu, Jiankang Deng, Yi-Zhe Song, Tao Xiang, Kwan-Yee K. Wong
Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They ...
https://papers.nips.cc/paper_files/paper/2023/file/0fb98d483fa580e0354bcdd3a003a3f3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21803-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0fb98d483fa580e0354bcdd3a003a3f3-Supplemental-Conference.zip
CBD: A Certified Backdoor Detector Based on Local Dominant Probability
Main Conference Track
Zhen Xiang, Zidi Xiong, Bo Li
Backdoor attack is a common threat to deep neural networks. During testing, samples embedded with a backdoor trigger will be misclassified as an adversarial target by a backdoored model, while samples without the backdoor trigger will be correctly classified. In this paper, we present the first certified backdoor detec...
https://papers.nips.cc/paper_files/paper/2023/file/0fbf046448d7eea18b982001320b9a10-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22363-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0fbf046448d7eea18b982001320b9a10-Supplemental-Conference.pdf
SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models
Main Conference Track
Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, Zhao-Xiang Zhang
Computer end users have spent billions of hours completing daily tasks like tabular data processing and project timeline scheduling. Most of these tasks are repetitive and error-prone, yet most end users lack the skill to automate these burdensome works. With the advent of large language models (LLMs), directing softwa...
https://papers.nips.cc/paper_files/paper/2023/file/0ff30c4bf31db0119a6219e0d250e037-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19798-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0ff30c4bf31db0119a6219e0d250e037-Supplemental-Conference.zip
Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets
Main Conference Track
Zhang-Wei Hong, Aviral Kumar, Sathwik Karnik, Abhishek Bhandwaldar, Akash Srivastava, Joni Pajarinen, Romain Laroche, Abhishek Gupta, Pulkit Agrawal
Offline reinforcement learning (RL) enables learning a decision-making policy without interaction with the environment. This makes it particularly beneficial in situations where such interactions are costly. However, a known challenge for offline RL algorithms is the distributional mismatch between the state-action dis...
https://papers.nips.cc/paper_files/paper/2023/file/0ff3502bb29570b219967278db150a50-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20617-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0ff3502bb29570b219967278db150a50-Supplemental-Conference.zip
Variational Weighting for Kernel Density Ratios
Main Conference Track
Sangwoong Yoon, Frank Park, Gunsu Yun, Iljung Kim, Yung-Kyun Noh
Kernel density estimation (KDE) is integral to a range of generative and discriminative tasks in machine learning. Drawing upon tools from the multidimensional calculus of variations, we derive an optimal weight function that reduces bias in standard kernel density estimates for density ratios, leading to improved esti...
https://papers.nips.cc/paper_files/paper/2023/file/0ff54b4ec4f70b3ae12c8621ca8a49f4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22867-/bibtex
null
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces
Main Conference Track
Odelia Melamed, Gilad Yehudai, Gal Vardi
Despite a great deal of research, it is still not well-understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained using data which lie on a low dimensional linear subspace. We show that standard gradient methods lead to non-robust neura...
https://papers.nips.cc/paper_files/paper/2023/file/0ffd11b5bce666816802b86c77b54cf7-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22852-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/0ffd11b5bce666816802b86c77b54cf7-Supplemental-Conference.pdf
Complexity of Derivative-Free Policy Optimization for Structured $\mathcal{H}_\infty$ Control
Main Conference Track
Xingang Guo, Darioush Keivan, Geir Dullerud, Peter Seiler, Bin Hu
The applications of direct policy search in reinforcement learning and continuous control have received increasing attention. In this work, we present novel theoretical results on the complexity of derivative-free policy optimization on an important class of robust control tasks, namely the structured $H_\infty$ synthes...
https://papers.nips.cc/paper_files/paper/2023/file/1052b823a161aa2c808dd51c0f58dc37-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22271-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1052b823a161aa2c808dd51c0f58dc37-Supplemental-Conference.zip
Meet in the Middle: A New Pre-training Paradigm
Main Conference Track
Anh Nguyen, Nikos Karampatziakis, Weizhu Chen
Most language models (LMs) are trained and applied in an autoregressive left-to-right fashion, predicting the next token from the preceding ones. However, this ignores that the full sequence is available during training. In this paper, we introduce "Meet in the Middle" (MIM), a new pre-training paradigm that improves ...
https://papers.nips.cc/paper_files/paper/2023/file/105fdc31cc9eb927cc5a0110f4031287-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19641-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/105fdc31cc9eb927cc5a0110f4031287-Supplemental-Conference.pdf
Score-based Source Separation with Applications to Digital Communication Signals
Main Conference Track
Tejas Jayashankar, Gary C.F. Lee, Alejandro Lancho, Amir Weiss, Yury Polyanskiy, Gregory Wornell
We propose a new method for separating superimposed sources using diffusion-based generative models. Our method relies only on separately trained statistical priors of independent sources to establish a new objective function guided by $\textit{maximum a posteriori}$ estimation with an $\textit{$\alpha$-posterior}$, a...
https://papers.nips.cc/paper_files/paper/2023/file/106b2434b8d496c6aed9235d478678af-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22879-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/106b2434b8d496c6aed9235d478678af-Supplemental-Conference.pdf
Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Main Conference Track
Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun
Fair Principal Component Analysis (PCA) is a problem setting where we aim to perform PCA while making the resulting representation fair in that the projected distributions, conditional on the sensitive attributes, match one another. However, existing approaches to fair PCA have two main problems: theoretically, there h...
https://papers.nips.cc/paper_files/paper/2023/file/1074541383db5ef12d6ac66d2f8e8d34-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21752-/bibtex
null
DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models
Main Conference Track
Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, Sibei Yang
A long-standing goal of AI systems is to perform complex multimodal reasoning like humans. Recently, large language models (LLMs) have made remarkable strides in such multi-step reasoning on the language modality solely by leveraging the chain of thought (CoT) to mimic human thinking. However, the transfer of these adv...
https://papers.nips.cc/paper_files/paper/2023/file/108030643e640ac050e0ed5e6aace48f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22480-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/108030643e640ac050e0ed5e6aace48f-Supplemental-Conference.pdf
Adversarially Robust Learning with Uncertain Perturbation Sets
Main Conference Track
Tosca Lechner, Vinayak Pathak, Ruth Urner
In many real-world settings exact perturbation sets to be used by an adversary are not plausibly available to a learner. While prior literature has studied both scenarios with completely known and completely unknown perturbation sets, we propose an in-between setting of learning with respect to a class of perturbation ...
https://papers.nips.cc/paper_files/paper/2023/file/1097a0aeaf00cacfa8f6aced24f3a8bd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21570-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1097a0aeaf00cacfa8f6aced24f3a8bd-Supplemental-Conference.pdf
Common Ground in Cooperative Communication
Main Conference Track
Xiaoran Hao, Yash Jhaveri, Patrick Shafto
Cooperative communication plays a fundamental role in theories of human-human interaction--cognition, culture, development, language, etc.--as well as human-robot interaction. The core challenge in cooperative communication is the problem of common ground: having enough shared knowledge and understanding to successfull...
https://papers.nips.cc/paper_files/paper/2023/file/10b7e27c8eb9571fbbd2ae6a9f8c3855-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21847-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/10b7e27c8eb9571fbbd2ae6a9f8c3855-Supplemental-Conference.pdf
Keep Various Trajectories: Promoting Exploration of Ensemble Policies in Continuous Control
Main Conference Track
Chao Li, Chen GONG, Qiang He, Xinwen Hou
The combination of deep reinforcement learning (DRL) with ensemble methods has been proved to be highly effective in addressing complex sequential decision-making problems. This success can be primarily attributed to the utilization of multiple models, which enhances both the robustness of the policy and the accuracy o...
https://papers.nips.cc/paper_files/paper/2023/file/10cb15f4559b3d578b7f24966d48a137-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19863-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/10cb15f4559b3d578b7f24966d48a137-Supplemental-Conference.pdf
ReSync: Riemannian Subgradient-based Robust Rotation Synchronization
Main Conference Track
Huikang Liu, Xiao Li, Anthony Man-Cho So
This work presents ReSync, a Riemannian subgradient-based algorithm for solving the robust rotation synchronization problem, which arises in various engineering applications. ReSync solves a least-unsquared minimization formulation over the rotation group, which is nonsmooth and nonconvex, and aims at recovering the un...
https://papers.nips.cc/paper_files/paper/2023/file/10e9204f14c4daa08041343455435308-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21611-/bibtex
null
On the Exploration of Local Significant Differences For Two-Sample Test
Main Conference Track
Zhijian Zhou, Jie Ni, Jia-He Yao, Wei Gao
Recent years have witnessed increasing attention on two-sample testing with diverse real applications, while this work takes one more step on the exploration of local significant differences for two-sample test. We propose the ME$_\text{MaBiD}$, an effective test for two-sample testing, and the basic idea is to exploit l...
https://papers.nips.cc/paper_files/paper/2023/file/10fc83943b4540a9524af6fc67a23fef-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19802-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/10fc83943b4540a9524af6fc67a23fef-Supplemental-Conference.zip
Fine-Grained Cross-View Geo-Localization Using a Correlation-Aware Homography Estimator
Main Conference Track
Xiaolong Wang, Runsen Xu, Zhuofan Cui, Zeyu Wan, Yu Zhang
In this paper, we introduce a novel approach to fine-grained cross-view geo-localization. Our method aligns a warped ground image with a corresponding GPS-tagged satellite image covering the same area using homography estimation. We first employ a differentiable spherical transform, adhering to geometric principles, to...
https://papers.nips.cc/paper_files/paper/2023/file/112d8e0c7563de6e3408b49a09b4d8a3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21857-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/112d8e0c7563de6e3408b49a09b4d8a3-Supplemental-Conference.pdf
DataPerf: Benchmarks for Data-Centric AI Development
Datasets and Benchmarks Track
Mark Mazumder, Colby Banbury, Xiaozhe Yao, Bojan Karlaš, William Gaviria Rojas, Sudnya Diamos, Greg Diamos, Lynn He, Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Douwe Kiela, David Jurado, David Kanter, Rafael Mosquera, Will Cukierski, Juan Ciro, Lora Aroyo, Bilge Acun, Lingjiao Chen, Mehul Raje, Ma...
Machine learning research has long focused on models rather than datasets, and prominent datasets are used for common ML tasks without regard to the breadth, difficulty, and faithfulness of the underlying problems. Neglecting the fundamental importance of data has given rise to inaccuracy, bias, and fragility in real-w...
https://papers.nips.cc/paper_files/paper/2023/file/112db88215e25b3ae2750e9eefcded94-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/20953-/bibtex
null
Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization
Main Conference Track
Quanqi Hu, Dixian Zhu, Tianbao Yang
This paper investigates new families of compositional optimization problems, called non-smooth weakly-convex finite-sum coupled compositional optimization (NSWC FCCO). There has been a growing interest in FCCO due to its wide-ranging applications in machine learning and AI, as well as its ability to address the shortco...
https://papers.nips.cc/paper_files/paper/2023/file/1160792eab11de2bbaf9e71fce191e8c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20680-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1160792eab11de2bbaf9e71fce191e8c-Supplemental-Conference.zip
Optimal Transport for Treatment Effect Estimation
Main Conference Track
Hao Wang, Jiajun Fan, Zhichao Chen, Haoxuan Li, Weiming Liu, Tianqiao Liu, Quanyu Dai, Yichao Wang, Zhenhua Dong, Ruiming Tang
Estimating individual treatment effects from observational data is challenging due to treatment selection bias. Prevalent methods mainly mitigate this issue by aligning different treatment groups in the latent space, the core of which is the calculation of distribution discrepancy. However, two issues that are often ov...
https://papers.nips.cc/paper_files/paper/2023/file/1160e7f31d0a74abbbe1bbf7924b949c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21490-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1160e7f31d0a74abbbe1bbf7924b949c-Supplemental-Conference.zip
Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
Main Conference Track
Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets, and explore its dependence on ...
https://papers.nips.cc/paper_files/paper/2023/file/1165af8b913fb836c6280b42d6e0084f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22043-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1165af8b913fb836c6280b42d6e0084f-Supplemental-Conference.pdf
Cause-Effect Inference in Location-Scale Noise Models: Maximum Likelihood vs. Independence Testing
Main Conference Track
Xiangyu Sun, Oliver Schulte
A fundamental problem of causal discovery is cause-effect inference, to learn the correct causal direction between two random variables. Significant progress has been made through modelling the effect as a function of its cause and a noise term, which allows us to leverage assumptions about the generating function clas...
https://papers.nips.cc/paper_files/paper/2023/file/11715d433f6f8b9106baae0df023deb3-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21089-/bibtex
null
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models
Datasets and Benchmarks Track
Wenxuan Zhang, Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing
Despite the existence of various benchmarks for evaluating natural language processing models, we argue that human exams are a more suitable means of evaluating general intelligence for large language models (LLMs), as they inherently demand a much wider range of abilities such as language understanding, domain knowled...
https://papers.nips.cc/paper_files/paper/2023/file/117c5c8622b0d539f74f6d1fb082a2e9-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/21055-/bibtex
null
CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders
Main Conference Track
Anthony Fuller, Koreen Millard, James Green
A vital and rapidly growing application, remote sensing offers vast yet sparsely labeled, spatially aligned multimodal data; this makes self-supervised learning algorithms invaluable. We present CROMA: a framework that combines contrastive and reconstruction self-supervised objectives to learn rich unimodal and multimo...
https://papers.nips.cc/paper_files/paper/2023/file/11822e84689e631615199db3b75cd0e4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20458-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/11822e84689e631615199db3b75cd0e4-Supplemental-Conference.zip
OpenAGI: When LLM Meets Domain Experts
Datasets and Benchmarks Track
Yingqiang Ge, Wenyue Hua, Kai Mei, Jianchao Ji, Juntao Tan, Shuyuan Xu, Zelong Li, Yongfeng Zhang
Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI). Large Language Mode...
https://papers.nips.cc/paper_files/paper/2023/file/1190733f217404edc8a7f4e15a57f301-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/19966-/bibtex
null
Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions
Main Conference Track
Ruofan Wu, Jiawei Qiao, Mingzhe Wu, Wen Yu, Ming Zheng, Tengfei LIU, Tianyi Zhang, Weiqiang Wang
We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions. The NFM framework utilizes the classical idea of multiplicative frailty in survival analysis as a principled way of extending the proportional hazard assumption, at the same time being able to leverage t...
https://papers.nips.cc/paper_files/paper/2023/file/11a7f429d75f9f8c6e9c630aeb6524b5-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21494-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/11a7f429d75f9f8c6e9c630aeb6524b5-Supplemental-Conference.zip
Non-autoregressive Machine Translation with Probabilistic Context-free Grammar
Main Conference Track
Shangtong Gui, Chenze Shao, Zhengrui Ma, Xishan Zhang, Yunji Chen, Yang Feng
Non-autoregressive Transformer (NAT) significantly accelerates the inference of neural machine translation. However, conventional NAT models suffer from limited expression power and performance degradation compared to autoregressive (AT) models due to the assumption of conditional independence among target tokens. To ad...
https://papers.nips.cc/paper_files/paper/2023/file/11c7f1dd168439884b6dfb43a7891432-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20876-/bibtex
null
Constrained Policy Optimization with Explicit Behavior Density For Offline Reinforcement Learning
Main Conference Track
Jing Zhang, Chi Zhang, Wenjia Wang, Bingyi Jing
Due to the inability to interact with the environment, offline reinforcement learning (RL) methods face the challenge of estimating the Out-of-Distribution (OOD) points. Existing methods for addressing this issue either control policy to exclude the OOD action or make the $Q$ function pessimistic. However, these method...
https://papers.nips.cc/paper_files/paper/2023/file/11e1900e680f5fe1893a8e27362dbe2c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21268-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/11e1900e680f5fe1893a8e27362dbe2c-Supplemental-Conference.pdf
Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset
Datasets and Benchmarks Track
Saeid Alavi Naeini, Raeid Saqur, Mozhgan Saeidi, John Giorgi, Babak Taati
The quest for human imitative AI has been an enduring topic in AI research since inception. The technical evolution and emerging capabilities of the latest cohort of large language models (LLMs) have reinvigorated the subject beyond academia to cultural zeitgeist. While recent NLP evaluation benchmark tasks test some a...
https://papers.nips.cc/paper_files/paper/2023/file/11e3e0f1b29dcd31bd0952bfc1357f68-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/21625-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/11e3e0f1b29dcd31bd0952bfc1357f68-Supplemental-Datasets_and_Benchmarks.pdf
Formalizing locality for normative synaptic plasticity models
Main Conference Track
Colin Bredenberg, Ezekiel Williams, Cristina Savin, Blake Richards, Guillaume Lajoie
In recent years, many researchers have proposed new models for synaptic plasticity in the brain based on principles of machine learning. The central motivation has been the development of learning algorithms that are able to learn difficult tasks while qualifying as "biologically plausible". However, the concept of a b...
https://papers.nips.cc/paper_files/paper/2023/file/120339238f293d4ae53a7167403abc4b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21103-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/120339238f293d4ae53a7167403abc4b-Supplemental-Conference.pdf
Exact Verification of ReLU Neural Control Barrier Functions
Main Conference Track
Hongchao Zhang, Junlin Wu, Yevgeniy Vorobeychik, Andrew Clark
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems. In CBF-based control, the desired safety properties of the system are mapped to nonnegativity of a CBF, and the control input is chosen to ensure that the CBF remains nonnegative for all time. Recently, machine learning metho...
https://papers.nips.cc/paper_files/paper/2023/file/120ed726cf129dbeb8375b6f8a0686f8-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22009-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/120ed726cf129dbeb8375b6f8a0686f8-Supplemental-Conference.pdf
Normalization-Equivariant Neural Networks with Application to Image Denoising
Main Conference Track
Sébastien Herbreteau, Emmanuel Moebel, Charles Kervrann
In many information processing systems, it may be desirable to ensure that any change of the input, whether by shifting or scaling, results in a corresponding change in the system response. While deep neural networks are gradually replacing all traditional automatic processing methods, they surprisingly do not guaran...
https://papers.nips.cc/paper_files/paper/2023/file/12143893d9d37c3569dda800b95cabd9-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21948-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12143893d9d37c3569dda800b95cabd9-Supplemental-Conference.pdf
Budgeting Counterfactual for Offline RL
Main Conference Track
Yao Liu, Pratik Chaudhari, Rasool Fakoor
The main challenge of offline reinforcement learning, where data is limited, arises from a sequence of counterfactual reasoning dilemmas within the realm of potential actions: What if we were to choose a different course of action? These circumstances frequently give rise to extrapolation errors, which tend to accumula...
https://papers.nips.cc/paper_files/paper/2023/file/121db870b0470dd63bb5bc59c724275a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21755-/bibtex
null
Federated Conditional Stochastic Optimization
Main Conference Track
Xidong Wu, Jianhui Sun, Zhengmian Hu, Junyi Li, Aidong Zhang, Heng Huang
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and meta-learning. As the demand for training models with large-scale distributed data grows in these applications, there is an increasing need for communication-efficien...
https://papers.nips.cc/paper_files/paper/2023/file/1229eaae5bf1db93e1e4c539258eb472-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22715-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1229eaae5bf1db93e1e4c539258eb472-Supplemental-Conference.pdf
LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections
Main Conference Track
Muhammad Jehanzeb Mirza, Leonid Karlinsky, Wei Lin, Horst Possegger, Mateusz Kozinski, Rogerio Feris, Horst Bischof
Recently, large-scale pre-trained Vision and Language (VL) models have set a new state-of-the-art (SOTA) in zero-shot visual classification enabling open-vocabulary recognition of potentially unlimited set of categories defined as simple language prompts. However, despite these great advances, the performance of these ...
https://papers.nips.cc/paper_files/paper/2023/file/123a18dfd821c8b440f42a00a27648d6-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22935-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/123a18dfd821c8b440f42a00a27648d6-Supplemental-Conference.pdf
Contextually Affinitive Neighborhood Refinery for Deep Clustering
Main Conference Track
Chunlin Yu, Ye Shi, Jingya Wang
Previous endeavors in self-supervised learning have enlightened the research of deep clustering from an instance discrimination perspective. Built upon this foundation, recent studies further highlight the importance of grouping semantically similar instances. One effective method to achieve this is by promoting the se...
https://papers.nips.cc/paper_files/paper/2023/file/123cfe7d8b7702ac97aaf4468fc05fa5-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22937-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/123cfe7d8b7702ac97aaf4468fc05fa5-Supplemental-Conference.pdf
Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives
Main Conference Track
Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei Efros, Mathieu Aubry
Given a set of calibrated images of a scene, we present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives. While many approaches focus on recovering high-fidelity 3D scenes, we focus on parsing a scene into mid-level 3D representations made of a small set of t...
https://papers.nips.cc/paper_files/paper/2023/file/123fd8a56501194823c8e0dca00733df-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20052-/bibtex
null
Learning Shared Safety Constraints from Multi-task Demonstrations
Main Conference Track
Konwoo Kim, Gokul Swamy, Zuxin Liu, Ding Zhao, Sanjiban Choudhury, Steven Z. Wu
Regardless of the particular task we want to perform in an environment, there are often shared safety constraints we want our agents to respect. For example, regardless of whether it is making a sandwich or clearing the table, a kitchen robot should not break a plate. Manually specifying such a constraint can be both t...
https://papers.nips.cc/paper_files/paper/2023/file/124dde499d62b58e97e42a45b26d7369-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21155-/bibtex
null
Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
Main Conference Track
Zhengxiang Shi, Aldo Lipani
Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. T...
https://papers.nips.cc/paper_files/paper/2023/file/1289f9195d2ef8cfdfe5f50930c4a7c4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21456-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1289f9195d2ef8cfdfe5f50930c4a7c4-Supplemental-Conference.pdf
GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning
Main Conference Track
Haiteng Zhao, Shengchao Liu, Ma Chang, Hannan Xu, Jie Fu, Zhihong Deng, Lingpeng Kong, Qi Liu
Molecule property prediction has gained significant attention in recent years. The main bottleneck is the label insufficiency caused by expensive lab experiments. In order to alleviate this issue and to better leverage textual knowledge for tasks, this study investigates the feasibility of employing natural language in...
https://papers.nips.cc/paper_files/paper/2023/file/129033c7c08be683059559e8d6bfd460-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22664-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/129033c7c08be683059559e8d6bfd460-Supplemental-Conference.zip
GEX: A flexible method for approximating influence via Geometric Ensemble
Main Conference Track
SungYub Kim, Kyungsu Kim, Eunho Yang
Through a deeper understanding of predictions of neural networks, Influence Function (IF) has been applied to various tasks such as detecting and relabeling mislabeled samples, dataset pruning, and separation of data sources in practice. However, we found standard approximations of IF suffer from performance degradatio...
https://papers.nips.cc/paper_files/paper/2023/file/1297ca5c906f4bada8f5f6f4e80f9dd2-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20608-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1297ca5c906f4bada8f5f6f4e80f9dd2-Supplemental-Conference.pdf
Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management
Main Conference Track
Dhawal Gupta, Yinlam Chow, Azamat Tulepbergenov, Mohammad Ghavamzadeh, Craig Boutilier
Reinforcement learning (RL) has shown great promise for developing agents for dialogue management (DM) that are non-myopic, conduct rich conversations, and maximize overall user satisfaction. Despite the advancements in RL and language models (LMs), employing RL to drive conversational chatbots still poses significant ...
https://papers.nips.cc/paper_files/paper/2023/file/12bcf58a1c09a0fcb5310f3589291ab4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21843-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12bcf58a1c09a0fcb5310f3589291ab4-Supplemental-Conference.pdf
Binary Classification with Confidence Difference
Main Conference Track
Wei Wang, Lei Feng, Yuchen Jiang, Gang Niu, Min-Ling Zhang, Masashi Sugiyama
Recently, learning with soft labels has been shown to achieve better performance than learning with hard labels in terms of model generalization, calibration, and robustness. However, collecting pointwise labeling confidence for all training examples can be challenging and time-consuming in real-world scenarios. This p...
https://papers.nips.cc/paper_files/paper/2023/file/12c118ef87fde56a10bd858842781b34-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19888-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12c118ef87fde56a10bd858842781b34-Supplemental-Conference.zip
On student-teacher deviations in distillation: does it pay to disobey?
Main Conference Track
Vaishnavh Nagarajan, Aditya K. Menon, Srinadh Bhojanapalli, Hossein Mobahi, Sanjiv Kumar
Knowledge distillation (KD) has been widely used to improve the test accuracy of a "student" network, by training it to mimic the soft probabilities of a trained "teacher" network. Yet, it has been shown in recent work that, despite being trained to fit the teacher's probabilities, the student may not only significantl...
https://papers.nips.cc/paper_files/paper/2023/file/12d286282e1be5431ea05262a21f415c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21316-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12d286282e1be5431ea05262a21f415c-Supplemental-Conference.pdf
Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis
Main Conference Track
Victor Letzelter, Mathieu Fontaine, Mickael Chen, Patrick Pérez, Slim Essid, Gaël Richard
We introduce Resilient Multiple Choice Learning (rMCL), an extension of the MCL approach for conditional distribution estimation in regression settings where multiple targets may be sampled for each training input. Multiple Choice Learning is a simple framework to tackle multimodal density estimation, using the Winner-T...
https://papers.nips.cc/paper_files/paper/2023/file/12d7ba753894ed348904df1bf0ce02ec-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21296-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12d7ba753894ed348904df1bf0ce02ec-Supplemental-Conference.zip
Graph of Circuits with GNN for Exploring the Optimal Design Space
Main Conference Track
Aditya Shahane, Saripilli Swapna Manjiri, Ankesh Jain, Sandeep Kumar
The design automation of analog circuits poses significant challenges in terms of the large design space, complex interdependencies between circuit specifications, and resource-intensive simulations. To address these challenges, this paper presents an innovative framework called the Graph of Circuits Explorer (GCX). Le...
https://papers.nips.cc/paper_files/paper/2023/file/12da92b7c64176eb6eb6ad0ae31554fd-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22503-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/12da92b7c64176eb6eb6ad0ae31554fd-Supplemental-Conference.zip
Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data
Main Conference Track
Xin Zheng, Miao Zhang, Chunyang Chen, Quoc Viet Hung Nguyen, Xingquan Zhu, Shirui Pan
Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-scale condensed graph as its substitution, has immediate benefits for various graph learning tasks. However, existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph, and ove...
https://papers.nips.cc/paper_files/paper/2023/file/13183a224208671a6fc33ba1aa661ec4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20559-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/13183a224208671a6fc33ba1aa661ec4-Supplemental-Conference.pdf
Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation
Main Conference Track
Jaemin Cho, Abhay Zala, Mohit Bansal
As large language models have demonstrated impressive performance in many domains, recent works have adopted language models (LMs) as controllers of visual modules for vision-and-language tasks. While existing work focuses on equipping LMs with visual understanding, we propose two novel interpretable/explainable visual...
https://papers.nips.cc/paper_files/paper/2023/file/13250eb13871b3c2c0a0667b54bad165-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21764-/bibtex
null
Auditing Fairness by Betting
Main Conference Track
Ben Chugg, Santiago Cortes-Gomez, Bryan Wilder, Aaditya Ramdas
We provide practical, efficient, and nonparametric methods for auditing the fairness of deployed classification and regression models. Whereas previous work relies on a fixed-sample size, our methods are sequential and allow for the continuous monitoring of incoming data, making them highly amenable to tracking the fai...
https://papers.nips.cc/paper_files/paper/2023/file/1338c277525011f20166cf740952bb47-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21104-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1338c277525011f20166cf740952bb47-Supplemental-Conference.zip
Truly Scale-Equivariant Deep Nets with Fourier Layers
Main Conference Track
Md Ashiqur Rahman, Raymond A. Yeh
In computer vision, models must be able to adapt to changes in image resolution to effectively carry out tasks such as image segmentation; this is known as scale-equivariance. Recent works have made progress in developing scale-equivariant convolutional neural networks, e.g., through weight-sharing and kernel resizing....
https://papers.nips.cc/paper_files/paper/2023/file/1343edb2739a61a6e20bd8764e814b50-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22742-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1343edb2739a61a6e20bd8764e814b50-Supplemental-Conference.pdf
Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem
Main Conference Track
Jincheng Cao, Ruichen Jiang, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Aryan Mokhtari
In this paper, we study a class of stochastic bilevel optimization problems, also known as stochastic simple bilevel optimization, where we minimize a smooth stochastic objective function over the optimal solution set of another stochastic convex optimization problem. We introduce novel stochastic bilevel optimization ...
https://papers.nips.cc/paper_files/paper/2023/file/136729ae4b0fee25a0d28077442506da-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19779-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/136729ae4b0fee25a0d28077442506da-Supplemental-Conference.zip
On the Implicit Bias of Linear Equivariant Steerable Networks
Main Conference Track
Ziyu Chen, Wei Zhu
We study the implicit bias of gradient flow on linear equivariant steerable networks in group-invariant binary classification. Our findings reveal that the parameterized predictor converges in direction to the unique group-invariant classifier with a maximum margin defined by the input group action. Under a unitary ass...
https://papers.nips.cc/paper_files/paper/2023/file/136a45cd9b841bf785625709a19c6508-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22553-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/136a45cd9b841bf785625709a19c6508-Supplemental-Conference.pdf
Memory-Constrained Algorithms for Convex Optimization
Main Conference Track
Moise Blanchard, Junhui Zhang, Patrick Jaillet
We propose a family of recursive cutting-plane algorithms to solve feasibility problems with constrained memory, which can also be used for first-order convex optimization. Precisely, in order to find a point within a ball of radius $\epsilon$ with a separation oracle in dimension $d$---or to minimize $1$-Lipschitz con...
https://papers.nips.cc/paper_files/paper/2023/file/1395b425d06a50e42fafe91cf04f3a98-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19481-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1395b425d06a50e42fafe91cf04f3a98-Supplemental-Conference.pdf
Nonparametric Boundary Geometry in Physics Informed Deep Learning
Main Conference Track
Scott Cameron, Arnu Pretorius, S Roberts
Engineering design problems frequently require solving systems of partial differential equations with boundary conditions specified on object geometries in the form of a triangular mesh. These boundary geometries are provided by a designer and are problem dependent. The efficiency of the design process greatly benefits fro...
https://papers.nips.cc/paper_files/paper/2023/file/13aef57cf532e88c476a10ff372e44e5-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21403-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/13aef57cf532e88c476a10ff372e44e5-Supplemental-Conference.zip
Tracking Most Significant Shifts in Nonparametric Contextual Bandits
Main Conference Track
Joe Suk, Samory Kpotufe
We study nonparametric contextual bandits where Lipschitz mean reward functions may change over time. We first establish the minimax dynamic regret rate in this less understood setting in terms of the number of changes $L$ and total variation $V$, both capturing all changes in distribution over context space, and argue that...
https://papers.nips.cc/paper_files/paper/2023/file/13b501c58ae3bfe9635a259f4414e943-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21797-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/13b501c58ae3bfe9635a259f4414e943-Supplemental-Conference.pdf
Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss
Main Conference Track
An Zhang, Leheng Sheng, Zhibo Cai, Xiang Wang, Tat-Seng Chua
Contrastive Learning (CL) has achieved impressive performance in self-supervised learning tasks, showing superior generalization ability. Inspired by the success, adopting CL into collaborative filtering (CF) is prevailing in semi-supervised top-K recommendations. The basic idea is to routinely conduct heuristic-based d...
https://papers.nips.cc/paper_files/paper/2023/file/13f1750b825659394a6499399e7637fc-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20159-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/13f1750b825659394a6499399e7637fc-Supplemental-Conference.pdf
The Rashomon Importance Distribution: Getting RID of Unstable, Single Model-based Variable Importance
Main Conference Track
Jon Donnelly, Srikar Katta, Cynthia Rudin, Edward Browne
Quantifying variable importance is essential for answering high-stakes questions in fields like genetics, public policy, and medicine. Current methods generally calculate variable importance for a given model trained on a given dataset. However, for a given dataset, there may be many models that explain the target outc...
https://papers.nips.cc/paper_files/paper/2023/file/1403ab1a427050538ec59c7f570aec8b-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20230-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1403ab1a427050538ec59c7f570aec8b-Supplemental-Conference.pdf
Model-Based Control with Sparse Neural Dynamics
Main Conference Track
Ziang Liu, Genggeng Zhou, Jeff He, Tobia Marcucci, Fei-Fei Li, Jiajun Wu, Yunzhu Li
Learning predictive models from observations using deep neural networks (DNNs) is a promising new approach to many real-world planning and control problems. However, common DNNs are too unstructured for effective planning, and current control methods typically rely on extensive sampling or local gradient descent. In th...
https://papers.nips.cc/paper_files/paper/2023/file/142cdba4b8d1e03f9ee131ac86bb0afc-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20781-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/142cdba4b8d1e03f9ee131ac86bb0afc-Supplemental-Conference.zip
AmadeusGPT: a natural language interface for interactive animal behavioral analysis
Main Conference Track
Shaokai Ye, Jessy Lauer, Mu Zhou, Alexander Mathis, Mackenzie Mathis
The process of quantifying and analyzing animal behavior involves translating the naturally occurring descriptive language of their actions into machine-readable code. Yet, codifying behavior analysis is often challenging without deep understanding of animal behavior and technical machine learning knowledge. To limit t...
https://papers.nips.cc/paper_files/paper/2023/file/1456560769bbc38e4f8c5055048ea712-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19972-/bibtex
null
Provably Efficient Algorithm for Nonstationary Low-Rank MDPs
Main Conference Track
Yuan Cheng, Jing Yang, Yingbin Liang
Reinforcement learning (RL) under changing environment models many real-world applications via nonstationary Markov Decision Processes (MDPs), and hence gains considerable interest. However, theoretical studies on nonstationary MDPs in the literature have mainly focused on tabular and linear (mixture) MDPs, which do no...
https://papers.nips.cc/paper_files/paper/2023/file/145c28cd4b1df9b426990fd68045f4f7-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20371-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/145c28cd4b1df9b426990fd68045f4f7-Supplemental-Conference.pdf
Time-uniform confidence bands for the CDF under nonstationarity
Main Conference Track
Paul Mineiro, Steven Howard
Estimation of a complete univariate distribution from a sequence of observations is a useful primitive for both manual and automated decision making. This problem has received extensive attention in the i.i.d. setting, but the arbitrary data dependent setting remains largely unaddressed. We present computationally feli...
https://papers.nips.cc/paper_files/paper/2023/file/148bbc25b934211d80435b5cad5a7198-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20139-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/148bbc25b934211d80435b5cad5a7198-Supplemental-Conference.zip
Risk-Averse Active Sensing for Timely Outcome Prediction under Cost Pressure
Main Conference Track
Yuchao Qin, Mihaela van der Schaar, Changhee Lee
Timely outcome prediction is essential in healthcare to enable early detection and intervention of adverse events. However, in longitudinal follow-ups to patients' health status, cost-efficient acquisition of patient covariates is usually necessary due to the significant expense involved in screening and lab tests. To ...
https://papers.nips.cc/paper_files/paper/2023/file/1498a03a04f9bcd3a7d44058fc5dc639-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20432-/bibtex
null
Single-Pass Pivot Algorithm for Correlation Clustering. Keep it simple!
Main Conference Track
Konstantin Makarychev, Sayak Chakrabarty
We show that a simple single-pass semi-streaming variant of the Pivot algorithm for Correlation Clustering gives a $(3+\epsilon)$-approximation using $O(n/\epsilon)$ words of memory. This is a slight improvement over the recent results of Cambus, Kuhn, Lindy, Pai, and Uitto, who gave a $(3+\epsilon)$-approximation using $O(n \log n)$ words of...
https://papers.nips.cc/paper_files/paper/2023/file/149ad6e32c08b73a3ecc3d11977fcc47-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22992-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/149ad6e32c08b73a3ecc3d11977fcc47-Supplemental-Conference.zip
SPACE: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning
Main Conference Track
Yi-Chung Chen, Hsi-Wen Chen, Shun-Gui Wang, Ming-Syan Chen
The evaluation of participant contribution in federated learning (FL) has recently gained significant attention due to its applicability in various domains, such as incentive mechanisms, robustness enhancement, and client selection. Previous approaches have predominantly relied on the widely adopted Shapley value for p...
https://papers.nips.cc/paper_files/paper/2023/file/14a812fa4b6bf244d055e37a7cd2f557-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21428-/bibtex
null
SAME: Uncovering GNN Black Box with Structure-aware Shapley-based Multipiece Explanations
Main Conference Track
Ziyuan Ye, Rihan Huang, Qilin Wu, Quanying Liu
Post-hoc explanation techniques on graph neural networks (GNNs) provide economical solutions for opening the black-box graph models without model retraining. Many GNN explanation variants have achieved state-of-the-art explaining results on a diverse set of benchmarks, while they rarely provide theoretical analysis for...
https://papers.nips.cc/paper_files/paper/2023/file/14cdc9013d80338bf81483a7736ea05c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19635-/bibtex
null
Federated Learning with Client Subsampling, Data Heterogeneity, and Unbounded Smoothness: A New Algorithm and Lower Bounds
Main Conference Track
Michael Crawshaw, Yajie Bao, Mingrui Liu
We study the problem of Federated Learning (FL) under client subsampling and data heterogeneity with an objective function that has potentially unbounded smoothness. This problem is motivated by empirical evidence that the class of relaxed smooth functions, where the Lipschitz constant of the gradient scales linearly w...
https://papers.nips.cc/paper_files/paper/2023/file/14ecbfb2216bab76195b60bfac7efb1f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22460-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/14ecbfb2216bab76195b60bfac7efb1f-Supplemental-Conference.zip
NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics
Datasets and Benchmarks Track
Anwar Said, Roza Bayrak, Tyler Derr, Mudassir Shabbir, Daniel Moyer, Catie Chang, Xenofon Koutsoukos
Machine learning provides a valuable tool for analyzing high-dimensional functional neuroimaging data, and is proving effective in predicting various neurological conditions, psychiatric disorders, and cognitive patterns. In functional magnetic resonance imaging (fMRI) research, interactions between brain regions are co...
https://papers.nips.cc/paper_files/paper/2023/file/14f656f21d09a4114666f60a45aab1aa-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/21940-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/14f656f21d09a4114666f60a45aab1aa-Supplemental-Datasets_and_Benchmarks.pdf
Quantifying the Cost of Learning in Queueing Systems
Main Conference Track
Daniel Freund, Thodoris Lykouris, Wentao Weng
Queueing systems are widely applicable stochastic models with use cases in communication networks, healthcare, service systems, etc. Although their optimal control has been extensively studied, most existing approaches assume perfect knowledge of the system parameters. Of course, this assumption rarely holds in practic...
https://papers.nips.cc/paper_files/paper/2023/file/1502957929fc4257dd1b6daf7d869c2f-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22861-/bibtex
null
One-Line-of-Code Data Mollification Improves Optimization of Likelihood-based Generative Models
Main Conference Track
Ba-Hien Tran, Giulio Franzese, Pietro Michiardi, Maurizio Filippone
Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision where they are capable to generate impressive realistic-looking images. Likelihood-based GMs are attractive due to the possibility to generate new data by a single model evaluation. ...
https://papers.nips.cc/paper_files/paper/2023/file/1516a7f7507d5550db5c7f29e995ec8c-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22311-/bibtex
null
FLSL: Feature-level Self-supervised Learning
Main Conference Track
Qing Su, Anton Netchaev, Hai Li, Shihao Ji
Current self-supervised learning (SSL) methods (e.g., SimCLR, DINO, VICReg, MOCOv3) primarily target instance-level representations and do not generalize well to dense prediction tasks, such as object detection and segmentation. Towards aligning SSL with dense predictions, this paper demonstrates for the first ti...
https://papers.nips.cc/paper_files/paper/2023/file/15212bd2265c4a3ab0dbc1b1982c1b69-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21181-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/15212bd2265c4a3ab0dbc1b1982c1b69-Supplemental-Conference.pdf
FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
Main Conference Track
Dipam Goswami, Yuyang Liu, Bartłomiej Twardowski, Joost van de Weijer
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks and thus suffers from catastrophic forgetting. Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention...
https://papers.nips.cc/paper_files/paper/2023/file/15294ba2dcfb4521274f7aa1c26f4dd4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19556-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/15294ba2dcfb4521274f7aa1c26f4dd4-Supplemental-Conference.pdf
Learning non-Markovian Decision-Making from State-only Sequences
Main Conference Track
Aoyang Qin, Feng Gao, Qing Li, Song-Chun Zhu, Sirui Xie
Conventional imitation learning assumes access to the actions of demonstrators, but these motor signals are often non-observable in naturalistic settings. Additionally, sequential decision-making behaviors in these settings can deviate from the assumptions of a standard Markov Decision Process (MDP). To address these c...
https://papers.nips.cc/paper_files/paper/2023/file/154926e0b66e2b2a8c1120852f31a12d-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/22856-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/154926e0b66e2b2a8c1120852f31a12d-Supplemental-Conference.pdf
Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts
Main Conference Track
Zeyang Zhang, Xin Wang, Ziwei Zhang, Zhou Qin, Weigao Wen, Hui Xue', Haoyang Li, Wenwu Zhu
Dynamic graph neural networks (DyGNNs) currently struggle with handling distribution shifts that are inherent in dynamic graphs. Existing work on DyGNNs with out-of-distribution settings only focuses on the time domain, failing to handle cases involving distribution shifts in the spectral domain. In this paper, we disco...
https://papers.nips.cc/paper_files/paper/2023/file/154b90fcc9ba3dee96779c05c3108908-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19486-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/154b90fcc9ba3dee96779c05c3108908-Supplemental-Conference.pdf
Efficient Activation Function Optimization through Surrogate Modeling
Main Conference Track
Garrett Bingham, Risto Miikkulainen
Carefully designed activation functions can improve the performance of neural networks in many machine learning tasks. However, it is difficult for humans to construct optimal activation functions, and current activation function search algorithms are prohibitively expensive. This paper aims to improve the state of t...
https://papers.nips.cc/paper_files/paper/2023/file/154d63285d3ed7826e7f026c0b350d69-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19860-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/154d63285d3ed7826e7f026c0b350d69-Supplemental-Conference.zip
Data Market Design through Deep Learning
Main Conference Track
Sai Srivatsa Ravindranath, Yanchen Jiang, David C. Parkes
The data market design problem is a problem in economic theory to find a set of signaling schemes (statistical experiments) to maximize expected revenue to the information seller, where each experiment reveals some of the information known to a seller and has a corresponding price. Each buyer has their own decision to...
https://papers.nips.cc/paper_files/paper/2023/file/1577ea3eaf8dacb99f64e4496c3ecddf-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20547-/bibtex
null
When Visual Prompt Tuning Meets Source-Free Domain Adaptive Semantic Segmentation
Main Conference Track
Xinhong Ma, Yiming Wang, Hao Liu, Tianyu Guo, Yunhe Wang
Source-free domain adaptive semantic segmentation aims to adapt a pre-trained source model to the unlabeled target domain without accessing the private source data. Previous methods usually fine-tune the entire network, which suffers from expensive parameter tuning. To avoid this problem, we propose to utilize visual p...
https://papers.nips.cc/paper_files/paper/2023/file/157c30da6a988e1cbef2095f7b9521db-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20131-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/157c30da6a988e1cbef2095f7b9521db-Supplemental-Conference.pdf
Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase
Datasets and Benchmarks Track
Qiuyu Wang, Zifan Shi, Kecheng Zheng, Yinghao Xu, Sida Peng, Yujun Shen
Despite the rapid advance of 3D-aware image synthesis, existing studies usually adopt a mixture of techniques and tricks, leaving it unclear how each part contributes to the final performance in terms of generality. Following the most popular and effective paradigm in this field, which incorporates a neural radiance fi...
https://papers.nips.cc/paper_files/paper/2023/file/1585da86b5a3c4fb15520a2b3682051f-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22346-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/1585da86b5a3c4fb15520a2b3682051f-Supplemental-Datasets_and_Benchmarks.zip
RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization
Datasets and Benchmarks Track
Zhecheng Yuan, Sizhe Yang, Pu Hua, Can Chang, Kaizhe Hu, Huazhe Xu
Visual Reinforcement Learning (Visual RL), coupled with high-dimensional observations, has consistently confronted the long-standing challenge of out-of-distribution generalization. Despite the focus on algorithms aimed at resolving visual generalization problems, we argue that the devil is in the existing benchmarks a...
https://papers.nips.cc/paper_files/paper/2023/file/15c9f64ec172b046470d2a4d2b7669fc-Paper-Datasets_and_Benchmarks.pdf
https://papers.nips.cc/paper_files/paper/22239-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/15c9f64ec172b046470d2a4d2b7669fc-Supplemental-Datasets_and_Benchmarks.pdf
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Main Conference Track
Ahmed Khaled, Konstantin Mishchenko, Chi Jin
This paper proposes a new easy-to-implement parameter-free gradient-based optimizer: DoWG (Distance over Weighted Gradients). We prove that DoWG is efficient---matching the convergence rate of optimally tuned gradient descent in convex optimization up to a logarithmic factor without tuning any parameters, and universal...
https://papers.nips.cc/paper_files/paper/2023/file/15ce36d35622f126f38e90167de1a350-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20047-/bibtex
null
Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning
Main Conference Track
Pier Giuseppe Sessa, Pierre Laforgue, Nicolò Cesa-Bianchi, Andreas Krause
Multitask learning is a powerful framework that enables one to simultaneously learn multiple related tasks by sharing information between them. Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning. In this work, we provide novel conf...
https://papers.nips.cc/paper_files/paper/2023/file/15d15045f93b44d933a260b249608d43-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/20358-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/15d15045f93b44d933a260b249608d43-Supplemental-Conference.pdf
Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation
Main Conference Track
Nikki Lijing Kuang, Ming Yin, Mengdi Wang, Yu-Xiang Wang, Yian Ma
Recent studies in reinforcement learning (RL) have made significant progress by leveraging function approximation to alleviate the sample complexity hurdle for better performance. Despite the success, existing provably efficient algorithms typically rely on the accessibility of immediate feedback upon taking actions. T...
https://papers.nips.cc/paper_files/paper/2023/file/15d3d4a4bd808605e3a3c1ea0fd0eba4-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/19609-/bibtex
null
Macro Placement by Wire-Mask-Guided Black-Box Optimization
Main Conference Track
Yunqi Shi, Ke Xue, Song Lei, Chao Qian
The development of very large-scale integration (VLSI) technology has posed new challenges for electronic design automation (EDA) techniques in chip floorplanning. During this process, macro placement is an important subproblem, which tries to determine the positions of all macros with the aim of minimizing half-perime...
https://papers.nips.cc/paper_files/paper/2023/file/15d6717f8bb33b3a74df26ce1eee0b9a-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21885-/bibtex
https://papers.nips.cc/paper_files/paper/2023/file/15d6717f8bb33b3a74df26ce1eee0b9a-Supplemental-Conference.zip
Reconciling Competing Sampling Strategies of Network Embedding
Main Conference Track
Yuchen Yan, Baoyu Jing, Lihui Liu, Ruijie Wang, Jinning Li, Tarek Abdelzaher, Hanghang Tong
Network embedding plays a significant role in a variety of applications. To capture the topology of the network, most of the existing network embedding algorithms follow a sampling training procedure, which maximizes the similarity (e.g., embedding vectors' dot product) between positively sampled node pairs and minimiz...
https://papers.nips.cc/paper_files/paper/2023/file/15dc2344ea9bdc01ffb8bb2d692e4018-Paper-Conference.pdf
https://papers.nips.cc/paper_files/paper/21308-/bibtex
null