| platform | venue | year | title | abstract | keywords | areas | tldr | scores | decision | authors | author_ids | cdate | url | platform_id | bibtex | figure_path | figure_number | figure_caption | figure_context | figure_type | confidence |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenReview | ICLR | 2021 | Predicting Infectiousness for Proactive Contact Tracing | The COVID-19 pandemic has spread rapidly worldwide, overwhelming manual contact tracing in many countries and resulting in widespread lockdowns for emergency containment. Large-scale digital contact tracing (DCT) has emerged as a potential solution to resume economic and social activity while minimizing spread of the v... | covid-19, contact tracing, distributed inference, set transformer, deepset, epidemiology, applications, domain randomization, retraining, simulation | | Proposes a framework called Proactive Contact Tracing which uses distributed inference of expected Covid-19 infectiousness to provide individualized, private recommendations. | [7, 7, 9] | Accept (Spotlight) | Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Benjamin Muller, Meng Qu, victor schmidt, Pierre-Luc St-Charles, hannah alsdurf, Olexa Bilaniuk, david buckeridge, gaetan caron, pierre luc carrier, Joumana Ghosn, satya ortiz gagne, Christopher Pal, Irina Rish, Bernhard Schö... | ~Yoshua_Bengio1, ~Prateek_Gupta2, ~Tegan_Maharaj1, ~Nasim_Rahaman1, ~Martin_Weiss4, ~Tristan_Deleu1, ~Eilif_Benjamin_Muller1, ~Meng_Qu2, victor.schmidt@mila.quebec, ~Pierre-Luc_St-Charles3, halsdurf@uottawa.ca, ~Olexa_Bilaniuk1, david.buckeridge@mcgill.ca, gaetan.marceau.caron@mila.quebec, pierre.luc.carrier@mila.quebe... | 20200928 | https://openreview.net/forum?id=lVgB2FUbzuQ | lVgB2FUbzuQ | @inproceedings{ bengio2021predicting, title={Predicting Infectiousness for Proactive Contact Tracing}, author={Yoshua Bengio and Prateek Gupta and Tegan Maharaj and Nasim Rahaman and Martin Weiss and Tristan Deleu and Eilif Benjamin Muller and Meng Qu and victor schmidt and Pierre-Luc St-Charles and hannah alsdurf and ... | OpenReview/ICLR/figures/2021/accept_spotlight/lVgB2FUbzuQ/Figure2.png | 2 | Figure 2: PCT model architecture. Diagram showing Left: The embedding network combining | <paragraph_1>To these ends, we construct a general architectural scaffold in which any neural network that maps between sets may be used. In this work, we experiment with Set Transformers (Lee et al., 2018) and a variant of Deep Sets (Zaheer et al., 2017). The former is a variant of Transformers (Vaswani et al., 2017),... | diagram | 0.990317 |
| OpenReview | ICLR | 2021 | MARS: Markov Molecular Sampling for Multi-objective Drug Discovery | Searching for novel molecules with desired chemical properties is crucial in drug discovery. Existing work focuses on developing neural models to generate either molecular sequences or chemical graphs. However, it remains a big challenge to find novel and diverse compounds satisfying several properties. In this paper, ... | drug discovery, molecular graph generation, MCMC sampling | | In this paper, we propose a self-adaptive MCMC sampling method (MARS) to generate molecules targeting multiple objectives for multi-objective drug discovery. | [4, 7, 6, 8] | Accept (Spotlight) | Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, Lei Li | ~Yutong_Xie3, ~Chence_Shi1, zhouhao.nlp@bytedance.com, yuwei.yang@bytedance.com, ~Weinan_Zhang1, ~Yong_Yu1, ~Lei_Li11 | 20200928 | https://openreview.net/forum?id=kHSu4ebxFXY | kHSu4ebxFXY | @inproceedings{ xie2021mars, title={{MARS}: Markov Molecular Sampling for Multi-objective Drug Discovery}, author={Yutong Xie and Chence Shi and Hao Zhou and Yuwei Yang and Weinan Zhang and Yong Yu and Lei Li}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.... | OpenReview/ICLR/figures/2021/accept_spotlight/kHSu4ebxFXY/Figure1.png | 1 | Figure 1: The framework of MARS. During the sampling process: (a) starting from an arbitrary initial molecule x(0) in the molecular space X , (b) sampling a candidate molecule x′ ∈ X from the proposal distribution q(x′ \| x(t−1)) at each step, and (c/d) the candidate x′ is either accepted or rejected according to the ac... | <paragraph_1>Specifically, as shown in Figure 1, the sampling procedure of MARS starts from an initial molecule x(0) ∈ X. At each time step t, a molecule candidate x′ ∈ X will be sampled from the proposal distribution q(x′ \| x(t−1)), where x(t−1) denotes the molecule at time step t−1. Then the proposed candidate x′ could... | diagram | 0.995599 |
| OpenReview | ICLR | 2021 | HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark | HardWare-aware Neural Architecture Search (HW-NAS) has recently gained tremendous attention by automating the design of deep neural networks deployed in more resource-constrained daily life devices. Despite its promising performance, developing optimal HW-NAS solutions can be prohibitively challenging as it requires cr... | Hardware-Aware Neural Architecture Search, AutoML, Benchmark | | A Hardware-Aware Neural Architecture Search Benchmark | [7, 6, 7, 7] | Accept (Spotlight) | Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, Yingyan Lin | ~Chaojian_Li1, ~Zhongzhi_Yu1, ~Yonggan_Fu1, ~Yongan_Zhang1, ~Yang_Zhao1, ~Haoran_You1, ~Qixuan_Yu1, ~Yue_Wang3, hc.onioncc@gmail.com, ~Yingyan_Lin1 | 20200928 | https://openreview.net/forum?id=_0kaDkv3dVf | _0kaDkv3dVf | @inproceedings{ li2021hwnasbench, title={{HW}-{NAS}-Bench: Hardware-Aware Neural Architecture Search Benchmark}, author={Chaojian Li and Zhongzhi Yu and Yonggan Fu and Yongan Zhang and Yang Zhao and Haoran You and Qixuan Yu and Yue Wang and Cong Hao and Yingyan Lin}, booktitle={International Conference on L... | OpenReview/ICLR/figures/2021/accept_spotlight/_0kaDkv3dVf/Figure2.png | 2 | Figure 2: Illustrating the hardware-cost collection pipeline applicable to various hardware devices. | <paragraph_1>To collect the hardware-cost data for all the architectures in both the NAS-Bench-201 and FBNet search spaces, we construct a generic hardware-cost collection pipeline (see Figure 2) to automate the process. The pipeline mainly consists of the target devices and corresponding deployment tools (e.g., compil... | diagram | 0.99708 |
| OpenReview | ICLR | 2021 | Mathematical Reasoning via Self-supervised Skip-tree Training | We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. To measure the logical reasoning abilities of language models, we formulate several evaluation (downstream) tasks, such as inferring types, suggesting missing assumptions and completing equalities. For trai... | self-supervised learning, mathematics, reasoning, theorem proving, language modeling | | We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. | [7, 7, 7, 7] | Accept (Spotlight) | Markus Norman Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy | ~Markus_Norman_Rabe1, ~Dennis_Lee1, ~Kshitij_Bansal1, ~Christian_Szegedy1 | 20200928 | https://openreview.net/forum?id=YmqAnY0CMEy | YmqAnY0CMEy | @inproceedings{ rabe2021mathematical, title={Mathematical Reasoning via Self-supervised Skip-tree Training}, author={Markus Norman Rabe and Dennis Lee and Kshitij Bansal and Christian Szegedy}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=YmqAnY0CME... | OpenReview/ICLR/figures/2021/accept_spotlight/YmqAnY0CMEy/Figure2.png | 2 | Figure 2: The skip-tree training task for the example of the equality operator on boolean constants (original formula). In this example we assume that a part of the type was sampled to be the subexpression to be predicted, and that subexpression c was sampled to be masked out additionally. Note the input to the decoder... | <paragraph_1>In this section we define the skip-tree training task. We parse a given mathematical statement into a tree of subexpressions, and replace one of the subexpressions by a <PREDICT> token. The task is to predict the subexpression replaced by <PREDICT>. See Figure 2 for an example.</paragraph_1> | diagram | 0.996334 |
| OpenReview | ICLR | 2021 | Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | In Machine Learning as a Service, a provider trains a deep neural network and gives many users access. The hosted (source) model is susceptible to model stealing attacks, where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust... | Fingerprinting, Adversarial Examples, Transferability, Conferrability | | Proposal of a new property called "conferrability" for adversarial examples that we use as a method for DNN fingerprinting robust to model extraction. | [6, 6, 7, 6] | Accept (Spotlight) | Nils Lukas, Yuxuan Zhang, Florian Kerschbaum | ~Nils_Lukas1, ~Yuxuan_Zhang1, ~Florian_Kerschbaum1 | 20200928 | https://openreview.net/forum?id=VqzVhqxkjH1 | VqzVhqxkjH1 | @inproceedings{ lukas2021deep, title={Deep Neural Network Fingerprinting by Conferrable Adversarial Examples}, author={Nils Lukas and Yuxuan Zhang and Florian Kerschbaum}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=VqzVhqxkjH1} } | OpenReview/ICLR/figures/2021/accept_spotlight/VqzVhqxkjH1/Figure7.png | 7 | Figure 7: A schematic illustration of the source model and the two types of models, the surrogate S and the reference model R, that a fingerprint verification should distinguish. ‘Distill’ is any distillation attack that results in a surrogate model with similar performance as the source model and ‘Classify’ returns th... | <paragraph_1>Note that our definition of a robust fingerprint differs from related work (Cao et al., 2019) because we include in the set of removal attacks A also model extraction attacks. Fig. 7 schematically shows the types of models that a fingerprint verification should distinguish.</paragraph_1> | diagram | 0.998589 |
| OpenReview | ICLR | 2021 | Implicit Normalizing Flows | Normalizing flows define a probability distribution by an explicit invertible transformation $\boldsymbol{\mathbf{z}}=f(\boldsymbol{\mathbf{x}})$. In this work, we present implicit normalizing flows (ImpFlows), which generalize normalizing flows by allowing the mapping to be implicitly defined by the roots of an equati... | Normalizing flows, deep generative models, probabilistic inference, implicit functions | | We generalize normalizing flows, allowing the mapping to be implicitly defined by the roots of an equation and enlarging the expressiveness power while retaining the tractability. | [8, 7, 7, 8] | Accept (Spotlight) | Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu | ~Cheng_Lu5, ~Jianfei_Chen1, ~Chongxuan_Li1, ~Qiuhao_Wang1, ~Jun_Zhu2 | 20200928 | https://openreview.net/forum?id=8PS8m9oYtNy | 8PS8m9oYtNy | @inproceedings{ lu2021implicit, title={Implicit Normalizing Flows}, author={Cheng Lu and Jianfei Chen and Chongxuan Li and Qiuhao Wang and Jun Zhu}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=8PS8m9oYtNy} } | OpenReview/ICLR/figures/2021/accept_spotlight/8PS8m9oYtNy/Figure1.png | 1 | Figure 1: An illustration of our main theoretical results on the expressiveness power of ImpFlows and ResFlows. Panel (a) and Panel (b) correspond to results in Sec. 4.2 and Sec. 4.3 respectively. | <paragraph_1>We first present some preliminaries on Lipschitz continuous functions in Sec. 4.1 and then formally study the expressiveness power of ImpFlows, especially in comparison to ResFlows. In particular, we prove that the function space of ImpFlows is strictly richer than that of ResFlows in Sec. 4.2 (see an illus... | diagram | 0.949295 |
| OpenReview | ICLR | 2021 | Learning from Protein Structure with Geometric Vector Perceptrons | Learning on 3D structures of large biomolecules is emerging as a distinct area in machine learning, but there has yet to emerge a unifying network architecture that simultaneously leverages the geometric and relational aspects of the problem domain. To address this gap, we introduce geometric vector perceptrons, which ... | structural biology, graph neural networks, proteins, geometric deep learning | | We introduce a novel graph neural network layer to learn from the structure of macromolecules. | [7, 10, 6, 6] | Accept (Spotlight) | Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, Ron Dror | ~Bowen_Jing1, ~Stephan_Eismann1, psuriana@stanford.edu, ~Raphael_John_Lamarre_Townshend1, ~Ron_Dror1 | 20200928 | https://openreview.net/forum?id=1YLJDvSx6J4 | 1YLJDvSx6J4 | @inproceedings{ jing2021learning, title={Learning from Protein Structure with Geometric Vector Perceptrons}, author={Bowen Jing and Stephan Eismann and Patricia Suriana and Raphael John Lamarre Townshend and Ron Dror}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openrevie... | OpenReview/ICLR/figures/2021/accept_spotlight/1YLJDvSx6J4/Figure1.png | 1 | Figure 1: (A) Schematic of the geometric vector perceptron illustrating Algorithm 1. Given a tuple of scalar and vector input features (s,V), the perceptron computes an updated tuple (s′,V′). s′ is a function of both s and V. (B) Illustration of the structure-based prediction tasks. In computational protein design (top... | <paragraph_1>The geometric vector perceptron is a simple module for learning vector-valued and scalar-valued functions over geometric vectors and scalars. That is, given a tuple (s, V) of scalar features s ∈ Rn and vector features V ∈ Rν×3, we compute new features (s′, V′) ∈ Rm × Rµ×3. The computation is illustrated in Fi... | diagram | 0.923749 |
| OpenReview | ICLR | 2021 | Learning to Reach Goals via Iterated Supervised Learning | Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study... | goal reaching, reinforcement learning, behavior cloning, goal-conditioned RL | | We present GCSL, a simple RL method that uses supervised learning to learn goal-reaching policies. | [8, 7, 8, 7] | Accept (Oral) | Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Manon Devin, Benjamin Eysenbach, Sergey Levine | ~Dibya_Ghosh1, ~Abhishek_Gupta1, ~Ashwin_Reddy1, ~Justin_Fu1, ~Coline_Manon_Devin1, ~Benjamin_Eysenbach1, ~Sergey_Levine1 | 20200928 | https://openreview.net/forum?id=rALA0Xo6yNJ | rALA0Xo6yNJ | @inproceedings{ ghosh2021learning, title={Learning to Reach Goals via Iterated Supervised Learning}, author={Dibya Ghosh and Abhishek Gupta and Ashwin Reddy and Justin Fu and Coline Manon Devin and Benjamin Eysenbach and Sergey Levine}, booktitle={International Conference on Learning Representations}, year={2021}, url=... | OpenReview/ICLR/figures/2021/accept_oral/rALA0Xo6yNJ/Figure1.png | 1 | Figure 1: Goal-conditioned supervised learning (GCSL): The agent learns how to reach goals by sampling trajectories, relabeling the trajectories to be optimal in hindsight and treating them as expert data, and then performing supervised learning via behavioral cloning. | <paragraph_1>In this section, we show how imitation learning via behavior cloning with data relabeling can be utilized in an iterative procedure that optimizes a lower bound on the RL objective. The resulting procedure, in which an agent continually relabels and imitates its own experience, is not an imitation learning... | diagram | 0.940656 |
| OpenReview | ICLR | 2021 | A Distributional Approach to Controlled Text Generation | We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LM). This approach permits to specify, in a single formal framework, both “pointwise” and “distributional” constraints over the target LM — to our knowledge, the first model with such generality — while... | Controlled NLG, Pretrained Language Models, Bias in Language Models, Energy-Based Models, Information Geometry, Exponential Families | | We propose a novel approach to Controlled NLG, relying on Constraints over Distributions, Information Geometry, and Sampling from Energy-Based Models. | [7, 8, 7] | Accept (Oral) | Muhammad Khalifa, Hady Elsahar, Marc Dymetman | ~Muhammad_Khalifa2, ~Hady_Elsahar2, ~Marc_Dymetman1 | 20200928 | https://openreview.net/forum?id=jWkw45-9AbL | jWkw45-9AbL | @inproceedings{ khalifa2021a, title={A Distributional Approach to Controlled Text Generation}, author={Muhammad Khalifa and Hady Elsahar and Marc Dymetman}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=jWkw45-9AbL} } | OpenReview/ICLR/figures/2021/accept_oral/jWkw45-9AbL/Figure5.png | 5 | Figure 5: Transitivity of Information Projection (aka Generalized MaxEnt). | <paragraph_1>(Csiszár, 1996) gives only a minimal proof sketch, but it is instructive to provide the details, as we do now, because the proof is a neat illustration of the power of information geometry for problems of the kind we consider. The proof, illustrated in Figure 5, is very similar to one of the proofs for th... | diagram | 0.971733 |
| OpenReview | ICLR | 2021 | Rethinking Architecture Selection in Differentiable NAS | Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms. At the end of the search pha... | | | | [7, 7, 10, 7] | Accept (Oral) | Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh | ~Ruochen_Wang2, ~Minhao_Cheng1, ~Xiangning_Chen1, ~Xiaocheng_Tang1, ~Cho-Jui_Hsieh1 | 20200928 | https://openreview.net/forum?id=PKubaeJkw3 | PKubaeJkw3 | @inproceedings{ wang2021rethinking, title={Rethinking Architecture Selection in Differentiable {NAS}}, author={Ruochen Wang and Minhao Cheng and Xiangning Chen and Xiaocheng Tang and Cho-Jui Hsieh}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=PKuba... | OpenReview/ICLR/figures/2021/accept_oral/PKubaeJkw3/Figure24.png | 24 | Figure 24: Normal and Reduction cells discovered by DARTS+PT on svhn on Space S4 | <paragraph_1>(b) Reduction Cell Figure 24: Normal and Reduction cells discovered by DARTS+PT on svhn on Space S4</paragraph_1> | diagram | 0.997426 |
| OpenReview | ICLR | 2021 | Rethinking Architecture Selection in Differentiable NAS | Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms. At the end of the search pha... | | | | [7, 7, 10, 7] | Accept (Oral) | Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh | ~Ruochen_Wang2, ~Minhao_Cheng1, ~Xiangning_Chen1, ~Xiaocheng_Tang1, ~Cho-Jui_Hsieh1 | 20200928 | https://openreview.net/forum?id=PKubaeJkw3 | PKubaeJkw3 | @inproceedings{ wang2021rethinking, title={Rethinking Architecture Selection in Differentiable {NAS}}, author={Ruochen Wang and Minhao Cheng and Xiangning Chen and Xiaocheng Tang and Cho-Jui Hsieh}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=PKuba... | OpenReview/ICLR/figures/2021/accept_oral/PKubaeJkw3/Figure15.png | 15 | Figure 15: Normal and Reduction cells discovered by DARTS+PT on cifar10 on Space S3 | <paragraph_1>(b) Reduction Cell Figure 15: Normal and Reduction cells discovered by DARTS+PT on cifar10 on Space S3</paragraph_1> | diagram | 0.967654 |
| OpenReview | ICLR | 2021 | Complex Query Answering with Neural Link Predictors | Neural link predictors are immensely useful for identifying missing edges in large scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries using logical conjunctions ($\land$), disjunctions ($\lor$) and existent... | neural link prediction, complex query answering | | We show how to answer complex queries by answering their sub-queries via neural link predictors, aggregating results via t-norms and t-conorms, and identifying the optimal variable substitutions by solving an optimisation problem. | [9, 8, 6, 9] | Accept (Oral) | Erik Arakelyan, Daniel Daza, Pasquale Minervini, Michael Cochez | erik.arakelyan.18@alumni.ucl.ac.uk, dfdazac@gmail.com, ~Pasquale_Minervini1, ~Michael_Cochez2 | 20200928 | https://openreview.net/forum?id=Mos9F9kDwkz | Mos9F9kDwkz | @inproceedings{ arakelyan2021complex, title={Complex Query Answering with Neural Link Predictors}, author={Erik Arakelyan and Daniel Daza and Pasquale Minervini and Michael Cochez}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=Mos9F9kDwkz} } | OpenReview/ICLR/figures/2021/accept_oral/Mos9F9kDwkz/Figure2.png | 2 | Figure 2: Query structures considered in our experiments, as proposed by Ren et al. (2020) – the naming of each query structure corresponds to projection (p), intersection (i), and union (u), and reflects how they were implemented in the Query2Box model (Ren et al., 2020). An example of a pi query is ?T : ∃V.p(a, V ), ... | <paragraph_1>Following Ren et al. (2020), we evaluate our approach on FB15k (Bordes et al., 2013) and FB15k-237 (Toutanova & Chen, 2015) – two subsets of the Freebase knowledge graph – and NELL995 (Xiong et al., 2017), a KG generated by the NELL system (Mitchell et al., 2015). In order to compare with previous work on q... | diagram | 0.960453 |
| OpenReview | ICLR | 2022 | Learning Generalizable Representations for Reinforcement Learning via Adaptive Meta-learner of Behavioral Similarities | How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. In order to boost t... | deep reinforcement learning, deep learning, representation learning | | | [6, 5, 6, 6] | Accept (Poster) | Jianda Chen, Sinno Pan | ~Jianda_Chen1, ~Sinno_Pan1 | 20210928 | https://openreview.net/forum?id=zBOI9LFpESK | zBOI9LFpESK | @inproceedings{ chen2022learning, title={Learning Generalizable Representations for Reinforcement Learning via Adaptive Meta-learner of Behavioral Similarities}, author={Jianda Chen and Sinno Pan}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=zBOI9L... | OpenReview/ICLR/figures/2022/accept_poster/zBOI9LFpESK/Figure1.png | 1 | Figure 1: Architecture of our AMBS framework. The dotted arrow represents the regression target and the dash arrow means stop gradient. Left: the learning process of meta-learner. Right: the model architecture for SAC with adaptive weight c which is jointly learned with SAC objective. | <paragraph_1>In this section, we propose a framework named Adaptive Meta-learner of Behavioral Similarities (AMBS) to learn generalizable state representations regarding the π-bisimulation metric. The learning procedure is demonstrated in Figure 1. Observe that the π-bisimulation metric is composed of two terms: $\lvert R^{\pi}_{s_i}$... | diagram | 0.996412 |
| OpenReview | ICLR | 2022 | Image BERT Pre-training with Online Tokenizer | The success of language Transformers is primarily attributed to the pretext task of masked language modeling (MLM), where texts are first tokenized into semantically meaningful pieces. In this work, we study masked image modeling (MIM) and indicate the necessity and challenges of using a semantically meaningful visual ... | online tokenizer, masked image modeling, vision transformer | | We present a self-supervised framework iBOT that can perform masked image modeling with an online tokenizer, achieving the state-of-the-art results in downstream tasks. | [6, 6, 8] | Accept (Poster) | Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong | ~Jinghao_Zhou1, ~Chen_Wei2, ~Huiyu_Wang1, ~Wei_Shen2, ~Cihang_Xie3, ~Alan_Yuille1, ~Tao_Kong3 | 20210928 | https://openreview.net/forum?id=ydopy-e6Dg | ydopy-e6Dg | @inproceedings{ zhou2022image, title={Image {BERT} Pre-training with Online Tokenizer}, author={Jinghao Zhou and Chen Wei and Huiyu Wang and Wei Shen and Cihang Xie and Alan Yuille and Tao Kong}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=ydopy-e6... | OpenReview/ICLR/figures/2022/accept_poster/ydopy-e6Dg/Figure7.png | 7 | Figure 7: Computation pipelines for iBOT with or without multi-crop augmentation. (a) iBOT w/o multi-crop augmentation. (b), (c), and (d) are three pipelines w/ multi-crop augmentation. (b) does not perform MIM for local crops, whereas (c) performs MIM for all crops. (d) only performs MIM for one of the two global crop... | <paragraph_1>Stability of MIM Pre-trained with Multi-Crop. We first showcase several practices where training instability occurs, shown in Fig. 7. To reveal the instability, we monitor the NMI curves during training for each epoch as shown in Fig. 8. The most intuitive ideas are to compute as (b) or (c). In (b), MIM is ... | diagram | 0.994258 |
| OpenReview | ICLR | 2022 | Differentiable Scaffolding Tree for Molecule Optimization | The structural design of functional molecules, also called molecular optimization, is an essential chemical science and engineering task with important applications, such as drug discovery. Deep generative models and combinatorial optimization methods achieve initial success but still struggle with directly modeling di... | | | make the molecular optimization problem differentiable at the structure level | [6, 10, 8, 5] | Accept (Poster) | Tianfan Fu, Wenhao Gao, Cao Xiao, Jacob Yasonik, Connor W. Coley, Jimeng Sun | ~Tianfan_Fu1, ~Wenhao_Gao1, ~Cao_Xiao2, jyasonik@mit.edu, ~Connor_W._Coley1, ~Jimeng_Sun3 | 20210928 | https://openreview.net/forum?id=w_drCosT76 | w_drCosT76 | @inproceedings{ fu2022differentiable, title={Differentiable Scaffolding Tree for Molecule Optimization}, author={Tianfan Fu and Wenhao Gao and Cao Xiao and Jacob Yasonik and Connor W. Coley and Jimeng Sun}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?... | OpenReview/ICLR/figures/2022/accept_poster/w_drCosT76/Figure2.png | 2 | Figure 2: Example of differentiable scaffolding tree. We show non-leaf nodes (grey), leaf nodes (yellow), expansion nodes (blue). The dashed nodes and edges are learnable, corresponding to nodes’ identity and existence. w̃ and Ã share the learnable parameters {ŵ3, ŵ4, ŵ5\|3, ŵ6\|4, ŵ7\|1, ŵ8\|2}. | <paragraph_1>3.1.3 Differentiable scaffolding tree Similar to a scaffolding tree, a differentiable scaffolding tree (DST) also contains (i) node indicator matrix, (ii) adjacency matrix, and (iii) node weight vector, but with additional expansion nodes. Specifically, while inheriting leaf node set Vleaf and non-leaf node... | diagram | 0.988689 |
| OpenReview | ICLR | 2022 | Continual Normalization: Rethinking Batch Normalization for Online Continual Learning | Existing continual learning methods use Batch Normalization (BN) to facilitate training and improve generalization across tasks. However, the non-i.i.d and non-stationary nature of continual learning data, especially in the online setting, amplifies the discrepancy between training and testing in BN and hinders the perfor... | Continual Learning, Batch Normalization | | A negative effect of BN in online continual learning and a simple strategy to alleviate it. | [6, 5, 8, 6] | Accept (Poster) | Quang Pham, Chenghao Liu, Steven HOI | ~Quang_Pham1, ~Chenghao_Liu1, ~Steven_HOI1 | 20210928 | https://openreview.net/forum?id=vwLLQ-HwqhZ | vwLLQ-HwqhZ | @inproceedings{ pham2022continual, title={Continual Normalization: Rethinking Batch Normalization for Online Continual Learning}, author={Quang Pham and Chenghao Liu and Steven HOI}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=vwLLQ-HwqhZ} } | OpenReview/ICLR/figures/2022/accept_poster/vwLLQ-HwqhZ/Figure1.png | 1 | Figure 1: An illustration of different normalization methods using cube diagrams derived from Wu & He (2018). Each cube represents a feature map tensor with N as the batch axis, C as the channel axis, and (H,W) as the spatial axes. Pixels in blue are normalized by the same moments calculated from different samples whil... | <paragraph_1>GN has shown comparable performance to BN with large mini-batch sizes (e.g. 32 or more), while significantly outperforming BN with small mini-batch sizes (e.g. one or two). Notably, when putting all channels into a single group (setting G = 1), GN is equivalent to Layer Normalization (LN) (Ba et al., 2016), ... | diagram | 0.9524 |
| OpenReview | ICLR | 2022 | Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning | Reinforcement learning can train policies that effectively perform complex tasks. However, for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low... | hierarchical reinforcement learning, planning, representation learning, robotics | | We introduce value function spaces, a learned representation of state through the values of low-level skills, which captures affordances and ignores distractors to enable long-horizon reasoning and zero-shot generalization. | [6, 6, 6, 6] | Accept (Poster) | Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander T Toshev, Sergey Levine, brian ichter | ~Dhruv_Shah1, ~Peng_Xu9, ~Yao_Lu13, ~Ted_Xiao1, ~Alexander_T_Toshev1, ~Sergey_Levine1, ~brian_ichter1 | 20210928 | https://openreview.net/forum?id=vgqS1vkkCbE | vgqS1vkkCbE | @inproceedings{ shah2022value, title={Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning}, author={Dhruv Shah and Alexander T Toshev and Sergey Levine and brian ichter}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=vg... | OpenReview/ICLR/figures/2022/accept_poster/vgqS1vkkCbE/Figure1.png | 1 | Figure 1: Visualizing VFS embeddings in an example desk rearrangement task. VFS can capture the affordances of the low-level skills while ignoring exogenous distractors. | <paragraph_1>Figure 1a illustrates the state abstraction constructed by VFS for the desk rearrangement example discussed above: VFS captures the affordances of the skills and represents the state of the environment, along with preconditions for the low-level skills, forming a functional representation to plan over. Sin... | diagram | 0.941577 |
| OpenReview | ICLR | 2022 | Solving Inverse Problems in Medical Imaging with Score-Based Generative Models | Reconstructing medical images from partial measurements is an important inverse problem in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Existing solutions based on machine learning typically train a model to directly map measurements to medical images, leveraging a training dataset of paired images an... | score-based generative modeling, inverse problems, sparse-view CT, undersampled MRI, metal artifact removal, diffusion | | | [8, 6, 6] | Accept (Poster) | Yang Song, Liyue Shen, Lei Xing, Stefano Ermon | ~Yang_Song1, ~Liyue_Shen1, ~Lei_Xing1, ~Stefano_Ermon1 | 20210928 | https://openreview.net/forum?id=vaRCHVj0uGI | vaRCHVj0uGI | @inproceedings{ song2022solving, title={Solving Inverse Problems in Medical Imaging with Score-Based Generative Models}, author={Yang Song and Liyue Shen and Lei Xing and Stefano Ermon}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=vaRCHVj0uGI} } | OpenReview/ICLR/figures/2022/accept_poster/vaRCHVj0uGI/Figure3.png | 3 | Figure 3: (Left) An overview of our method for solving inverse problems with score-based generative models. (Right) An illustration about how to combine x̂ti and y to form x̂′ti. | <paragraph_1>where $\hat{x}_{t_N} \sim \pi(x)$, $\hat{y}_{t_i} \sim p_{t_i}(y_{t_i} \mid y)$, and $0 \le \lambda \le 1$ is a hyper-parameter. We provide an illustration of this process in Fig. 3. The iteration function $k(\cdot, \hat{y}_{t_i}, \lambda): \mathbb{R}^n \to \mathbb{R}^n$ promotes data consistency by solving a proximal optimization step (Nesterov, 2003; Boyd et al., 2004; Hammernik et al., 2021) that... | diagram | 0.963974 |
| OpenReview | ICLR | 2022 | Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness | End-to-end (geometric) deep learning has seen first successes in approximating the solution of combinatorial optimization problems. However, generating data in the realm of NP-hard/-complete tasks brings practical and theoretical challenges, resulting in evaluation protocols that are too optimistic. Specifically, most ... | Generalization, Neural Combinatorial Optimization, Adversarial Robustness | | We study the generalization of combinatorial optimization w.r.t. adversarial attacks since current evaluation protocols are too optimistic, and we show that neural solvers are indeed vulnerable under label-preserving perturbations. | [8, 8, 6] | Accept (Poster) | Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann | ~Simon_Geisler1, ~Johanna_Sommer1, ~Jan_Schuchardt1, ~Aleksandar_Bojchevski1, ~Stephan_Günnemann1 | 20210928 | https://openreview.net/forum?id=vJZ7dPIjip3 | vJZ7dPIjip3 | @inproceedings{ geisler2022generalization, title={Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness}, author={Simon Geisler and Johanna Sommer and Jan Schuchardt and Aleksandar Bojchevski and Stephan G{\"u}nnemann}, booktitle={International Conference on Learning Representations}... | OpenReview/ICLR/figures/2022/accept_poster/vJZ7dPIjip3/Figure7.png | 7 | Figure 7: Examples of the optimal route Y and perturbed routes Ỹ for DecisionTSP. | <paragraph_1>Decision TSP Solver. If a route of target cost exists, our attack successfully fools the neural solver in most of the cases. This low adversarial accuracy highlights again that the clean accuracy is far too optimistic and that the model likely suffers from challenges (1) easier subproblem and/or (2) spurio... | diagram | 0.925552 |
| OpenReview | ICLR | 2022 | Knowledge Infused Decoding | Pre-trained language models (LMs) have been shown to memorize a substantial amount of knowledge from the pre-training corpora; however, they are still limited in recalling factually correct knowledge given a certain context. Hence, they tend to suffer from counterfactual or hallucinatory generation when used in knowled... | natural language, decoding, reinforcement learning, knowledge integration, generation | | We propose a new decoding algorithm for language model generation, to obtain better performance in knowledge-intensive tasks. | [6, 5, 8, 6] | Accept (Poster) | Ruibo Liu, Guoqing Zheng, Shashank Gupta, Radhika Gaonkar, Chongyang Gao, Soroush Vosoughi, Milad Shokouhi, Ahmed Hassan Awadallah | ~Ruibo_Liu1, ~Guoqing_Zheng1, ~Shashank_Gupta3, ~Radhika_Gaonkar1, ~Chongyang_Gao1, ~Soroush_Vosoughi1, ~Milad_Shokouhi1, ~Ahmed_Hassan_Awadallah1 | 20210928 | https://openreview.net/forum?id=upnDJ7itech | upnDJ7itech | @inproceedings{ liu2022knowledge, title={Knowledge Infused Decoding}, author={Ruibo Liu and Guoqing Zheng and Shashank Gupta and Radhika Gaonkar and Chongyang Gao and Soroush Vosoughi and Milad Shokouhi and Ahmed Hassan Awadallah}, booktitle={International Conference on Learning Representations}, year={2022}, url={http... | OpenReview/ICLR/figures/2022/accept_poster/upnDJ7itech/Figure1.png | 1 | Figure 1: Overview of our KID decoding algorithm. For a given context xcontext, we first retrieve k most relevant Wikipedia documents z[1,...,k] with a knowledge retriever (Step 1), and then convert them into compressed knowledge trie Gext (Step 2). Meanwhile, the local memory Gloc which is a first-in-first-out list wi... | <paragraph_1>We detail the implementation of KID in this section. As shown in Figure 1, KID comprises three steps: retrieving relevant knowledge (§3.1), constructing external and local knowledge memory (§3.2), and guiding current step decoding under the constraint of the knowledge trie (§3.3).</paragraph_1> | diagram | 0.999148 |
| OpenReview | ICLR | 2022 | Language model compression with weighted low-rank factorization | Factorizing a large matrix into small matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this compression strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error toward reconstructing the original matrix without... | model compression, low-rank approximation, transformer, language model | | Fisher-weighted SVD for language model compression | [6, 6, 6] | Accept (Poster) | Yen-Chang Hsu, Ting Hua, Sungen Chang, Qian Lou, Yilin Shen, Hongxia Jin | ~Yen-Chang_Hsu1, ~Ting_Hua1, chang.sun@northeastern.edu, ~Qian_Lou1, ~Yilin_Shen1, ~Hongxia_Jin1 | 20210928 | https://openreview.net/forum?id=uPv9Y3gmAI5 | uPv9Y3gmAI5 | @inproceedings{ hsu2022language, title={Language model compression with weighted low-rank factorization}, author={Yen-Chang Hsu and Ting Hua and Sungen Chang and Qian Lou and Yilin Shen and Hongxia Jin}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=... | OpenReview/ICLR/figures/2022/accept_poster/uPv9Y3gmAI5/Figure3.png | 3 | Figure 3: The schematic effect of our Fisher-Weighted SVD (FWSVD). Î is a diagonal matrix containing estimated Fisher information of parameters. By involving Fisher information to weigh the importance, our method reduces the overlap between meshed orange and green, making less performance drop after truncation. | <paragraph_1>Equation 6 can be solved by the standard SVD on $\hat{I}W$. We use the notation $\mathrm{svd}(\hat{I}W) = (U^*, S^*, V^*)$; then the solution of Equation (6) will be $A = \hat{I}^{-1}U^*S^*$ and $B = (V^*)^T$. In other words, the solution is the result of removing the information $\hat{I}$ from the factorized matrices. Figure 3 illustrates this proce... | diagram | 0.943348 |
| OpenReview | ICLR | 2022 | On the Certified Robustness for Ensemble Models and Beyond | Recent studies show that deep neural networks (DNN) are vulnerable to adversarial examples, which aim to mislead DNNs by adding perturbations with small magnitude. To defend against such attacks, both empirical and theoretical defense approaches have been extensively studied for a single ML model. In this work, we aim ... | robustness, ensemble, certified robustness | | Inspired by theoretical analysis, we propose Diversity Regularized Training to enhance the certified robustness of ensemble models, and DRT significantly outperforms existing methods. | [6, 8, 6, 6, 8] | Accept (Poster) | Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li | ~Zhuolin_Yang1, ~Linyi_Li1, ~Xiaojun_Xu1, ~Bhavya_Kailkhura1, ~Tao_Xie4, ~Bo_Li19 | 20210928 | https://openreview.net/forum?id=tUa4REjGjTf | tUa4REjGjTf | @inproceedings{ yang2022on, title={On the Certified Robustness for Ensemble Models and Beyond}, author={Zhuolin Yang and Linyi Li and Xiaojun Xu and Bhavya Kailkhura and Tao Xie and Bo Li}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=tUa4REjGjTf} } | OpenReview/ICLR/figures/2022/accept_poster/tUa4REjGjTf/Figure2.png | 2 | Figure 2: Pipeline for DRT-based ensemble. | <paragraph_1>Pipeline. After the base models are trained with DRT, we aggregate them to form the ensemble M, using either WE or MME protocol (see Definitions 2 and 3). If we use WE, to filter out the effect of different weights, we adopt the average ensemble where all weights are equal. We also studied how optimizing wei... | diagram | 0.982324 |
| OpenReview | ICLR | 2022 | In a Nutshell, the Human Asked for This: Latent Goals for Following Temporal Specifications | We address the problem of building agents whose goal is to learn to execute out-of-distribution (OOD) multi-task instructions expressed in temporal logic (TL) by using deep reinforcement learning (DRL). Recent works provided evidence that the agent's neural architecture is a key feature when DRL agents are learning to ... | Deep Reinforcement Learning, Out-Of-Distribution Generalisation, Temporal Logic | | Inducing architectures to generate low-dimensional representations of their current goal by processing observations and instructions together yields stronger out-of-distribution generalisation | [6, 3, 8, 8] | Accept (Poster) | Borja G. León, Murray Shanahan, Francesco Belardinelli | ~Borja_G._León1, ~Murray_Shanahan1, ~Francesco_Belardinelli1 | 20210928 | https://openreview.net/forum?id=rUwm9wCjURV | rUwm9wCjURV | @inproceedings{ le{\'o}n2022in, title={In a Nutshell, the Human Asked for This: Latent Goals for Following Temporal Specifications}, author={Borja G. Le{\'o}n and Murray Shanahan and Francesco Belardinelli}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum... | OpenReview/ICLR/figures/2022/accept_poster/rUwm9wCjURV/Figure1.png | 1 | Figure 1: Neural network architectures. Left: standard architecture from previous literature. The central module can be either fully-connected or relational layers. Outputs π(at) and Vt refer to the actor and critic respectively (Mnih et al., 2016). Right: proposed latent-goal architecture. | <paragraph_1>Consequently, we propose an architecture configuration that induces agents to compute both human instructions and sensory inputs in a dedicated channel, generating a latent representation of the current goal. We call this type of deep learning model a latent-goal architecture. Figure 1 right shows our propo... | diagram | 0.97407 |
| OpenReview | ICLR | 2022 | On Redundancy and Diversity in Cell-based Neural Architecture Search | Searching for the architecture cells is a dominant paradigm in NAS. However, little attention has been devoted to the analysis of the cell-based search spaces even though it is highly important for the continual development of NAS. In this work, we conduct an empirical post-hoc analysis of architectures from the popul... | NAS, machine learning architectures, AutoML | | We analyse and explore the redundancies and diversity of popular cell-based search spaces in NAS. | [5, 6, 8, 6, 6] | Accept (Poster) | Xingchen Wan, Binxin Ru, Pedro M Esperança, Zhenguo Li | ~Xingchen_Wan1, ~Binxin_Ru1, ~Pedro_M_Esperança1, ~Zhenguo_Li1 | 20210928 | https://openreview.net/forum?id=rFJWoYoxrDB | rFJWoYoxrDB | @inproceedings{ wan2022on, title={On Redundancy and Diversity in Cell-based Neural Architecture Search}, author={Xingchen Wan and Binxin Ru and Pedro M Esperan{\c{c}}a and Zhenguo Li}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=rFJWoYoxrDB} } | OpenReview/ICLR/figures/2022/accept_poster/rFJWoYoxrDB/Figure19.png | 19 | Figure 19: Genotypes of GAEA architecture (Li et al., 2021). Note that the edited genotype is identical to the original normal genotype as it is already compliant with both Prim and Skip constraints. | | diagram | 0.986788 |
| OpenReview | ICLR | 2022 | It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation | We are interested in training general-purpose reinforcement learning agents that can solve a wide variety of goals. Training such agents efficiently requires automatic generation of a goal curriculum. This is challenging as it requires (a) exploring goals of increasing difficulty, while ensuring that the agent (b) is e... | curriculum generation, unsupervised reinforcement learning, goal conditioned reinforcement learning, multi agent | | | [8, 6, 5, 6] | Accept (Poster) | Yuqing Du, Pieter Abbeel, Aditya Grover | ~Yuqing_Du1, ~Pieter_Abbeel2, ~Aditya_Grover1 | 20210928 | https://openreview.net/forum?id=q4tZR1Y-UIs | q4tZR1Y-UIs | @inproceedings{ du2022it, title={It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation}, author={Yuqing Du and Pieter Abbeel and Aditya Grover}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=q4tZR1Y-UIs} } | OpenReview/ICLR/figures/2022/accept_poster/q4tZR1Y-UIs/Figure1.png | 1 | Figure 1: Interactions between learners A, B and teachers GA, GB in Self Play variations. In (a), A plays against variations of itself in a zero-sum game to automatically generate a curriculum. In (b), A acts as both an agent and a demonstrator by being rewarded for proposing hard goals to B. In (c), we separate the g... | <paragraph_1>An alternative method for automatic curriculum generation that has seen great success is self-play, where an agent plays against other versions of itself (Silver et al., 2016; Bansal et al., 2018; Baker et al., 2019) (Fig. 1a). Unfortunately, this method does not apply directly to complex tasks where we do... | diagram | 0.991407 |
| OpenReview | ICLR | 2022 | Vector-quantized Image Modeling with Improved VQGAN | Pretraining language models with next-token prediction on massive text corpora has delivered phenomenal zero-shot, few-shot, transfer learning and multi-tasking capabilities on both generative and discriminative language tasks. Motivated by this success, we explore a Vector-quantized Image Modeling (VIM) approach that ... | VQGAN, Vision Transformers, Vector-quantized Image Modeling | | We propose the ViT-VQGAN and further explore a Vector-quantized Image Modeling (VIM) approach on both generative and discriminative tasks on images. | [6, 6, 6, 6] | Accept (Poster) | Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, Yonghui Wu | ~Jiahui_Yu1, ~Xin_Li41, ~Jing_Yu_Koh2, ~Han_Zhang1, ruoming@gmail.com, ~James_Qin1, ~Alexander_Ku1, ~Yuanzhong_Xu1, ~Jason_Baldridge1, ~Yonghui_Wu1 | 20210928 | https://openreview.net/forum?id=pfNyExj7z2 | pfNyExj7z2 | @inproceedings{ yu2022vectorquantized, title={Vector-quantized Image Modeling with Improved {VQGAN}}, author={Jiahui Yu and Xin Li and Jing Yu Koh and Han Zhang and Ruoming Pang and James Qin and Alexander Ku and Yuanzhong Xu and Jason Baldridge and Yonghui Wu}, booktitle={International Conference on Learning Represent... | OpenReview/ICLR/figures/2022/accept_poster/pfNyExj7z2/Figure4.png | 4 | Figure 4: Illustration of factorized codes and codebook details. | <paragraph_1>As we introduced in Section 3.2, we use a linear projection to reduce the encoded embedding to a low-dimensional variable space for code lookup. A detailed illustration is shown in Figure 4.</paragraph_1> | diagram | 0.948137 |
| OpenReview | ICLR | 2022 | PF-GNN: Differentiable particle filtering based approximation of universal graph representations | Message passing Graph Neural Networks (GNNs) are known to be limited in expressive power by the 1-WL color-refinement test for graph isomorphism. Other more expressive models either are computationally expensive or need preprocessing to extract structural features from the graph. In this work, we propose to make GNNs u... | Graph Neural Networks, Graph representation learning, Expressive GNN | | Increasing the expressive power of Graph Neural Networks by using techniques from exact isomorphism solvers with a particle filtering approach. | [6, 8, 6, 8] | Accept (Poster) | Mohammed Haroon Dupty, Yanfei Dong, Wee Sun Lee | ~Mohammed_Haroon_Dupty1, ~Yanfei_Dong1, ~Wee_Sun_Lee1 | 20210928 | https://openreview.net/forum?id=oh4TirnfSem | oh4TirnfSem | @inproceedings{ dupty2022pfgnn, title={{PF}-{GNN}: Differentiable particle filtering based approximation of universal graph representations}, author={Mohammed Haroon Dupty and Yanfei Dong and Wee Sun Lee}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?i... | OpenReview/ICLR/figures/2022/accept_poster/oh4TirnfSem/Figure1.png | 1 | Figure 1: Two 1-WL equivalent graphs with different colorings after one step of individualization and refinement. | <paragraph_1>for coloring the graph. Individualization is the process of artificially introducing asymmetry by recoloring a vertex and thereby, distinguishing it from the rest of the vertices. Refinement refers to 1-WL refinement which can propagate this information to recolor the rest of the graph. The two graphs shown i... | diagram | 0.99818 |
| OpenReview | ICLR | 2022 | DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR | We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR. This new formulation directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer by layer. Using box coo... | Object detection, Transformer | | We present in this paper a novel query formulation using dynamic anchor boxes for DETR and offer a deeper understanding of the role of queries in DETR. | [5, 8, 6] | Accept (Poster) | Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang | ~Shilong_Liu1, ~Feng_Li9, ~Hao_Zhang39, ~Xiao_Yang4, ~Xianbiao_Qi2, ~Hang_Su3, ~Jun_Zhu2, ~Lei_Zhang23 | 20210928 | https://openreview.net/forum?id=oMI9PjOb9Jl | oMI9PjOb9Jl | @inproceedings{ liu2022dabdetr, title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}}, author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openre... | OpenReview/ICLR/figures/2022/accept_poster/oMI9PjOb9Jl/Figure5.png | 5 | Figure 5: Framework of our proposed DAB-DETR. | <paragraph_1>Following DETR (Carion et al., 2020), our model is an end-to-end object detector which includes a CNN backbone, Transformer (Vaswani et al., 2017) encoders and decoders, and prediction heads for boxes and labels. We mainly improve the decoder part, as shown in Fig. 5.</paragraph_1> <paragraph_2>Following t... | diagram | 0.987221 |
| OpenReview | ICLR | 2022 | Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory | Schrödinger Bridge (SB) is an entropy-regularized optimal transport problem that has received increasing attention in deep generative modeling for its mathematical flexibility compared to the Score-based Generative Model (SGM). However, it remains unclear whether the optimization principle of SB relates to the modern ... | Schrödinger Bridge, score-based generative model, optimal transport, forward-backward stochastic differential equations, stochastic optimal control | | We present a new computational framework, grounded on Forward-Backward SDEs theory, for the log-likelihood training of Schrödinger Bridge and provide theoretical connections to score-based generative models. | [6, 8, 5, 8] | Accept (Poster) | Tianrong Chen, Guan-Horng Liu, Evangelos Theodorou | ~Tianrong_Chen1, ~Guan-Horng_Liu1, ~Evangelos_Theodorou1 | 20210928 | https://openreview.net/forum?id=nioAdKCEdXB | nioAdKCEdXB | @inproceedings{ chen2022likelihood, title={Likelihood Training of Schr\"odinger Bridge using Forward-Backward {SDE}s Theory}, author={Tianrong Chen and Guan-Horng Liu and Evangelos Theodorou}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=nioAdKCEdXB... | OpenReview/ICLR/figures/2022/accept_poster/nioAdKCEdXB/Figure2.png | 2 | Figure 2: Schematic diagram of our stochastic optimal control interpretation, and how it connects the objective of SGM (3) and optimality of SB (6) through Forward-Backward SDEs theory. | <paragraph_1>We motivate our approach starting from some control-theoretic observation (see Fig. 2). Notice that both SGM and SB consist of forward and backward SDEs with similar structures. From the stochastic control perspective, these SDEs belong to the class of control-affine SDEs with additive noise:</paragraph_1> ... | diagram | 0.959581 |
| OpenReview | ICLR | 2022 | Deep Point Cloud Reconstruction | Point cloud obtained from 3D scanning is often sparse, noisy, and irregular. To cope with these issues, recent studies have been separately conducted to densify, denoise, and complete inaccurate point cloud. In this paper, we advocate that jointly solving these tasks leads to significant improvement for point cloud rec... | Computer Vision, 3D Geometry, Deep Learning based Point Cloud Understanding, Point Cloud Denoising, Point Cloud Upsampling | | We propose a deep learning-based point cloud reconstruction algorithm | [5, 8, 6, 6] | Accept (Poster) | Jaesung Choe, ByeongIn Joung, Francois Rameau, Jaesik Park, In So Kweon | ~Jaesung_Choe1, ~ByeongIn_Joung1, ~Francois_Rameau1, ~Jaesik_Park3, ~In_So_Kweon2 | 20210928 | https://openreview.net/forum?id=mKDtUtxIGJ | mKDtUtxIGJ | @inproceedings{ choe2022deep, title={Deep Point Cloud Reconstruction}, author={Jaesung Choe and ByeongIn Joung and Francois Rameau and Jaesik Park and In So Kweon}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=mKDtUtxIGJ} } | OpenReview/ICLR/figures/2022/accept_poster/mKDtUtxIGJ/Figure4.png | 4 | Figure 4: Voxel re-localization network. The 2nd stage network involves transformers for voxel-to-point refinement. In particular, transformers are applied into self/cross-attention layers with our amplified positional encoding to compute the relation between a query voxel and its neighbor voxels. | <paragraph_1>Let us describe the detailed process in the voxel re-localization network, which is illustrated in Fig. 4. Given output voxels Vout, we collect the K(=8) closest voxels to each voxel $v_i \in \mathbb{R}^{1 \times 3}$ using the hash table that we used in the 1st stage network. Then, we obtain a voxel set $V_i = \{v_k\}_{k=1}^{K}$ that</paragraph... | diagram | 0.989271 |
| OpenReview | ICLR | 2022 | Prototype memory and attention mechanisms for few shot image generation | Recent discoveries indicate that the neural codes in the primary visual cortex (V1) of macaque monkeys are complex, diverse and sparse. This leads us to ponder the computational advantages and functional role of these “grandmother cells.” Here, we propose that such cells can serve as prototype memory priors that bias a... | neuroscience, deep learning | | computational role for “prototype concept neurons” in top-down synthesis path | [8, 5, 5] | Accept (Poster) | Tianqin Li, Zijie Li, Andrew Luo, Harold Rockwell, Amir Barati Farimani, Tai Sing Lee | ~Tianqin_Li2, ~Zijie_Li2, ~Andrew_Luo2, ~Harold_Rockwell1, ~Amir_Barati_Farimani2, ~Tai_Sing_Lee1 | 20210928 | https://openreview.net/forum?id=lY0-7bj0Vfz | lY0-7bj0Vfz | @inproceedings{ li2022prototype, title={Prototype memory and attention mechanisms for few shot image generation}, author={Tianqin Li and Zijie Li and Andrew Luo and Harold Rockwell and Amir Barati Farimani and Tai Sing Lee}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://ope... | OpenReview/ICLR/figures/2022/accept_poster/lY0-7bj0Vfz/Figure6.png | 6 | Figure 6: Schematic of affinity map visualization procedure | <paragraph_1>heatmap. To interpret the vector embeddings (concepts) stored in each concept cluster, we perform three kinds of visualization: affinity heatmap w.r.t. different concept clusters, images’ binary mask w.r.t. different concept clusters, and t-SNE visualization (van der Maaten & Hinton, 2008) of the feature ma... | diagram | 0.904239 |
OpenReview | ICLR | 2,022 | Filling the G_ap_s: Multivariate Time Series Imputation by Graph Neural Networks | Dealing with missing values and incomplete time series is a labor-intensive, tedious, inevitable task when handling data coming from real-world applications. Effective spatio-temporal representations would allow imputation methods to reconstruct missing temporal data by exploiting information coming from sensors at dif... | graph neural networks, missing data, time series analysis, time series imputation | We propose a graph neural network architecture for multivariate time series imputation and achieve state-of-the-art results on several benchmarks. | [
6,
6,
8,
8
] | Accept (Poster) | Andrea Cini, Ivan Marisca, Cesare Alippi | ~Andrea_Cini1, ~Ivan_Marisca1, ~Cesare_Alippi1 | 20210928 | https://openreview.net/forum?id=kOu3-S3wJ7 | kOu3-S3wJ7 | @inproceedings{
cini2022filling,
title={Filling the G\_ap\_s: Multivariate Time Series Imputation by Graph Neural Networks},
author={Andrea Cini and Ivan Marisca and Cesare Alippi},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=kOu3-S3wJ7}
} | OpenReview/ICLR/figures/2022/accept_poster/kOu3-S3wJ7/Figure1.png | 1 | Figure 1: Representation of a multivariate time series as a sequence of graphs. Red circles denote nodes with missing values; nodes are identified. | <paragraph_1>associated with the i-th node; entry w_t^{i,j} of the adjacency matrix W_t ∈ R^{N_t×N_t} denotes the scalar weight of the edge (if any) connecting the i-th and j-th node. Fig. 1 exemplifies this modelling framework. We assume nodes to be identified, i.e., to have a unique ID that enables time-wise consistent processin... | diagram | 0.996668 |
OpenReview | ICLR | 2,022 | Phase Collapse in Neural Networks | Deep convolutional classifiers linearly separate image classes and improve accuracy as depth increases. They progressively reduce the spatial dimension whereas the number of channels grows with depth. Spatial variability is therefore transformed into variability along channels. A fundamental challenge is to understand ... | phase collapse, neural collapse, concentration, classification, imagenet, deep networks, complex networks, sparsity in deep networks | The classification accuracy of CNNs mostly relies on the mechanism of phase collapses to eliminate spatial variability and linearly separate class means. | [
6,
6,
8,
8
] | Accept (Poster) | Florentin Guth, John Zarka, Stéphane Mallat | ~Florentin_Guth1, ~John_Zarka1, ~Stéphane_Mallat1 | 20210928 | https://openreview.net/forum?id=iPHLcmtietq | iPHLcmtietq | @inproceedings{
guth2022phase,
title={Phase Collapse in Neural Networks},
author={Florentin Guth and John Zarka and St{\'e}phane Mallat},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=iPHLcmtietq}
} | OpenReview/ICLR/figures/2022/accept_poster/iPHLcmtietq/Figure1.png | 1 | Figure 1: Architecture of a Learned Scattering network with phase collapses. It has J + 1 layers with J = 11 for ImageNet and J = 8 for CIFAR-10. Each layer is computed with a 1×1 convolutional operator P_j, which linearly combines channels. It is followed by a phase collapse, computed with a spatial convolutional filte... | <paragraph_1>It does not use any bias. This network architecture is illustrated in Figure 1. With the addition of skip-connections, we show that this phase collapse network reaches ResNet accuracy on ImageNet and CIFAR-10.</paragraph_1> | diagram | 0.997483 |
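The caption above describes each layer as a 1×1 channel-mixing operator P_j followed by a phase collapse, i.e., the modulus of a complex spatial filter response. A minimal NumPy sketch of one such layer; all shapes and the filter choice are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def phase_collapse_layer(x, P, psi):
    """One layer in the spirit of the caption: a 1x1 operator P mixes
    channels, a complex spatial filter psi is convolved with each mixed
    channel, and the modulus 'collapses' the phase of the response."""
    mixed = np.einsum('oc,chw->ohw', P, x)          # 1x1 conv = channel mixing
    responses = np.stack([fftconvolve(m, psi, mode='same') for m in mixed])
    return np.abs(responses)                        # phase collapse: keep modulus

x = np.random.randn(3, 32, 32)                      # (channels, H, W)
P = np.random.randn(8, 3)                           # 8 output channels
psi = np.exp(1j * np.linspace(0, np.pi, 9)).reshape(3, 3)  # toy complex filter
out = phase_collapse_layer(x, P, psi)               # (8, 32, 32), real-valued
```

The modulus discards the phase of the complex response, which is the mechanism the row credits with removing spatial variability.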
OpenReview | ICLR | 2,022 | Reverse Engineering of Imperceptible Adversarial Image Perturbations | It has been well recognized that neural network based image classifiers are easily fooled by images with tiny perturbations crafted by an adversary. There has been a vast volume of research to generate and defend such adversarial attacks. However, the following problem is left unexplored: How to reverse-engineer advers... | Reverse Engineering of Deceptions, adversarial examples, denoising, neural networks, interpretability | Reverse engineer adversarial image perturbations with a denoiser-based framework. | [
6,
8,
6
] | Accept (Poster) | Yifan Gong, Yuguang Yao, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu | ~Yifan_Gong2, ~Yuguang_Yao1, ~Yize_Li1, ~Yimeng_Zhang2, ~Xiaoming_Liu2, ~Xue_Lin1, ~Sijia_Liu1 | 20210928 | https://openreview.net/forum?id=gpp7cf0xdfN | gpp7cf0xdfN | @inproceedings{
gong2022reverse,
title={Reverse Engineering of Imperceptible Adversarial Image Perturbations},
author={Yifan Gong and Yuguang Yao and Yize Li and Yimeng Zhang and Xiaoming Liu and Xue Lin and Sijia Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openrevi... | OpenReview/ICLR/figures/2022/accept_poster/gpp7cf0xdfN/Figure3.png | 3 | Figure 3: CDD-RED overview. | <paragraph_1>In this section, we propose a novel Class-Discriminative Denoising-based RED approach termed CDD-RED; see Fig. 3 for an overview. CDD-RED contains two key components. First, we propose a PA regularization to enforce the prediction-level stabilities of both the estimated benign example x_RED and adversarial examp... | diagram | 0.992899 |
OpenReview | ICLR | 2,022 | Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations | In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge. In fact, the property of invariance is a trivial instance of a broader class called equivariance, which can be ... | self-supervised learning, contrastive learning, photonics science | Imposing invariance to certain transformations (e.g. random resized cropping) and sensitivity to other transformations (e.g. four-fold rotations) learns better features. | [
6,
6,
8,
6
] | Accept (Poster) | Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljacic | ~Rumen_Dangovski1, ~Li_Jing1, ~Charlotte_Loh1, ~Seungwook_Han1, ~Akash_Srivastava1, ~Brian_Cheung1, ~Pulkit_Agrawal1, ~Marin_Soljacic1 | 20210928 | https://openreview.net/forum?id=gKLAAfiytI | gKLAAfiytI | @inproceedings{
dangovski2022equivariant,
title={Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations},
author={Rumen Dangovski and Li Jing and Charlotte Loh and Seungwook Han and Akash Srivastava and Brian Cheung and Pulkit Agrawal and Marin Soljacic},
booktitle={International Conference o... | OpenReview/ICLR/figures/2022/accept_poster/gKLAAfiytI/Figure3.png | 3 | Figure 3: Sketch of E-SSL with four-fold rotations prediction, resulting in a backbone that is sensitive to rotations and insensitive to flips and blurring. ImageNet example n01534433:169. | <paragraph_1>E-SSL can be constructed for any semantically meaningful transformation (see for example, Figure 1). From Figure 1 we choose four-fold rotations as the most promising transformation and we fix it for the upcoming section. As a minor motivation, we also present empirical results about the similarities betwee... | diagram | 0.992163 | |
OpenReview | ICLR | 2,022 | Dual Lottery Ticket Hypothesis | Fully exploiting the learning capacity of neural networks requires overparameterized dense networks. On the other side, directly training sparse neural networks typically results in unsatisfactory performance. Lottery Ticket Hypothesis (LTH) provides a novel view to investigate sparse network training and maintain its ... | Dual Lottery Ticket Hypothesis, Sparse Network Training | We articulate a Dual Lottery Ticket Hypothesis (DLTH) with a proposed training strategy Random Sparse Network to validate DLTH. | [
8,
8,
8,
6,
6
] | Accept (Poster) | Yue Bai, Huan Wang, ZHIQIANG TAO, Kunpeng Li, Yun Fu | ~Yue_Bai1, ~Huan_Wang3, ~ZHIQIANG_TAO2, ~Kunpeng_Li1, ~Yun_Fu1 | 20210928 | https://openreview.net/forum?id=fOsN52jn25l | fOsN52jn25l | @inproceedings{
bai2022dual,
title={Dual Lottery Ticket Hypothesis},
author={Yue Bai and Huan Wang and ZHIQIANG TAO and Kunpeng Li and Yun Fu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=fOsN52jn25l}
} | OpenReview/ICLR/figures/2022/accept_poster/fOsN52jn25l/Figure2.png | 2 | Figure 2: Diagram of Lottery Ticket Hypothesis (LTH) and Dual Lottery Ticket Hypothesis (DLTH). | <paragraph_1>However, LTH only focuses on finding one sparse structure at the expense of full pretraining, which is not universal to both practical usage and investigating the relationship between a dense network and its subnetworks for sparse network training. In our work, we start from a perspective complementary to LTH and propos... | diagram | 0.986218 |
OpenReview | ICLR | 2,022 | Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting | Recently, brain-inspired spiking neural networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics. It is difficult to efficiently train deep SNNs due to the non-differentiability of their activation function, which disables the typically used gradient d... | Spiking Neural Networks, Direct Training, Surrogate Gradient, Generalizability | This paper provides a novel temporal efficient training method for SNN, which significantly improves performance by modifying the optimization target. | [
8,
5,
5,
8
] | Accept (Poster) | Shikuang Deng, Yuhang Li, Shanghang Zhang, Shi Gu | ~Shikuang_Deng1, ~Yuhang_Li1, ~Shanghang_Zhang4, ~Shi_Gu1 | 20210928 | https://openreview.net/forum?id=_XNtisL32jv | _XNtisL32jv | @inproceedings{
deng2022temporal,
title={Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting},
author={Shikuang Deng and Yuhang Li and Shanghang Zhang and Shi Gu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=_XNtisL32jv}... | OpenReview/ICLR/figures/2022/accept_poster/_XNtisL32jv/Figure1.png | 1 | Figure 1: Workflow of temporal efficient training (TET). To obtain a more generalized SNN, we modify the optimization target to adjust each moment’s output distribution. | <paragraph_1>In this work, we examine the limitation of the traditional direct training approach with SG and propose the temporal efficient training (TET) algorithm. Instead of directly optimizing the integrated potential, TET optimizes every moment’s pre-synaptic inputs. As a result, it avoids the trap into local minim... | diagram | 0.999357 | |
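Since the context above states that TET optimizes every moment's pre-synaptic output rather than the time-integrated potential, the objective reduces to a per-timestep cross-entropy average. A minimal sketch; the tensor layout and names are assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def tet_loss(outputs, target):
    """TET objective per the row above: average the cross-entropy of the
    SNN output at every timestep, instead of applying cross-entropy once
    to the time-integrated output as in the baseline.
    outputs: (T, B, num_classes); target: (B,)."""
    return torch.stack([F.cross_entropy(o, target) for o in outputs]).mean()

outputs = torch.randn(4, 8, 10)          # T=4 timesteps, batch 8, 10 classes
target = torch.randint(0, 10, (8,))
baseline = F.cross_entropy(outputs.mean(0), target)   # integrated-potential loss
print(tet_loss(outputs, target), baseline)
```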
OpenReview | ICLR | 2,022 | SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search | One-shot Neural Architecture Search (NAS) usually constructs an over-parameterized network, which we call a supernet, and typically adopts sharing parameters among the sub-models to improve computational efficiency. One-shot NAS often repeatedly samples sub-models from the supernet and trains them to optimize the share... | Neural architecture search | We propose a supernet learning strategy that learns unbiased meta-features to tackle multi-model forgetting problem of neural architecture search. | [
8,
5,
6,
6
] | Accept (Poster) | Hyeonmin Ha, Ji-Hoon Kim, Semin Park, Byung-Gon Chun | ~Hyeonmin_Ha1, ~Ji-Hoon_Kim2, ~Semin_Park1, ~Byung-Gon_Chun1 | 20210928 | https://openreview.net/forum?id=Z8FzvVU6_Kj | Z8FzvVU6_Kj | @inproceedings{
ha2022sumnas,
title={{SUMNAS}: Supernet with Unbiased Meta-Features for Neural Architecture Search},
author={Hyeonmin Ha and Ji-Hoon Kim and Semin Park and Byung-Gon Chun},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=Z8FzvVU6_Kj}
} | OpenReview/ICLR/figures/2022/accept_poster/Z8FzvVU6_Kj/Figure2.png | 2 | Figure 2: An example of topological equilibrium. Multiple sub-models with different edges could be reduced down to a single architecture. Different combinations of skip-connect and 3x3 conv encapsulated in the box produce exactly the same output and thus are all equal to the graph on the far right. | <paragraph_1>NAS-Bench-201: To compute Kendall tau, we use the rankings of sampled architectures instead of the entire architecture set defined by the search space. The search space of NAS-Bench-201 contains many architectures with very similar performances, and in many cases, multiple architectures with seemingly diffe... | diagram | 0.995248 |
OpenReview | ICLR | 2,022 | CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation | Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain. Most existing UDA methods focus on learning domain-invariant feature representation, either from the domain level or category level, using convolution neural networks (CNNs)-based... | [
6,
8,
8
] | Accept (Poster) | Tongkun Xu, Weihua Chen, Pichao WANG, Fan Wang, Hao Li, Rong Jin | ~Tongkun_Xu1, ~Weihua_Chen1, ~Pichao_WANG3, ~Fan_Wang6, ~Hao_Li16, ~Rong_Jin1 | 20210928 | https://openreview.net/forum?id=XGzk5OKWFFc | XGzk5OKWFFc | @inproceedings{
xu2022cdtrans,
title={{CDT}rans: Cross-domain Transformer for Unsupervised Domain Adaptation},
author={Tongkun Xu and Weihua Chen and Pichao WANG and Fan Wang and Hao Li and Rong Jin},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=XGz... | OpenReview/ICLR/figures/2022/accept_poster/XGzk5OKWFFc/Figure2.png | 2 | Figure 2: The proposed CDTrans framework. It consists of three weight-sharing transformers fed by inputs from the selected pairs using the two-way center-aware labeling method. Cross-entropy is applied to the source branch (H_S) and target branch (H_T), while the distillation loss is applied between the source-target branch (H_S... | <paragraph_1>The framework of the proposed Cross-domain Transformer (CDTrans) is shown in Fig. 2, which consists of three weight-sharing transformers. There are three data flows and constraints for the weight-sharing branches.</paragraph_1>
<paragraph_2>The inputs of the framework are the selected pairs from our labelin... | diagram | 0.997131 | |||
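The caption above fixes the loss layout: cross-entropy on the source and target branches, plus a distillation term involving the source-target branch. A hedged sketch of that combination; the distillation direction, weighting, and tensor names are guesses from the truncated caption, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def cdtrans_losses(h_s, h_t, h_st, y_s, y_t_pseudo):
    """Loss layout from the caption: CE on the source branch, CE on the
    target branch (with two-way center-aware pseudo labels), and a
    distillation term tying the source-target branch to the target branch."""
    loss_src = F.cross_entropy(h_s, y_s)
    loss_tgt = F.cross_entropy(h_t, y_t_pseudo)
    loss_dtl = F.kl_div(F.log_softmax(h_t, dim=-1),
                        F.softmax(h_st.detach(), dim=-1),
                        reduction='batchmean')
    return loss_src + loss_tgt + loss_dtl
```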
OpenReview | ICLR | 2,022 | Training invariances and the low-rank phenomenon: beyond linear networks | The implicit bias induced by the training of neural networks has become a topic of rigorous study. In the limit of gradient flow and gradient descent with appropriate step size, it has been shown that when one trains a deep linear network with logistic or exponential loss on linearly separable data, the weights converg... | deep learning, nonsmooth analysis, Clarke subdifferential, implicit regularization, low rank bias, alignment, training invariance | We extend theoretical results regarding the low-rank bias of deep linear neural networks trained with gradient-based algorithm to non-linear architectures, reflecting empirical results in the literature. | [
8,
6,
8,
8
] | Accept (Poster) | Thien Le, Stefanie Jegelka | ~Thien_Le1, ~Stefanie_Jegelka3 | 20210928 | https://openreview.net/forum?id=XEW8CQgArno | XEW8CQgArno | @inproceedings{
le2022training,
title={Training invariances and the low-rank phenomenon: beyond linear networks},
author={Thien Le and Stefanie Jegelka},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=XEW8CQgArno}
} | OpenReview/ICLR/figures/2022/accept_poster/XEW8CQgArno/Figure1.png | 1 | Figure 1: Transformation of a feedforward network into a tree net. All nodes apart from the input nodes use ReLU activation. The two neural nets drawn here compute the same function. This idea has been used in Khim & Loh (2019) to prove generalization bounds for adversarial risk. | <paragraph_1>We give a quick illustration of the two steps described in Section 3 for a fully-connected ReLU-activated network with 1 hidden layer. Figure 1 describes the unrolling of the neural network (Figure 1a) into a tree network (Figure 1b). Figure 2a describes the weight pull-back in the hidden layer and Figure 2... | diagram | 0.997233 |
OpenReview | ICLR | 2,022 | DEGREE: Decomposition Based Explanation for Graph Neural Networks | Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. Whereas explaining GNNs remains a challenge, most existing methods fall into approximatio... | XAI, GNN | We propose a new decomposition based explanation for Graph Neural Networks. | [
6,
8,
6,
6
] | Accept (Poster) | Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu | ~Qizhang_Feng1, ~Ninghao_Liu2, ~Fan_Yang27, ~Ruixiang_Tang1, ~Mengnan_Du1, ~Xia_Hu4 | 20210928 | https://openreview.net/forum?id=Ve0Wth3ptT_ | Ve0Wth3ptT_ | @inproceedings{
feng2022degree,
title={{DEGREE}: Decomposition Based Explanation for Graph Neural Networks},
author={Qizhang Feng and Ninghao Liu and Fan Yang and Ruixiang Tang and Mengnan Du and Xia Hu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id... | OpenReview/ICLR/figures/2022/accept_poster/Ve0Wth3ptT_/Figure9.png | 9 | Figure 9: Qualitative comparison of DEGREE and SubgraphX. The first row shows the interpretation generated by SubgraphX. The second row is generated by DEGREE. The red color indicates mutagenicity. | <paragraph_1>In this section, we make a qualitative comparison between DEGREE and SubgraphX. We randomly select a number of similar molecules and visualize the explanations generated by DEGREE and SubgraphX. We report them in Figure 9. We find that none of the subgraphs generated by SubgraphX include the ’N-H’ o... | diagram | 0.976312 |
OpenReview | ICLR | 2,022 | Backdoor Defense via Decoupling the Training Process | Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is ac... | Backdoor Defense, Backdoor Learning | We reveal that the hidden backdoors are embedded in the feature space mostly due to the end-to-end supervised training paradigm, based on which we propose a simple yet effective decoupling-based training method for backdoor defense. | [
8,
6,
6,
6
] | Accept (Poster) | Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren | hkunzhe@zju.edu.cn, ~Yiming_Li1, ~Baoyuan_Wu1, qinzhan@zju.edu.cn, kuiren@zju.edu.cn | 20210928 | https://openreview.net/forum?id=TySnJ-0RdKI | TySnJ-0RdKI | @inproceedings{
huang2022backdoor,
title={Backdoor Defense via Decoupling the Training Process},
author={Kunzhe Huang and Yiming Li and Baoyuan Wu and Zhan Qin and Kui Ren},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TySnJ-0RdKI}
} | OpenReview/ICLR/figures/2022/accept_poster/TySnJ-0RdKI/Figure2.png | 2 | Figure 2: The main pipeline of our defense. In the first stage, we train the whole DNN model via self-supervised learning based on label-removed training samples. In the second stage, we freeze the learned feature extractor and adopt all training samples to train the remaining fully connected layers via supervised lear... | <paragraph_1>In this section, we describe the general pipeline of our defense. As shown in Figure 2, it consists of three main stages, including (1) learning a purified feature extractor via self-supervised learning, (2) filtering high-credible samples via label-noise learning, and (3) semi-supervised fine-tuning.</paragr... | diagram | 0.930779 | |
OpenReview | ICLR | 2,022 | EigenGame Unloaded: When playing games is better than optimizing | We build on the recently proposed EigenGame that views eigendecomposition as a competitive game. EigenGame's updates are biased if computed using minibatches of data, which hinders convergence and more sophisticated parallelism in the stochastic setting. In this work, we propose an unbiased stochastic update that is as... | pca, principal components analysis, nash, games, eigendecomposition, svd, singular value decomposition | We improve the EigenGame algorithm by removing update bias, enabling further parallelism and better performance. | [
8,
5,
8,
5
] | Accept (Poster) | Ian Gemp, Brian McWilliams, Claire Vernade, Thore Graepel | ~Ian_Gemp1, ~Brian_McWilliams2, ~Claire_Vernade1, ~Thore_Graepel1 | 20210928 | https://openreview.net/forum?id=So6YAqnqgMj | So6YAqnqgMj | @inproceedings{
gemp2022eigengame,
title={EigenGame Unloaded: When playing games is better than optimizing},
author={Ian Gemp and Brian McWilliams and Claire Vernade and Thore Graepel},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=So6YAqnqgMj}
} | OpenReview/ICLR/figures/2022/accept_poster/So6YAqnqgMj/Figure8.png | 8 | Figure 8: This diagram presents the relationships between utilities and updates. An arrow indicates the endpoint is reasonably derived from the origin; the lack of an arrow indicates the direction is unlikely. The link from equation (42) is explicitly crossed out with a hard stop for emphasis. | <paragraph_1>We could have extended the diagram in Figure 5b to include this dead-end link. We have also included the true gradient of u_i^µ as a logical endpoint. We present these extensions in Figure 8.</paragraph_1> | diagram | 0.8942 |
OpenReview | ICLR | 2,022 | Label-Efficient Semantic Segmentation with Diffusion Models | Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance. The superior performance of diffusion models has made them an appealing tool in several applications, includi... | [
8,
8,
6
] | Accept (Poster) | Dmitry Baranchuk, Andrey Voynov, Ivan Rubachev, Valentin Khrulkov, Artem Babenko | ~Dmitry_Baranchuk2, ~Andrey_Voynov1, ~Ivan_Rubachev1, ~Valentin_Khrulkov1, ~Artem_Babenko1 | 20210928 | https://openreview.net/forum?id=SlxSY2UZQT | SlxSY2UZQT | @inproceedings{
baranchuk2022labelefficient,
title={Label-Efficient Semantic Segmentation with Diffusion Models},
author={Dmitry Baranchuk and Andrey Voynov and Ivan Rubachev and Valentin Khrulkov and Artem Babenko},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.... | OpenReview/ICLR/figures/2022/accept_poster/SlxSY2UZQT/Figure1.png | 1 | Figure 1: Overview of the proposed method. (1) x0 → xt by adding noise according to q(xt|x0). (2) Extracting feature maps from a noise predictor ϵθ(xt, t). (3) Collecting pixel-level representations by upsampling the feature maps to the image resolution and concatenating them. (4) Using the pixel-wise feature vectors ... | <paragraph_1>Extracting representations. For a given real image x0 ∈ R^{H×W×3}, one can compute T sets of activation tensors from the noise predictor network ϵθ(xt, t). The overall scheme for a timestep t is presented in Figure 1. First, we corrupt x0 by adding Gaussian noise according to Equation (2). The noisy xt is use... | diagram | 0.998798 |
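The caption above enumerates the recipe step by step: diffuse, run the noise predictor, upsample the chosen activation maps, concatenate per pixel. A sketch under the assumption that the noise predictor exposes its intermediate activations via a `return_activations` hook; that hook, and all names here, are hypothetical:

```python
import torch
import torch.nn.functional as F

def ddpm_pixel_features(eps_net, x0, t, alphas_cumprod, blocks):
    """Steps (1)-(3) of the caption: diffuse x0 to x_t with q(x_t|x0),
    read intermediate activations from the noise predictor, upsample
    them to image resolution, and concatenate per pixel.
    t is a scalar timestep index for simplicity."""
    B, C, H, W = x0.shape
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # q(x_t | x_0)
    _, acts = eps_net(x_t, t, return_activations=True)     # assumed hook
    feats = [F.interpolate(acts[b], size=(H, W), mode='bilinear',
                           align_corners=False) for b in blocks]
    return torch.cat(feats, dim=1)   # (B, sum of C_b, H, W) pixel features
```

Step (4) of the caption then trains an ordinary pixel-wise classifier on these concatenated vectors.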
OpenReview | ICLR | 2,022 | Evaluating Disentanglement of Structured Representations | We introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation. Applied to object-centric generative models, this offers a systematic, unified approach to evaluating (i) object separation between latent slots (ii) disentanglement of object properties in... | We introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation, and apply it to object-centric generative models. | [
6,
6,
6
] | Accept (Poster) | Raphaël Dang-Nhu | ~Raphaël_Dang-Nhu2 | 20210928 | https://openreview.net/forum?id=SLz5sZjacp | SLz5sZjacp | @inproceedings{
dang-nhu2022evaluating,
title={Evaluating Disentanglement of Structured Representations},
author={Rapha{\"e}l Dang-Nhu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=SLz5sZjacp}
} | OpenReview/ICLR/figures/2022/accept_poster/SLz5sZjacp/Figure1.png | 1 | Figure 1: A compositional latent representation is composed of several slots. Each slot generates a part of the image. Then, the different parts are composed together. Pixel-level metrics measure object separation between slots at visual level, while our framework operates purely in latent space. | <paragraph_1>A salient challenge in generative modeling is the ability to decompose the representation of images and scenes into distinct objects that are represented separately and then combined together. Indeed, the capacity to reason about objects and their relations is a central aspect of human intelligence (Spelke... | diagram | 0.988266 | ||
OpenReview | ICLR | 2,022 | Equivariant Graph Mechanics Networks with Constraints | Learning to reason about relations and dynamics over multiple interacting objects is a challenging topic in machine learning. The challenges mainly stem from that the interacting systems are exponentially-compositional, symmetrical, and commonly geometrically-constrained.
Current methods, particularly the ones based on... | [
6,
8,
5,
8
] | Accept (Poster) | Wenbing Huang, Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, Junzhou Huang | ~Wenbing_Huang1, ~Jiaqi_Han2, ~Yu_Rong1, ~Tingyang_Xu1, ~Fuchun_Sun1, ~Junzhou_Huang2 | 20210928 | https://openreview.net/forum?id=SHbhHHfePhP | SHbhHHfePhP | @inproceedings{
huang2022equivariant,
title={Equivariant Graph Mechanics Networks with Constraints},
author={Wenbing Huang and Jiaqi Han and Yu Rong and Tingyang Xu and Fuchun Sun and Junzhou Huang},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=SHbh... | OpenReview/ICLR/figures/2022/accept_poster/SHbhHHfePhP/Figure3.png | 3 | Figure 3: Illustrations of hinges and sticks. | <paragraph_1>Implementation of hinges. A hinge, as displayed in Fig.3 (a), consists of three particles 0, 1, 2, and two sticks 01 and 02. The freedom degrees of this system can be explained in this way: particle 0 moves freely, and particles 1 and 2 can only rotate round particle 0 owing to the length constraint by the... | diagram | 0.974015 | |||
OpenReview | ICLR | 2,022 | THOMAS: Trajectory Heatmap Output with learned Multi-Agent Sampling | In this paper, we propose THOMAS, a joint multi-agent trajectory prediction framework allowing for an efficient and consistent prediction of multi-agent multi-modal trajectories. We present a unified model architecture for simultaneous agent future heatmap estimation, in which we leverage hierarchical and sparse image ... | Trajectory prediction, Multi-agent, Motion forecasting, Motion estimation, Autonomous driving | We propose a solution for multi-agent coherent multimodal trajectory prediction by learning a recombination of each agent predicted modalities. | [
6,
6,
6,
6
] | Accept (Poster) | Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde | ~Thomas_Gilles1, ~Stefano_Sabatini1, ~Dzmitry_Tsishkou1, ~Bogdan_Stanciulescu1, ~Fabien_Moutarde1 | 20210928 | https://openreview.net/forum?id=QDdJhACYrlX | QDdJhACYrlX | @inproceedings{
gilles2022thomas,
title={{THOMAS}: Trajectory Heatmap Output with learned Multi-Agent Sampling},
author={Thomas Gilles and Stefano Sabatini and Dzmitry Tsishkou and Bogdan Stanciulescu and Fabien Moutarde},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openr... | OpenReview/ICLR/figures/2022/accept_poster/QDdJhACYrlX/Figure1.png | 1 | Figure 1: Illustration of the THOMAS multi-agent prediction pipeline | <paragraph_1>Our goal is to predict the future T timesteps of A agents using their past history made of H timesteps and the HD-Map context. Similar to recent works (Zhao et al., 2020; Zeng et al., 2021; Gu et al., 2021), we will divide the problem into goal-based prediction followed by full trajectory reconstruction. O... | diagram | 0.984447 | |
OpenReview | ICLR | 2,022 | AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation | We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a unified solution for unsupervised domain adaptation (UDA), semi-supervised learning ... | unsupervised domain adaptation, semi-supervised learning, semi-supervised domain adaptation | We introduce AdaMatch, a unified solution that achieves state-of-the-art results for unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). | [
5,
6,
6,
8
] | Accept (Poster) | David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alexey Kurakin | ~David_Berthelot1, ~Rebecca_Roelofs1, ~Kihyuk_Sohn1, ~Nicholas_Carlini1, ~Alexey_Kurakin1 | 20210928 | https://openreview.net/forum?id=Q5uh1Nvv5dm | Q5uh1Nvv5dm | @inproceedings{
berthelot2022adamatch,
title={AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation},
author={David Berthelot and Rebecca Roelofs and Kihyuk Sohn and Nicholas Carlini and Alexey Kurakin},
booktitle={International Conference on Learning Representations},
year={2022},
url={https:/... | OpenReview/ICLR/figures/2022/accept_poster/Q5uh1Nvv5dm/Figure1.png | 1 | Figure 1: AdaMatch diagram illustrating the loss computations. | <paragraph_1>Overview. A high-level depiction of AdaMatch is in Figure 1. Two augmentations are made for each image: a weak and a strong one with the intent to make the class prediction harder on the strongly augmented image. Next, we obtain logits by running two batches through the model: a batch of the source images... | diagram | 0.990825 |
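A minimal sketch of the input layout the context above describes: one weak and one strong augmentation per image, with source and target batches pushed through the same model to produce the logits the losses consume. The augmentation callables are placeholders, and the distribution alignment and relative-confidence thresholding that complete AdaMatch are omitted:

```python
import torch

def adamatch_batches(model, x_src, x_tgt, weak, strong):
    """Two views per image, one shared model for both domains.
    `weak`/`strong` are augmentation callables (illustrative)."""
    logits_src = model(torch.cat([weak(x_src), strong(x_src)]))
    logits_tgt = model(torch.cat([weak(x_tgt), strong(x_tgt)]))
    # the weakly-augmented target half later supplies pseudo-labels
    return logits_src.chunk(2), logits_tgt.chunk(2)
```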
OpenReview | ICLR | 2,022 | Inductive Relation Prediction Using Analogy Subgraph Embeddings | Prevailing methods for relation prediction in heterogeneous graphs aim at learning latent representations (i.e., embeddings) of observed nodes and relations, and thus are limited to the transductive setting where the relation types must be known during training. Here, we propose ANalogy SubGraphEmbeddingLearning (Gr... | Link Prediction, Relation Modelling, Heterogeneous Graphs, Knowledge Graphs | In this paper, we propose GraphANGEL, a novel relation prediction framework that predicts (new) relations between each node pair by checking whether the subgraphs containing the pair are similar to other subgraphs containing the considered relation. | [
8,
8,
8,
8,
8
] | Accept (Poster) | Jiarui Jin, Yangkun Wang, Kounianhua Du, Weinan Zhang, Zheng Zhang, David Wipf, Yong Yu, Quan Gan | ~Jiarui_Jin1, ~Yangkun_Wang1, ~Kounianhua_Du1, ~Weinan_Zhang1, ~Zheng_Zhang1, ~David_Wipf1, ~Yong_Yu1, ~Quan_Gan1 | 20210928 | https://openreview.net/forum?id=PTRo58zPt3P | PTRo58zPt3P | @inproceedings{
jin2022inductive,
title={Inductive Relation Prediction Using Analogy Subgraph Embeddings},
author={Jiarui Jin and Yangkun Wang and Kounianhua Du and Weinan Zhang and Zheng Zhang and David Wipf and Yong Yu and Quan Gan},
booktitle={International Conference on Learning Representations},
year={2022},
url={... | OpenReview/ICLR/figures/2022/accept_poster/PTRo58zPt3P/Figure2.png | 2 | Figure 2: Illustration of GraphANGEL’s relation prediction workflow, where different edge colors in the graph G represent different relation types, and dashed edges in G represent the triplet 〈s, r, t〉 we wish to predict. The left box shows the patterns considered in our implementation, where black edges mean matching ... | <paragraph_1>For each graph G = (V, E, R), where V denotes the node set, E the edge set, and R the relation set, as Figure 2 shows, we outline how GraphANGEL works for each triplet ⟨s, r, t⟩ to predict the existence of the edge of type r connecting source node s and target node t as follows. (a) We start by determining P ... | diagram | 0.996797 |
OpenReview | ICLR | 2,022 | Gaussian Mixture Convolution Networks | This paper proposes a novel method for deep learning based on the analytical convolution of multidimensional Gaussian mixtures.
In contrast to tensors, these do not suffer from the curse of dimensionality and allow for a compact representation, as data is only stored where details exist.
Convolution kernels and data ar... | deep learning architecture, gaussian convolution, gaussian mixture, 3d | Deep learning based on the analytical convolution of multi-dimensional Gaussian mixtures | [
6,
8,
6,
5
] | Accept (Poster) | Adam Celarek, Pedro Hermosilla, Bernhard Kerbl, Timo Ropinski, Michael Wimmer | ~Adam_Celarek1, ~Pedro_Hermosilla1, ~Bernhard_Kerbl1, ~Timo_Ropinski2, ~Michael_Wimmer1 | 20210928 | https://openreview.net/forum?id=Oxeka7Z7Hor | Oxeka7Z7Hor | @inproceedings{
celarek2022gaussian,
title={Gaussian Mixture Convolution Networks},
author={Adam Celarek and Pedro Hermosilla and Bernhard Kerbl and Timo Ropinski and Michael Wimmer},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=Oxeka7Z7Hor}
} | OpenReview/ICLR/figures/2022/accept_poster/Oxeka7Z7Hor/Figure4.png | 4 | Figure 4: The GMCN used for our evaluation uses 5 Gaussian convolution layers (GCL, see right-hand side), integration of feature channel mixtures, and conventional transfer functions. See Section 5 for more details. | <paragraph_1>To evaluate our architecture, we use the proposed GMCN architecture to train classification networks on a series of well-known tasks. We used an NVIDIA GeForce 2080Ti for training. For all our experiments, we used the same network architecture and training parameters shown in Figure 4. First, we fit a GM to ... | diagram | 0.939382 |
OpenReview | ICLR | 2,022 | Vitruvion: A Generative Model of Parametric CAD Sketches | Parametric computer-aided design (CAD) tools are the predominant way that engineers specify physical structures, from bicycle pedals to airplanes to printed circuit boards. The key characteristic of parametric CAD is that design intent is encoded not only via geometric primitives, but also by parameterized constraints ... | generative modeling, CAD, transformers, design, geometric constraints | We build a generative model for parametric CAD sketches and use it to perform autocompletion and hand drawing conversion tasks relevant to design. | [
8,
8,
6,
8
] | Accept (Poster) | Ari Seff, Wenda Zhou, Nick Richardson, Ryan P Adams | ~Ari_Seff1, ~Wenda_Zhou1, ~Nick_Richardson1, ~Ryan_P_Adams1 | 20210928 | https://openreview.net/forum?id=Ow1C7s3UcY | Ow1C7s3UcY | @inproceedings{
seff2022vitruvion,
title={Vitruvion: A Generative Model of Parametric {CAD} Sketches},
author={Ari Seff and Wenda Zhou and Nick Richardson and Ryan P Adams},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=Ow1C7s3UcY}
} | OpenReview/ICLR/figures/2022/accept_poster/Ow1C7s3UcY/Figure7.png | 7 | Figure 7: Primer-conditional generation. We take held-out sketches (left column) and delete 40% of their primitives, with the remaining prefix of primitives (second column from left) serving as a primer for the primitive model. To the right are the inferred completions, where the model is queried for additional primiti... | <paragraph_1>Fig. 7 displays random examples of priming the primitive model with incomplete sketches. Because just over half of the original primitives are in the primer, there is a wide array of plausible completions. In some cases, despite only six completions for each input, the original sketch is recovered. We envi... | diagram | 0.98793 | |
OpenReview | ICLR | 2,022 | Topological Experience Replay | State-of-the-art deep Q-learning methods update Q-values using state transition tuples sampled from the experience replay buffer. This strategy often randomly samples or prioritizes data sampling based on measures such as the temporal difference (TD) error. Such sampling strategies can be inefficient at learning Q-func... | Deep reinforcement learning, experience replay | We rearrange the update order of experience for training the Q-function by a dependency graph. | [
8,
6,
8,
5
] | Accept (Poster) | Zhang-Wei Hong, Tao Chen, Yen-Chen Lin, Joni Pajarinen, Pulkit Agrawal | ~Zhang-Wei_Hong1, ~Tao_Chen1, ~Yen-Chen_Lin1, ~Joni_Pajarinen2, ~Pulkit_Agrawal1 | 20210928 | https://openreview.net/forum?id=OXRZeMmOI7a | OXRZeMmOI7a | @inproceedings{
hong2022topological,
title={Topological Experience Replay},
author={Zhang-Wei Hong and Tao Chen and Yen-Chen Lin and Joni Pajarinen and Pulkit Agrawal},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=OXRZeMmOI7a}
} | OpenReview/ICLR/figures/2022/accept_poster/OXRZeMmOI7a/Figure1.png | 1 | Figure 1: The ordering of states used for updating Q-values directly affects the convergence speed. (a) Consider the graphical representation of the MDP with the goal state (G). Each node in the graph is a state. The arrows denote possible transitions (s, a, r, s′) and the numbers on the arrows are the rewards r associ... | <paragraph_1>As a motivating example, consider the Markov Decision Process (MDP) shown in Figure 1. Let the agent receive a positive reward when it reaches the goal state (labeled as G), but zero at other states (note that our method does not need rewards to be 0 or 1). Starting from state C, the agent can obtain the ... | diagram | 0.88907 |
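The figure's argument above is that Q-value updates converge faster when transitions are replayed in an order that respects the MDP's topology, propagating values outward from the goal. A small self-contained sketch of that ordering idea, breadth-first over the reversed transition graph; this is a simplification, not necessarily the paper's exact procedure:

```python
from collections import defaultdict, deque

def goal_outward_order(transitions, goal):
    """Replays transitions breadth-first over the reversed transition
    graph, so updates closest to the goal happen first and values
    propagate backward quickly. transitions: iterable of (s, a, r, s2)."""
    preds = defaultdict(list)
    for tr in transitions:
        preds[tr[3]].append(tr)          # index transitions by successor
    order, frontier, seen = [], deque([goal]), {goal}
    while frontier:
        node = frontier.popleft()
        for tr in preds[node]:
            order.append(tr)             # update this transition next
            if tr[0] not in seen:
                seen.add(tr[0])
                frontier.append(tr[0])
    return order

# toy chain A -> B -> C -> G: updates come out as (C,G), (B,C), (A,B)
print(goal_outward_order([('A','a',0,'B'), ('B','a',0,'C'), ('C','a',1,'G')], 'G'))
```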
OpenReview | ICLR | 2,022 | TAPEX: Table Pre-training via Learning a Neural SQL Executor | Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that ta... | table pre-training, sythetic pre-training, SQL execution, table-based question answering, table-based fact verification | This work performs table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution results. | [
8,
8,
8,
6
] | Accept (Poster) | Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou | ~Qian_Liu2, ~Bei_Chen3, ~Jiaqi_Guo1, ~Morteza_Ziyadi1, ~Zeqi_Lin1, ~Weizhu_Chen1, ~Jian-Guang_Lou1 | 20210928 | https://openreview.net/forum?id=O50443AsCP | O50443AsCP | @inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.... | OpenReview/ICLR/figures/2022/accept_poster/O50443AsCP/Figure3.png | 3 | Figure 3: The illustration of the pre-training procedure in our method. During pre-training, we feed the concatenation of a sampled SQL query and a sampled table to the model, and train it to output the corresponding execution result (e.g., “Pairs”). | <paragraph_1>To design efficient tasks for table pre-training, we argue that the key lies in the executability of tables. That is to say, structured tables enable us to perform discrete operations on them via programming languages such as SQL queries, while unstructured text does not. Taking this into account, TAPEX ad... | diagram | 0.880001 | |
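The caption above says the model is fed the concatenation of a sampled SQL query and a sampled table and trained to emit the execution result. A toy linearization in that spirit; the delimiter tokens (`col :`, `row i :`) are illustrative, not necessarily the paper's exact scheme:

```python
def tapex_input(sql, table):
    """Concatenate a SQL query with a linearized table, per the caption."""
    header = ' | '.join(table['header'])
    rows = ' '.join('row {} : {}'.format(i + 1, ' | '.join(map(str, r)))
                    for i, r in enumerate(table['rows']))
    return f'{sql} col : {header} {rows}'

print(tapex_input('SELECT city WHERE rank = 1',
                  {'header': ['rank', 'city'],
                   'rows': [[1, 'Paris'], [2, 'Lyon']]}))
# SELECT city WHERE rank = 1 col : rank | city row 1 : 1 | Paris row 2 : 2 | Lyon
```

The training target for this input would be the query's execution result over the table ("Paris" here), which is what makes the pre-training corpus synthesizable at scale.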
OpenReview | ICLR | 2,022 | Neural Program Synthesis with Query | Aiming to find a program satisfying the user intent given input-output examples, program synthesis has attracted increasing interest in the area of machine learning. Despite the promising performance of existing methods, most of their success comes from the privileged information of well-designed input-output examples.... | We propose a query-based framework for the interactive program synthesis. | [
8,
3,
3
] | Accept (Poster) | Di Huang, Rui Zhang, Xing Hu, Xishan Zhang, Pengwei Jin, Nan Li, Zidong Du, Qi Guo, Yunji Chen | ~Di_Huang5, ~Rui_Zhang1, ~Xing_Hu3, ~Xishan_Zhang1, ~Pengwei_Jin1, ~Nan_Li3, ~Zidong_Du1, ~Qi_Guo4, ~Yunji_Chen1 | 20210928 | https://openreview.net/forum?id=NyJ2KIN8P17 | NyJ2KIN8P17 | @inproceedings{
huang2022neural,
title={Neural Program Synthesis with Query},
author={Di Huang and Rui Zhang and Xing Hu and Xishan Zhang and Pengwei Jin and Nan Li and Zidong Du and Qi Guo and Yunji Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?... | OpenReview/ICLR/figures/2022/accept_poster/NyJ2KIN8P17/Figure13.png | 13 | Figure 13: An example: 8 candidate programs p_0−p_7 and 4 queries q_A−q_D, each with 2 possible responses {√, ×}. | <paragraph_1>However, this greedy strategy fails in some cases. An example is shown in Figure 13. Suppose that after a series of queries, only 8 candidate programs P = {p_0, ..., p_7}, distributed uniformly, and 4 queries Q = {q_A, ..., q_D}, each with 2 possible responses {√, ×}, are left; our goal is to fin... | diagram | 0.877081 |
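The context above scores a candidate query by how informatively its responses split the remaining programs, and notes that picking greedily can still fail. A self-contained sketch of the split-entropy score on a toy version of the figure's 8-program, binary-response setup; the scoring rule here is a generic information heuristic, not necessarily the paper's exact criterion:

```python
from collections import Counter
import math

def split_entropy(candidates, response):
    """Score of a query: entropy of how its responses partition the
    remaining candidate programs (higher = more informative).
    response[p] is the answer the query yields on program p."""
    counts = Counter(response[p] for p in candidates)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

progs = list(range(8))                       # the figure's 8 candidates
resp_even = {p: p < 4 for p in progs}        # 4/4 split -> entropy 1.0
resp_skew = {p: p < 1 for p in progs}        # 1/7 split -> entropy ~0.54
print(split_entropy(progs, resp_even), split_entropy(progs, resp_skew))
```

Greedily taking the highest-entropy query at each step is exactly the strategy the context says can fail, which is the point the figure illustrates.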
OpenReview | ICLR | 2,022 | Sound Adversarial Audio-Visual Navigation | Audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-wor... | This work aims to do an adversarial sound intervention for robust audio-visual navigation. | [
6,
8,
8
] | Accept (Poster) | Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang, Xiaohong Liu | ~Yinfeng_Yu1, ~Wenbing_Huang1, ~Fuchun_Sun2, ~Changan_Chen2, ~Yikai_Wang2, ~Xiaohong_Liu3 | 20210928 | https://openreview.net/forum?id=NkZq4OEYN- | NkZq4OEYN- | @inproceedings{
yu2022sound,
title={Sound Adversarial Audio-Visual Navigation},
author={Yinfeng Yu and Wenbing Huang and Fuchun Sun and Changan Chen and Yikai Wang and Xiaohong Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=NkZq4OEYN-}
} | OpenReview/ICLR/figures/2022/accept_poster/NkZq4OEYN-/Figure2.png | 2 | Figure 2: Comparison of different problem modeling methods. AVN is modeled as an MDP. Our model has an attacker intervention, while the SA-MDP model has an adversary that can map one state in the state space to another state. | <paragraph_1>Problem modeling of ours. We model the agent as playing against an attacker in a two-player Markov game (Simon, 2016). We denote the agent and attacker by superscripts ω and ν, respectively. The game M = (S, (A^ω, A^ν), P, (R^ω, R^ν)) consists of state set S, action sets A^ω and A^ν, and a joint state transitio... | diagram | 0.990331 |
OpenReview | ICLR | 2,022 | Sound Adversarial Audio-Visual Navigation | Audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-wor... | This work aims to do an adversarial sound intervention for robust audio-visual navigation. | [
6,
8,
8
] | Accept (Poster) | Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang, Xiaohong Liu | ~Yinfeng_Yu1, ~Wenbing_Huang1, ~Fuchun_Sun2, ~Changan_Chen2, ~Yikai_Wang2, ~Xiaohong_Liu3 | 20210928 | https://openreview.net/forum?id=NkZq4OEYN- | NkZq4OEYN- | @inproceedings{
yu2022sound,
title={Sound Adversarial Audio-Visual Navigation},
author={Yinfeng Yu and Wenbing Huang and Fuchun Sun and Changan Chen and Yikai Wang and Xiaohong Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=NkZq4OEYN-}
} | OpenReview/ICLR/figures/2022/accept_poster/NkZq4OEYN-/Figure10.png | 10 | Figure 10: Trajectories of different models in different environments on dataset Replica. The first row in the figure is a clean environment, and the second row is an acoustically complex environment. Each column in the figure represents the same model. Acou com env stands for acoustically complex environment. | <paragraph_1>Trajectory comparisons on dataset Replica. Figure 10 shows the test episodes for our SAAVN model on dataset Replica. The environment of each row in Fig. 10 is consistent, and the model of each column is too. SA-MDP failed to complete the task successfully in both environments. AVN can complete the job i... | diagram | 0.975557 |
OpenReview | ICLR | 2,022 | From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness | Message Passing Neural Networks (MPNNs) are a common type of Graph Neural Network (GNN), in which each node’s representation is computed recursively by aggregating representations (“messages”) from its immediate neighbors akin to a star-shaped pattern. MPNNs are appealing for being efficient and scalable, however their... | Graph Neural Networks, Expressiveness, Message Passing Neural Network, Graph Classification | [
6,
6,
8,
6
] | Accept (Poster) | Lingxiao Zhao, Wei Jin, Leman Akoglu, Neil Shah | ~Lingxiao_Zhao1, ~Wei_Jin4, ~Leman_Akoglu3, ~Neil_Shah2 | 20210928 | https://openreview.net/forum?id=Mspk_WYKoEH | Mspk_WYKoEH | @inproceedings{
zhao2022from,
title={From Stars to Subgraphs: Uplifting Any {GNN} with Local Structure Awareness},
author={Lingxiao Zhao and Wei Jin and Leman Akoglu and Neil Shah},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=Mspk_WYKoEH}
} | OpenReview/ICLR/figures/2022/accept_poster/Mspk_WYKoEH/Figure5.png | 5 | Figure 5: A pair of CFI graphs (A and B) zoomed at the (rewired) edge (u, v) of a base graph. The base graph is a degree-3 regular graph with separator size k + 1 for the k-WL-failed case. | <paragraph_1>We describe the proposed scheme of constructing the two graphs A and B briefly; for details see Section 6 in Cai et al. (1992). At a high level, this scheme first constructs A by replacing every node (with degree d) in G by a carefully designed graph X_d, and then rewires two edges in A to its non-isomorphic ... | diagram | 0.998245 |
OpenReview | ICLR | 2,022 | MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts | Understanding the performance of machine learning models across diverse data distributions is critically important for reliable applications. Motivated by this, there is a growing focus on curating benchmark datasets that capture distribution shifts. While valuable, the existing benchmarks are limited in that many of t... | benchmark dataset, distribution shift, out-of-domain generalization | We leverage annotated subsets within a heterogeneous dataset to evaluate the performance of learning algorithms to distribution shifts and to visualize training dynamics. | [
6,
6,
6
] | Accept (Poster) | Weixin Liang, James Zou | ~Weixin_Liang1, ~James_Zou1 | 20210928 | https://openreview.net/forum?id=MTex8qKavoS | MTex8qKavoS | @inproceedings{
liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
} | OpenReview/ICLR/figures/2022/accept_poster/MTex8qKavoS/Figure2.png | 2 | Figure 2: Meta-graph—visualizing the diverse data distributions within the “cat” class. Each node represents one subset of the cat images. Each subset corresponds to “cat” in a different context: e.g. “cat with sink” or “cat with fence”. Each edge indicates the similarity between the two connecting subsets. Node colors... | <paragraph_1>What is MetaShift? The MetaShift is a collection of subsets of data together with an annotation graph that explains the similarity/distance between two subsets (edge weight) as well as what is unique about each subset (node metadata). For each class, say “cat”, we have many subsets of cats, and we can thin... | diagram | 0.860014 | |
OpenReview | ICLR | 2,022 | BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis | Diffusion probabilistic models (DPMs) and their extensions have emerged as competitive generative models yet confront challenges of efficient sampling. We propose a new bilateral denoising diffusion model (BDDM) that parameterizes both the forward and reverse processes with a schedule network and a score network, which... | Speech Synthesis, Vocoder, Generative Model, Diffusion Model | In this paper, we propose a novel bilateral denoising diffusion model (BDDM), which takes significantly fewer sampling steps than the SOTA diffusion-based vocoder to generate high-quality audio samples. | [
8,
6,
6
] | Accept (Poster) | Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu | ~Max_W._Y._Lam1, ~Jun_Wang21, ~Dan_Su3, ~Dong_Yu2 | 20210928 | https://openreview.net/forum?id=L7wzpQttNO | L7wzpQttNO | @inproceedings{
lam2022bddm,
title={{BDDM}: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis},
author={Max W. Y. Lam and Jun Wang and Dan Su and Dong Yu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=L7wzpQttNO}
} | OpenReview/ICLR/figures/2022/accept_poster/L7wzpQttNO/Figure1.png | 1 | Figure 1: A bilateral denoising diffusion model (BDDM) introduces a junctional variable xt and a schedule network φ. The schedule network can optimize the shortened noise schedule β̂n(φ) if we know the score of the distribution at the junctional step, using the KL divergence to directly compare pθ∗(x̂n−1|x̂n = xt) agai... | <paragraph_1>For fast sampling with DPMs, we strive for a noise schedule β̂ for sampling that is much shorter than the noise schedule β for training. As shown in Fig. 1, we define two separate diffusion processes corresponding to the noise schedules β and β̂, respectively. The upper diffusion process parameterized by β... | diagram | 0.989579 |
OpenReview | ICLR | 2,022 | Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take \emph{numerical} node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related... | Self-supervised learning, Graph Neural Networks, Extreme multi-label classification | We design a self-supervised learning method for extracting node representations from raw data. | [
8,
6,
6
] | Accept (Poster) | Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S Dhillon | ~Eli_Chien1, ~Wei-Cheng_Chang1, ~Cho-Jui_Hsieh1, ~Hsiang-Fu_Yu2, ~Jiong_Zhang1, ~Olgica_Milenkovic1, ~Inderjit_S_Dhillon1 | 20210928 | https://openreview.net/forum?id=KJggliHbs8 | KJggliHbs8 | @inproceedings{
chien2022node,
title={Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction},
author={Eli Chien and Wei-Cheng Chang and Cho-Jui Hsieh and Hsiang-Fu Yu and Jiong Zhang and Olgica Milenkovic and Inderjit S Dhillon},
booktitle={International Conference on Learning Representations},... | OpenReview/ICLR/figures/2022/accept_poster/KJggliHbs8/Figure4.png | 4 | Figure 4: Illustration of a cSBM: Node features are independent Gaussian random vectors while edges are modeled as independent Bernoulli random variables. | <paragraph_1>Description of the cSBM. Using our Assumption 4.1, we analyze the case where the graph and node features are generated according to a cSBM (Deshpande et al., 2018) (see Figure 4). For simplicity, we use the most straightforward two-cluster cSBM. Let {y_i}_{i=1}^n ∈ {0, 1} be the labels of nodes in a graph. We d... | diagram | 0.995175 |
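The caption above pins down the cSBM generative process: independent Gaussian node features and independent Bernoulli edges, both conditioned on two cluster labels. A simplified sampler in that spirit (real cSBMs shift features along a shared random direction with an SNR-controlled magnitude; the per-dimension shift and parameter names here are illustrative):

```python
import numpy as np

def sample_csbm(n, d, p_in, p_out, mu, seed=0):
    """Two-cluster cSBM: labels y in {0,1}; features are Gaussians whose
    mean is shifted by the label; edges are Bernoulli with rate p_in
    (same label) or p_out (different label)."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + mu * (2 * y[:, None] - 1)   # label shift
    probs = np.where(y[:, None] == y[None, :], p_in, p_out)
    A = (rng.random((n, n)) < probs).astype(int)
    A = np.triu(A, 1)
    return X, A + A.T, y                     # undirected, no self-loops

X, A, y = sample_csbm(n=200, d=16, p_in=0.1, p_out=0.02, mu=0.5)
```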
OpenReview | ICLR | 2,022 | DKM: Differentiable k-Means Clustering Layer for Neural Network Compression | Deep neural network (DNN) model compression for efficient on-device inference is becoming increasingly important to reduce memory requirements and keep user data on-device. To this end, we propose a novel differentiable k-means clustering layer (DKM) and its application to train-time weight clustering-based DNN model c... | Deep learning, neural network, compression | We propose a novel model compression scheme based on differentiable K-means layer, and it delivers the state-of-the-art results. | [
6,
6,
5,
6
] | Accept (Poster) | Minsik Cho, Keivan Alizadeh-Vahid, Saurabh Adya, Mohammad Rastegari | ~Minsik_Cho1, ~Keivan_Alizadeh-Vahid1, sadya@apple.com, ~Mohammad_Rastegari2 | 20210928 | https://openreview.net/forum?id=J_F_qqCE3Z5 | J_F_qqCE3Z5 | @inproceedings{
cho2022dkm,
title={{DKM}: Differentiable k-Means Clustering Layer for Neural Network Compression},
author={Minsik Cho and Keivan Alizadeh-Vahid and Saurabh Adya and Mohammad Rastegari},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=J_... | OpenReview/ICLR/figures/2022/accept_poster/J_F_qqCE3Z5/Figure2.png | 2 | Figure 2: Weight-sharing using attention matrix A is iteratively performed in a DKM layer until the centroids (C) converge. Once converged, a compressed weight W̃ is used for forward-propagation. Since DKM is a differentiable layer, backward-propagation will run through the iterative loop and the gradients for the wei... | <paragraph_1>We overcome such limitations with DKM by interpreting weight-centroid assignment as distance-based attention optimization (Bahdanau et al., 2015) as in Fig. 1 (b) and letting each weight interact with all the centroids. Such an attention mechanism naturally casts differentiable and iterative k-means clustering i... | diagram | 0.973346 |
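The row above describes DKM's core loop: distance-based softmax attention between weights and centroids, centroid re-estimation from the attention matrix, repeated until convergence, with gradients flowing through the loop. A minimal differentiable sketch for scalar weight clustering; the fixed iteration count stands in for the convergence test, and the temperature is an assumption:

```python
import torch

def dkm_cluster(W, C, iters=10, tau=1.0):
    """Attention A between weights and centroids, centroid update from A,
    iterated; the compressed weight is the attention-weighted centroid
    mixture, and autograd backpropagates through the whole loop."""
    w = W.reshape(-1, 1)                           # (N, 1) scalar weights
    for _ in range(iters):
        A = torch.softmax(-torch.cdist(w, C.reshape(-1, 1)) / tau, dim=1)
        C = (A * w).sum(0) / A.sum(0)              # (K,) centroid update
    W_tilde = (A @ C.reshape(-1, 1)).reshape(W.shape)
    return W_tilde, C

W = torch.randn(64, 64, requires_grad=True)
W_tilde, C = dkm_cluster(W, torch.linspace(-1, 1, 16))
W_tilde.sum().backward()                           # gradients reach W
```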
OpenReview | ICLR | 2,022 | Improving Non-Autoregressive Translation Models Without Distillation | Transformer-based autoregressive (AR) machine translation models have achieved significant performance improvements, nearing human-level accuracy on some languages. The AR framework translates one token at a time which can be time consuming, especially for long sequences. To accelerate inference, recent work has been e... | Natural Language Processing, Deep Learning, Non-autoregressive Machine Translation, Transformer, Distillation | Improving the CMLM non-autoregressive machine translation model so it trains without knowledge distillation and achieves SOTA BLEU score on both raw and distilled dataset | [
3,
8,
8,
8
] | Accept (Poster) | Xiao Shi Huang, Felipe Perez, Maksims Volkovs | ~Xiao_Shi_Huang1, ~Felipe_Perez1, ~Maksims_Volkovs3 | 20210928 | https://openreview.net/forum?id=I2Hw58KHp8O | I2Hw58KHp8O | @inproceedings{
huang2022improving,
title={Improving Non-Autoregressive Translation Models Without Distillation},
author={Xiao Shi Huang and Felipe Perez and Maksims Volkovs},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=I2Hw58KHp8O}
} | OpenReview/ICLR/figures/2022/accept_poster/I2Hw58KHp8O/Figure2.png | 2 | Figure 2: CMLMC loss example. Here, the source German sentence X = [wir arbeiten an NLP] is translated to the target English sentence Y = [we work on NLP]. First, a sampled mask Y_mask masks out the [on] token. The masked sentence is passed through the CMLMC decoder to predict the masked-out token in the L_mask loss. Then, fully mask... | <paragraph_1>Figure 2 illustrates how the joint loss is computed for an example German to English translation. The mismatch between NAR training and inference procedures is also recognized by SMART (Ghazvininejad et al., 2020b). Similarly to our approach, SMART applies the decoder during training to generate prediction... | diagram | 0.99702 |
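A toy version of the masking step in the caption above: sample target positions, replace them with [MASK], and train the decoder to recover them for the L_mask term. Only the sampling is sketched; the fully-masked pass behind the correction loss is noted in a comment:

```python
import random

def cmlm_mask(tokens, ratio=0.25, mask='[MASK]'):
    """Sample target positions and mask them; the decoder is trained to
    recover them (L_mask). The correction loss additionally uses a fully
    masked pass, which is not shown here."""
    k = max(1, int(len(tokens) * ratio))
    idx = set(random.sample(range(len(tokens)), k))
    return [mask if i in idx else t for i, t in enumerate(tokens)], sorted(idx)

random.seed(0)
print(cmlm_mask(['we', 'work', 'on', 'NLP']))
# e.g. (['we', 'work', '[MASK]', 'NLP'], [2]) -- the figure's [on] example
```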
OpenReview | ICLR | 2,022 | Handling Distribution Shifts on Graphs: An Invariance Perspective | There is increasing evidence suggesting neural networks' sensitivity to distribution shifts, so that research on out-of-distribution (OOD) generalization comes into the spotlight. Nonetheless, current endeavors mostly focus on Euclidean data, and its formulation for graph-structured data is not clear and remains under-... | Representation Learning on Graphs, Out-of-Distribution Generalization, Domain Shift, Graph Structure Learning, Invariant Models | We formulate out-of-distribution generalization problem for node-level prediction on graphs and propose a new learning approach based on invariant models | [
5,
6,
6,
6
] | Accept (Poster) | Qitian Wu, Hengrui Zhang, Junchi Yan, David Wipf | ~Qitian_Wu1, ~Hengrui_Zhang1, ~Junchi_Yan2, ~David_Wipf1 | 20210928 | https://openreview.net/forum?id=FQOC5u-1egI | FQOC5u-1egI | @inproceedings{
wu2022handling,
title={Handling Distribution Shifts on Graphs: An Invariance Perspective},
author={Qitian Wu and Hengrui Zhang and Junchi Yan and David Wipf},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=FQOC5u-1egI}
} | OpenReview/ICLR/figures/2022/accept_poster/FQOC5u-1egI/Figure1.png | 1 | Figure 1: (a) The proposed approach Explore-to-Extrapolate Risk Minimization which entails K context generators that generate graph data of different (virtual) environments based on input data from a single (real) environment. The GNN model is updated via gradient descent to minimize a weighted combination of mean and ... | <paragraph_1>2. To account for structural information, we extend the invariance principle with recursive computation on the induced BFS trees of ego-graphs. Then, for out-of-distribution generalization on graphs, we devise a new learning approach, entitled Explore-to-Extrapolate Risk Minimization, that aims GNNs at min... | diagram | 0.998238 | |
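The figure context states that the GNN is updated to minimize a weighted combination of the mean and variance of risks across the K virtual environments. A minimal sketch of that objective, with `beta` as an assumed trade-off weight:

```python
import torch

def mean_variance_risk(env_risks, beta=1.0):
    """Combine per-environment risks, one scalar per context generator."""
    r = torch.stack(env_risks)
    return r.mean() + beta * r.var(unbiased=False)

# e.g. risks from K = 3 generated environments
loss = mean_variance_risk([torch.tensor(0.9), torch.tensor(1.1), torch.tensor(0.8)])
```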
OpenReview | ICLR | 2,022 | Relating transformers to models and neural representations of the hippocampal formation | Many deep neural network architectures loosely based on brain networks have recently been shown to replicate neural firing patterns observed in the brain. One of the most exciting and promising novel architectures, the Transformer neural network, was developed without the brain in mind. In this work, we show that trans... | Neuroscience, representation learning, hippocampus, cortex, transformers | Transformers learn brain representatations and they are algorithmically related to models of the hippocampal formation. | [
8,
6,
8,
8
] | Accept (Poster) | James C. R. Whittington, Joseph Warren, Tim E.J. Behrens | ~James_C._R._Whittington1, joseph.warren@ucl.ac.uk, behrens@fmrib.ox.ac.uk | 20210928 | https://openreview.net/forum?id=B8DVo9B1YE0 | B8DVo9B1YE0 | @inproceedings{
whittington2022relating,
title={Relating transformers to models and neural representations of the hippocampal formation},
author={James C. R. Whittington and Joseph Warren and Tim E.J. Behrens},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/fo... | OpenReview/ICLR/figures/2022/accept_poster/B8DVo9B1YE0/Figure7.png | 7 | Figure 7: Schematic to show the model flow. Depiction of TEM at two time-points, with each time-point described at a different level of detail. Timepoint t shows the network implementation, t+1 describes each computation in words. Red is for model predictions, green is for updating model variables. We do not show the stab... | <paragraph_1>We present a more detailed model schematic of TEM in Figure 7. We see there are two components to TEM: an RNN for understanding position (g, in green at the top of Figure 7) that also indexes memories via ‘queries’ q = W_g g, and a memory network that binds together x and g via an outer product (middle green in 7, wit... | diagram | 0.976803 |
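The schematic description mentions memories indexed via queries q = W_g g and an outer product binding x and g. A toy sketch of that associative-memory step; the dimensions and W_g are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dg = 8, 6
W_g = rng.standard_normal((dg, dg))

x, g = rng.standard_normal(dx), rng.standard_normal(dg)
M = np.outer(x, g)            # Hebbian binding of sensory x and position g

q = W_g @ g                   # query used to index memories, q = W_g g
x_hat = M @ g / (g @ g)       # associative retrieval; recovers x exactly here
```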
OpenReview | ICLR | 2,022 | Multi-Task Processes | Neural Processes (NPs) consider a task as a function realized from a stochastic process and flexibly adapt to unseen tasks through inference on functions. However, naive NPs can model data from only a single stochastic process and are designed to infer each task independently. Since many real-world data represent a set... | stochastic processes, neural processes, multi-task learning, incomplete data | We propose a new family of stochastic processes that can infer multiple heterogeneous functions jointly given a few incomplete observations (i.e., some functions may not be observed at each input). | [
6,
5,
8,
6
] | Accept (Poster) | Donggyun Kim, Seongwoong Cho, Wonkwang Lee, Seunghoon Hong | ~Donggyun_Kim1, ~Seongwoong_Cho1, ~Wonkwang_Lee2, ~Seunghoon_Hong2 | 20210928 | https://openreview.net/forum?id=9otKVlgrpZG | 9otKVlgrpZG | @inproceedings{
kim2022multitask,
title={Multi-Task Processes},
author={Donggyun Kim and Seongwoong Cho and Wonkwang Lee and Seunghoon Hong},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=9otKVlgrpZG}
} | OpenReview/ICLR/figures/2022/accept_poster/9otKVlgrpZG/Figure2.png | 2 | Figure 2: Architecture of the neural network model for MTNP. | <paragraph_1>3.3 NEURAL NETWORK MODEL FOR MTNP This section presents an implementation of MTNPs composed of an encoder qφ and a decoder pθ (Eq. 7). While our MTNP formulation is not restricted to a specific architecture, we adopt ANP (Kim et al., 2019) as our backbone, which implements the encoder by attention layers (V... | diagram | 0.986639 | |
OpenReview | ICLR | 2,022 | Huber Additive Models for Non-stationary Time Series Analysis | Sparse additive models have shown promising flexibility and interpretability in processing time series data. However, existing methods usually assume the time series data to be stationary and the innovation is sampled from a Gaussian distribution. Both assumptions are too stringent for heavy-tailed and non-stationary ti... | Sparse additive models, variable selection, Huber, non-stationary, robust forecasting | An adaptive sparse Huber additive model for robust forecasting and variable selection in non-Gaussian and (non)stationary time series data | [
6,
6,
6,
8
] | Accept (Poster) | Yingjie Wang, Xianrui Zhong, Fengxiang He, Hong Chen, Dacheng Tao | ~Yingjie_Wang1, ~Xianrui_Zhong1, ~Fengxiang_He1, ~Hong_Chen1, ~Dacheng_Tao1 | 20210928 | https://openreview.net/forum?id=9kpuB2bgnim | 9kpuB2bgnim | @inproceedings{
wang2022huber,
title={Huber Additive Models for Non-stationary Time Series Analysis},
author={Yingjie Wang and Xianrui Zhong and Fengxiang He and Hong Chen and Dacheng Tao},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=9kpuB2bgnim}
} | OpenReview/ICLR/figures/2022/accept_poster/9kpuB2bgnim/Figure4.png | 4 | Figure 4: The Granger causal network on the CMEs dataset. | <paragraph_1>Coronal Mass Ejections (CMEs) are the most violent eruptions in the Solar System. Although machine learning approaches have recently been applied to these tasks (Wang et al., 2019; Liu et al., 2018), there is no work on interpretable prediction with a Granger causal network. CMEs data are provided in The... | diagram | 0.964329 |
OpenReview | ICLR | 2,022 | Latent Image Animator: Learning to Animate Images via Latent Space Navigation | Due to the remarkable progress of deep generative models, animating images has become increasingly efficient, whereas associated results have become increasingly realistic. Current animation-approaches commonly exploit structure representation extracted from driving videos. Such structure representation is instrumental... | Video generation, Generative Adversarial Network | Image animation via latent space navigation | [
8,
6,
6,
6,
8
] | Accept (Poster) | Yaohui Wang, Di Yang, Francois Bremond, Antitza Dantcheva | ~Yaohui_Wang1, ~Di_Yang4, ~Francois_Bremond1, antitza.dantcheva@inria.fr | 20210928 | https://openreview.net/forum?id=7r6kDq0mK_ | 7r6kDq0mK_ | @inproceedings{
wang2022latent,
title={Latent Image Animator: Learning to Animate Images via Latent Space Navigation},
author={Yaohui Wang and Di Yang and Francois Bremond and Antitza Dantcheva},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=7r6kDq0m... | OpenReview/ICLR/figures/2022/accept_poster/7r6kDq0mK_/Figure8.png | 8 | Figure 8: Generator architecture. We show details of the architecture of G in (a) and of the G block in (b). | <paragraph_1>We proceed to describe the model architecture in this section. Fig. 7 shows details of our E. In each ResBlock in E, the spatial sizes of the input feature maps are downsampled. We take feature maps of spatial sizes from 8 × 8 to 256 × 256 as our appearance features x^enc_i. We use a 5-layer MLP to predict a magnit... | diagram | 0.996126 |
OpenReview | ICLR | 2,022 | Transfer RL across Observation Feature Spaces via Model-Based Regularization | In many reinforcement learning (RL) applications, the observation space is specified by human developers and restricted by physical realizations, and may thus be subject to dramatic changes over time (e.g. increased number of observable features). However, when the observation space changes, the previous policy will li... | transfer reinforcement learning, representation learning, observation space change, latent dynamics model | We propose a model-based transfer learning algorithm that transfers knowledge across tasks with drastically different observation spaces, without any prior knowledge of the inter-task mapping. | [
5,
5,
8,
6
] | Accept (Poster) | Yanchao Sun, Ruijie Zheng, Xiyao Wang, Andrew E Cohen, Furong Huang | ~Yanchao_Sun1, ~Ruijie_Zheng1, ~Xiyao_Wang1, ~Andrew_E_Cohen1, ~Furong_Huang1 | 20210928 | https://openreview.net/forum?id=7KdAoOsI81C | 7KdAoOsI81C | @inproceedings{
sun2022transfer,
title={Transfer {RL} across Observation Feature Spaces via Model-Based Regularization},
author={Yanchao Sun and Ruijie Zheng and Xiyao Wang and Andrew E Cohen and Furong Huang},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/fo... | OpenReview/ICLR/figures/2022/accept_poster/7KdAoOsI81C/Figure2.png | 2 | Figure 2: The architecture of the proposed method. P̂ and R̂ are learned in the source task, then transferred to the target task and fixed during training. | <paragraph_1>The learning procedures for the source task and the target task are illustrated in Algorithm 1 and Algorithm 2, respectively. Figure 2 depicts the architecture of the learning model for both source and target tasks. z = ϕ(o) and z′ = ϕ̄(o′) are the encoded observation and next observation. Given the curren... | diagram | 0.998007 |
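The caption states that P̂ and R̂ are learned in the source task, then frozen while a new target-task encoder is trained against them. A minimal sketch of that setup; the modules, sizes, and the squared-error regularizer are illustrative stand-ins:

```python
import torch
import torch.nn as nn

dz, do = 16, 32
P_hat = nn.Linear(dz + 1, dz)        # latent transition model (1-D action)
R_hat = nn.Linear(dz + 1, 1)         # latent reward model
for m in (P_hat, R_hat):             # transferred to the target task and fixed
    for p in m.parameters():
        p.requires_grad_(False)

phi = nn.Linear(do, dz)              # new target-task encoder, trainable
o, a, o_next = torch.randn(1, do), torch.randn(1, 1), torch.randn(1, do)
z, z_next = phi(o), phi(o_next)
model_loss = (P_hat(torch.cat([z, a], -1)) - z_next).pow(2).mean()  # regularizer
```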
OpenReview | ICLR | 2,022 | Learning Continuous Environment Fields via Implicit Functions | We propose a novel scene representation that encodes reaching distance -- the distance between any position in the scene to a goal along a feasible trajectory. We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes. Our environment... | Continuous Scene Representation, Implicit Neural Networks | We propose a novel scene representation that can dynamically change behaviors of agents inside the scene. | [
1,
8,
6
] | Accept (Poster) | Xueting Li, Shalini De Mello, Xiaolong Wang, Ming-Hsuan Yang, Jan Kautz, Sifei Liu | ~Xueting_Li1, ~Shalini_De_Mello1, ~Xiaolong_Wang3, ~Ming-Hsuan_Yang1, ~Jan_Kautz1, ~Sifei_Liu2 | 20210928 | https://openreview.net/forum?id=3ILxkQ7yElm | 3ILxkQ7yElm | @inproceedings{
li2022learning,
title={Learning Continuous Environment Fields via Implicit Functions},
author={Xueting Li and Sifei Liu and Shalini De Mello and Xiaolong Wang and Ming-Hsuan Yang and Jan Kautz},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/fo... | OpenReview/ICLR/figures/2022/accept_poster/3ILxkQ7yElm/Figure9.png | 9 | Figure 9: Aligned walking sequence to searched trajectory. The figure is for visualization purposes only. The human takes a much smaller step in reality. | <paragraph_1>Aligning walking sequence to searched trajectories. As discussed in Section 4.1, we align a random human walking sequence from the SURREAL dataset (Varol et al., 2017) to the searched trajectory from the bird’s eye view. We show this process in Fig. 9. At each step, we compute the angle θ between the human... | diagram | 0.918099 | |
OpenReview | ICLR | 2,022 | Large-Scale Representation Learning on Graphs via Bootstrapping | Self-supervised learning provides a promising path towards eliminating the need for costly label information in representation learning on graphs. However, to achieve state-of-the-art performance, methods often need large numbers of negative examples and rely on complex augmentations. This can be prohibitively expens... | [
5,
6,
8,
6
] | Accept (Poster) | Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, Michal Valko | ~Shantanu_Thakoor5, ~Corentin_Tallec2, ~Mohammad_Gheshlaghi_Azar1, ~Mehdi_Azabou2, ~Eva_L_Dyer1, ~Remi_Munos1, ~Petar_Veličković1, ~Michal_Valko1 | 20210928 | https://openreview.net/forum?id=0UXT6PpRpW | 0UXT6PpRpW | @inproceedings{
thakoor2022largescale,
title={Large-Scale Representation Learning on Graphs via Bootstrapping},
author={Shantanu Thakoor and Corentin Tallec and Mohammad Gheshlaghi Azar and Mehdi Azabou and Eva L Dyer and Remi Munos and Petar Veli{\v{c}}kovi{\'c} and Michal Valko},
booktitle={International Conference o... | OpenReview/ICLR/figures/2022/accept_poster/0UXT6PpRpW/Figure1.png | 1 | Figure 1: Overview of our proposed BGRL method. The original graph is first used to derive two different semantically similar views using augmentations T1,2. From these, we use encoders Eθ,φ to form online and target node embeddings. The predictor pθ uses the online embedding H̃1 to form a prediction Z̃1 of the target ... | <paragraph_1>Figure 1 visually summarizes BGRL’s architecture.</paragraph_1> | diagram | 0.997887 | |||
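The BGRL overview in the caption comes down to two pieces: a BYOL-style prediction loss between the online prediction and a stop-gradient target embedding (BGRL uses cosine similarity), and an EMA update of the target encoder. A minimal sketch, not the authors' code, with `tau` an assumed decay rate:

```python
import torch
import torch.nn.functional as F

def bgrl_loss(pred_online, z_target):
    """Predict the target embedding; gradients flow only through the online side."""
    return -F.cosine_similarity(pred_online, z_target.detach(), dim=-1).mean()

@torch.no_grad()
def ema_update(target, online, tau=0.99):
    """Exponential moving average of online parameters into the target encoder."""
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```

Stopping gradients on the target side is what removes the need for negative examples, which is the point made in the abstract.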
OpenReview | ICLR | 2,022 | Equivariant Transformers for Neural Network based Molecular Potentials | The prediction of quantum mechanical properties is historically plagued by a trade-off between accuracy and speed. Machine learning potentials have previously shown great success in this domain, reaching increasingly better accuracy while maintaining computational efficiency comparable with classical force fields. In t... | Molecular Modeling, Quantum Chemistry, Attention, Transformers | We propose a novel equivariant Transformer architecture for the prediction of molecular potentials and provide insights into the molecular representation through extensive analysis of the model's attention weights. | [
8,
6,
8,
6
] | Accept (Spotlight) | Philipp Thölke, Gianni De Fabritiis | ~Philipp_Thölke1, ~Gianni_De_Fabritiis1 | 20210928 | https://openreview.net/forum?id=zNHzqZ9wrRB | zNHzqZ9wrRB | @inproceedings{
th{\"o}lke2022equivariant,
title={Equivariant Transformers for Neural Network based Molecular Potentials},
author={Philipp Th{\"o}lke and Gianni De Fabritiis},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=zNHzqZ9wrRB}
} | OpenReview/ICLR/figures/2022/accept_spotlight/zNHzqZ9wrRB/Figure4.png | 4 | Figure 4: Visualization of five molecules from the QM9 dataset with attention scores corresponding to models trained on ANI-1, MD17 (uracil) and QM9. Blue and red lines represent negative and positive attention scores respectively. | <paragraph_1>Neural network predictions are notoriously difficult to interpret due to the complex nature of the learned transformations. To shed light on the black-box predictor, we extract and analyze the equivariant Transformer’s attention weights. We run inference on the ANI-1, QM9, and MD17 test sets for all molec... | diagram | 0.940188 |
OpenReview | ICLR | 2,022 | Spanning Tree-based Graph Generation for Molecules | In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry. To this end, we propose a spanning tree-based graph generation (STGG) framework based on formulating molecular graph generation as a construction of a spanning tree and the res... | molecule generation, tree generation, graph generation, deep generative model, de novo drug design | We propose a new molecular graph generative model based on compact tree constructive operators. | [
8,
8,
6,
6
] | Accept (Spotlight) | Sungsoo Ahn, Binghong Chen, Tianzhe Wang, Le Song | ~Sungsoo_Ahn1, ~Binghong_Chen1, ~Tianzhe_Wang1, ~Le_Song1 | 20210928 | https://openreview.net/forum?id=w60btE_8T2m | w60btE_8T2m | @inproceedings{
ahn2022spanning,
title={Spanning Tree-based Graph Generation for Molecules},
author={Sungsoo Ahn and Binghong Chen and Tianzhe Wang and Le Song},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=w60btE_8T2m}
} | OpenReview/ICLR/figures/2022/accept_spotlight/w60btE_8T2m/Figure3.png | 3 | Figure 3: Attention module and the relative positional encoding used in our framework. | <paragraph_1>(a) Attention module (b) Relative positional encoding Figure 3: Attention module and the relative positional encoding used in our framework.</paragraph_1> | diagram | 0.996342 | |
OpenReview | ICLR | 2,022 | Lossless Compression with Probabilistic Circuits | Despite extensive progress on image generation, common deep generative model architectures are not easily applied to lossless compression. For example, VAEs suffer from a compression cost overhead due to their latent variables. This overhead can only be partially eliminated with elaborate schemes such as bits-back codi... | [
6,
8,
5,
6
] | Accept (Spotlight) | Anji Liu, Stephan Mandt, Guy Van den Broeck | ~Anji_Liu1, ~Stephan_Mandt1, ~Guy_Van_den_Broeck1 | 20210928 | https://openreview.net/forum?id=X_hByk2-5je | X_hByk2-5je | @inproceedings{
liu2022lossless,
title={Lossless Compression with Probabilistic Circuits},
author={Anji Liu and Stephan Mandt and Guy Van den Broeck},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=X_hByk2-5je}
} | OpenReview/ICLR/figures/2022/accept_spotlight/X_hByk2-5je/Figure1.png | 1 | Figure 1: An example structured-decomposable PC. The feedforward order is from left to right; inputs are assumed to be boolean variables; parameters are labeled on the corresponding edges. Probability of each unit given input assignment x1x2x4 is labeled blue next to the corresponding unit. | <paragraph_1>All product units in Fig. 1 are decomposable. For example, each purple product unit (whose scope is {X1, X2}) has two children with disjoint scopes {X1} and {X2}, respectively. In addition to Def. 2, we make use of another property, structured decomposability, which is the key to guaranteeing computational... | diagram | 0.997781 |
OpenReview | ICLR | 2,022 | Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking | Protein complex formation is a central problem in biology, being involved in most of the cell's processes, and essential for applications, e.g. drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the indivi... | protein complexes, protein structure, rigid body docking, SE(3) equivariance, graph neural networks | We perform rigid protein docking using a novel independent SE(3)-equivariant message passing mechanism that guarantees the same resulting protein complex independent of the initial placement of the two 3D structures. | [
8,
8,
8
] | Accept (Spotlight) | Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi S. Jaakkola, Andreas Krause | ~Octavian-Eugen_Ganea1, ~Xinyuan_Huang1, ~Charlotte_Bunne1, ~Yatao_Bian1, ~Regina_Barzilay1, ~Tommi_S._Jaakkola1, ~Andreas_Krause1 | 20210928 | https://openreview.net/forum?id=GQjaI9mLet | GQjaI9mLet | @inproceedings{
ganea2022independent,
title={Independent {SE}(3)-Equivariant Models for End-to-End Rigid Protein Docking},
author={Octavian-Eugen Ganea and Xinyuan Huang and Charlotte Bunne and Yatao Bian and Regina Barzilay and Tommi S. Jaakkola and Andreas Krause},
booktitle={International Conference on Learning Repr... | OpenReview/ICLR/figures/2022/accept_spotlight/GQjaI9mLet/Figure3.png | 3 | Figure 3: Details on EQUIDOCK’s Architecture and Losses. a. The message passing operations in IEGMN guarantee pairwise independent SE(3)-equivariance as in Eq. (4), b. We predict keypoints for each protein that are aligned with the binding pocket location using an additional optimal transport (OT) loss, c. After predic... | <paragraph_1>Overview of Our Approach. Our model is depicted in Fig. 3. We first build k-NN protein graphs G1 = (V1, E1) and G2 = (V2, E2). We then design SE(3)-invariant node features F1 ∈ R^{d×n}, F2 ∈ R^{d×m} and edge features {f_{j→i} : ∀(i, j) ∈ E1 ∪ E2} (see Appendix A).</paragraph_1>
<paragraph_2>Independent E(3)-Equivariant... | diagram | 0.996378 | |
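The approach overview begins by building k-NN protein graphs from residue coordinates. A minimal sketch of that construction step (not the authors' code); k and the coordinate source are illustrative:

```python
import torch

def knn_graph(coords, k=10):
    """Edge indices (2, n*k) of a k-NN graph over points of shape (n, 3)."""
    d = torch.cdist(coords, coords)
    d.fill_diagonal_(float("inf"))            # exclude self-edges
    nbr = d.topk(k, largest=False).indices    # (n, k) nearest neighbors
    src = torch.arange(coords.size(0)).repeat_interleave(k)
    return torch.stack([src, nbr.reshape(-1)])

edges = knn_graph(torch.randn(50, 3))
```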
OpenReview | ICLR | 2,022 | On Improving Adversarial Transferability of Vision Transformers | Vision transformers (ViTs) process input images as sequences of patches via self-attention; a radically different architecture than convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial ... | Vision Transformers, Adversarial Perturbations | Novel approach to improve transferability of adversarial perturbations found in vision transformers via self-ensemble and token refinement. | [
8,
8,
8,
6
] | Accept (Spotlight) | Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Khan, Fatih Porikli | ~Muzammal_Naseer1, ~Kanchana_Ranasinghe1, ~Salman_Khan4, ~Fahad_Khan1, ~Fatih_Porikli2 | 20210928 | https://openreview.net/forum?id=D6nH3719vZy | D6nH3719vZy | @inproceedings{
naseer2022on,
title={On Improving Adversarial Transferability of Vision Transformers },
author={Muzammal Naseer and Kanchana Ranasinghe and Salman Khan and Fahad Khan and Fatih Porikli},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=D... | OpenReview/ICLR/figures/2022/accept_spotlight/D6nH3719vZy/Figure1.png | 1 | Figure 1: Left: Conventional adversarial attacks view ViT as a single classifier and maximize the prediction loss (e.g., cross entropy) to fool the model based on the last classification token only. This leads to sub-optimal results as class tokens in previous ViT blocks only indirectly influence adversarial perturbati... | <paragraph_1>Our approach is motivated by the modular nature of ViTs (Touvron et al., 2020; Yuan et al., 2021; Mao et al., 2021): they process a sequence of input image patches repeatedly using multiple multi-headed self-attention layers (transformer blocks) (Vaswani et al., 2017). We refer to the representation of pat... | diagram | 0.984493 | |
OpenReview | ICLR | 2,022 | Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path | The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and class... | neural collapse, deep learning theory, deep learning, inductive bias, equiangular tight frame, ETF, nearest class center, mean squared error loss, MSE loss, invariance, renormalization, gradient flow, dynamics, adversarial robustness | Neural Collapse occurs empirically on deep nets trained with MSE loss and studying this setting leads to insightful closed-form dynamics. | [
8,
8,
6,
6
] | Accept (Oral) | X.Y. Han, Vardan Papyan, David L. Donoho | ~X.Y._Han1, ~Vardan_Papyan1, ~David_L._Donoho1 | 20210928 | https://openreview.net/forum?id=w1UbdvWH_R3 | w1UbdvWH_R3 | @inproceedings{
han2022neural,
title={Neural Collapse Under {MSE} Loss: Proximity to and Dynamics on the Central Path},
author={X.Y. Han and Vardan Papyan and David L. Donoho},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=w1UbdvWH_R3}
} | OpenReview/ICLR/figures/2022/accept_oral/w1UbdvWH_R3/Figure1.png | 1 | Figure 1: Portrait of Neural Collapse. Top figure depicts the last-layer features, class-means, and classifiers with which NC is defined, as well as the Simplex ETF to which they all converge with training. Bottom figure shows the deviations of features from their corresponding class-means. Reproduced and modified from F... | <paragraph_1>The experiments in this section examine the properties of Neural Collapse (NC) on deep nets trained using MSE loss. The direct MSE-analogues to the cross-entropy (CE) loss table and figures in Papyan, Han, and Donoho (2020) are in Table 1 and Figures 3-9 here. Furthermore, Figures 10-11 compare the MSE-NC... | diagram | 0.858417 |
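For reference, the Simplex Equiangular Tight Frame the caption says features, class-means, and classifiers converge to has a standard closed form; a small numerical check of its two defining properties (a textbook construction, not code from the paper):

```python
import numpy as np

C = 4  # number of classes
M = np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)  # simplex ETF columns

G = M.T @ M
assert np.allclose(np.diag(G), 1.0)           # unit-norm vectors
assert np.allclose(G[0, 1], -1.0 / (C - 1))   # equal, maximally separated angles
```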
OpenReview | ICLR | 2,022 | Data-Efficient Graph Grammar Learning for Molecular Generation | The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of s... | molecular generation, graph grammar, data efficient generative model | [
8,
8,
8,
8
] | Accept (Oral) | Minghao Guo, Veronika Thost, Beichen Li, Payel Das, Jie Chen, Wojciech Matusik | ~Minghao_Guo1, ~Veronika_Thost1, ~Beichen_Li1, ~Payel_Das1, ~Jie_Chen1, ~Wojciech_Matusik2 | 20210928 | https://openreview.net/forum?id=l4IHywGq6a | l4IHywGq6a | @inproceedings{
guo2022dataefficient,
title={Data-Efficient Graph Grammar Learning for Molecular Generation},
author={Minghao Guo and Veronika Thost and Beichen Li and Payel Das and Jie Chen and Wojciech Matusik},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net... | OpenReview/ICLR/figures/2022/accept_oral/l4IHywGq6a/Figure4.png | 4 | Figure 4: Left: Analysis of balance factor λ. We choose 9 different combinations of λi for two optimization objectives: Diversity and RS, showing a clear trade-off between the two objectives. Right: Examples generated by our learned graph grammar. Our graph grammar can generate novel complex molecular structures that d... | <paragraph_1>Optimizing for Specific Metrics, Balance Factor λ. We study the effect of λ weighing the importance of metrics according to user needs. We choose 9 different combinations for two optimization objectives: Diversity and RS. λ1 ranges from 0 to 2 with 0.25 as interval, while λ2 ranges from 4 to 0 with 0.5 as i... | diagram | 0.98502 | ||
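The nine (λ1, λ2) settings in the left panel follow directly from the stated ranges and step sizes; a sketch of the sweep, assuming the combined objective is a simple weighted sum of the two metrics:

```python
def combined(diversity, rs, lam1, lam2):
    return lam1 * diversity + lam2 * rs      # assumed form of the weighted objective

pairs = [(0.25 * i, 4.0 - 0.5 * i) for i in range(9)]  # (lambda_1, lambda_2)
print(pairs[0], pairs[-1])                              # (0.0, 4.0) ... (2.0, 0.0)
```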
OpenReview | ICLR | 2,022 | MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling | Musical expression requires control of both what notes that are played, and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for contr... | Audio Synthesis, Generative Model, Hierarchical, DDSP, Music, Audio, Structured Models | Controlling musical performance and synthesis with a structured hierarchical generative model | [
8,
8,
8
] | Accept (Oral) | Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel | ~Yusong_Wu1, ~Ethan_Manilow1, ~Yi_Deng4, rigeljs@google.com, ~Kyle_Kastner1, ~Tim_Cooijmans1, ~Aaron_Courville3, ~Cheng-Zhi_Anna_Huang1, ~Jesse_Engel1 | 20210928 | https://openreview.net/forum?id=UseMOjWENv | UseMOjWENv | @inproceedings{
wu2022mididdsp,
title={{MIDI}-{DDSP}: Detailed Control of Musical Performance via Hierarchical Modeling},
author={Yusong Wu and Ethan Manilow and Yi Deng and Rigel Swavely and Kyle Kastner and Tim Cooijmans and Aaron Courville and Cheng-Zhi Anna Huang and Jesse Engel},
booktitle={International Conferenc... | OpenReview/ICLR/figures/2022/accept_oral/UseMOjWENv/Figure1.png | 1 | Figure 1: (Left) The MIDI-DDSP architecture. MIDI-DDSP extracts interpretable features at the performance and synthesis levels, building a modeling hierarchy by learning feature generation at each level. Red and blue components indicate encoding and decoding respectively. Shaded boxes represent modules with learned par... | <paragraph_1>For music generation, despite recent progress, current tools still fall short of this ideal (Figure 1, right). Deep networks can either generate realistic full-band audio (Dhariwal et al., 2020) or provide detailed controls of attributes such as pitch, dynamics, and timbre (Défossez et al., 2018; Engel et... | diagram | 0.961563 |
OpenReview | ICLR | 2,022 | MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling | Musical expression requires control of both what notes that are played, and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for contr... | Audio Synthesis, Generative Model, Hierarchical, DDSP, Music, Audio, Structured Models | Controlling musical performance and synthesis with a structured hierarchical generative model | [
8,
8,
8
] | Accept (Oral) | Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel | ~Yusong_Wu1, ~Ethan_Manilow1, ~Yi_Deng4, rigeljs@google.com, ~Kyle_Kastner1, ~Tim_Cooijmans1, ~Aaron_Courville3, ~Cheng-Zhi_Anna_Huang1, ~Jesse_Engel1 | 20210928 | https://openreview.net/forum?id=UseMOjWENv | UseMOjWENv | @inproceedings{
wu2022mididdsp,
title={{MIDI}-{DDSP}: Detailed Control of Musical Performance via Hierarchical Modeling},
author={Yusong Wu and Ethan Manilow and Yi Deng and Rigel Swavely and Kyle Kastner and Tim Cooijmans and Aaron Courville and Cheng-Zhi Anna Huang and Jesse Engel},
booktitle={International Conferenc... | OpenReview/ICLR/figures/2022/accept_oral/UseMOjWENv/Figure11.png | 11 | Figure 11: The architecture of the Synthesis Generator. The Synthesis Generator is a GAN whose generator (left) takes in per-note Expression Controls and instrument embedding as a conditioning sequence (red box, left) and produces DDSP synthesis parameters, i.e., f0, Amplitudes, Harmonic Distribution, and Noise Magnitu... | <paragraph_1>The Synthesis Generator is a Generative Adversarial Network (GAN) Goodfellow et al. (2014) whose generator takes in the per-note Expression Controls, and produces the DDSP synthesis parameters. The architecture of the Synthesis Generator is shown in Figure 11. The Synthesis Generator</paragraph_1> | diagram | 0.998459 | |
OpenReview | ICLR | 2,022 | Natural Language Descriptions of Deep Visual Features | Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope,... | [
8,
8,
8
] | Accept (Oral) | Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas | ~Evan_Hernandez1, ~Sarah_Schwettmann2, ~David_Bau1, ~Teona_Bagashvili1, ~Antonio_Torralba1, ~Jacob_Andreas1 | 20210928 | https://openreview.net/forum?id=NudBMY-tzDr | NudBMY-tzDr | @inproceedings{
hernandez2022natural,
title={Natural Language Descriptions of Deep Visual Features},
author={Evan Hernandez and Sarah Schwettmann and David Bau and Teona Bagashvili and Antonio Torralba and Jacob Andreas},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.ne... | OpenReview/ICLR/figures/2022/accept_oral/NudBMY-tzDr/Figure11.png | 11 | Figure 11: Neuron captioning model. Given the set of top-activating images for a neuron and masks for the regions of greatest activation, we extract features maps from each convolutional layer of a pretrained image classifier. We then downsample the masks and use them to pool the features before concatenating them into... | diagram | 0.918849 | ||||
OpenReview | ICLR | 2,023 | Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning | Data-free Class-incremental Learning (CIL) is a challenging problem because rehearsing data from previous phases is strictly prohibited, causing catastrophic forgetting of Deep Neural Networks (DNNs). In this paper, we present \emph{iVoro}, a novel framework derived from computational geometry. We found Voronoi Diagram... | Voronoi Diagram, Computational Geometry | Deep Learning and representational learning | We show that progressive Voronoi Diagram is a powerful model for Class-incremental Learning. | [
6,
8,
6
] | Accept: poster | Chunwei Ma, Zhanghexuan Ji, Ziyun Huang, Yan Shen, Mingchen Gao, Jinhui Xu | ~Chunwei_Ma1, ~Zhanghexuan_Ji1, ~Ziyun_Huang1, ~Yan_Shen1, ~Mingchen_Gao1, ~Jinhui_Xu1 | 20220922 | https://openreview.net/forum?id=zJXg_Wmob03 | zJXg_Wmob03 | @inproceedings{
ma2023progressive,
title={Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning},
author={Chunwei Ma and Zhanghexuan Ji and Ziyun Huang and Yan Shen and Mingchen Gao and Jinhui Xu},
booktitle={The Eleventh International Conference on Learning Representations },
ye... | OpenReview/ICLR/figures/2023/accept_poster/zJXg_Wmob03/Figure1.png | 1 | Figure 1: Schematic illustrations of Voronoi Diagram (VD) for base sites (A), and when a new site (B) or a clique of new sites (C) is added to the system. | <paragraph_1>(C) Progressive Voronoi Diagram Figure 1: Schematic illustrations of Voronoi Diagram (VD) for base sites (A), and when a new site (B) or a clique of new sites (C) is added to the system.</paragraph_1>
<paragraph_2>classes untouched and thus hardly forgettable (see Figure 1). Based on this intuition, in thi... | diagram | 0.98683 |
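Prediction with a Voronoi diagram is nearest-site lookup, and adding a new class just inserts a new site that subdivides the existing cells; a toy sketch of that intuition (not the iVoro implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
sites = rng.standard_normal((5, 2))            # one site per base class

def predict(x, sites):
    """Index of the Voronoi cell containing x, i.e. the nearest site."""
    return int(((sites - x) ** 2).sum(axis=1).argmin())

sites = np.vstack([sites, rng.standard_normal(2)])  # new class = new site
print(predict(rng.standard_normal(2), sites))
```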
OpenReview | ICLR | 2,023 | Light Sampling Field and BRDF Representation for Physically-based Neural Rendering | Physically-based rendering (PBR) is key for immersive rendering effects used widely in the industry to showcase detailed realistic scenes from computer graphics assets. A well-known caveat is that producing the same is computationally heavy and relies on complex capture devices. Inspired by the success in quality and e... | Neural Rendering | Applications (eg, speech processing, computer vision, NLP) | [
6,
8,
8,
3
] | Accept: poster | Jing Yang, Hanyuan Xiao, Wenbin Teng, Yunxuan Cai, Yajie Zhao | ~Jing_Yang11, corneliushsiao@gmail.com, ~Wenbin_Teng1, yunxuanc@usc.edu, yajie730@gmail.com | 20220922 | https://openreview.net/forum?id=yYEb8v65X8 | yYEb8v65X8 | @inproceedings{
yang2023light,
title={Light Sampling Field and {BRDF} Representation for Physically-based Neural Rendering},
author={Jing Yang and Hanyuan Xiao and Wenbin Teng and Yunxuan Cai and Yajie Zhao},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openr... | OpenReview/ICLR/figures/2023/accept_poster/yYEb8v65X8/Figure3.png | 3 | Figure 3: Material and Light Sampling Field Network. (a) Our material network takes in 3D position and view direction as inputs and predicts the specular strength and skin scattering parameters of our BRDF model. (b) The Light Sampling Field network applies a similar differentiable network but with the light sampling cre... | <paragraph_1>coefficients of local SH. We visualize our Light Sampling Field with selected discrete sample points in Fig. 2b. Learning Light Sampling Field. We design a network (Fig. 3b) to predict the spherical harmonics coefficients C^m_k of a continuous light field. The inputs of this network are the lighting embeddin... | diagram | 0.995146 |
OpenReview | ICLR | 2,023 | Evidential Uncertainty and Diversity Guided Active Learning for Scene Graph Generation | Scene Graph Generation (SGG) has already shown its great potential in various downstream tasks, but it comes at the price of a prohibitively expensive annotation process. To reduce the annotation cost, we propose using Active Learning (AL) for sampling the most informative data. However, directly porting current AL met... | Active learning, Scene graph generation, Uncertainty estimation | Applications (eg, speech processing, computer vision, NLP) | We proposed an Active Learning framework for the Scene Graph Generation. | [
6,
6,
6,
5
] | Accept: poster | Shuzhou Sun, Shuaifeng Zhi, Janne Heikkilä, Li Liu | ~Shuzhou_Sun1, ~Shuaifeng_Zhi2, ~Janne_Heikkilä1, ~Li_Liu9 | 20220922 | https://openreview.net/forum?id=xI1ZTtVOtlz | xI1ZTtVOtlz | @inproceedings{
sun2023evidential,
title={Evidential Uncertainty and Diversity Guided Active Learning for Scene Graph Generation},
author={Shuzhou Sun and Shuaifeng Zhi and Janne Heikkil{\"a} and Li Liu},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openrevie... | OpenReview/ICLR/figures/2023/accept_poster/xI1ZTtVOtlz/Figure1.png | 1 | Figure 1: The overall structure of our proposed AL framework EDAL. Following the standard AL setup, EDAL samples data from the unlabeled pool round by round to support the model training, and the quit condition is that the label budget is exhausted. | <paragraph_1>The overall pipeline of EDAL is shown in Figure 1, which is a hybrid AL model composed of uncertainty-based and diversity-based methods. First, the evidential uncertainty estimation method is applied with the extracted prior information from the available labeled data samples to estimate the relationship u... | diagram | 0.992964 |
OpenReview | ICLR | 2,023 | MEDICAL IMAGE UNDERSTANDING WITH PRETRAINED VISION LANGUAGE MODELS: A COMPREHENSIVE STUDY | The large-scale pre-trained vision language models (VLM) have shown remarkable domain transfer capability on natural images. However, it remains unknown whether this capability can also apply to the medical image domain. This paper thoroughly studies the knowledge transferability of pre-trained VLMs to the medical doma... | Vision Language models, Multimodality, Medical images, Few-shot learning, zero-shot | Deep Learning and representational learning | This paper discuss about how to leverage the trending vision language model to transfer to the medical domain, showing exciting performance on zero-shot and few-shot learning tasks. | [
6,
8,
8,
6
] | Accept: poster | Ziyuan Qin, Huahui Yi, Qicheng Lao, Kang Li | ~Ziyuan_Qin1, ~Huahui_Yi1, ~Qicheng_Lao2, ~Kang_Li9 | 20220922 | https://openreview.net/forum?id=txlWziuCE5W | txlWziuCE5W | @inproceedings{
qin2023medical,
title={{MEDICAL} {IMAGE} {UNDERSTANDING} {WITH} {PRETRAINED} {VISION} {LANGUAGE} {MODELS}: A {COMPREHENSIVE} {STUDY}},
author={Ziyuan Qin and Huahui Yi and Qicheng Lao and Kang Li},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://... | OpenReview/ICLR/figures/2023/accept_poster/txlWziuCE5W/Figure1.png | 1 | Figure 1: Overview of the proposed approach. The optimal medical prompts can be automatically generated with the help of pre-trained VQA model, medical language model, or a hybrid of both. | <paragraph_1>Figure 1 (right) illustrates the overall flow of our MLM-driven auto-prompt generation pipeline. We first ask the model which contains medical domain-specific knowledge to predict the masked token in given cloze sentences we design. The template of the cloze sentences is given as: ‘The [Attr] of an [Object... | diagram | 0.994329 |
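The quoted cloze template is truncated in this record; a sketch of instantiating it, where the template's completion, the attribute list, and the object are all illustrative guesses:

```python
template = "The [Attr] of an [Object] is [MASK]."   # assumed completion of the template
attrs, obj = ["color", "shape", "location"], "effusion"
cloze = [template.replace("[Attr]", a).replace("[Object]", obj) for a in attrs]
# each sentence would then be completed by the medical MLM, and the predicted
# tokens assembled into the final prompt
print(cloze[0])   # The color of an effusion is [MASK].
```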
OpenReview | ICLR | 2,023 | Impossibly Good Experts and How to Follow Them | We consider the sequential decision making problem of learning from an expert that has access to more information than the learner. For many problems this extra information will enable the expert to achieve greater long term reward than any policy without this privileged information access. We call these experts ``Im... | Imitation Learning, Reinforcement Learning, Experts, Distillation | Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics) | [
6,
6,
6
] | Accept: poster | Aaron Walsman, Muru Zhang, Sanjiban Choudhury, Dieter Fox, Ali Farhadi | ~Aaron_Walsman1, nanami17@cs.washington.edu, ~Sanjiban_Choudhury1, ~Dieter_Fox1, ~Ali_Farhadi3 | 20220922 | https://openreview.net/forum?id=sciA_xgYofB | sciA_xgYofB | @inproceedings{
walsman2023impossibly,
title={Impossibly Good Experts and How to Follow Them},
author={Aaron Walsman and Muru Zhang and Sanjiban Choudhury and Dieter Fox and Ali Farhadi},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=sc... | OpenReview/ICLR/figures/2023/accept_poster/sciA_xgYofB/Figure5.png | 5 | Figure 5: ADVISOR cannot recover the agent-optimal policy π∗_L in Example IV. | <paragraph_1>Example IV in Figure 5 provides a demonstration of a failure case for ADVISOR. In this example, nothing prevents the auxiliary policy π_aux from replicating expert behavior at A, meaning the ADVISOR loss will strongly favor the imitation learning signal at this location which encourages the</paragraph_1> | diagram | 0.997086 |
OpenReview | ICLR | 2,023 | Budgeted Training for Vision Transformer | The superior performances of Vision Transformers often come with higher training costs. Compared to their CNN counterpart, Transformer models are hungry for large-scale data and their training schedules are usually prolonged. This sets great restrictions on training Transformers with limited resources, where a proper t... | Deep Learning and representational learning | [
6,
5,
6
] | Accept: poster | zhuofan xia, Xuran Pan, Xuan Jin, Yuan He, Hui Xue', Shiji Song, Gao Huang | ~zhuofan_xia1, ~Xuran_Pan1, ~Xuan_Jin1, ~Yuan_He2, ~Hui_Xue'1, ~Shiji_Song1, ~Gao_Huang1 | 20220922 | https://openreview.net/forum?id=sVzBN-DlJRi | sVzBN-DlJRi | @inproceedings{
xia2023budgeted,
title={Budgeted Training for Vision Transformer},
author={zhuofan xia and Xuran Pan and Xuan Jin and Yuan He and Hui Xue' and Shiji Song and Gao Huang},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=sVzB... | OpenReview/ICLR/figures/2023/accept_poster/sVzBN-DlJRi/Figure5.png | 5 | Figure 5: Illustration of our growing process of the MLP ratio γ. Taking ϕ1(·) with the original γ = 4 as the example, the 4 × 1 matrix projects C dimensions to 4C dimensions, with 4 rows as the output dimensions and 1 column as the input dimension. The rows are divided into M = 4 parts to activate progressively, fro... | <paragraph_1>M C. Similarly, for the MLP hidden dimension, only C_2^(1) of C_2 channels are activated in the MLP layers by incorporating a smaller MLP ratio γ. We illustrate the growing process of the MLP ratio in Fig. 5, and the growing of attention heads follows the same recipe.</paragraph_1> | diagram | 0.926191 |
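The growing scheme in the caption amounts to enabling row groups of the MLP projection progressively, so the effective ratio starts small and grows toward γ = 4. A toy sketch with masking standing in for the paper's mechanism; M = 4 as in the figure, everything else illustrative:

```python
import torch
import torch.nn as nn

C, M = 8, 4
fc = nn.Linear(C, 4 * C)                 # full-ratio projection (gamma = 4)

def masked_forward(x, active_parts):
    rows = active_parts * (4 * C // M)   # row groups activated so far
    mask = torch.zeros(4 * C)
    mask[:rows] = 1.0
    return fc(x) * mask                  # inactive rows contribute nothing yet

y = masked_forward(torch.randn(2, C), active_parts=2)   # effective ratio 2
```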
OpenReview | ICLR | 2,023 | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing | This paper presents a new pre-trained language model, NewModel, which improves the original DeBERTa model by replacing mask language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and m... | Deep Learning and representational learning | [
6,
8,
6,
5
] | Accept: poster | Pengcheng He, Jianfeng Gao, Weizhu Chen | ~Pengcheng_He2, ~Jianfeng_Gao1, ~Weizhu_Chen1 | 20220922 | https://openreview.net/forum?id=sE7-XhLxHA | sE7-XhLxHA | @inproceedings{
he2023debertav,
title={De{BERT}aV3: Improving De{BERT}a using {ELECTRA}-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://open... | OpenReview/ICLR/figures/2023/accept_poster/sE7-XhLxHA/Figure1.png | 1 | Figure 1: Illustration of different embedding sharing methods. (a) ES: E, θ_G and θ_D will be jointly updated in a single backward pass with regard to L_MLM + λL_RTD. (b) NES: E_G and θ_G will first be updated via the backward pass with regard to L_MLM, then E_D and θ_D will be updated via the backward pass with regard to ... | <paragraph_1>To pre-train ELECTRA, we use a generator and a discriminator that share token embeddings, as shown in Figure 1 (a). This method, called Embedding Sharing (ES), allows the generator to provide informative inputs for the discriminator and reduces the number of parameters to learn. However, it also creates a ... | diagram | 0.990451 |
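A minimal sketch of the gradient-disentangled variant the title refers to: the discriminator reuses the generator's embedding table through a stop-gradient plus its own residual table, so RTD gradients never reach the shared embeddings. The sizes are illustrative:

```python
import torch
import torch.nn as nn

vocab, d = 100, 16
E_G = nn.Embedding(vocab, d)       # generator embedding, trained by the MLM loss
E_Delta = nn.Embedding(vocab, d)   # discriminator's residual embedding
nn.init.zeros_(E_Delta.weight)

def discriminator_embed(tokens):
    # E_D = sg(E_G) + E_Delta: the detach blocks RTD gradients into E_G
    return E_G(tokens).detach() + E_Delta(tokens)

emb = discriminator_embed(torch.randint(0, vocab, (1, 5)))
```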
OpenReview | ICLR | 2,023 | Spacetime Representation Learning | Much of the data we encounter in the real world can be represented as directed graphs. In this work, we introduce a general family of representations for directed graphs through connected time-oriented Lorentz manifolds, called "spacetimes" in general relativity. Spacetimes intrinsically contain a causal structure that... | pseudo-Riemannian geometry, spacetimes, Lorentz geometry, Lorentzian causality theory, Lorentzian pre-length spaces, directed graphs | General Machine Learning (ie none of the above) | Representation of directed graphs by exploiting the causal structure of spacetimes via Lorentzian pre-length spaces | [
8,
6,
3,
6
] | Accept: poster | Marc T. Law, James Lucas | ~Marc_T._Law1, ~James_Lucas1 | 20220922 | https://openreview.net/forum?id=qV_M_rhYajc | qV_M_rhYajc | @inproceedings{
law2023spacetime,
title={Spacetime Representation Learning},
author={Marc T. Law and James Lucas},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=qV_M_rhYajc}
} | OpenReview/ICLR/figures/2023/accept_poster/qV_M_rhYajc/Figure8.png | 8 | Figure 8: A graph with directed cycles. | <paragraph_1>We now consider the graph G = (V, E) defined as V = {v_i}_{i=1}^{4} and E = {(v1, v4), (v2, v1), (v3, v1), (v3, v2), (v4, v2), (v4, v3)}. This is a graph with directed cycles (e.g., v1 → v4 → v2 → v1, see Figure 8).</paragraph_1> | diagram | 0.994301 |
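The stated edge set can be checked directly against the quoted cycle; a short sanity check (illustrative, not code from the paper):

```python
E = {(1, 4), (2, 1), (3, 1), (3, 2), (4, 2), (4, 3)}   # (i, j) means v_i -> v_j
cycle = [(1, 4), (4, 2), (2, 1)]                        # v1 -> v4 -> v2 -> v1
assert all(e in E for e in cycle)
```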
OpenReview | ICLR | 2,023 | On Explaining Neural Network Robustness with Activation Path | Despite their verified performance, neural networks are prone to be misled by maliciously designed adversarial examples. This work investigates the robustness of neural networks from the activation pattern perspective. We find that despite the complex structure of the deep neural network, most of the neurons provide lo... | Randomized Smoothing, Robustness, Neural Network | Deep Learning and representational learning | [
6,
6,
6,
6
] | Accept: poster | Ziping Jiang | ~Ziping_Jiang1 | 20220922 | https://openreview.net/forum?id=piIsx-G3Gux | piIsx-G3Gux | @inproceedings{
jiang2023on,
title={On Explaining Neural Network Robustness with Activation Path},
author={Ziping Jiang},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=piIsx-G3Gux}
} | OpenReview/ICLR/figures/2023/accept_poster/piIsx-G3Gux/Figure5.png | 5 | Figure 5: An illustration of the fixed (float) path and neuron of a neural network with 2D input, 4D output, and 1 hidden layer. (1) The 2D input space; (2) A sphere centered at x; (3) A float neuron with index (1,4) in the network; (4) A bent hyperplane defined by H = {x : z_{1,4}(x) = 0}. | <paragraph_1>Figure 5 presents an illustration of the key concepts of this work. Consider N as a neural network with 2D input, 4D output and 1 hidden layer with 4 neurons. Assume that N has the ReLU activation function. The input space is partitioned into several regions by a set of hyperplanes. Each of the regions is refer... | diagram | 0.960041 |
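Reading off an activation pattern for the toy network in the figure is a one-liner: compute the hidden pre-activations and record their signs. A sketch, with random weights standing in for a trained network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden = nn.Linear(2, 4)        # 2D input, 4 hidden units as in the figure
x = torch.randn(1, 2)
z = hidden(x)                   # pre-activations z_{1,j}(x)
pattern = (z > 0).squeeze(0)    # which ReLU units are "on" at x
# the bent hyperplane for unit (1,4) is the zero set {x : z_{1,4}(x) = 0}
print(pattern)
```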
OpenReview | ICLR | 2,023 | Function-Consistent Feature Distillation | Feature distillation makes the student mimic the intermediate features of the teacher. Nearly all existing feature-distillation methods use L2 distance or its slight variants as the distance metric between teacher and student features. However, while L2 distance is isotropic w.r.t. all dimensions, the neural network’s ... | knowledge distillation, feature distillation, function consistency | Deep Learning and representational learning | [
8,
5,
8,
5
] | Accept: poster | Dongyang Liu, Meina Kan, Shiguang Shan, Xilin CHEN | ~Dongyang_Liu1, ~Meina_Kan1, ~Shiguang_Shan2, ~Xilin_CHEN2 | 20220922 | https://openreview.net/forum?id=pgHNOcxEdRI | pgHNOcxEdRI | @inproceedings{
liu2023functionconsistent,
title={Function-Consistent Feature Distillation},
author={Dongyang Liu and Meina Kan and Shiguang Shan and Xilin CHEN},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=pgHNOcxEdRI}
} | OpenReview/ICLR/figures/2023/accept_poster/pgHNOcxEdRI/Figure2.png | 2 | Figure 2: An overview of FCFD. Top: illustration of the traditional KD loss (L_kd) and appearance-based feature matching loss (L_app). Bottom: illustration of our proposed function matching losses L_func and L_func′. Note that the three L^1_func terms in the figure sum up to the complete L^1_func. In each iteration, we randomly ... | <paragraph_1>We propose FCFD, where both the numerical value of features and the information about later layers are combined together for faithful feature mimicking. An illustration of FCFD is shown in Fig. 2. Due to limited space, here we focus on the methodology for image classification. However, FCFD is also applicab... | diagram | 0.86913 |
OpenReview | ICLR | 2,023 | $\Lambda$-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells | Differentiable neural architecture search (DARTS) is a popular method for neural architecture search (NAS), which performs cell-search and utilizes continuous relaxation to improve the search efficiency via gradient-based optimization. The main shortcoming of DARTS is performance collapse, where the discovered architec... | Deep Learning and representational learning | We pinpoint the reason for performance collapse in DARTS and provide theoretical and empirical analysis on that as well as a solution to remedy the performance collapse via harmonizing the decisions of different cell. | [
6,
8,
6,
6
] | Accept: poster | Sajad Movahedi, Melika Adabinejad, Ayyoob Imani, Arezou Keshavarz, Mostafa Dehghani, Azadeh Shakery, Babak N Araabi | ~Sajad_Movahedi1, ~Melika_Adabinejad1, ~Ayyoob_Imani1, ~Arezou_Keshavarz1, ~Mostafa_Dehghani1, ~Azadeh_Shakery1, ~Babak_N_Araabi1 | 20220922 | https://openreview.net/forum?id=oztkQizr3kk | oztkQizr3kk | @inproceedings{
movahedi2023lambdadarts,
title={\${\textbackslash}Lambda\$-{DARTS}: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells},
author={Sajad Movahedi and Melika Adabinejad and Ayyoob Imani and Arezou Keshavarz and Mostafa Dehghani and Azadeh Shakery and Babak N Araabi},
booktitle={... | OpenReview/ICLR/figures/2023/accept_poster/oztkQizr3kk/Figure14.png | 14 | Figure 14: Best normal and reduction cells discovered by Λ(ω,α) on S1 and CIFAR-100 dataset. | diagram | 0.993248 | ||
OpenReview | ICLR | 2,023 | $\Lambda$-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells | Differentiable neural architecture search (DARTS) is a popular method for neural architecture search (NAS), which performs cell-search and utilizes continuous relaxation to improve the search efficiency via gradient-based optimization. The main shortcoming of DARTS is performance collapse, where the discovered architec... | Deep Learning and representational learning | We pinpoint the reason for performance collapse in DARTS and provide theoretical and empirical analysis on that as well as a solution to remedy the performance collapse via harmonizing the decisions of different cell. | [
6,
8,
6,
6
] | Accept: poster | Sajad Movahedi, Melika Adabinejad, Ayyoob Imani, Arezou Keshavarz, Mostafa Dehghani, Azadeh Shakery, Babak N Araabi | ~Sajad_Movahedi1, ~Melika_Adabinejad1, ~Ayyoob_Imani1, ~Arezou_Keshavarz1, ~Mostafa_Dehghani1, ~Azadeh_Shakery1, ~Babak_N_Araabi1 | 20220922 | https://openreview.net/forum?id=oztkQizr3kk | oztkQizr3kk | @inproceedings{
movahedi2023lambdadarts,
title={\${\textbackslash}Lambda\$-{DARTS}: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells},
author={Sajad Movahedi and Melika Adabinejad and Ayyoob Imani and Arezou Keshavarz and Mostafa Dehghani and Azadeh Shakery and Babak N Araabi},
booktitle={... | OpenReview/ICLR/figures/2023/accept_poster/oztkQizr3kk/Figure16.png | 16 | Figure 16: Best normal and reduction cells discovered by Λ(ω,α) on S3 and CIFAR-100 dataset. | diagram | 0.991562 | ||
OpenReview | ICLR | 2,023 | Decompose to Generalize: Species-Generalized Animal Pose Estimation | This paper challenges the cross-species generalization problem for animal pose estimation, aiming to learn a pose estimator that can be well generalized to novel species. We find the relation between different joints is important with two-fold impact: 1) on the one hand, some relation is consistent across all the speci... | Pose Estimation, Domain Generalization, Transfer Learning | Deep Learning and representational learning | [
5,
6,
8,
6
] | Accept: poster | Guangrui Li, Yifan Sun, Zongxin Yang, Yi Yang | ~Guangrui_Li1, ~Yifan_Sun2, ~Zongxin_Yang1, ~Yi_Yang22 | 20220922 | https://openreview.net/forum?id=nQai_B1Zrt | nQai_B1Zrt | @inproceedings{
li2023,
title={ Decompose to Generalize: Species-Generalized Animal Pose Estimation},
author={Guangrui Li and Yifan Sun and Zongxin Yang and Yi Yang},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=nQai_B1Zrt}
} | OpenReview/ICLR/figures/2023/accept_poster/nQai_B1Zrt/Figure2.png | 2 | Figure 2: Overview of “Decompose to Generalize” (D-Gen) scheme. D-Gen consists of two stages, i.e., joints decomposition (left) and the sub-sequential network split (right). 1) In the joints decomposition stage, D-Gen leverages different strategies (e.g., heuristic, geometry-based or attention-based) to divide the body... | <paragraph_1>The framework of the proposed D-Gen is illustrated in Fig. 2. The key motivation is that some joint relations are consistent across all the species and are thus beneficial for cross-species generalization, while some other joint relations are inconsistent and harmful. Therefore, D-Gen seeks to break the in... | diagram | 0.93022 | |
OpenReview | ICLR | 2,023 | Integrating Symmetry into Differentiable Planning with Steerable Convolutions | To achieve this, we draw inspiration from equivariant convolution networks and model the path planning problem as a set of signals over grids. We demonstrate that value iteration can be treated as a linear equivariant operator, which is effectively a steerable convolution. Building upon Value Iteration Networks (VIN), ... | Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics) | [
8,
8,
6
] | Accept: poster | Linfeng Zhao, Xupeng Zhu, Lingzhi Kong, Robin Walters, Lawson L.S. Wong | ~Linfeng_Zhao1, ~Xupeng_Zhu1, ~Lingzhi_Kong2, ~Robin_Walters1, ~Lawson_L.S._Wong2 | 20220922 | https://openreview.net/forum?id=n7CPzMPKQl | n7CPzMPKQl | @inproceedings{
zhao2023integrating,
title={Integrating Symmetry into Differentiable Planning with Steerable Convolutions},
author={Linfeng Zhao and Xupeng Zhu and Lingzhi Kong and Robin Walters and Lawson L.S. Wong},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={http... | OpenReview/ICLR/figures/2023/accept_poster/n7CPzMPKQl/Figure4.png | 4 | Figure 4: Commutative diagram of a single step of value update, showing equivariance under rotations. Each grid in the Q-value field corresponds to all the values Q(·, a) of a single action a. | <paragraph_1>2. Value iteration, using the Bellman (optimality) operator, consists of only maps between signals (steerable fields) over Z2 (e.g., value map and transition function map). This allows us to inject symmetry by enforcing equivariance to those maps. Taking Figure 1 as an example, the 4 corner states are symm... | diagram | 0.965825 | ||
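The figure context treats one value-update step as a linear (steerable-convolution) operator over grid signals, which is also the claim in the abstract. A minimal sketch of that view, without the equivariance constraint; the kernel, discount, and grid size are illustrative:

```python
import torch
import torch.nn.functional as F

r = torch.randn(1, 1, 8, 8)              # reward field over the grid
v = torch.zeros(1, 1, 8, 8)              # value field
w = torch.randn(4, 1, 3, 3)              # one 3x3 kernel per action

q = r + 0.9 * F.conv2d(v, w, padding=1)  # Q-value field with 4 action channels
v = q.max(dim=1, keepdim=True).values    # Bellman backup: max over actions
```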
OpenReview | ICLR | 2,023 | TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization | Despite their success with unstructured data, deep neural networks are not yet a panacea for structured tabular data. In the tabular domain, their efficiency crucially relies on various forms of regularization to prevent overfitting and provide strong generalization performance. Existing regularization techniques inclu... | Deep Learning, Tabular Data, Regularization | Deep Learning and representational learning | We introduce TANGOS, a regularization method that orthogonalizes the gradient attribution of neurons to improve the generalization of deep neural networks on tabular data. | [
5,
8,
6
] | Accept: poster | Alan Jeffares, Tennison Liu, Jonathan Crabbé, Fergus Imrie, Mihaela van der Schaar | ~Alan_Jeffares1, ~Tennison_Liu1, ~Jonathan_Crabbé1, ~Fergus_Imrie1, ~Mihaela_van_der_Schaar2 | 20220922 | https://openreview.net/forum?id=n6H86gW8u0d | n6H86gW8u0d | @inproceedings{
jeffares2023tangos,
title={{TANGOS}: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization},
author={Alan Jeffares and Tennison Liu and Jonathan Crabb{\'e} and Fergus Imrie and Mihaela van der Schaar},
booktitle={The Eleventh International Conference on Learning Repr... | OpenReview/ICLR/figures/2023/accept_poster/n6H86gW8u0d/Figure2.png | 2 | Figure 2: Method illustration. TANGOS regularizes the gradients with respect to each of the latent units. | diagram | 0.972716 | |
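As a rough illustration of the regularizer named in the caption above (gradient orthogonalization and specialization over latent units), here is a sketch under assumed conventions; the penalty weights, model, and normalization are guesses, not the authors' implementation:

```python
# Rough sketch of a TANGOS-style penalty: for each latent unit h_i, take its
# gradient attribution a_i = dh_i/dx, then (1) encourage sparse attributions
# (specialization) and (2) penalize pairwise cosine similarity between
# different units' attributions (orthogonalization). Weights are toy values.
import torch
import torch.nn.functional as F

def tangos_penalty(model, x, lam_spec=0.1, lam_orth=0.1):
    x = x.clone().requires_grad_(True)
    h = model(x)                                   # (batch, n_units) latents
    attrs = []
    for i in range(h.shape[1]):
        g, = torch.autograd.grad(h[:, i].sum(), x, create_graph=True)
        attrs.append(g.flatten(1))
    a = torch.stack(attrs, dim=1)                  # (batch, units, features)
    spec = a.abs().mean()                          # L1 sparsity per neuron
    a_n = F.normalize(a, dim=-1)
    cos = torch.einsum("bif,bjf->bij", a_n, a_n)   # pairwise cosine matrix
    off = cos - torch.diag_embed(torch.diagonal(cos, dim1=1, dim2=2))
    orth = off.abs().mean()                        # off-diagonal similarity
    return lam_spec * spec + lam_orth * orth

net = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.Tanh())
print(tangos_penalty(net, torch.randn(4, 10)))
```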
OpenReview | ICLR | 2,023 | Latent Neural ODEs with Sparse Bayesian Multiple Shooting | Training dynamic models, such as neural ODEs, on long trajectories is a hard problem that requires using various tricks, such as trajectory splitting, to make model training work in practice. These methods are often heuristics with poor theoretical justifications, and require iterative manual tuning. We propose a princ... | Generative models | [
8,
10,
6,
6
] | Accept: poster | Valerii Iakovlev, Cagatay Yildiz, Markus Heinonen, Harri Lähdesmäki | ~Valerii_Iakovlev1, ~Cagatay_Yildiz1, ~Markus_Heinonen1, ~Harri_Lähdesmäki1 | 20220922 | https://openreview.net/forum?id=moIlFZfj_1b | moIlFZfj_1b | @inproceedings{
iakovlev2023latent,
title={Latent Neural {ODE}s with Sparse Bayesian Multiple Shooting},
author={Valerii Iakovlev and Cagatay Yildiz and Markus Heinonen and Harri L{\"a}hdesm{\"a}ki},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net... | OpenReview/ICLR/figures/2023/accept_poster/moIlFZfj_1b/Figure6.png | 6 | Figure 6: (a) Temporal attention. (b) Relative position encoding. | <paragraph_1>where $\epsilon \in (0, 1]$, $p \in \mathbb{N}$ and $\delta_r \in \mathbb{R}_{>0}$ are constants. Since $\exp(C^{DP}_{ij} + C^{TA}_{ij}) = \exp(C^{DP}_{ij})\exp(C^{TA}_{ij})$, the main purpose of temporal attention is to reduce the amount of attention from $\beta_i$ to $\alpha_j$ as the temporal distance $|t_i - t_j|$ grows. Parameter $\delta_r$ defines the distance beyond which $\exp(C^{DP}_{ij})$ is s... | diagram | 0.965571 | |
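The factorized attention described in that paragraph can be sketched as follows; the exact form of the penalty $C^{TA}_{ij}$ below (flat up to $\delta_r$, then a degree-$p$ polynomial scaled by $\epsilon$) is an assumption inferred from the constants named in the text, not the paper's formula:

```python
# Sketch of the factorization above: total logit C_dp + C_ta gives weight
# exp(C_dp) * exp(C_ta), so C_ta can only damp attention between temporally
# distant pairs. The penalty shape is an assumption, not the paper's formula.
import numpy as np

def attention_weights(content_logits, t, delta_r=1.0, p=2, eps=0.5):
    """content_logits: (n, n) dot-product scores; t: (n,) time stamps."""
    dist = np.abs(t[:, None] - t[None, :])              # |t_i - t_j|
    c_ta = -np.maximum(0.0, dist / delta_r - 1.0) ** p / eps
    logits = content_logits + c_ta
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)             # row-wise softmax

t = np.linspace(0.0, 5.0, 6)
print(attention_weights(np.zeros((6, 6)), t).round(3))
# Rows concentrate on temporally nearby columns; far pairs are suppressed.
```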
OpenReview | ICLR | 2,023 | Continuous pseudo-labeling from the start | Self-training (ST), or pseudo-labeling, has sparked significant interest in the automatic speech recognition (ASR) community recently because of its success in harnessing unlabeled data. Unlike prior semi-supervised learning approaches that relied on iteratively regenerating pseudo-labels (PLs) from a trained model and ... | self-training, pseudo-labeling, speech recognition, data selection and filtering | Unsupervised and Self-supervised learning | We show how to perform continuous self-training right from the start without any supervised pre-training. | [
6,
6,
5,
8
] | Accept: poster | Dan Berrebbi, Ronan Collobert, Samy Bengio, Navdeep Jaitly, Tatiana Likhomanenko | ~Dan_Berrebbi1, ~Ronan_Collobert1, ~Samy_Bengio1, ~Navdeep_Jaitly1, ~Tatiana_Likhomanenko1 | 20220922 | https://openreview.net/forum?id=m3twGT2bAug | m3twGT2bAug | @inproceedings{
berrebbi2023continuous,
title={Continuous pseudo-labeling from the start},
author={Dan Berrebbi and Ronan Collobert and Samy Bengio and Navdeep Jaitly and Tatiana Likhomanenko},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum... | OpenReview/ICLR/figures/2023/accept_poster/m3twGT2bAug/Figure1.png | 1 | Figure 1: Comparison between slimIPL (left) and how we control the cache by using PL evolution (right). The constant $p_{out}$ from slimIPL is now dynamic and computed based on the PL evolution. | <paragraph_1>unlabeled subset U C, which is itself updated as training goes: at each iteration, slimIPL removes a sample from the cache with probability $p_{out}$, replacing it with a new one $x \in U$ along with its generated PL. More details about slimIPL can be found in Algorithm 1 and in Figure 1.</paragraph_1>
<paragraph_2>... | diagram | 0.894826 |
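The cache dynamics described in paragraph_1 above admit a short sketch: train on a cached (sample, pseudo-label) pair and, with probability $p_{out}$, replace the slot with a freshly pseudo-labeled sample. The function names and stand-in model calls below are hypothetical, not slimIPL's actual code:

```python
# Sketch of slimIPL-style cache dynamics: fit on a cached pair, then with
# probability p_out refresh that slot with a new pseudo-labeled sample drawn
# from the unlabeled stream. Model and labeling calls are placeholders.
import random

def cache_pseudo_labeling(stream, pseudo_label, train_step,
                          cache_size=8, p_out=0.1, iterations=100):
    cache = [(x, pseudo_label(x))
             for x in (next(stream) for _ in range(cache_size))]
    for _ in range(iterations):
        i = random.randrange(cache_size)
        train_step(*cache[i])                # fit on sample + its (stale) PL
        if random.random() < p_out:          # refresh this cache slot
            x = next(stream)
            cache[i] = (x, pseudo_label(x))  # PL comes from the current model

# Toy usage with stand-in functions:
cache_pseudo_labeling(stream=iter(range(10_000)),
                      pseudo_label=lambda x: x % 3,
                      train_step=lambda x, y: None)
```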
OpenReview | ICLR | 2,023 | A Differential Geometric View and Explainability of GNN on Evolving Graphs | Graphs are ubiquitous in social networks and biochemistry, where Graph Neural Networks (GNN) are the state-of-the-art models for prediction. Graphs can evolve, and it is vital to formally model and understand how a trained GNN responds to graph evolution. We propose a smooth parameterization of the GNN predicted di... | Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics) | [
8,
6,
6,
6
] | Accept: poster | Yazheng Liu, Xi Zhang, Sihong Xie | ~Yazheng_Liu1, ~Xi_Zhang12, ~Sihong_Xie1 | 20220922 | https://openreview.net/forum?id=lRdhvzMpVYV | lRdhvzMpVYV | @inproceedings{
liu2023a,
title={A Differential Geometric View and Explainability of {GNN} on Evolving Graphs},
author={Yazheng Liu and Xi Zhang and Sihong Xie},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=lRdhvzMpVYV}
} | OpenReview/ICLR/figures/2023/accept_poster/lRdhvzMpVYV/Figure1.png | 1 | Figure 1: $G_0$ at time $s = 0$ is updated to $G_1$ at time $s = 1$ after the edge $(J, K)$ is added, and the predicted class distribution ($\Pr(Y \mid G_0)$) of node $J$ changes accordingly. The contributions of each path $p$ on a computation graph to $\Pr(Y = j \mid G)$ for class $j$ give the coordinates of $\Pr(Y \mid G)$ in a high-dimensional Euclidea... | <paragraph_1>global Euclidean space $\mathbb{R}^m$, so that each point can be assigned $m$ global coordinates. A smooth curve on $M$ is a smooth function $\gamma : [0, 1] \to M$. A two-dimensional manifold embedded in $\mathbb{R}^3$ with two curves is shown in Figure 1.</paragraph_1>
<paragraph_2>We propose a novel extrinsic coordinate based on the co... | diagram | 0.933018 | ||
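As a worked example of the definition above ($\gamma : [0, 1] \to M$), the sketch below takes $M$ to be the unit sphere $S^2$ embedded in $\mathbb{R}^3$, purely for illustration; this is not the prediction manifold studied in the paper:

```python
# Worked example of a smooth curve gamma: [0, 1] -> M, with M the unit
# sphere S^2 embedded in R^3 (an illustrative manifold only).
import numpy as np

def gamma(s: np.ndarray) -> np.ndarray:
    """A smooth curve winding over S^2, parameterized by s in [0, 1]."""
    theta, phi = np.pi * s, 4 * np.pi * s
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

s = np.linspace(0.0, 1.0, 101)
pts = gamma(s)
# Every point has unit norm, i.e. the curve stays on the manifold:
assert np.allclose(np.linalg.norm(pts, axis=-1), 1.0)
print(pts[:3].round(3))
```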
OpenReview | ICLR | 2,023 | Human MotionFormer: Transferring Human Motions with Vision Transformers | Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis. An accurate matching between the source person and the target motion in both large and subtle motion changes is vital for improving the transferred motion quality. In this paper, we propose Human Mot... | Applications (eg, speech processing, computer vision, NLP) | [
8,
3,
6,
6
] | Accept: poster | Hongyu Liu, Xintong Han, Chenbin Jin, Lihui Qian, Huawei Wei, Zhe Lin, Faqiang Wang, Haoye Dong, Yibing Song, Jia Xu, Qifeng Chen | ~Hongyu_Liu2, ~Xintong_Han1, sbkim0407@gmail.com, turtleduck1995@gmail.com, ~Huawei_Wei2, linzheshabia@gmail.com, tshfqw@163.com, ~Haoye_Dong1, ~Yibing_Song1, ~Jia_Xu1, ~Qifeng_Chen1 | 20220922 | https://openreview.net/forum?id=lQVpasnQS62 | lQVpasnQS62 | @inproceedings{
liu2023human,
title={Human MotionFormer: Transferring Human Motions with Vision Transformers},
author={Hongyu Liu and Xintong Han and Chenbin Jin and Lihui Qian and Huawei Wei and Zhe Lin and Faqiang Wang and Haoye Dong and Yibing Song and Jia Xu and Qifeng Chen},
booktitle={The Eleventh International C... | OpenReview/ICLR/figures/2023/accept_poster/lQVpasnQS62/Figure3.png | 3 | Figure 3: Overview of our decoder and fusion blocks. There are warping and generation branches in these two blocks. In the decoder block, we build the global and local correspondence between the source image and the target pose with Multi-Head Cross-Attention and a CNN, respectively. The fusion block predicts a mask to combine the ou... | <paragraph_1>As shown in Fig. 3, the decoder block has warping and generation branches. In each branch, there is a cross-attention process and a convolutional layer to capture the global and local correspondence, respectively. Let $X^l_{de}$ denote the output of the $l$-th decoder block (l > 1) or the output of the precedent stage (l =... | diagram | 0.999716 | |
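The mask-based fusion named in the Figure 3 caption can be sketched as below; the channel sizes and the mask-head layers are arbitrary placeholders, not the paper's architecture:

```python
# Sketch of mask-based fusion: a predicted soft mask blends the warping
# branch's output with the generation branch's output.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict a per-pixel blending mask from both branch outputs.
        self.mask_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, warp_feat, gen_feat):
        m = self.mask_head(torch.cat([warp_feat, gen_feat], dim=1))
        return m * warp_feat + (1.0 - m) * gen_feat   # soft combination

block = FusionBlock()
x = torch.randn(1, 64, 32, 32)
print(block(x, x).shape)   # torch.Size([1, 64, 32, 32])
```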
OpenReview | ICLR | 2,023 | Augmentation with Projection: Towards an Effective and Efficient Data Augmentation Paradigm for Distillation | Knowledge distillation is one of the primary methods of transferring knowledge from large to small models. However, it requires massive task-specific data, which may not be plausible in many real-world applications. Data augmentation methods such as representation interpolation, token replacement, or augmentation with ... | Knowledge Distillation, Data Augmentation, Natural Language Processing | Deep Learning and representational learning | We proposed an effective and efficient data augmentation paradigm for knowledge distillation | [
6,
8,
6,
8
] | Accept: poster | Ziqi Wang, Yuexin Wu, Frederick Liu, Daogao Liu, Le Hou, Hongkun Yu, Jing Li, Heng Ji | ~Ziqi_Wang2, ~Yuexin_Wu1, ~Frederick_Liu1, ~Daogao_Liu1, ~Le_Hou1, ~Hongkun_Yu2, ~Jing_Li10, ~Heng_Ji3 | 20220922 | https://openreview.net/forum?id=kPPVmUF6bM_ | kPPVmUF6bM_ | @inproceedings{
wang2023augmentation,
title={Augmentation with Projection: Towards an Effective and Efficient Data Augmentation Paradigm for Distillation},
author={Ziqi Wang and Yuexin Wu and Frederick Liu and Daogao Liu and Le Hou and Hongkun Yu and Jing Li and Heng Ji},
booktitle={The Eleventh International Conferenc... | OpenReview/ICLR/figures/2023/accept_poster/kPPVmUF6bM_/Figure4.png | 4 | Figure 4: Left: MixUp with knowledge distillation. Right: AugPro-Mix with knowledge distillation. | <paragraph_1>In this section, we will introduce four variants of LAug: two backbones (MixUp and FGSM) and two AugPro variants building on top of them. Figure 4 shows the concept of our proposed method.</paragraph_1>
<paragraph_2>Here we show a concept figure (Figure 4) to let readers better understand the difference be... | diagram | 0.930995 |
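For the MixUp backbone mentioned in paragraph_1 above (Figure 4, left), a minimal sketch of MixUp combined with knowledge distillation follows; the hyperparameters, toy models, and standard softened-KL loss are assumptions based on common MixUp/KD practice, not necessarily the paper's exact setup:

```python
# Sketch of "MixUp with knowledge distillation": mix two continuous inputs
# (embeddings, in the text setting), query the teacher on the mix, and train
# the student to match the softened teacher distribution.
import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teacher, x1, x2, alpha=0.4, temperature=2.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x1 + (1.0 - lam) * x2          # MixUp in input space
    with torch.no_grad():
        t_logits = teacher(x_mix)                # teacher labels the mix
    s_logits = student(x_mix)
    return F.kl_div(F.log_softmax(s_logits / temperature, dim=-1),
                    F.softmax(t_logits / temperature, dim=-1),
                    reduction="batchmean") * temperature ** 2

teacher, student = torch.nn.Linear(16, 4), torch.nn.Linear(16, 4)
loss = mixup_kd_loss(student, teacher, torch.randn(8, 16), torch.randn(8, 16))
print(loss.item())
```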