Enhance model card for BRONet with paper, code, project page, and metadata

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +104 -3
README.md CHANGED
---
license: mit
pipeline_tag: image-classification
library_name: pytorch
tags:
- certified-robustness
- lipschitz-networks
- orthogonal-layers
- logit-annealing
- bronet
- pytorch
---

# Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss

This repository contains the official PyTorch implementation of BRONet, a novel Lipschitz neural network designed to enhance certified robustness in deep learning.

📚 [Paper](https://huggingface.co/papers/2505.15174) | 🌐 [Project Page](https://bob1113.github.io/BRONet/) | 💻 [Code](https://github.com/ntuaislab/BRONet)

## Abstract

Lipschitz neural networks are well-known for providing certified robustness in deep learning. In this paper, we present a novel, efficient Block Reflector Orthogonal (BRO) layer that enhances the capability of orthogonal layers for constructing more expressive Lipschitz neural architectures. In addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to increase the margin for most data points. This enables Lipschitz models to provide better certified robustness. By employing our BRO layer and loss function, we design BRONet, a simple yet effective Lipschitz neural network that achieves state-of-the-art certified robustness. Extensive experiments and empirical analysis on CIFAR-10/100, Tiny-ImageNet, and ImageNet validate that our method outperforms existing baselines. The implementation is available at https://github.com/ntuaislab/BRONet.

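For intuition on why margins matter here: for an L-Lipschitz classifier (in the ℓ₂ sense), a prediction is certifiably unchanged within radius (top1 − top2) / (√2 · L) — a standard bound for Lipschitz networks. The snippet below is an illustrative sketch of that computation, not code from this repository:

```python
import torch

def certified_radius(logits: torch.Tensor, lipschitz_const: float) -> torch.Tensor:
    """l2 certified radius of an L-Lipschitz classifier:
    radius = (top1 - top2) / (sqrt(2) * L)."""
    top2 = logits.topk(2, dim=-1).values
    margin = top2[..., 0] - top2[..., 1]           # logit gap per sample
    return margin / (2.0 ** 0.5 * lipschitz_const)

logits = torch.tensor([[4.0, 1.0, 0.5]])
print(certified_radius(logits, lipschitz_const=1.0))  # margin 3.0 -> ~2.1213
```

A larger margin (or a smaller Lipschitz constant) directly enlarges the certified region, which is what the BRO layer and LA loss jointly target.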
## Overview

Official PyTorch implementation for our ICML 2025 spotlight paper. We introduce:

- **Block Reflector Orthogonal Layer (BRO):** A low-rank, approximation-free orthogonal convolutional layer designed to efficiently construct Lipschitz neural networks, improving both stability and expressiveness.
- **Logit Annealing Loss (LA):** An adaptive loss function that dynamically balances classification margins across samples, leading to enhanced certified robustness.

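The idea behind the BRO layer can be seen in the dense case: a full-column-rank parameter V of shape (n, k) yields the block reflector W = I − 2V(VᵀV)⁻¹Vᵀ, an exactly orthogonal matrix built from low-rank parameters with no iterative approximation. A minimal dense sketch (the repository applies this construction to convolutions; the function name here is illustrative):

```python
import torch

def block_reflector(V: torch.Tensor) -> torch.Tensor:
    """W = I - 2 V (V^T V)^{-1} V^T is orthogonal for any full-column-rank V
    (a block generalization of a Householder reflection)."""
    n, _ = V.shape
    gram_inv = torch.linalg.inv(V.T @ V)                 # (k, k)
    return torch.eye(n, dtype=V.dtype) - 2.0 * V @ gram_inv @ V.T

torch.manual_seed(0)
V = torch.randn(8, 3, dtype=torch.float64)               # low-rank parameter
W = block_reflector(V)
# Orthogonality check: W^T W should be the identity.
err = (W.T @ W - torch.eye(8, dtype=torch.float64)).abs().max().item()
print(f"max deviation from identity: {err:.2e}")
```

Because W is orthogonal by construction, layers built this way are 1-Lipschitz without power iteration or spectral-norm clipping.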
## Repository Structure

- `bronet/` — Contains the implementation for the BRONet experiments.
- `lipconvnet/` — Contains the implementation for the LipConvNet experiments.

Key modules:

- **BRO layer:** [`lipconvnet/models/layers/bro.py`](https://github.com/ntuaislab/BRONet/blob/main/lipconvnet/models/layers/bro.py)
- **LA loss:** [`bronet/models/margin_layer.py`](https://github.com/ntuaislab/BRONet/blob/main/bronet/models/margin_layer.py)

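As a rough, hypothetical illustration of margin annealing (this is **not** the paper's LA formula — see `bronet/models/margin_layer.py` for the actual loss), one can down-weight the cross-entropy of samples whose logit margin is already large, shifting optimization effort toward low-margin points:

```python
import torch
import torch.nn.functional as F

def annealed_margin_loss(logits, targets, margin=1.0):
    """Illustrative margin-annealing loss (hypothetical, not the paper's LA):
    per-sample cross-entropy weighted by a factor that decays once the
    sample's logit margin exceeds `margin`."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    correct = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, targets.unsqueeze(1), float("-inf"))
    sample_margin = correct - others.max(dim=1).values
    # Weight in (0, 1): larger for small margins, decaying as the margin grows.
    weight = torch.sigmoid(margin - sample_margin)
    return (weight * ce).mean()

logits = torch.tensor([[3.0, 0.0], [0.2, 0.0]])
targets = torch.tensor([0, 0])
print(annealed_margin_loss(logits, targets))
```

The weighted loss is strictly below the plain cross-entropy mean, since easy high-margin samples contribute less.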
## Getting Started

To set up the environment and run our code:

### 1. Requirements

- Python 3.11
- PyTorch ≥ 2.0 with CUDA support
- A recent NVIDIA GPU (e.g., Ampere or newer) is recommended for training and certification

### 2. Reproduce the paper results

To reproduce the main results in the paper, run the following command:

```bash
cd bronet
bash run.sh
```

## Pre-trained Models

| Datasets | Models | Checkpoint |
| :---------------------------: | :----------: | :----------------------------------------------------------------------: |
| ImageNet (Table 1) | BRONet (+LA) | [Link](https://huggingface.co/pinhank121/BRONet_ImageNet/tree/main) |
| ImageNet w/ EDM2 2M (Table 2) | BRONet (+LA) | [Link](https://huggingface.co/pinhank121/BRONet_ImageNet_EDM2/tree/main) |

To test the provided models, download the checkpoint and config file, then run:

```bash
cd bronet
OMP_NUM_THREADS=1 torchrun --nproc_per_node=1 \
    --master_port=$((12000 + $RANDOM % 20000)) \
    test.py --launcher=pytorch \
    --config='path_to_config' \
    --resume_from='path_to_downloaded_checkpoint'
```

See [`bronet/README.md`](https://github.com/ntuaislab/BRONet/blob/main/bronet/README.md) for detailed instructions on reproducing the results.

## Acknowledgements

This work builds on and benefits from several open-source efforts:

- [Cayley Layer](https://github.com/locuslab/orthogonal-convolutions)
- [SOC](https://github.com/singlasahil14/SOC)
- [LOT](https://github.com/AI-secure/Layerwise-Orthogonal-Training)
- [SLL](https://github.com/araujoalexandre/Lipschitz-SLL-Networks)
- [LiResNet](https://github.com/hukkai/liresnet)

We sincerely thank the authors of these projects for making their work publicly available.

## Citation

If you find our work useful, please cite us:

```bibtex
@inproceedings{lai2025enhancing,
  title={Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss},
  author={Bo-Han Lai and Pin-Han Huang and Bo-Han Kung and Shang-Tse Chen},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025},
  note={Spotlight}
}
```