
Title: OMPQ: Orthogonal Mixed Precision Quantization

URL Source: https://arxiv.org/html/2109.07865

Yuexiao Ma¹, Taisong Jin², Xiawu Zheng³, Yan Wang⁴, Huixia Li², Yongjian Wu⁵, Guannan Jiang⁶, Wei Zhang⁶, Rongrong Ji¹

Abstract

To bridge the ever-increasing gap between deep neural networks' complexity and hardware capability, network quantization has attracted increasing research attention. The latest trend of mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization. However, existing approaches rely heavily on an extremely time-consuming search process and various relaxations when seeking the optimal bit configuration. To address this issue, we propose to optimize a proxy metric of network orthogonality that can be efficiently solved with linear programming, and which proves to be highly correlated with quantized model accuracy and bit-width. Our approach reduces the search time and the required data amount by orders of magnitude, without compromising quantization accuracy. Specifically, we achieve 72.08% Top-1 accuracy on ResNet-18 with 6.7 Mb of parameters, without requiring any searching iterations. Given the high efficiency and low data dependency of our algorithm, we also apply it to post-training quantization, achieving 71.27% Top-1 accuracy on MobileNetV2 with only 1.5 Mb of parameters.

1 Introduction

Recently, we have witnessed an obvious trend in deep learning: models have rapidly increasing complexity (He et al. 2016; Simonyan and Zisserman 2014; Szegedy et al. 2015; Howard et al. 2017; Sandler et al. 2018; Zhang et al. 2018b). Due to practical limits such as latency, battery, and temperature, the host hardware where the models are deployed cannot keep up with this trend, resulting in a large and ever-increasing gap between computational demands and resources. To address this issue, network quantization (Courbariaux et al. 2016; Rastegari et al. 2016; Kim et al. 2019; Banner, Nahshan, and Soudry 2019; Liu et al. 2019), which maps single-precision floating point weights or activations to lower-bit integers for compression and acceleration, has attracted considerable research attention. Network quantization can be naturally formulated as an optimization problem, and a straightforward approach is to relax the constraints to make the problem tractable, at the cost of an approximated solution, e.g., Straight Through Estimation (STE) (Bengio, Léonard, and Courville 2013).

Figure 1: Comparison of the resources used to obtain the optimal bit configuration between our algorithm and other mixed precision algorithms (FracBits (Yang and Jin 2021), HAWQ (Dong et al. 2020), BRECQ (Li et al. 2021)) on ResNet-18. "Searching Data" is the number of input images.

With the recent development of inference hardware, arithmetic operations with variable bit-width have become possible, bringing further flexibility to network quantization. To take full advantage of the hardware capability, mixed precision quantization (Dong et al. 2020; Wang et al. 2019; Li et al. 2021; Yang and Jin 2021) aims to quantize different network layers to different bit-widths, so as to achieve a better trade-off between compression ratio and accuracy. While benefiting from the extra flexibility, mixed precision quantization also suffers from a more complicated and challenging optimization problem, with a non-differentiable and extremely non-convex objective function. Therefore, existing approaches (Dong et al. 2020; Yang and Jin 2021; Wang et al. 2019; Li et al. 2021) often require large amounts of data and computing resources to search for the optimal bit configuration. For instance, FracBits (Yang and Jin 2021) approximates the bit-width by performing a first-order Taylor expansion at the adjacent integer, making the bit variable differentiable. This allows it to integrate the search process into training to obtain the optimal bit configuration. However, to derive a decent solution, it still requires a large amount of computation in the searching and training process. To reduce the demand on training data, Dong et al. (Dong et al. 2020) use the average eigenvalue of the Hessian matrix of each layer as the metric for bit allocation. However, the matrix-free Hutchinson algorithm for implicitly computing the average eigenvalues of the Hessian matrix still needs 50 iterations for each network layer. Another direction is black-box optimization. For instance, Wang et al. (Wang et al. 2019) use reinforcement learning for the bit allocation of each layer. Li et al. (Li et al. 2021) use an evolutionary search algorithm (Guo et al. 2020) to derive the optimal bit configuration, together with a block reconstruction strategy to efficiently optimize the quantized model. However, the population evolution process requires 1,024 input images and 100 iterations, which is time-consuming.

Different from the existing approaches of black-box optimization or constraint relaxation, we propose to construct a proxy metric, which may have a substantially different form but is highly correlated with the objective of the original problem. In particular, we propose to obtain the optimal bit configuration by using the orthogonality of the neural network. Specifically, we deconstruct the neural network into a set of functions, and define the orthogonality of the model by extending its definition from a function $f:\mathbb{R}\rightarrow\mathbb{R}$ to the entire network $f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$. The orthogonality can be measured efficiently with Monte Carlo sampling and the Cauchy-Schwarz inequality, based on which we propose an efficient proxy metric named ORthogonality Metric (ORM). As illustrated in Fig. 1, ORM requires only a single-pass search process on a small amount of data. In addition, we derive an equivalent form of ORM to accelerate the computation.

Moreover, model orthogonality and quantization accuracy are positively correlated across different networks. Therefore, we take maximizing model orthogonality as our objective function. Meanwhile, our experiments show that layer orthogonality and bit-width are also positively correlated. We assign a larger bit-width to the layer with larger orthogonality, and combine specific constraints to construct a linear programming problem. The optimal bit configuration can then be obtained simply by solving this linear programming problem.

In summary, our contributions are listed as follows:

  • We introduce a novel metric of layer orthogonality. We introduce function orthogonality into neural networks and propose the ORthogonality Metric (ORM). We leverage it as a proxy metric to efficiently solve the mixed precision quantization problem, which is the first attempt in the community and can easily be integrated into any quantization scheme.

  • We observe a positive correlation between ORM and quantization accuracy on different models. Therefore, we optimize the model orthogonality through a linear programming problem, which can derive the optimal bit-width configuration in a few seconds without iterations.

  • We also provide extensive experimental results on ImageNet, which demonstrate that the proposed orthogonality-based approach achieves state-of-the-art quantization performance with orders-of-magnitude speedup.


Figure 2: Overview. Left: Deconstruct the model into a set of functions $\mathcal{F}$. Middle: ORM symmetric matrix calculated from $\mathcal{F}$. Right: Linear programming problem constructed from the importance factor $\theta$ to derive the optimal bit configuration.

2 Related Work

Quantized Neural Networks: Existing neural network quantization algorithms can be divided into two categories based on their training strategy: post-training quantization (PTQ) and quantization-aware training (QAT). PTQ (Li et al. 2021; Cai et al. 2020; Nagel et al. 2019) is an offline quantization method, which only needs a small amount of data to complete the quantization process. Therefore, PTQ can obtain a quantized model efficiently, at the cost of an accuracy drop from quantization. In contrast, quantization-aware training (Gong et al. 2019; Zhou et al. 2016; Dong et al. 2020; Zhou et al. 2017; Chen, Wang, and Pan 2019; Cai et al. 2017; Choi et al. 2018) adopts an online quantization strategy. These methods utilize the whole training dataset during the quantization process. As a result, they achieve superior accuracy but limited efficiency.

Viewed from the perspective of bit-width allocation strategy, neural network quantization can also be divided into unified quantization and mixed precision quantization. Traditionally, network quantization means unified quantization. Choi et al. (Choi et al. 2018) optimize a parameterized clipping boundary for the activation values of each layer during training. Chen et al. (Chen, Wang, and Pan 2019) introduce a meta-quantization block to approximate the derivative of the non-differentiable quantization function. Recently, works (Yang and Jin 2021; Dong et al. 2020; Li et al. 2021) that explore assigning different bit-widths to different layers have begun to emerge. Yang et al. (Yang and Jin 2021) approximate the derivative of the bit-width by a first-order Taylor expansion at adjacent integer points, thereby fusing the optimal bit-width selection with the training process. However, its optimal bit-width search takes 80% of the training epochs, which consumes a lot of time and computation. Dong et al. (Dong et al. 2020) take the average eigenvalues of the Hessian matrix of each layer as the basis for the bit allocation of that layer. However, the matrix-free Hutchinson algorithm for calculating the average eigenvalues needs to be iterated 50 times for each layer.

Network Similarity: Previous works (Bach and Jordan 2002; Gretton, Herbrich, and Smola 2003; Leurgans, Moyeed, and Silverman 1993; Fukumizu, Bach, and Jordan 2004; Gretton et al. 2005; Kornblith et al. 2019) define covariance and cross-covariance operators in the Reproducing Kernel Hilbert Spaces (RKHSs), and derive mutual information criteria based on these operators. Gretton et al.(Gretton et al. 2005) propose the Hilbert-Schmidt Independence Criterion (HSIC), and give a finite-dimensional approximation of it. Furthermore, Kornblith et al.(Kornblith et al. 2019) give the similarity criterion CKA based on HSIC, and study its relationship with the other similarity criteria. In the following, we propose a metric from the perspective of network orthogonality, and give a simple and clear derivation. Simultaneously, we use it to guide the network quantization.

3 Methodology

In this section, we will introduce our mixed precision quantization algorithm from three aspects: how to define the orthogonality, how to efficiently measure it, and how to construct a linear programming model to derive the optimal bit configuration.

3.1 Network Orthogonality

A neural network can be naturally decomposed into a set of layers or functions. Formally, for a given input $x\in\mathbb{R}^{1\times(C\times H\times W)}$, we decompose a neural network into $\mathcal{F}=\{f_1, f_2, \cdots, f_L\}$, where $f_i$ represents the transformation from the input $x$ to the output of the $i$-th layer. In other words, if $g_i$ represents the function of the $i$-th layer, then $f_i(x)=g_i\big(f_{i-1}(x)\big)=g_i\Big(g_{i-1}\big(\cdots g_1(x)\big)\Big)$. Here we introduce the inner product (Arfken, Weber, and Spector 1999) between functions $f_i$ and $f_j$, which is formally defined as,

$$\langle f_i, f_j\rangle_{P(x)}=\int_{\mathcal{D}} f_i(x)\,P(x)\,f_j(x)^T\,dx, \tag{1}$$

where $f_i(x)\in\mathbb{R}^{1\times(C_i\times H_i\times W_i)}$ and $f_j(x)\in\mathbb{R}^{1\times(C_j\times H_j\times W_j)}$ are known functions once the model is given, and $\mathcal{D}$ is the domain of $x$. If we let $f_i^{(m)}(x)$ denote the $m$-th element of $f_i(x)$, then $P(x)\in\mathbb{R}^{(C_i\times H_i\times W_i)\times(C_j\times H_j\times W_j)}$ is the probability density matrix between $f_i(x)$ and $f_j(x)$, where $P_{m,n}(x)$ is the probability density function of the random variable $f_i^{(m)}(x)\cdot f_j^{(n)}(x)$. According to the definition in (Arfken, Weber, and Spector 1999), $\langle f_i, f_j\rangle_{P(x)}=0$ means that $f_i$ and $f_j$ are weighted orthogonal. In other words, $\langle f_i, f_j\rangle_{P(x)}$ is negatively correlated with the orthogonality between $f_i$ and $f_j$. When we have a known set of functions to be quantized, $\mathcal{F}=\{f_i\}_{i=1}^{L}$, with the goal of approximating an arbitrary function $h^*$, the quantization error can be expressed by the mean square error $\xi\int_{\mathcal{D}}\big|h^*(x)-\sum_i \psi_i f_i(x)\big|^2\,dx$, where $\xi$ and $\psi_i$ are combination coefficients. According to the Parseval equality (Tanton 2005), if $\mathcal{F}$ is an orthogonal set of basis functions, the mean square error can reach 0. Furthermore, the stronger the orthogonality between the basis functions, the smaller the mean square error, i.e., the model corresponding to the linear combination of basis functions has a stronger representation capability. Here we extend this insight to network quantization. In general, the larger the bit-width, the stronger the representational capability of the corresponding model (Liu et al. 2018). Specifically, we propose to assign a larger bit-width to the layer with stronger orthogonality against all other layers, so as to maximize the representation capability of the model. However, Eq. 1 contains the integral of a continuous function, which is intractable in practice. Therefore, we derive a novel metric to efficiently approximate the orthogonality of each layer in Sec. 3.2.
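As a concrete illustration of the deconstruction above, the prefix functions $f_i$ can be built from per-layer functions $g_i$ by composition. The following is a minimal NumPy sketch on a toy three-layer network; all names, shapes, and weights are illustrative assumptions, not the paper's code:

```python
import numpy as np

def build_prefix_functions(layers):
    """Return the prefix functions f_i(x) = g_i(g_{i-1}(... g_1(x)))
    for a list of per-layer functions g_1, ..., g_L (Sec. 3.1)."""
    def make_f(i):
        def f(x):
            out = x
            for g in layers[:i]:
                out = g(out)
            return out
        return f
    return [make_f(i) for i in range(1, len(layers) + 1)]

# Toy "network": three random linear layers, each followed by ReLU.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]
layers = [lambda x, W=W: np.maximum(x @ W, 0.0) for W in weights]

F = build_prefix_functions(layers)   # the function set F = {f_1, f_2, f_3}
x = rng.standard_normal((1, 8))      # one flattened input in R^{1 x (C*H*W)}
outputs = [f(x) for f in F]          # f_i(x) for each prefix of the network
```

In a real model, each $f_i(X)$ would be collected by running the sampled inputs through the network and recording the flattened activations after layer $i$.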

3.2 Efficient Orthogonality Metric

To avoid the intractable integral, we propose to leverage the Monte Carlo sampling to approximate the orthogonality of the layers. Specifically, from the Monte Carlo integration perspective in (Caflisch 1998), Eq.1 can be rewritten as

$$\langle f_i, f_j\rangle_{P(x)}=\int_{\mathcal{D}} f_i(x)\,P(x)\,f_j(x)^T\,dx=\left\| E_{P(x)}\!\left[f_j(x)^T f_i(x)\right]\right\|_F. \tag{2}$$

We randomly draw $N$ samples $x_1, x_2, \dots, x_N$ from a training dataset with the probability density matrix $P(x)$, which allows the expectation $E_{P(x)}\!\left[f_j(x)^T f_i(x)\right]$ to be further approximated as,

$$\left\|E_{P(x)}\!\left[f_j(x)^T f_i(x)\right]\right\|_F \approx \frac{1}{N}\left\|\sum_{n=1}^{N} f_j(x_n)^T f_i(x_n)\right\|_F = \frac{1}{N}\left\|f_j(X)^T f_i(X)\right\|_F, \tag{3}$$

where $f_i(X)\in\mathbb{R}^{N\times(C_i\times H_i\times W_i)}$ represents the output of the $i$-th layer, $f_j(X)\in\mathbb{R}^{N\times(C_j\times H_j\times W_j)}$ represents the output of the $j$-th layer, and $\|\cdot\|_F$ is the Frobenius norm. From Eqs. 2-3, we have

$$N\int_{\mathcal{D}} f_i(x)\,P(x)\,f_j(x)^T\,dx \approx \left\|f_j(X)^T f_i(X)\right\|_F. \tag{4}$$
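The Monte Carlo estimate in Eqs. 3-4 reduces to a single matrix product over the stacked layer outputs. A minimal sketch, where random matrices stand in for the real flattened activations $f_i(X)$ and $f_j(X)$:

```python
import numpy as np

def mc_inner_product(Fi, Fj):
    """Monte Carlo estimate (1/N) * ||f_j(X)^T f_i(X)||_F of the
    functional inner product (Eq. 3). Rows of Fi (N x d_i) and
    Fj (N x d_j) are layer outputs for the same N sampled inputs."""
    N = Fi.shape[0]
    return np.linalg.norm(Fj.T @ Fi, ord="fro") / N

rng = np.random.default_rng(0)
N = 32                                # number of sampled inputs
Fi = rng.standard_normal((N, 6))      # stand-in for f_i(X)
Fj = rng.standard_normal((N, 10))     # stand-in for f_j(X)
est = mc_inner_product(Fi, Fj)
```

Note that `Fj.T @ Fi` is exactly the sum of per-sample outer products $\sum_n f_j(x_n)^T f_i(x_n)$, so the stacked form and the sample-wise sum in Eq. 3 agree.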

However, comparing the orthogonality of different layers is difficult due to differences in dimensionality. To this end, we use the Cauchy-Schwarz inequality to normalize it to $[0,1]$ for all layers. Applying the Cauchy-Schwarz inequality to the left side of Eq. 4, we have

$$0 \leq \left(N\int_{\mathcal{D}} f_i(x)\,P(x)\,f_j(x)^T\,dx\right)^2 \leq \int_{\mathcal{D}} N f_i(x)\,P_i(x)\,f_i(x)^T\,dx \int_{\mathcal{D}} N f_j(x)\,P_j(x)\,f_j(x)^T\,dx. \tag{5}$$

We substitute Eq. 4 into Eq. 5 and perform some simplifications to derive our ORthogonality Metric (ORM); refer to the supplementary material for details. (ORM is formally consistent with CKA. However, we are the first to discover its relationship with quantized model accuracy and to confirm its validity in mixed precision quantization from the perspective of function orthogonality, whereas CKA explores the relationship between hidden layers from the perspective of similarity. In other words, CKA implicitly further verifies the validity of ORM.) ORM is defined as:

$${\rm ORM}(X, f_i, f_j)=\frac{\left\|f_j(X)^T f_i(X)\right\|_F^2}{\left\|f_i(X)^T f_i(X)\right\|_F \left\|f_j(X)^T f_j(X)\right\|_F}, \tag{6}$$

where ${\rm ORM}\in[0,1]$. $f_i$ and $f_j$ are orthogonal when ${\rm ORM}=0$; on the contrary, $f_i$ and $f_j$ are dependent when ${\rm ORM}=1$. Therefore, ORM is negatively correlated with orthogonality.
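Eq. 6 takes only a few lines to implement. The sketch below (ours, not the authors' released code; random matrices stand in for layer outputs) also checks the two boundary cases: ${\rm ORM}=1$ for identical features and ${\rm ORM}=0$ for orthogonal ones:

```python
import numpy as np

def orm(Fi, Fj):
    """ORM of Eq. 6 for flattened layer outputs Fi (N x d_i) and
    Fj (N x d_j) computed on the same N inputs. Returns a value in
    [0, 1]: 0 means orthogonal layers, 1 means dependent layers."""
    num = np.linalg.norm(Fj.T @ Fi, ord="fro") ** 2
    den = (np.linalg.norm(Fi.T @ Fi, ord="fro")
           * np.linalg.norm(Fj.T @ Fj, ord="fro"))
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 4))    # stand-in for f_i(X)
B = rng.standard_normal((16, 8))    # stand-in for f_j(X)
score = orm(A, B)                   # strictly between 0 and 1 for generic features
```

Note that the normalization makes the score comparable across layers of different feature dimensionality, which is exactly why the Cauchy-Schwarz step above is needed.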

Calculation Acceleration. Given a specific model, calculating Eq. 6 involves huge matrices. Suppose $f_i(X)\in\mathbb{R}^{N\times(C_i\times H_i\times W_i)}$, $f_j(X)\in\mathbb{R}^{N\times(C_j\times H_j\times W_j)}$, and the feature dimension of the $j$-th layer is larger than that of the $i$-th layer. The time complexity of computing ${\rm ORM}(X, f_i, f_j)$ is then $O(NC_j^2 H_j^2 W_j^2)$. The huge matrices occupy a lot of memory and increase the time complexity of the entire algorithm by several orders of magnitude. Therefore, we derive an equivalent form to accelerate the calculation. Taking $Y=f_i(X)$ and $Z=f_j(X)$, we have $YY^T, ZZ^T\in\mathbb{R}^{N\times N}$ and:

$$\|Z^T Y\|_F^2=\left\langle \mathrm{vec}(YY^T),\, \mathrm{vec}(ZZ^T)\right\rangle, \tag{7}$$

where $\mathrm{vec}(\cdot)$ denotes flattening a matrix into a vector. From Eq. 7, the time complexity of calculating ${\rm ORM}(X, f_i, f_j)$ becomes $O(N^2 C_j H_j W_j)$ through the inner product of vectors. When the number of samples $N$ is larger than the feature dimension $C\times H\times W$, the norm form is faster to compute thanks to its lower time complexity, and vice versa. The specific acceleration ratio and the proof of Eq. 7 are given in the supplementary material.
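The equivalence in Eq. 7 follows from $\|Z^TY\|_F^2 = \mathrm{tr}(YY^T ZZ^T)$ and is easy to verify numerically. A short sketch with illustrative shapes (the dimensions are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
N, di, dj = 16, 5, 7                  # N samples; feature dims of layers i, j
Y = rng.standard_normal((N, di))      # Y = f_i(X)
Z = rng.standard_normal((N, dj))      # Z = f_j(X)

# Norm form: builds a d_j x d_i matrix in feature space.
norm_form = np.linalg.norm(Z.T @ Y, ord="fro") ** 2

# Gram form of Eq. 7: only ever touches N x N matrices, which is
# cheaper whenever N is smaller than the feature dimension C*H*W.
gram_form = np.dot((Y @ Y.T).ravel(), (Z @ Z.T).ravel())
```

Both forms compute the same quantity; one simply chooses the form whose intermediate matrices are smaller for the given $N$ and feature size.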

3.3 Mixed Precision Quantization

Effectiveness of ORM on Mixed Precision Quantization. ORM directly indicates the importance of each layer in the network, which can ultimately be used to decide the bit-width configuration. We conduct extensive experiments to provide sufficient and reliable evidence for this claim. Specifically, we first sample different quantization configurations for ResNet-18 and MobileNetV2, and then finetune them to obtain their performance. Meanwhile, the overall orthogonality of each sampled model is calculated separately. Interestingly, as shown in Fig. 3, we find that model orthogonality, measured via the sum of ORM values, and performance are positively correlated. Naturally, inspired by this finding, we take maximizing orthogonality as our objective function, and integrate the model size constraints to construct a linear programming problem that yields the final bit configuration. The detailed experiments are provided in the supplementary material.
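The "overall orthogonality" of a sampled model can be obtained by assembling the pairwise ORM values into a matrix and summing its off-diagonal entries. A self-contained sketch with random stand-in features (the layer outputs and dimensions are illustrative, not from the paper):

```python
import numpy as np

def orm(Fi, Fj):
    """Eq. 6 on flattened layer outputs sharing the same number of rows N."""
    num = np.linalg.norm(Fj.T @ Fi, ord="fro") ** 2
    den = (np.linalg.norm(Fi.T @ Fi, ord="fro")
           * np.linalg.norm(Fj.T @ Fj, ord="fro"))
    return num / den

rng = np.random.default_rng(0)
# Stand-in outputs f_i(X) for a 3-layer model, N = 32 sampled inputs.
feats = [rng.standard_normal((32, d)) for d in (4, 8, 16)]

L = len(feats)
K = np.array([[orm(feats[i], feats[j]) for j in range(L)] for i in range(L)])
# A lower off-diagonal ORM sum indicates stronger overall orthogonality.
model_orm_sum = K.sum() - np.trace(K)
```

Since ORM is negatively correlated with orthogonality, a smaller `model_orm_sum` corresponds to a more orthogonal (and, per Fig. 3, typically more accurate) quantized model.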


Figure 3: Relationship between orthogonality and accuracy for different quantization configurations on ResNet-18 and MobileNetV2.

For a specific neural network, we can calculate an orthogonality matrix $K$, where $k_{ij}={\rm ORM}(X, f_i, f_j)$. Obviously, $K$ is a symmetric matrix whose diagonal elements are all 1. We show some ORM matrices on widely used models under different numbers of samples $N$ in the supplementary material. We add up the non-diagonal elements of each row of the matrix,

$$\gamma_i=\sum_{j=1}^{L} k_{ij}-1. \tag{8}$$

A smaller $\gamma_i$ means stronger orthogonality between $f_i$ and the other functions in the function set $\mathcal{F}$, and it also means that the first $i$ layers of the neural network are more independent. Thus, we leverage the monotonically decreasing function $e^{-x}$ to model this relationship:

$$\theta_i=e^{-\beta\gamma_i}, \tag{9}$$

where $\beta$ is a hyper-parameter that controls the bit-width difference between layers. We also investigate other monotonically decreasing functions (for details, please refer to Sec. 4.2). $\theta_i$ is used as the importance factor of the first $i$ layers of the network; we then define a linear programming problem as follows:

$$\text{Objective: } \max_{\mathbf{b}} \sum_{i=1}^{L}\left(\frac{b_i}{L-i+1}\sum_{j=i}^{L}\theta_j\right), \qquad \text{Constraints: } \sum_{i}^{L} M^{(b_i)}\leq\mathcal{T}. \tag{10}$$

M(b i)M^{(b_{i})} is the model size of the i i-th layer under b i b_{i} bit quantization and 𝒯\mathcal{T} represents the target model size. b is the optimal bit configuration. Maximizing the objective function means assigning the larger bit-width to more independent layer, which implicitly maximizes the model’s representation capability. More details of network deconstruction, linear programming construction and the impact of β\beta are provided in the supplementary material.

Note that the linear programming problem in Eq. 10 is extremely efficient to solve, taking only a few seconds on a single CPU. In other words, our method is extremely efficient (9 s on MobileNetV2) compared to previous methods (Yang and Jin 2021; Dong et al. 2020; Li et al. 2021) that require large amounts of data or many iterations for searching. In addition, thanks to its high efficiency and low data requirements, our algorithm can be combined as a plug-and-play module with quantization-aware training or post-training quantization schemes. In other words, our approach is capable of improving the accuracy of SOTA methods, with detailed results reported in Secs. 4.3 and 4.4.

| Decreasing Function | ResNet-18 (%) | MobileNetV2 (%) | Changing Rate |
| --- | --- | --- | --- |
| $e^{-x}$ | 72.30 | 63.51 | $e^{-x}$ |
| $-\log x$ | 72.26 | 63.20 | $x^{-2}$ |
| $-x$ | 72.36 | 63.00 | $0$ |
| $-x^3$ | 71.71 | - | $-6x$ |
| $-e^x$ | - | - | $-e^x$ |

Table 1: The Top-1 accuracy (%) with different monotonically decreasing functions on ResNet-18 and MobileNetV2.

Table 2: Top-1 accuracy (%) of different deconstruction granularities. The activation bit-widths of MobileNetV2 and ResNet-18 are both 8. * means mixed bit.

(a) ResNet-18

| Method | W/A | Int-Only | Uniform | Model Size (Mb) | BOPs (G) | Top-1 (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | 32/32 | ✗ | - | 44.6 | 1,858 | 73.09 |
| RVQuant | 8/8 | ✗ | ✗ | 11.1 | 116 | 70.01 |
| HAWQ-V3 | 8/8 | ✔ | ✔ | 11.1 | 116 | 71.56 |
| OMPQ | */8 | ✔ | ✔ | 6.7 | 97 | 72.30 |
| PACT▽ | 5/5 | ✗ | ✔ | 7.2 | 74 | 69.80 |
| LQ-Nets▽ | 4/32 | ✗ | ✗ | 5.8 | 225 | 70.00 |
| HAWQ-V3 | */* | ✔ | ✔ | 6.7 | 72 | 70.22 |
| OMPQ | */6 | ✔ | ✔ | 6.7 | 75 | 72.08 |

(b) ResNet-50

Table 3: Mixed precision quantization results of ResNet-18 and ResNet-50. "Int-Only" means only integers are used during the quantization process. "Uniform" represents uniform quantization. W/A is the bit-width of weights and activations. * indicates mixed precision. ▽ represents not quantizing the first and last layers.

4 Experiments

In this section, we conduct a series of experiments to validate the effectiveness of OMPQ on ImageNet. We first introduce the implementation details of our experiments. Ablation experiments on the monotonically decreasing function and the deconstruction granularity are then conducted to demonstrate the importance of each component. Finally, we combine OMPQ with widely-used QAT and PTQ schemes, which shows a better compression-accuracy trade-off compared to the SOTA methods.

4.1 Implementation Details

The ImageNet dataset includes 1.2M training images and 50,000 validation images. We randomly draw 64 training samples for ResNet-18/50 and 32 training samples for MobileNetV2, following similar data pre-processing (He et al. 2016), to derive the set of functions $\mathcal{F}$. OMPQ is extremely efficient, requiring only a single Nvidia GeForce GTX 1080Ti and a single Intel(R) Xeon(R) CPU E5-2620 v4. For models with a large number of parameters, we directly adopt the round function to convert the bit-widths into integers after linear programming. For small models, e.g. ResNet-18, we adopt depth-first search (DFS) to find the bit configuration that strictly meets the different constraints. The aforementioned processes are extremely efficient, taking only a few seconds on these devices. Besides, OMPQ is flexible and capable of leveraging different search spaces with QAT and PTQ under different requirements. Finetuning implementation details are listed as follows.

For the experiments on the QAT quantization scheme, we use two NVIDIA Tesla V100 GPUs. Our quantization framework does not contain integer division or floating point numbers in the network. In the training process, the initial learning rate is set to 1e-4, and the batch size is set to 128. We use a cosine learning rate scheduler and the SGD optimizer with 1e-4 weight decay for 90 epochs, without distillation. Following previous works, we fix the weights and activations of the first and last layers at 8 bit; the search space is 4-8 bit.

For the experiments on the PTQ quantization scheme, we perform OMPQ on an NVIDIA GeForce GTX 1080Ti and combine it with the finetuning block reconstruction algorithm BRECQ. In particular, the activation precision of all layers is fixed to 8 bit. In other words, only the weight bit-widths are searched, allocated within a 2-4 bit search space.
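The DFS-based search mentioned above, used to find a bit configuration that strictly meets the size constraint for small models, can be sketched as follows. This is a hypothetical illustration of the strategy with made-up toy numbers, not the authors' code:

```python
def dfs_best_bits(mb_per_bit, coef, target, choices=(2, 3, 4)):
    """Depth-first search over per-layer bit-widths: maximize the
    linearized Eq. 10 objective sum_i coef[i] * b_i subject to the
    model size sum_i b_i * mb_per_bit[i] <= target, pruning any
    branch whose accumulated size already exceeds the budget."""
    L = len(mb_per_bit)
    best = {"score": float("-inf"), "bits": None}

    def dfs(i, size, score, bits):
        if size > target:
            return                      # prune: size budget exceeded
        if i == L:
            if score > best["score"]:
                best["score"], best["bits"] = score, list(bits)
            return
        for b in choices:
            bits.append(b)
            dfs(i + 1, size + b * mb_per_bit[i], score + coef[i] * b, bits)
            bits.pop()

    dfs(0, 0.0, 0.0, [])
    return best["bits"]

# Toy instance: 3 layers, per-bit sizes in Mb and importance coefficients.
bits = dfs_best_bits([0.5, 1.0, 0.25], [3.0, 1.0, 2.0], target=6.0)
```

Unlike plain rounding of the LP solution, the exhaustive DFS guarantees the returned configuration satisfies the constraint exactly, which is affordable for small models with few layers.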

4.2 Ablation Study

Monotonically Decreasing Function. We first investigate the monotonically decreasing function in Eq. 9. Obviously, the second-order derivatives of the monotonically decreasing functions in Eq. 9 influence the changing rate of the orthogonality differences. In other words, the variance of the orthogonality between different layers becomes larger as the changing rate becomes faster. We test the accuracy of five different monotonically decreasing functions on quantization-aware training of ResNet-18 (6.7 Mb) and post-training quantization of MobileNetV2 (0.9 Mb). We fix the activations to 8 bit.

(a) ResNet-18

| Method | W/A | Model Size (Mb) | Top-1 (%) | Searching Data | Searching Iterations |
| --- | --- | --- | --- | --- | --- |
| Baseline | 32/32 | 44.6 | 71.08 | - | - |
| FracBits-PACT | */* | 4.5 | 69.10 | 1.2M | 120 |
| OMPQ | */4 | 4.5 | 68.69 | 64 | 0 |
| OMPQ | */8 | 4.5 | 69.94 | 64 | 0 |
| ZeroQ | 4/4 | 5.81 | 21.20 | - | - |
| BRECQ† | 4/4 | 5.81 | 69.32 | - | - |
| PACT | 4/4 | 5.81 | 69.20 | - | - |
| HAWQ-V3 | 4/4 | 5.81 | 68.45 | - | - |
| FracBits-PACT | */* | 5.81 | 69.70 | 1.2M | 120 |
| OMPQ | */4 | 5.5 | 69.38 | 64 | 0 |
| BRECQ | */8 | 4.0 | 68.82 | 1,024 | 100 |
| OMPQ | */8 | 4.0 | 69.41 | 64 | 0 |

(b) MobileNetV2

| Method | W/A | Model Size (Mb) | Top-1 (%) | Searching Data | Searching Iterations |
| --- | --- | --- | --- | --- | --- |
| Baseline | 32/32 | 13.4 | 72.49 | - | - |
| BRECQ | */8 | 1.3 | 68.99 | 1,024 | 100 |
| OMPQ | */8 | 1.3 | 69.62 | 32 | 0 |
| FracBits | */* | 1.84 | 69.90 | 1.2M | 120 |
| BRECQ | */8 | 1.5 | 70.28 | 1,024 | 100 |
| OMPQ | */8 | 1.5 | 71.39 | 32 | 0 |

Table 4: Mixed precision post-training quantization experiments on ResNet-18 and MobileNetV2. † means using distilled data in the finetuning process.

It can be observed from Table 1 that accuracy gradually decreases as the changing rate increases. For the corresponding bit configurations, we also observe that a larger changing rate means a more aggressive bit allocation strategy. In other words, OMPQ tends to assign more diverse bits across layers under a large changing rate, which leads to worse performance in network quantization. Another interesting observation concerns the accuracies on ResNet-18 and MobileNetV2. Specifically, quantization-aware training on ResNet-18 uses numerous data, which makes the change in accuracy insignificant. On the contrary, post-training quantization on MobileNetV2 is incapable of assigning a bit configuration that meets the model constraints when the function is set to $-x^3$ or $-e^x$. We therefore select $e^{-x}$ as our monotonically decreasing function in the following experiments.

Deconstruction Granularity. We study the impact of different deconstruction granularities on model accuracy. Specifically, we test four granularities, including layer-wise, block-wise, stage-wise, and net-wise, on quantization-aware training of ResNet-18 and post-training quantization of MobileNetV2. As reported in Table 2, the accuracy of both models increases with finer granularity. The difference is more significant on MobileNetV2 due to the different sensitivities of the point-wise and depth-wise convolutions. We thus employ layer-wise granularity in the following experiments.

4.3 Quantization-Aware Training

We perform quantization-aware training on ResNet-18/50, and compare the results and compression ratios with previous unified quantization methods (Park, Yoo, and Vajda 2018; Choi et al. 2018; Zhang et al. 2018a) and mixed precision quantization methods (Wang et al. 2019; Chin et al. 2020; Yao et al. 2021). As shown in Table 3, OMPQ achieves the best trade-off between accuracy and compression ratio on ResNet-18/50. For example, we achieve 72.08% on ResNet-18 with only 6.7 Mb and 75 G BOPs. Compared with HAWQ-V3 (Yao et al. 2021), the difference in model size is negligible (6.7 Mb, 75 G BOPs vs. 6.7 Mb, 72 G BOPs), while the model compressed by OMPQ is 1.86% higher than HAWQ-V3. Similarly, we achieve 76.28% on ResNet-50 with 18.7 Mb and 156 G BOPs, which is 0.89% higher than HAWQ-V3 at a similar model size (18.7 Mb, 156 G BOPs vs. 18.7 Mb, 154 G BOPs).


Figure 4: Mixed precision quantization comparison of OMPQ and BRECQ on ResNet-18 and MobileNetV2.

4.4 Post-Training Quantization

As mentioned before, OMPQ can also be combined with the PTQ scheme to further improve quantization efficiency, thanks to its low data dependence and high search efficiency. The previous PTQ method BRECQ (Li et al. 2021) proposes a block reconstruction quantization strategy to reduce quantization errors. We replace its evolutionary search algorithm with OMPQ and combine it with the finetuning process of BRECQ, which drastically reduces the search cost and achieves better performance. Experimental results are shown in Table 4: OMPQ clearly shows superior performance to unified quantization and mixed precision quantization methods under different model constraints. In particular, OMPQ outperforms BRECQ by 0.52% on ResNet-18 under the same model size (4.0 Mb). OMPQ also outperforms FracBits by 1.37% on MobileNetV2 with a smaller model size (1.5 Mb vs. 1.8 Mb).

We also compare OMPQ with BRECQ and unified quantization, with results reported in Fig. 4. The accuracy of OMPQ is generally higher than that of BRECQ on ResNet-18 and MobileNetV2 under different model constraints. Furthermore, both OMPQ and BRECQ outperform unified quantization, which shows the superiority of mixed precision quantization.

5 Conclusion

In this paper, we have proposed a novel mixed precision algorithm, termed OMPQ, to efficiently search for the optimal bit configuration under different constraints. First, we derive an orthogonality metric for neural networks by generalizing the orthogonality of functions to networks. Second, we leverage the proposed orthogonality metric to design a linear programming problem that is capable of finding the optimal bit configuration. Both the orthogonality generation and the linear programming solving are extremely efficient, finishing within a few seconds on a single CPU and GPU. Meanwhile, OMPQ also outperforms previous mixed precision quantization and unified quantization methods. In the future, we will explore mixed precision quantization methods that combine the multiple knapsack problem with the network orthogonality metric.

Acknowledgments

This work is supported by the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. 62025603, No. U1705262, No. 62072386, No. 62072387, No. 62072389, No. 62002305, No. 61772443, No. 61802324 and No. 61702136), the Guangdong Basic and Applied Basic Research Foundation (No. 2019B1515120049), the China National Postdoctoral Program for Innovative Talents (BX20220392), and the China Postdoctoral Science Foundation (2022M711729).

References

  • Arfken, Weber, and Spector (1999) Arfken, G.B.; Weber, H.J.; and Spector, D. 1999. Mathematical Methods for Physicists. American Journal of Physics, 67(2): 165–169.
  • Bach and Jordan (2002) Bach, F.R.; and Jordan, M.I. 2002. Kernel Independent Component Analysis. Journal of Machine Learning Research (JMLR), 3: 1–48.
  • Banner, Nahshan, and Soudry (2019) Banner, R.; Nahshan, Y.; and Soudry, D. 2019. Post training 4-bit quantization of convolutional networks for rapid-deployment. Neural Information Processing Systems (NeurIPS), 32.
  • Bengio, Léonard, and Courville (2013) Bengio, Y.; Léonard, N.; and Courville, A. 2013. Estimating or Propagating Gradients through Stochastic Neurons for Conditional Computation. arXiv preprint arXiv:1308.3432.
  • Caflisch (1998) Caflisch, R.E. 1998. Monte carlo and quasi-monte carlo methods. Acta numerica, 7: 1–49.
  • Cai et al. (2020) Cai, Y.; Yao, Z.; Dong, Z.; Gholami, A.; Mahoney, M.W.; and Keutzer, K. 2020. Zeroq: A Novel Zero Shot Quantization Framework. In Computer Vision and Pattern Recognition (CVPR), 13169–13178.
  • Cai et al. (2017) Cai, Z.; He, X.; Sun, J.; and Vasconcelos, N. 2017. Deep Learning with Low Precision by Half-Wave Gaussian Quantization. In Computer Vision and Pattern Recognition (CVPR), 5918–5926.
  • Chen, Wang, and Pan (2019) Chen, S.; Wang, W.; and Pan, S.J. 2019. Metaquant: Learning to Quantize by Learning to Penetrate Non-Differentiable Quantization. Neural Information Processing Systems (NeurIPS), 32: 3916–3926.
  • Chin et al. (2020) Chin, T.-W.; Pierce, I.; Chuang, J.; Chandra, V.; and Marculescu, D. 2020. One Weight Bitwidth to Rule Them All. In European Conference on Computer Vision (ECCV), 85–103.
  • Choi et al. (2018) Choi, J.; Wang, Z.; Venkataramani, S.; Chuang, P. I.-J.; Srinivasan, V.; and Gopalakrishnan, K. 2018. Pact: Parameterized Clipping Activation for Quantized Neural Networks. arXiv preprint arXiv:1805.06085.
  • Courbariaux et al. (2016) Courbariaux, M.; Hubara, I.; Soudry, D.; El-Yaniv, R.; and Bengio, Y. 2016. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv preprint arXiv:1602.02830.
  • Dong et al. (2020) Dong, Z.; Yao, Z.; Arfeen, D.; Gholami, A.; Mahoney, M.W.; and Keutzer, K. 2020. HAWQ-V2: Hessian Aware Trace-Weighted Quantization of Neural Networks. In Neural Information Processing Systems (NeurIPS), 18518–18529.
  • Dosovitskiy et al. (2021) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations (ICLR).
  • Fukumizu, Bach, and Jordan (2004) Fukumizu, K.; Bach, F.R.; and Jordan, M.I. 2004. Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces. Journal of Machine Learning Research (JMLR), 5: 73–99.
  • Gong et al. (2019) Gong, R.; Liu, X.; Jiang, S.; Li, T.; Hu, P.; Lin, J.; Yu, F.; and Yan, J. 2019. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In International Conference on Computer Vision (ICCV), 4852–4861.
  • Gretton et al. (2005) Gretton, A.; Bousquet, O.; Smola, A.; and Schölkopf, B. 2005. Measuring Statistical Dependence with Hilbert-Schmidt Norms. In Algorithmic Learning Theory (ALT), 63–77.
  • Gretton, Herbrich, and Smola (2003) Gretton, A.; Herbrich, R.; and Smola, A.J. 2003. The Kernel Mutual Information. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), IV–880.
  • Guo et al. (2020) Guo, Z.; Zhang, X.; Mu, H.; Heng, W.; Liu, Z.; Wei, Y.; and Sun, J. 2020. Single Path One-Shot Neural Architecture Search with Uniform Sampling. In European Conference on Computer Vision (ECCV), 544–560.
  • He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Computer Vision and Pattern Recognition (CVPR), 770–778.
  • Howard et al. (2017) Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; and Adam, H. 2017. Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861.
  • Ioffe and Szegedy (2015) Ioffe, S.; and Szegedy, C. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning (ICML), 448–456.
  • Kim et al. (2019) Kim, H.; Kim, K.; Kim, J.; and Kim, J.-J. 2019. BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations. In International Conference on Learning Representations (ICLR).
  • Kornblith et al. (2019) Kornblith, S.; Norouzi, M.; Lee, H.; and Hinton, G. 2019. Similarity of Neural Network Representations Revisited. In International Conference on Machine Learning (ICML), 3519–3529.
  • Leurgans, Moyeed, and Silverman (1993) Leurgans, S.E.; Moyeed, R.A.; and Silverman, B.W. 1993. Canonical Correlation Analysis when The Data are Curves. Journal of the Royal Statistical Society: Series B (Methodological), 55: 725–740.
  • Li et al. (2021) Li, Y.; Gong, R.; Tan, X.; Yang, Y.; Hu, P.; Zhang, Q.; Yu, F.; Wang, W.; and Gu, S. 2021. {BRECQ}: Pushing the Limit of Post-Training Quantization by Block Reconstruction. In International Conference on Learning Representations (ICLR).
  • Liu et al. (2019) Liu, C.; Ding, W.; Xia, X.; Zhang, B.; Gu, J.; Liu, J.; Ji, R.; and Doermann, D. 2019. Circulant binary convolutional networks: Enhancing the performance of 1-bit dcnns with circulant back propagation. In Computer Vision and Pattern Recognition (CVPR), 2691–2699.
  • Liu et al. (2018) Liu, Z.; Wu, B.; Luo, W.; Yang, X.; Liu, W.; and Cheng, K.-T. 2018. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In European Conference on Computer Vision (ECCV), 722–737.
  • Nagel et al. (2019) Nagel, M.; Baalen, M.v.; Blankevoort, T.; and Welling, M. 2019. Data-Free Quantization through Weight Equalization and Bias Correction. In International Conference on Computer Vision (ICCV), 1325–1334.
  • Park, Yoo, and Vajda (2018) Park, E.; Yoo, S.; and Vajda, P. 2018. Value-Aware Quantization for Training and Inference of Neural Networks. In European Conference on Computer Vision (ECCV), 580–595.
  • Rastegari et al. (2016) Rastegari, M.; Ordonez, V.; Redmon, J.; and Farhadi, A. 2016. Xnor-net: Imagenet Classification Using Binary Convolutional Neural Networks. In European Conference on Computer Vision (ECCV), 525–542.
  • Sandler et al. (2018) Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Computer Vision and Pattern Recognition (CVPR), 4510–4520.
  • Simonyan and Zisserman (2014) Simonyan, K.; and Zisserman, A. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556.
  • Szegedy et al. (2015) Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going Deeper with Convolutions. In Computer Vision and Pattern Recognition (CVPR), 1–9.
  • Tanton (2005) Tanton, J. 2005. Encyclopedia of Mathematics. Facts on file.
  • Wang et al. (2019) Wang, K.; Liu, Z.; Lin, Y.; Lin, J.; and Han, S. 2019. Haq: Hardware-Aware Automated Quantization with Mixed Precision. In Computer Vision and Pattern Recognition (CVPR), 8612–8620.
  • Yang and Jin (2021) Yang, L.; and Jin, Q. 2021. FracBits: Mixed Precision Quantization via Fractional Bit-Widths. AAAI Conference on Artificial Intelligence (AAAI), 35: 10612–10620.
  • Yao et al. (2021) Yao, Z.; Dong, Z.; Zheng, Z.; Gholami, A.; Yu, J.; Tan, E.; Wang, L.; Huang, Q.; Wang, Y.; Mahoney, M.; et al. 2021. HAWQ-V3: Dyadic Neural Network Quantization. In International Conference on Machine Learning (ICML), 11875–11886.
  • Zhang et al. (2018a) Zhang, D.; Yang, J.; Ye, D.; and Hua, G. 2018a. Lq-nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. In European Conference on Computer Vision (ECCV), 365–382.
  • Zhang et al. (2018b) Zhang, X.; Zhou, X.; Lin, M.; and Sun, J. 2018b. Shufflenet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Computer Vision and Pattern Recognition (CVPR), 6848–6856.
  • Zhou et al. (2017) Zhou, A.; Yao, A.; Guo, Y.; Xu, L.; and Chen, Y. 2017. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights. In International Conference on Learning Representations (ICLR).
  • Zhou et al. (2016) Zhou, S.; Wu, Y.; Ni, Z.; Zhou, X.; Wen, H.; and Zou, Y. 2016. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.

Supplementary Material

6 Calculation Acceleration

Suppose that $f_{i}(X)\in\mathbb{R}^{N\times(C_{i}\times H_{i}\times W_{i})}$ and $f_{j}(X)\in\mathbb{R}^{N\times(C_{j}\times H_{j}\times W_{j})}$ are the outputs of the $i$-th layer and the $j$-th layer of the neural network, respectively. We set $Y=f_{i}(X)$ and $Z=f_{j}(X)$, so that $YY^{T},ZZ^{T}\in\mathbb{R}^{N\times N}$. The calculation is accelerated through:

$$\|Z^{T}Y\|_{F}^{2}=\left\langle\textbf{vec}(YY^{T}),\textbf{vec}(ZZ^{T})\right\rangle. \tag{11}$$

Proof.

We set $p_{1}=C_{i}\times H_{i}\times W_{i}$ and $p_{2}=C_{j}\times H_{j}\times W_{j}$, and let $z_{m,i}$ and $y_{m,j}$ denote the entries in the $m$-th row and $i$-th (resp. $j$-th) column of the matrices $Z$ and $Y$. Then

$$\begin{aligned}\|Z^{T}Y\|_{F}^{2}&=\sum_{i=1}^{p_{2}}\sum_{j=1}^{p_{1}}{\left(\sum_{m=1}^{N}z_{m,i}y_{m,j}\right)}^{2}\\ &=\sum_{i=1}^{p_{2}}\sum_{j=1}^{p_{1}}\left(\sum_{m=1}^{N}z_{m,i}y_{m,j}\right)\left(\sum_{n=1}^{N}z_{n,i}y_{n,j}\right)\\ &=\sum_{i=1}^{p_{2}}\sum_{j=1}^{p_{1}}\sum_{m=1}^{N}\sum_{n=1}^{N}z_{m,i}y_{m,j}z_{n,i}y_{n,j}\\ &=\sum_{m=1}^{N}\sum_{n=1}^{N}\sum_{i=1}^{p_{2}}\sum_{j=1}^{p_{1}}z_{m,i}y_{m,j}z_{n,i}y_{n,j}\\ &=\sum_{m=1}^{N}\sum_{n=1}^{N}\left(\sum_{j=1}^{p_{1}}y_{m,j}y_{n,j}\right)\left(\sum_{i=1}^{p_{2}}z_{m,i}z_{n,i}\right)\\ &=\left\langle\textbf{vec}(YY^{T}),\textbf{vec}(ZZ^{T})\right\rangle.\end{aligned}$$

From Eq. 11, we can see that the two computation forms have different time complexities in different situations. Specifically, the time complexities of calculating $\|Z^{T}Y\|_{F}^{2}$ directly and of the vector inner product form are $\mathbf{O}(Np_{1}p_{2})$ and $\mathbf{O}(N^{2}(p_{1}+p_{2}+1))$, respectively. In other words, when the feature number ($p_{1}$ or $p_{2}$) is larger than the number of samples $N$, we take the inner product form to speed up the calculation. Conversely, computing $\|Z^{T}Y\|_{F}^{2}$ directly is clearly faster than the inner product form.

Concretely, we take $N,p\in\{10000,100\}$ as an example and randomly generate matrices $Y,Z\in\mathbb{R}^{N\times p}$. In our experiment, we calculate the ORM of $Y,Z$ in the vector inner product form and the norm form, respectively. The results are reported in Table 5. When the number of features is less than the number of samples, calculating ORM in the norm form is much faster (54$\times$) than the inner product form. On the contrary, when the number of features is greater than the number of samples, the inner product form computes ORM faster (70$\times$).
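The identity in Eq. 11 is easy to verify numerically. The sketch below uses arbitrary toy sizes (not the paper's $N,p\in\{10000,100\}$ setting) and checks that the two computation forms agree:

```python
# Verify Eq. 11: ||Z^T Y||_F^2 = <vec(Y Y^T), vec(Z Z^T)>.
# N, p1, p2 are toy sizes chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, p1, p2 = 50, 300, 200
Y = rng.standard_normal((N, p1))   # stands in for f_i(X)
Z = rng.standard_normal((N, p2))   # stands in for f_j(X)

# Direct form: cost O(N * p1 * p2)
lhs = np.linalg.norm(Z.T @ Y, ord="fro") ** 2
# Inner product form: cost O(N^2 * (p1 + p2 + 1))
rhs = np.vdot(Y @ Y.T, Z @ Z.T)

assert np.isclose(lhs, rhs)
```

With $N=50$ and $p_{1},p_{2}>N$ as above, the inner product form would be the cheaper of the two, matching the complexity argument in the text.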

Table 5: The ORM calculation time (seconds) of different calculation strategies between matrices $Y$ and $Z$ of the same size. There are two cases for the size of $Y$ or $Z$: the feature number $p$ is larger or smaller than the number of samples $N$.

7 Network Deconstruction and Linear Programming

In this section, we elaborate on why the network is deconstructed into the former $i$ layers instead of the $i$-th layer alone. Furthermore, we illustrate in detail how the linear programming problem is constructed.

7.1 Network Deconstruction

Fig. 5 demonstrates two approaches to deconstructing the network: the former-$i$-layers deconstruction and the $i$-th-layer deconstruction; the latter appears more intuitive at first sight. We give the definition of orthogonality for the $i$-th and $j$-th layers under the latter deconstruction:

$${\left\langle g_{i},g_{j}\right\rangle}_{P(x)}=\int_{\mathcal{D}}g_{i}(x_{i})P(x){g_{j}(x_{j})}^{T}dx, \tag{12}$$

where $x_{i}$ and $x_{j}$ are the outputs of the respective preceding layers. To unify the inputs, we further expand Eq. 12 as follows:

$${\left\langle g_{i},g_{j}\right\rangle}_{P(x)}=\int_{\mathcal{D}}g_{i}(g_{i-1}(\dots g_{1}(x)))P(x){g_{j}(g_{j-1}(\dots g_{1}(x)))}^{T}dx. \tag{13}$$

To facilitate the mathematical analysis, we set $f_{i}(x)=g_{i}(g_{i-1}(\dots g_{1}(x)))$ and $f_{j}(x)=g_{j}(g_{j-1}(\dots g_{1}(x)))$, where the composite function $f_{i}(x)$ represents the former $i$ layers of the network. Therefore, the unified input restricts us to deconstructing the network by the former $i$ layers only. Moreover, the two deconstruction approaches are equivalent in the context of a unified input. Having determined the network deconstruction, we next explain how to construct the linear programming problem associated with it.

Image 5: Refer to caption

Figure 5: The former-$i$-layers deconstruction (left) and the $i$-th-layer deconstruction (right). We omit the BatchNorm layer (Ioffe and Szegedy 2015) and the activation layer for simplicity, where $X_{1}=X,X_{2}=g_{1}(X_{1}),\dots,X_{L}=g_{L-1}(X_{L-1})$.

Image 6: Refer to caption

Figure 6: Composition of the linear programming objective function. $b_{i}$ represents the bit-width variable of the $i$-th layer, and $\theta_{i}$ represents the importance factor of the former $i$ layers.

Image 7: Refer to caption

Figure 7: Layer-wise bit configuration of MobileNetV2, which is quantized to 0.9 Mb.

Image 8: Refer to caption

Figure 8: Layer-wise bit configuration of ResNet-18, which is quantized to 3.5 Mb.

Image 9: Refer to caption

Figure 9: The relationship between accuracy and the hyperparameter $\beta$ on MobileNetV2, which is quantized to 0.9 Mb.

7.2 Linear Programming

We calculate the importance factor $\theta_{i}$ corresponding to the former $i$ layers of the network according to Eq. 14:

$$\theta_{i}=e^{-\beta\gamma_{i}}. \tag{14}$$

Stronger orthogonality of the former $i$ layers implies a larger importance factor $\theta_{i}$. Therefore, we take $\theta_{i}$ as the weight coefficient for the bit-width assignment of the former $i$ layers, which means that important layers are assigned larger bit-widths to retain more information. However, this leads to different scales of weight coefficients for different layers due to accumulation; e.g., the ranges of the weight coefficients for the first layer and the last layer are $[Le^{-\beta(L-1)},L]$ and $[e^{-\beta(L-1)},1]$, respectively. We thus rescale the $i$-th layer by multiplying by the factor $1/(L-i+1)$. Finally, we sum up all the terms to obtain the objective function of the linear programming problem in Eq. 15. More details are illustrated in Fig. 6.

$$\begin{aligned}\text{Objective: }&\max_{\mathbf{b}}\sum_{i=1}^{L}\left(\frac{b_{i}}{L-i+1}\sum_{j=i}^{L}\theta_{j}\right),\\ \text{Constraint: }&\sum_{i=1}^{L}M^{(b_{i})}\leq\mathcal{T}.\end{aligned} \tag{15}$$
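As a rough sketch, a relaxed version of Eq. 15 can be handed to an off-the-shelf LP solver. The layer parameter counts, orthogonality values $\gamma_{i}$, bit budget, and the continuous $[2,8]$ bit range below are illustrative assumptions rather than the paper's settings, and rounding the relaxed solution is a simplification of the actual discrete bit assignment:

```python
# Toy version of the bit-allocation linear program in Eq. 15.
import numpy as np
from scipy.optimize import linprog

L = 4                                    # number of layers (toy)
beta = 1.0
gamma = np.array([0.1, 0.4, 0.7, 0.9])   # assumed orthogonality gamma_i per prefix
theta = np.exp(-beta * gamma)            # importance factors, Eq. 14

# Coefficient of b_i: (1 / (L - i + 1)) * sum_{j >= i} theta_j  (0-indexed here)
weights = np.array([theta[i:].sum() / (L - i) for i in range(L)])

params = np.array([1e5, 2e5, 2e5, 1e5])  # assumed parameter counts per layer
budget = 3.0e6                           # total weight budget in bits (toy)

# linprog minimizes, so negate the objective; the memory constraint
# sum_i params_i * b_i <= budget is the toy counterpart of Eq. 15's constraint.
res = linprog(c=-weights,
              A_ub=[params], b_ub=[budget],
              bounds=[(2, 8)] * L)       # relax bits to the continuous range [2, 8]
bits = np.round(res.x).astype(int)       # crude rounding back to integer bit-widths
print(bits)
```

In this toy instance the solver pushes the largest bit-widths to the layers with the best importance-to-size ratio, which mirrors the intuition that layers with stronger orthogonality deserve more bits.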

8 ORM Matrix

We present the ORM matrices of ResNet-18, ResNet-50 and MobileNetV2 for different numbers of samples $N$ in Fig. 11. We can draw the following conclusions: (a) the orthogonality between the former $i$ and the former $j$ layers generally becomes stronger as the absolute difference $|i-j|$ increases; (b) the orthogonality discrepancy between layers whose output activations lie within the same block is tiny; (c) as the number of samples $N$ increases, the approximation of the expectation becomes more accurate and the orthogonality discrepancy of different layers becomes more and more obvious.

9 Importance Variance

According to Sec. 7.2, the hyperparameter $\beta$ also affects the range of the weight coefficients of the bit-width variables as follows:

$$\left[\lim_{\beta\to+\infty}e^{-\beta(L-1)},1\right]=\left[0,1\right],\qquad\left[\lim_{\beta\to 0^{+}}e^{-\beta(L-1)},1\right]=\left\{1\right\}. \tag{16}$$

From Eq. 16, when $\beta$ increases, the range of the importance factor $\theta$ becomes larger. On the contrary, when $\beta$ approaches zero, the range of $\theta$ is greatly compressed, so the variance of the importance factors of different layers is greatly reduced. We explore the relationship between accuracy and $\beta$ on MobileNetV2, quantizing it to 0.9 Mb and fixing all activations at 8 bits. As shown in Fig. 9, as $\beta$ increases, the accuracy gradually increases because the variance of the importance factors of different layers becomes larger, and it stabilizes when $\beta$ is large enough. According to our experiments, a low variance of the importance factor $\theta$ may lead the linear programming algorithm to choose an aggressive bit configuration, resulting in sub-optimal accuracy.
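The collapse described by Eq. 16 is easy to see numerically. The depth $L$ below is an arbitrary toy value, not a real network depth:

```python
# Range of the importance factor theta = exp(-beta * gamma) for gamma in [0, L-1],
# illustrating Eq. 16: the range shrinks to {1} as beta -> 0 and widens to [0, 1]
# as beta grows.
import math

L = 19  # toy depth
for beta in (1e-3, 0.1, 1.0, 10.0):
    lo = math.exp(-beta * (L - 1))  # smallest importance factor at this beta
    print(f"beta={beta:g}: theta range = [{lo:.6f}, 1]")
```

For the smallest $\beta$ the interval is nearly the single point $\{1\}$, so all layers receive essentially the same weight, which is exactly the low-variance regime that the experiments above find to be sub-optimal.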

Image 10: Refer to caption

Figure 10: Relationship between orthogonality and bit-width of different layers on ResNet-18 and MobileNetV2.

10 Bit Configurations

We also give the layer-wise bit configurations of ResNet-18 (He et al. 2016) and MobileNetV2 (Sandler et al. 2018) on PTQ. As shown in Figs. 7-8, we observe that MobileNetV2 allocates more bits to the depthwise convolutions and fewer bits to the pointwise convolutions. In other words, the $3\times 3$ convolutional layers are more orthogonal than the $1\times 1$ convolutional layers, which partly explains why networks containing a large number of $1\times 1$ convolutional layers are difficult to quantize. Therefore, this phenomenon can potentially be instructive for the low precision quantization and binarization of ResNet-50, MobileNetV2 and even Transformers (Dosovitskiy et al. 2021). Besides, the bit-width of each layer of ResNet-18 and MobileNetV2 decreases as the depth of the layer increases.

11 Effectiveness of ORM on Mixed Precision Quantization

We have already shown the positive correlation between model orthogonality and quantization accuracy in the paper. We further reveal the relationship between layer orthogonality and bit-width in Fig. 10. We randomly select some layers and compute their orthogonality under different bit-widths. Clearly, layer orthogonality is positively correlated with bit-width across different models. Furthermore, the correlation coefficient between orthogonality and bit-width differs across layers, which implies that the sensitivity of orthogonality differs across layers. The positive correlation and these sensitivity differences allow us to construct the objective function of the linear programming problem.

12 Derivation of ORM

Leveraging Monte Carlo sampling, we obtain the following approximation

$$N\int_{\mathcal{D}}f_{i}(x)P(x){f_{j}(x)}^{T}dx\approx{\left\|f_{j}(X)^{T}f_{i}(X)\right\|}_{F}. \tag{17}$$

Applying the Cauchy-Schwarz inequality to the left side of Eq. 17, we have

$$\begin{aligned}0&\leq{\left(N\int_{\mathcal{D}}f_{i}(x)P(x){f_{j}(x)}^{T}dx\right)}^{2}\\ &\leq\int_{\mathcal{D}}Nf_{i}(x)P_{i}(x){f_{i}(x)}^{T}dx\int_{\mathcal{D}}Nf_{j}(x)P_{j}(x){f_{j}(x)}^{T}dx. \end{aligned} \tag{18}$$

Normalizing Eq. 18 to $[0,1]$, we have

$$\frac{{\left(N\int_{\mathcal{D}}f_{i}(x)P(x){f_{j}(x)}^{T}dx\right)}^{2}}{\int_{\mathcal{D}}Nf_{i}(x)P_{i}(x){f_{i}(x)}^{T}dx\int_{\mathcal{D}}Nf_{j}(x)P_{j}(x){f_{j}(x)}^{T}dx}\in[0,1]. \tag{19}$$

Image 11: Refer to caption

Figure 11: The ORM matrices of ResNet-18, ResNet-50 and MobileNetV2 with different numbers of samples $N$.

We substitute Eq. 17 into Eq. 19 and obtain ORM:

$${\rm ORM}(X,f_{i},f_{j})=\frac{{\left\|f_{j}(X)^{T}f_{i}(X)\right\|}_{F}^{2}}{{\left\|f_{i}(X)^{T}f_{i}(X)\right\|}_{F}{\left\|f_{j}(X)^{T}f_{j}(X)\right\|}_{F}}\in[0,1]. \tag{20}$$
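As a small illustration, Eq. 20 can be computed directly with NumPy. The random matrices below stand in for flattened layer activations and are not real network outputs:

```python
# Compute ORM (Eq. 20) between two layer outputs.
import numpy as np

def orm(Fi: np.ndarray, Fj: np.ndarray) -> float:
    """ORM = ||Fj^T Fi||_F^2 / (||Fi^T Fi||_F * ||Fj^T Fj||_F), in [0, 1]."""
    num = np.linalg.norm(Fj.T @ Fi, ord="fro") ** 2
    den = (np.linalg.norm(Fi.T @ Fi, ord="fro")
           * np.linalg.norm(Fj.T @ Fj, ord="fro"))
    return float(num / den)

rng = np.random.default_rng(0)
N = 64
Fi = rng.standard_normal((N, 128))  # stand-in for flattened activations of layer i
Fj = rng.standard_normal((N, 256))  # stand-in for flattened activations of layer j

s = orm(Fi, Fj)
assert 0.0 <= s <= 1.0              # normalization guaranteed by Cauchy-Schwarz
```

Note that `orm(Fi, Fi)` evaluates to 1, so the metric measures similarity of a layer with itself as maximal, and values near 0 indicate strong orthogonality between the two prefixes.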