Title: Online Neural Networks for Change-Point Detection

URL Source: https://arxiv.org/html/2010.01388

Mikhail Hushchyn

HSE University, Moscow, Russia
###### Abstract

Moments when a time series changes its behavior are called change points. The occurrence of a change point implies that the state of the system has altered, and its timely detection might help to prevent unwanted consequences. In this paper, we present two change-point detection approaches based on neural networks and online learning. These algorithms demonstrate linear computational complexity and are suitable for change-point detection in large time series. We compare them with the best known algorithms on various synthetic and real-world data sets. Experiments show that the proposed methods outperform known approaches. We also prove the convergence of the algorithms to the optimal solutions and describe conditions under which the online approach is more powerful than the offline one.

###### keywords:

time series, change-point detection, machine learning, neural networks
1 Introduction
--------------

The first works [10.2307/2333009, 10.1093/biomet/42.3-4.523] on change-point detection were presented in the 1950s. They utilize shifts of the signal mean value to detect changes in the quality of the output of a continuous production process. In the following decades, many other change-point detection methods were developed. They are based on different ideas and are able to recognize various changes in time series: jumps of the mean and variance of a signal, correlations between its different components, and other more elaborate dependencies. These algorithms are well described in several overviews [10.5555/151741, Aminikhanghahi2017, TRUONG2020107299]. Change-point detection appears in many applications: quality monitoring of industrial processes, failure detection in complex systems, health monitoring, speech recognition and video analysis.

This study introduces two new approaches to change-point detection based on online learning of neural networks. These algorithms can be used to detect changes in time series behavior. As shown in the following sections, they have linear computational complexity, work with multidimensional signals and are well suited for long time series. The proposed solutions are inspired by the Kullback–Leibler importance estimation procedure (KLIEP) [kliep1], unconstrained least-squares importance fitting (uLSIF) [ulsif1, Sugiyama2011] and relative uLSIF (RuLSIF) [rulsif1, Yamada2013]. These methods estimate the probability density ratio of two samples directly. As demonstrated in [LIU201372], this approach can be used for change-point detection in time series data. Moreover, according to [Aminikhanghahi2017], it is expected to give better results than other change-point detection algorithms. The idea is based on calculating distances between pairs of observations from two different samples using radial basis function (RBF) kernels to approximate the probability density ratio.

The first use of decision tree and logistic regression classifiers to analyze changes between two samples is demonstrated in [Hido2008]; however, that method is not applied to change-point detection. The authors of [Nam2015] show that convolutional neural networks (CNNs) trained with the uLSIF loss function can be used for outlier detection in images. In recent years, several approaches based on neural networks with KLIEP and RuLSIF loss functions [khan2019deep, hushchyn2020generalization] were presented for change-point detection in time series data. It is also shown that they outperform previous methods based on RBF kernels.
2 Change-Point Detection
------------------------

![Image 1: Refer to caption](https://arxiv.org/html/2010.01388v2/x1.png)

Figure 1: Example of a time series with two change-points at moments $t_1 = 400$ and $t_2 = 800$. Observations between these points have different probability distributions: $P_1(x(t))$ for $0 < t < t_1$, $P_2(x(t))$ for $t_1 < t < t_2$ and $P_3(x(t))$ for $t_2 < t < 1200$.

Consider a time series where each observation at a moment $t$ is represented by a $d$-dimensional vector $x(t) \in \mathcal{R}^d$:

$$x(1), x(2), x(3), \dots, x(\tau), x(\tau+1), x(\tau+2), \dots \quad (1)$$

Assume that all observations $x(t)$ with $t < \tau$ have probability density $p_0(x)$, and all observations with $t \geq \tau$ are sampled from a distribution $p_1(x) \neq p_0(x)$. In other words, the time series changes its behavior at the moment $\tau$. Such moments are called change-points. There may be several such points in one time series, as demonstrated in Figure [1](https://arxiv.org/html/2010.01388#S2.F1). The goal is to detect all change-points with the highest quality. This is an unsupervised problem, since the true positions of the change-points are not given.

Often the original time series is transformed into an autoregressive form [LIU201372]:

$$X(k), X(k+1), X(k+2), \dots, X(\tau), X(\tau+1), X(\tau+2), \dots \quad (2)$$

where $X(t)$ is a combined vector of the $k$ previous observations of the time series, defined as:

$$X(t) = [x(t)^T, x(t-1)^T, \dots, x(t-k+1)^T]^T \in \mathcal{R}^{kd} \quad (3)$$

This transformation takes time dependencies between observations into account and helps to improve the quality of change-point detection. With $k = 1$ it reduces to the time series in Eq. (1). We use this notation to preserve consistency with conventional notation.
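The transformation in Eq. (3) is a simple sliding-window concatenation; a minimal sketch in Python (the function name and the list-of-lists representation of observations are our own illustration):

```python
def autoregressive_form(x, k):
    """Build the combined vectors X(t) of Eq. (3).

    x is a list of d-dimensional observations (each a list of length d);
    each result vector concatenates the k most recent observations,
    newest first, as in Eq. (3).
    """
    return [
        sum((x[t - i] for i in range(k)), [])  # [x(t), x(t-1), ..., x(t-k+1)]
        for t in range(k - 1, len(x))
    ]
```

For $k = 1$ the function returns the original series, matching the remark above.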
3 Quality Metrics
-----------------

Consider a time series with $n$ change-points at moments $\tau_1, \tau_2, \dots, \tau_n$. Suppose that an algorithm recognises $m$ change-points at moments $\hat{\tau}_1, \hat{\tau}_2, \dots, \hat{\tau}_m$. Following [TRUONG2020107299], the set of correctly detected change-points is defined as True Positive (TP):

$$\text{TP} = \{\tau_i \,|\, \exists \hat{\tau}_j : |\hat{\tau}_j - \tau_i| < M\} \quad (4)$$

where $M$ is a margin size; $M = 50$ in our study. Precision, Recall and the F1-score are then calculated as follows:

$$\text{Precision} = \frac{|\text{TP}|}{m} \quad (5)$$

$$\text{Recall} = \frac{|\text{TP}|}{n} \quad (6)$$

$$\text{F1} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (7)$$

We use the F1-score to measure the quality of change-point detection algorithms. We also use a common measure from clustering analysis, the Rand Index (RI) [rand_index], which is calculated in the following way. The true change-points $\{\tau_i\}_n$ split the time series into $n+1$ segments $S$. Similarly, the observations are divided by the detected change-points $\{\hat{\tau}_i\}_m$ into $m+1$ segments $\hat{S}$. RI measures the similarity of these two segmentations:

$$\text{RI} = \frac{A}{0.5\, T(T-1)}, \quad (8)$$

where $A$ is the number of observation pairs $x(i)$ and $x(j)$ that share the same segment both in $S$ and in $\hat{S}$; $T$ is the total number of observations in the time series, so $0.5\, T(T-1)$ is the total number of observation pairs.
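The metrics of Eqs. (4)–(8) are straightforward to compute; a sketch in plain Python (function names are ours, and the Rand Index counts exactly the pairs described in the text, i.e. pairs that share a segment under both segmentations):

```python
def f1_score(true_cps, detected_cps, margin=50):
    """Precision, Recall and F1 with a detection margin M (Eqs. 4-7)."""
    tp = {t for t in true_cps
          if any(abs(d - t) < margin for d in detected_cps)}
    precision = len(tp) / len(detected_cps) if detected_cps else 0.0
    recall = len(tp) / len(true_cps) if true_cps else 0.0
    denom = precision + recall
    return precision, recall, (2 * precision * recall / denom) if denom else 0.0

def rand_index(true_cps, detected_cps, T):
    """Rand Index of Eq. (8): A / (0.5 * T * (T - 1))."""
    def segment_labels(cps):
        # label each of the T observations with the index of its segment
        labels, seg, cps = [], 0, sorted(cps)
        for t in range(T):
            while seg < len(cps) and t >= cps[seg]:
                seg += 1
            labels.append(seg)
        return labels
    s, s_hat = segment_labels(true_cps), segment_labels(detected_cps)
    a = sum(1 for i in range(T) for j in range(i + 1, T)
            if s[i] == s[j] and s_hat[i] == s_hat[j])
    return a / (0.5 * T * (T - 1))
```

For example, with true change-points at 100 and 200 and detections at 105 and 300, only the first change-point is matched within the margin, so Precision, Recall and F1 all equal 0.5.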
4 Proposed Methods
------------------

![Image 2: Refer to caption](https://arxiv.org/html/2010.01388v2/x2.png)

Figure 2: Example of change-point detection using the proposed algorithms. (Top) A time series with two change-points at moments $t_1 = 400$ and $t_2 = 800$. (Bottom) Change-point detection score $\bar{d}(t)$ estimated by the ONNC and ONNR algorithms.
### 4.1 Classification-Based Model

Consider a time series defined in Eq. (2) with several change-points. The proposed algorithm is based on a comparison of two observations $X(t-l)$ and $X(t)$ of this time series, where $l$ is the lag size between them. If there is no change-point between them, $X(t-l)$ and $X(t)$ have the same distribution. Otherwise, they are sampled from different distributions, which means that a change-point occurred at a moment $\tau : t-l < \tau \leq t$. Repeating this comparison sequentially for all pairs of observations helps to determine the positions of all change-points in the time series.

A more general way is to compare two mini-batches of observations $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$. Here, a mini-batch $\mathcal{X}(t)$ is a sequence of observations of size $n$, defined as:

$$\mathcal{X}(t) = \{X(t), X(t-1), \dots, X(t-n+1)\} \quad (9)$$

Further in this study, we work with mini-batches of size $n \ll l$ in order to speed up the change-point detection algorithm.

To check whether the observations in two mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$ come from the same distribution, we use a classification model based on a neural network $f(X, \theta)$ with weights $\theta$. This network is trained on the mini-batches with the cross-entropy loss function $L_t(\theta)$,

$$L_t(\theta) = -\frac{1}{n} \sum_{X \in \mathcal{X}(t-l)} \log(1 - f(X, \theta)) - \frac{1}{n} \sum_{X \in \mathcal{X}(t)} \log f(X, \theta), \quad (10)$$

where all observations from $\mathcal{X}(t-l)$ are considered as the negative class and observations from $\mathcal{X}(t)$ as the positive class. We use only one neural network for the whole time series, and it is trained in accordance with the online learning paradigm: each pair of mini-batches is used only once, and the network makes a few optimization iterations on each pair. Information from previous pairs is encoded in the neural network weights, and each new step changes them only slightly.

The neural network $f(X, \theta)$ can be used to compare the distributions of observations in the mini-batches. In this work, we use a dissimilarity score based on the Kullback–Leibler divergence, $D_t(\theta)$. Following [hushchyn2020generalization], we define this score as

$$D_t(\theta) = \frac{1}{n} \sum_{X \in \mathcal{X}(t-l)} \log \frac{1 - f(X, \theta)}{f(X, \theta)} + \frac{1}{n} \sum_{X \in \mathcal{X}(t)} \log \frac{f(X, \theta)}{1 - f(X, \theta)}. \quad (11)$$

If the observations in the mini-batches are sampled from the same distribution, this dissimilarity score is close to 0. Otherwise, it takes positive values. All the steps above are combined into one algorithm, called change-point detection based on Online Neural Network Classification (ONNC), shown in Algorithm [1](https://arxiv.org/html/2010.01388#alg1). An example of change-point detection using ONNC is demonstrated in Figure [2](https://arxiv.org/html/2010.01388#S4.F2).
Algorithm 1 ONNC change-point detection algorithm.

1: Inputs: time series $\{X(t)\}_{t=k}^{T}$; $k$ – size of a combined vector $X(t)$; $n$ – size of a mini-batch $\mathcal{X}(t)$; $l$ – lag size, with $n \ll l$; $f(X, \theta)$ – a neural network with weights $\theta$;
2: Initialization: $t \leftarrow k + n + l$;
3: while $t \leq T$ do
4:   take mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$;
5:   $d(t) \leftarrow D_t(\theta)$;
6:   $\bar{d}(t) \leftarrow \bar{d}(t-n) + \frac{1}{l}(d(t) - d(t-l-n))$;
7:   $\mathrm{loss}(t, \theta) \leftarrow L_t(\theta)$;
8:   $\theta \leftarrow \mathrm{Optimizer}(\mathrm{loss}(t, \theta))$;
9:   $t \leftarrow t + n$;
10: end while
11: return $\{\bar{d}(t)\}_{t=1}^{T}$ – change-point detection score
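To make the loop concrete, here is a minimal self-contained sketch of Algorithm 1 for a one-dimensional series with $k = 1$, where the "network" $f(X, \theta)$ is reduced to a single logistic neuron and the optimizer to a few plain gradient steps; all parameter values, the forward-filled score array and the function names are our own illustrative choices, not the paper's reference implementation:

```python
import math
import random

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp for numerical stability
    return 1.0 / (1.0 + math.exp(-z))

def onnc(x, n=10, l=50, lr=0.05, inner_steps=5, seed=0):
    """Sketch of ONNC (Algorithm 1) for a 1-D series with k = 1.

    f(X, theta) is a single logistic neuron sigmoid(w * x + b);
    returns a forward-filled list of scores d_bar(t).
    """
    rng = random.Random(seed)
    w, b = rng.gauss(0.0, 0.1), 0.0
    T = len(x)
    d = {}                        # raw dissimilarity d(t) per processed step
    d_bar = 0.0
    scores = [0.0] * T
    last, t = 0, n + l
    while t <= T:
        past = x[t - l - n:t - l]  # negative-class mini-batch, around X(t-l)
        cur = x[t - n:t]           # positive-class mini-batch, around X(t)
        # dissimilarity score D_t (Eq. 11), before the weight update
        eps = 1e-12
        f_p = [_sigmoid(w * v + b) for v in past]
        f_c = [_sigmoid(w * v + b) for v in cur]
        d[t] = (sum(math.log((1.0 - f + eps) / (f + eps)) for f in f_p) / n
                + sum(math.log((f + eps) / (1.0 - f + eps)) for f in f_c) / n)
        d_bar += (d[t] - d.get(t - l - n, 0.0)) / l   # step 6 of Algorithm 1
        for i in range(last, t):
            scores[i] = d_bar                          # forward-fill the score
        last = t
        # a few gradient iterations on the cross-entropy loss L_t (Eq. 10)
        for _ in range(inner_steps):
            gw = (sum(_sigmoid(w * v + b) * v for v in past) / n
                  + sum((_sigmoid(w * v + b) - 1.0) * v for v in cur) / n)
            gb = (sum(_sigmoid(w * v + b) for v in past) / n
                  + sum(_sigmoid(w * v + b) - 1.0 for v in cur) / n)
            w, b = w - lr * gw, b - lr * gb
        t += n
    for i in range(last, T):
        scores[i] = d_bar
    return scores
```

On a series with a mean jump, the score stays near zero on stationary stretches and rises shortly after the change, as in Figure 2.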
### 4.2 Regression-Based Model

An alternative method of change-point detection is based on regression models. In this case, a regression model based on a neural network $g(X, \theta)$ with weights $\theta$ is used to estimate the ratio between the distributions of the time series observations in two mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$. Assume that all observations in $\mathcal{X}(t-l)$ have probability density $q(X)$, and the observations in the mini-batch $\mathcal{X}(t)$ are sampled from a distribution $p(X)$. Then, the output of the neural network approximates the ratio between these two distributions directly:

$$g(X, \theta) \approx \frac{p(X)}{q(X)}. \quad (12)$$

Following the idea of the RuLSIF method [rulsif1, Yamada2013] and the mathematical inference in [hushchyn2020generalization], the loss function for the neural network is defined as

$$L(\mathcal{X}(t-l), \mathcal{X}(t), \theta) = \frac{1-\alpha}{2n} \sum_{X \in \mathcal{X}(t-l)} g^2(X, \theta) + \frac{\alpha}{2n} \sum_{X \in \mathcal{X}(t)} g^2(X, \theta) - \frac{1}{n} \sum_{X \in \mathcal{X}(t)} g(X, \theta), \quad (13)$$

where $\alpha$ is an adjustable parameter. In this work, we take $\alpha = 0.1$. Similarly to the classification-based algorithm described in the previous section, the neural network is trained in an online learning fashion: all mini-batches are processed only once, in time order.

Since the output $g(X, \theta)$ approximates the ratio between the distributions of the observations in the mini-batches, we can estimate the dissimilarity score between them using the Pearson $\chi^2$-divergence [hushchyn2020generalization]:

$$D(\mathcal{X}(t-l), \mathcal{X}(t), \theta) = \frac{1}{n} \sum_{X \in \mathcal{X}(t)} g(X, \theta) - 1 \quad (14)$$

However, the loss function and the dissimilarity score described above are asymmetric with respect to the mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$, and this affects the change-point detection quality. To compensate for this effect, we use two neural networks $g_1(X, \theta_1)$ and $g_2(X, \theta_2)$, as described in Algorithm [2](https://arxiv.org/html/2010.01388#alg2). We call this algorithm change-point detection based on Online Neural Network Regression (ONNR). An example of change-point detection using this algorithm is shown in Figure [2](https://arxiv.org/html/2010.01388#S4.F2).
Algorithm 2 ONNR change-point detection algorithm.

1: Inputs: time series $\{X(t)\}_{t=k}^{T}$; $k$ – size of a combined vector $X(t)$; $n$ – size of a mini-batch $\mathcal{X}(t)$; $l$ – lag size, with $n \ll l$; $g_1(X, \theta_1)$ and $g_2(X, \theta_2)$ – neural networks with weights $\theta_1$ and $\theta_2$, respectively;
2: Initialization: $t \leftarrow k + n + l$;
3: while $t \leq T$ do
4:   take mini-batches $\mathcal{X}(t-l)$ and $\mathcal{X}(t)$;
5:   $d_1(t) \leftarrow D(\mathcal{X}(t-l), \mathcal{X}(t), \theta_1)$;
6:   $d_2(t) \leftarrow D(\mathcal{X}(t), \mathcal{X}(t-l), \theta_2)$;
7:   $d(t) \leftarrow d_1(t) + d_2(t)$;
8:   $\bar{d}(t) \leftarrow \bar{d}(t-n) + \frac{1}{l}(d(t) - d(t-l-n))$;
9:   $\mathrm{loss}(t, \theta_1) \leftarrow L(\mathcal{X}(t-l), \mathcal{X}(t), \theta_1)$;
10:  $\theta_1 \leftarrow \mathrm{Optimizer}_1(\mathrm{loss}(t, \theta_1))$;
11:  $\mathrm{loss}(t, \theta_2) \leftarrow L(\mathcal{X}(t), \mathcal{X}(t-l), \theta_2)$;
12:  $\theta_2 \leftarrow \mathrm{Optimizer}_2(\mathrm{loss}(t, \theta_2))$;
13:  $t \leftarrow t + n$;
14: end while
15: return $\{\bar{d}(t)\}_{t=1}^{T}$ – change-point detection score
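As an illustration of one direction of the ONNR update (the second network simply swaps the two mini-batches), the following sketch fits a deliberately simple linear model $g(x) = wx + b$ to two fixed one-dimensional samples by gradient descent on the loss of Eq. (13) and returns the score of Eq. (14); the function name, the linear model and all parameter values are our own choices for illustration:

```python
def rulsif_score(xq, xp, alpha=0.1, lr=0.05, steps=2000):
    """Fit g(x) = w * x + b by minimizing the RuLSIF loss (Eq. 13),
    then return the Pearson-divergence score (Eq. 14).

    xq plays the role of X(t - l) (density q), xp of X(t) (density p).
    """
    nq, np_ = len(xq), len(xp)
    w, b = 0.0, 0.0
    for _ in range(steps):
        gq = [w * v + b for v in xq]
        gp = [w * v + b for v in xp]
        # gradients of Eq. (13) with respect to w and b
        gw = (((1 - alpha) / nq) * sum(g * v for g, v in zip(gq, xq))
              + (alpha / np_) * sum(g * v for g, v in zip(gp, xp))
              - sum(xp) / np_)
        gb = ((1 - alpha) / nq) * sum(gq) + (alpha / np_) * sum(gp) - 1.0
        w, b = w - lr * gw, b - lr * gb
    return sum(w * v + b for v in xp) / np_ - 1.0   # Eq. (14)
```

When both samples come from the same distribution, the fitted $g$ is close to the constant 1 and the score is near zero; a shift in the second sample produces a clearly positive score.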
5 Properties of the Algorithm
-----------------------------

### 5.1 Convergence Properties

In this section, we describe several theoretical properties of the ONNC algorithm in a special case. We consider a batch size of $n = 1$ and $X(i) = x(i)$ for simplicity. As a result, we fit the algorithm with the cross-entropy loss function $L_t(\theta)$ in the following form:

$$L_t(\theta) = -\log(1 - f(x(t-l), \theta)) - \log f(x(t), \theta). \quad (15)$$

For further analysis and change-point detection score estimation, we consider the last $N$ steps of the algorithm and a lag size $l = N$:

$$I_N = \sum_{i=t-N+1}^{t} L_i(\theta_i). \quad (16)$$

Without loss of generality, we suppose that the Online Gradient Descent (OGD) algorithm is used for the optimization:

$$\theta_i = \theta_{i-1} - \eta \nabla_\theta L_i(\theta_{i-1}) \quad (17)$$

where $\eta$ is a learning rate. We also assume that $\theta \in F$, and we follow the assumptions on the feasible set $F$ and the functions $L_i(\theta_i)$ described in [10.5555/3041838.3041955]. To explore the algorithm's properties, we analyze its regret over the last $N$ steps:

$$R(N) = \sum_{i=t-N+1}^{t} L_i(\theta_i) - \min_\theta \sum_{i=t-N+1}^{t} L_i(\theta), \quad (18)$$

where the first term corresponds to the OGD algorithm. The second term corresponds to the offline algorithm [hushchyn2020generalization], which finds a single static feasible solution for all $N$ steps. Consider the following theorem for the offline optimization, and the theorem that shows when the online algorithm outperforms the offline one.
###### Theorem 1.

For any time moment $t-N < \nu \leq t$ the following inequality holds:

$$\min_\theta \sum_{i=t-N+1}^{t} L_i(\theta) \geq \min_\theta \sum_{i=t-N+1}^{\nu} L_i(\theta) + \min_\theta \sum_{i=\nu+1}^{t} L_i(\theta) \quad (19)$$

###### Theorem 2.

For any time moment $t-N < \nu \leq t$ the following inequality holds:

$$R(N) \leq \frac{\|F\|^2}{\eta} + \frac{\|\nabla L\|^2}{2} \eta N - C(N, \nu) \quad (20)$$

where

$$\|F\| = \max_{x, y \in F} d(x, y) \quad (21)$$

$$\|\nabla L\| = \max_{x \in F,\, i \in \{t, t-1, \dots\}} \|\nabla L_i(\theta_{i-1})\| \quad (22)$$

$$C(N, \nu) = \min_\theta \sum_{i=t-N+1}^{t} L_i(\theta) - \min_\theta \sum_{i=t-N+1}^{\nu} L_i(\theta) - \min_\theta \sum_{i=\nu+1}^{t} L_i(\theta) \geq 0 \quad (23)$$

This theorem defines the upper bound on the regret of the online algorithm. It helps to estimate the conditions under which the online algorithm finds smaller loss function values than the offline one. These conditions are presented in the following corollaries of Theorem [2](https://arxiv.org/html/2010.01388#Thmtheorem2). The proof of the theorem is provided in Appendix [B](https://arxiv.org/html/2010.01388#A2).

###### Corollary 2.1.

For a given $N$ and learning rate $\eta = \sqrt{\frac{2\|F\|^2}{N \|\nabla L(\theta)\|^2}}$, the upper bound on the regret $R(N)$ reaches its minimum:

$$R(N) \leq \sqrt{2N \|F\|^2 \|\nabla L(\theta)\|^2} - C(N, \nu) \quad (24)$$
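Eq. (24) follows from Eq. (20) by minimizing the bound over the learning rate; writing $B(\eta)$ for the right-hand side of Eq. (20), the short calculation is:

```latex
% B(\eta) = \|F\|^2/\eta + (\|\nabla L\|^2/2)\,\eta N - C(N,\nu)
\begin{aligned}
\frac{dB}{d\eta} = -\frac{\|F\|^2}{\eta^2} + \frac{\|\nabla L\|^2}{2}\,N = 0
\quad &\Rightarrow\quad
\eta^{*} = \sqrt{\frac{2\|F\|^2}{N\|\nabla L\|^2}},\\
B(\eta^{*}) = 2\sqrt{\|F\|^2 \cdot \tfrac{N}{2}\,\|\nabla L\|^2} - C(N,\nu)
&= \sqrt{2N\|F\|^2\|\nabla L\|^2} - C(N,\nu).
\end{aligned}
```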
###### Corollary 2.2.

For a given $N$, learning rate $\eta = \sqrt{\frac{2\|F\|^2}{N \|\nabla L(\theta)\|^2}}$, and $C(N, \nu) > \sqrt{2N \|F\|^2 \|\nabla L(\theta)\|^2}$, the regret $R(N)$ takes negative values:

$$R(N) < 0 \quad (25)$$

###### Corollary 2.3.

For a given $N$, learning rate $\eta = \sqrt{\frac{2\|F\|^2}{N \|\nabla L(\theta)\|^2}}$, and $\nu^* = \arg\max_\nu C(N, \nu)$, the upper bound of $R(N) + C(N, \nu^*)$ reaches its minimum:

$$R(N) + C(N, \nu^*) \leq \sqrt{2N \|F\|^2 \|\nabla L(\theta)\|^2}. \quad (26)$$

Therefore,

$$\lim_{N \to \infty} \frac{R(N) + C(N, \nu^*)}{N} \leq 0. \quad (27)$$

Corollary [2.1](https://arxiv.org/html/2010.01388#Thmtheorem2.Thmcorollary1) gives the optimal learning rate for the OGD algorithm used for change-point detection. Corollary [2.2](https://arxiv.org/html/2010.01388#Thmtheorem2.Thmcorollary2) defines the conditions under which OGD reaches a lower loss function value than the offline optimization algorithm. Corollary [2.3](https://arxiv.org/html/2010.01388#Thmtheorem2.Thmcorollary3) establishes the convergence of the online algorithm. The intuition behind these results is that the online algorithm adapts to changes of the signal distribution and therefore finds lower loss function values.
### 5.2 Offline and Online Dissimilarity Scores

Now consider a general example with a change-point at the time moment $t - \nu + 1$. In this case, $x(i) \sim p_0(x)$ for $i \leq t - \nu$ and $x(i) \sim p_1(x)$ for $i > t - \nu$. We compare the change-point detection scores of the optimal offline and online change-point detection algorithms. The offline algorithm corresponds to the minimum of the loss function:

$$I_N^{offline} = \min_\theta \sum_{i=t-N+1}^{t} L_i(\theta). \quad (28)$$

The optimal online algorithm provides the minimum of the following expression, as shown in Corollary [2.3](https://arxiv.org/html/2010.01388#Thmtheorem2.Thmcorollary3):

$$I_N^{online} = \min_\theta \sum_{i=t-N+1}^{t-\nu} L_i(\theta) + \min_\theta \sum_{i=t-\nu+1}^{t} L_i(\theta). \quad (29)$$

For both methods we take the average dissimilarity score for change-point detection:

$$\bar{d}(t) = \frac{1}{N} \sum_{i=t-N+1}^{t} \left( \log \frac{1 - f(x(i-N), \theta_i)}{f(x(i-N), \theta_i)} + \log \frac{f(x(i), \theta_i)}{1 - f(x(i), \theta_i)} \right). \quad (30)$$
###### Theorem 3.

For any time moment $t$ and $\nu \leq N$ the following equation holds for the online algorithm:

$$\mathbb{E}[\bar{d}(t)^{online}] = \frac{\nu}{N} \left( \mathbb{E}_{x \sim p_1(x)}\left[\log \frac{p_1(x)}{p_0(x)}\right] - \mathbb{E}_{x \sim p_0(x)}\left[\log \frac{p_1(x)}{p_0(x)}\right] \right). \quad (31)$$

###### Theorem 4.

For any time moment $t$ and $\nu \leq N$ the following equation holds for the offline algorithm:

$$\mathbb{E}[\bar{d}(t)^{offline}] = \frac{\nu}{N} \left( \mathbb{E}_{x \sim p_1(x)}\left[\log \frac{\tilde{p}_1(x)}{\tilde{p}_0(x)}\right] - \mathbb{E}_{x \sim p_0(x)}\left[\log \frac{\tilde{p}_1(x)}{\tilde{p}_0(x)}\right] \right), \quad (32)$$

where

$$\frac{\tilde{p}_1(x)}{\tilde{p}_0(x)} = 1 + \frac{\nu}{N} \left( \frac{p_1(x)}{p_0(x)} - 1 \right). \quad (33)$$

###### Corollary 4.1.

For $\frac{\nu}{N}\left(\frac{p_1(x)}{p_0(x)} - 1\right) \ll 1$, $\mathbb{E}[\bar{d}(t)^{offline}] = O(\nu^2)$.

###### Corollary 4.2.

For any time moment $t$ and $\nu \leq N$ the following inequality holds for the offline and online algorithms:

$$\mathbb{E}[\bar{d}(t)^{online}] \geq \mathbb{E}[\bar{d}(t)^{offline}]. \quad (34)$$

The proofs of the theorems are provided in Appendix [B](https://arxiv.org/html/2010.01388#A2). An example of change-point detection using the online and offline algorithms is shown in Figure [3](https://arxiv.org/html/2010.01388#S5.F3). In this example a change-point occurs at the time moment $t = 0$, with $p_0(x) = \mathcal{N}(0, 1)$ for $t < 0$ and $p_1(x) = \mathcal{N}(3, 1)$ for $t \geq 0$. The figure demonstrates the relation between the dissimilarity scores $\bar{d}(t)$ of the online and offline algorithms defined by Theorems [3](https://arxiv.org/html/2010.01388#Thmtheorem3) and [4](https://arxiv.org/html/2010.01388#Thmtheorem4) for $N = 200$.
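The gap stated in Corollary 4.2 can be checked numerically for the Gaussian example of Figure 3; a Monte-Carlo sketch (sample size and function names are our own), using the closed-form log-ratio $\log p_1(x)/p_0(x) = \mu x - \mu^2/2$ for unit-variance Gaussians:

```python
import math
import random

def expected_scores(mu=3.0, nu=50, N=200, m=100000, seed=0):
    """Monte-Carlo estimates of Eqs. (31) and (32)
    for p0 = N(0, 1) and p1 = N(mu, 1)."""
    rng = random.Random(seed)

    def log_ratio(x):
        # log p1(x)/p0(x) for unit-variance Gaussians
        return mu * x - mu * mu / 2.0

    def log_ratio_offline(x):
        # log of the smoothed ratio of Eq. (33)
        return math.log(1.0 + (nu / N) * (math.exp(log_ratio(x)) - 1.0))

    s1 = [rng.gauss(mu, 1.0) for _ in range(m)]
    s0 = [rng.gauss(0.0, 1.0) for _ in range(m)]
    online = (nu / N) * (sum(map(log_ratio, s1)) / m
                         - sum(map(log_ratio, s0)) / m)
    offline = (nu / N) * (sum(map(log_ratio_offline, s1)) / m
                          - sum(map(log_ratio_offline, s0)) / m)
    return online, offline
```

For these parameters the online expectation equals $\frac{\nu}{N}\left(\mathrm{KL}(p_1 \| p_0) + \mathrm{KL}(p_0 \| p_1)\right) = 0.25 \cdot 9 = 2.25$, while the offline expectation is strictly smaller, in line with Corollary 4.2.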
![Image 3: Refer to caption](https://arxiv.org/html/2010.01388v2/x3.png)

Figure 3: Example of change-point detection using the online and offline algorithms. (Top) A time series with one change-point at the moment $t = 0$. (Bottom) Change-point detection score $\bar{d}(t)$ estimated by the algorithms.
6 Data Sets
-----------

To test the change-point detection algorithms, we use several synthetic and real-world data sets with various numbers of dimensions. Their purpose is to estimate how different methods work under different conditions and with different kinds of change-points. The first synthetic data set is called mean jumps and contains 10 one-dimensional time series, where each observation $x(t)$ is sampled from a normal distribution $x(t) \sim \mathcal{N}(\mu, \sigma)$ with mean $\mu$ and standard deviation $\sigma = 1$. Change-points are generated every 200 timestamps by changing the mean $\mu$ in the following way:

$$\mu_N = \begin{cases} 0, & \text{if } N = 1 \\ \mu_{N-1} + 0.2N, & \text{if } N = 2, \dots, 10, \end{cases} \quad (35)$$

where $N$ is the integer determined by $200(N-1) < t \leq 200N$.
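Generating one such series is a few lines of Python (the function name, seed handling and the explicit change-point list are our own illustration of Eq. (35)):

```python
import random

def mean_jumps_series(seed=0):
    """One 'mean jumps' time series (Eq. 35): 10 segments of 200 points,
    sigma = 1, with the mean shifted by 0.2 * N at each segment boundary."""
    rng = random.Random(seed)
    x, mu = [], 0.0
    for N in range(1, 11):
        if N > 1:
            mu += 0.2 * N          # Eq. (35): mu_N = mu_{N-1} + 0.2 N
        x.extend(rng.gauss(mu, 1.0) for _ in range(200))
    return x

true_change_points = [200 * N for N in range(1, 10)]  # 9 change-points
```

The resulting series has 2000 observations, and the mean of the last segment is $0.2 \cdot (2 + 3 + \dots + 10) = 10.8$.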
430
+ Similarly, variance jumps data set contains 10 one-dimensional time series, where each observation x​(t)x(t) is also sampled from normal distribution x​(t)∼𝒩​(μ,σ)x(t)\sim\mathcal{N}(\mu,\sigma) with mean μ=0\mu=0 and standard deviation σ\sigma. Change-points are generated every 200 timestamps by changing σ\sigma in the following way:
431
+
432
+ σ N={1,if​N=2​k+1 1+0.25​N,if​N=2​k\sigma_{N}=\begin{cases}1,&\mbox{if }N=2k+1\\ 1+0.25N,&\mbox{if }N=2k\end{cases}(36)
433
+
434
+ where N N is an integer that is estimated as 200​(N−1)<t≤200​N 200(N-1)<t\leq 200N.
The last synthetic data set we use in this work is called cov jumps. It contains 10 two-dimensional time series, where each observation $x(t)$ is sampled from a multivariate normal distribution $x(t)\sim\mathcal{N}(\mu,\Sigma)$ with a vector of means $\mu=(0,0)^{T}$ and covariance matrix $\Sigma$. As previously, change-points are generated every 200 timestamps by changing $\Sigma$ in the following way:

$$\Sigma_{N}=\begin{cases}\begin{pmatrix}1&-0.1N\\ -0.1N&1\end{pmatrix},&\mbox{if }N=2k+1\\ \begin{pmatrix}1&0.1N\\ 0.1N&1\end{pmatrix},&\mbox{if }N=2k,\end{cases}\quad(37)$$

where $N$ is the integer defined by $200(N-1)<t\leq 200N$.
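A similar sketch for a cov jumps series, where only the off-diagonal element of $\Sigma$ flips sign and grows; again an illustrative implementation of Eq. (37):

```python
import numpy as np

def cov_jumps_series(n_segments=10, seg_len=200, seed=0):
    """2-D series where only the off-diagonal covariance changes between segments."""
    rng = np.random.default_rng(seed)
    xs = []
    for N in range(1, n_segments + 1):
        rho = 0.1 * N * (1 if N % 2 == 0 else -1)  # +0.1N for even N, -0.1N for odd N
        cov = np.array([[1.0, rho], [rho, 1.0]])
        xs.append(rng.multivariate_normal([0.0, 0.0], cov, seg_len))
    return np.concatenate(xs)

x = cov_jumps_series()  # shape (2000, 2)
```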
We also use two real-world data sets that are publicly available and are taken from the human activity recognition domain. The WISDM[8835065, Dua:2019] data set contains 3-dimensional signals of accelerometer and gyroscope sensors, collected from a smartphone and a smartwatch at a rate of 20 Hz. The signal is collected for different human activities, and transitions between activities are considered as change-points. Each time series has 17 change-points. We use 10 samples of the smartwatch gyroscope sensors for further tests. We also downsample the signals and keep only about 3000 observations per time series.

Similarly, the EMG Physical Action Data Set[Dua:2019] contains EMG data corresponding to 10 different physical activities for 4 persons. Transitions between the activities are considered as change-points. Each sample has 8 dimensions. We downsample the original signals to only about 2000 measurements per time series for the change-point detection tests.
One more interesting data set is called Kepler[kepler2019]. It contains data from the Kepler spacecraft, launched in March 2009, whose mission was to search for transiting exoplanets located within the habitable zones of Sun-like stars. In this work we use one-dimensional Kepler light curves, with Data Conditioning Simple Aperture Photometry (DCSAP) data from 10 stars with exoplanets.

The next group of data sets is based on real samples for classification tasks in machine learning, collected from the astronomical and high energy physics domains.
The first data set is called HTRU2[10.1093/mnras/stw656, Dua:2019] and describes a sample of pulsar candidates collected during the High Time Resolution Universe Survey (South)[doi:10.1111/j.1365-2966.2010.17325.x]. It contains two types of astronomical objects, positive (pulsars) and negative (others), described by 8 features. We create 10 time series with 2000 observations $x(t)$, sampled from the positive or negative class with change-points every 200 timestamps:

$$x(t)=\begin{cases}\text{random negative object},&\mbox{if }N=2k\\ \text{random positive object},&\mbox{if }N=2k+1,\end{cases}\quad(38)$$

where $N$ is the integer defined by $200(N-1)<t\leq 200N$. Changes of the object class are considered as change-points. Then, we scale each component of the time series to zero mean and unit variance. After that, we add white noise generated from the normal distribution $\mathcal{N}(\mu=0,\sigma=2)$. The goal of this transformation is to reduce the difference between the class distributions and make change-point detection more difficult.
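The construction above can be sketched as follows; `pos` and `neg` are hypothetical arrays standing in for the HTRU2 positive and negative objects:

```python
import numpy as np

def class_jump_series(pos, neg, n_obs=2000, seg_len=200, noise_sigma=2.0, seed=0):
    """Alternate segments of class samples, standardize components, add white noise.

    pos/neg: arrays of shape (n_samples, n_features) holding the two classes.
    """
    rng = np.random.default_rng(seed)
    rows = []
    for t in range(n_obs):
        N = t // seg_len + 1               # segment index, as in Eq. (38)
        pool = neg if N % 2 == 0 else pos  # even N -> negative, odd N -> positive
        rows.append(pool[rng.integers(len(pool))])
    x = np.asarray(rows, dtype=float)
    x = (x - x.mean(axis=0)) / x.std(axis=0)          # zero mean, unit variance
    return x + rng.normal(0.0, noise_sigma, x.shape)  # white noise on each component
```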
One more astronomical data set is the MAGIC Gamma Telescope Data Set[Dua:2019], which describes signals registered in a Cherenkov gamma telescope from high energy particles coming from space. There are also two kinds of signals, positive and negative, corresponding to gamma and hadron particles respectively. Each signal is described by 10 features. Similar to the HTRU2 data set, we create 10 time series by sampling observations $x(t)$ as shown in ([38](https://arxiv.org/html/2010.01388#S6.E38 "In 6 Data Sets ‣ Online Neural Networks for Change-Point Detection")) and adding noise generated from $\mathcal{N}(\mu=0,\sigma=5)$ to each component.
SUSY[Baldi_2014, Dua:2019] is a data set from the high energy physics domain. It contains positive (signal) and negative (background) events observed in a particle detector and described by 18 features. We create 10 time series in the same way as for the HTRU2 data set.

One more high energy physics data set is called Higgs[Baldi_2014, Dua:2019] and contains positive (signal) and negative (background) events. Each event is described by 21 features. Since it is quite a difficult data set for change-point detection, we create 10 time series with 4000 observations $x(t)$, sampled from the positive or negative class:

$$x(t)=\begin{cases}\text{random negative object},&\mbox{if }N=2k\\ \text{random positive object},&\mbox{if }N=2k+1,\end{cases}\quad(39)$$

where $N$ is the integer defined by $400(N-1)<t\leq 400N$. Changes of the object class are considered as change-points.
The final data set we use in this work is MNIST[Dua:2019], which contains 1794 samples of hand-written digits. Each digit is described by 64 features. We create 10 time series with 1794 observations $x(t)$ by stacking all randomly shuffled 0 digits, then appending all randomly shuffled 1 digits, and repeating this for all classes. Changes of the digit class are considered as change-points. Then, similarly to the HTRU2 data set, we add white noise generated from the normal distribution $\mathcal{N}(\mu=0,\sigma=5)$.
7 Experiments
-------------

![Image 4: Refer to caption](https://arxiv.org/html/2010.01388v2/x4.png)

Figure 4: Change-point detection score estimated by the algorithms ONNC and ONNR after the time shift $\bar{d}^{\prime}(t)=\bar{d}(t+l+n)$, where the score $\bar{d}(t)$ is shown in Figure 3. Positions of the score peaks are considered as positions of the detected change-points.

![Image 5: Refer to caption](https://arxiv.org/html/2010.01388v2/x5.png)

Figure 5: Example of change-point detection score $\bar{d}^{\prime}(t)$ estimated by ONNC and ONNR algorithms (bottom) for a time series in the mean jumps data set (top).
We compare the proposed methods with 4 well-known methods for change-point detection: Binseg[RePEc:cup:etheor:v:13:y:1997:i:03:p:315-352_00, fryzlewicz2014], Pelt[Killick_2012], Window[TRUONG2020107299] and RuLSIF[LIU201372]. All code and data needed to reproduce our results are available in a repository: https://gitlab.com/lambda-hse/change-point/online-nn-cpd. Several reviews[Aminikhanghahi2017, TRUONG2020107299, burg2020evaluation] show that these methods demonstrate the best quality of change-point detection on various data sets.

Implementations of the Binseg, Pelt and Window algorithms from the ruptures[TRUONG2020107299] package are used in further experiments. The Binseg and Window methods require specifying the number of change-points to find in a time series. The optimal number for each sample is estimated from the range $[1,40]$ using grid search, by maximizing the RI quality metric. The Window algorithm also has a width hyperparameter. To provide good resolution between consecutive change-points, we take width=20 for Kepler, width=200 for Higgs and width=100 for the rest of the data sets described in Section[6](https://arxiv.org/html/2010.01388#S6 "6 Data Sets ‣ Online Neural Networks for Change-Point Detection"). Similarly, the Pelt method has a penalty hyperparameter pen. Its optimal value is found in the range $[0,10]$ using grid search with step $0.5$, by maximizing the RI quality metric. For all these algorithms, we use the rbf cost function as the most universal choice, which works with any kind of change-point.

The regularisation parameter $\lambda$ and the width $\sigma$ of the RBF kernels in the RuLSIF algorithm are also optimised using grid search in the range $[10^{-3},10^{3}]$. For the window size hyperparameter, we take the same values as for the width hyperparameter in the Window algorithm.

For the algorithms proposed in this work, ONNC and ONNR, we use the following hyperparameters. The lag size is $l=20$ for Kepler, $l=200$ for Higgs and $l=100$ for the rest of the data sets. The number of previous observations in ([3](https://arxiv.org/html/2010.01388#S2.E3 "In 2 Change-Point Detection ‣ Online Neural Networks for Change-Point Detection")) is $k=1$. The mini-batch size is $n\in\{1,10\}$, the number of epochs of the neural network optimizer is $n\_epochs\in\{1,10\}$, and the learning rate is $lr\in\{0.1,0.01\}$. The optimal values of these hyperparameters are estimated using grid search by maximizing the RI quality metric. The neural network optimizer is Adam.
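The grid search that maximizes the RI metric can be sketched as below; the `detector` callable and the pairwise Rand index implementation are illustrative stand-ins, not the paper's code:

```python
from itertools import product

def segment_labels(cps, T):
    """Label each timestamp with the index of its segment, given change-point positions."""
    labels, seg, cps = [], 0, set(cps)
    for t in range(T):
        if t in cps:
            seg += 1
        labels.append(seg)
    return labels

def rand_index(true_cps, pred_cps, T):
    """Rand index between the segmentations induced by two sets of change-points."""
    a, b = segment_labels(true_cps, T), segment_labels(pred_cps, T)
    agree = sum((a[i] == a[j]) == (b[i] == b[j])
                for i in range(T) for j in range(i + 1, T))
    return agree / (T * (T - 1) / 2)

def grid_search(series, true_cps, detector, grid):
    """Return the hyperparameter combination maximizing RI.

    `detector(series, **params)` stands in for any of the compared algorithms."""
    best_params, best_score = None, -1.0
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = rand_index(true_cps, detector(series, **params), len(series))
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```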
![Image 6: Refer to caption](https://arxiv.org/html/2010.01388v2/x6.png)

Figure 6: Example of change-point detection score $\bar{d}^{\prime}(t)$ estimated by ONNC and ONNR algorithms (bottom) for a time series in the variance jumps data set (top).

Table 1: Average values of the RI quality metric for all change-point detection algorithms and data sets.

![Image 7: Refer to caption](https://arxiv.org/html/2010.01388v2/x7.png)

Figure 7: Example of change-point detection score $\bar{d}^{\prime}(t)$ estimated by ONNC and ONNR algorithms (bottom) for a time series in the Kepler data set (top).
Binseg, Pelt, Window and RuLSIF are offline algorithms for change-point detection, which means they can process the observations of a time series in any order. This helps to detect change-points without a time delay. Our algorithms are online and process the observations sequentially in time order. This creates a time delay in the change-point detection score $\bar{d}(t)$, as demonstrated in Figure[2](https://arxiv.org/html/2010.01388#S4.F2 "Figure 2 ‣ 4 Proposed Methods ‣ Online Neural Networks for Change-Point Detection"). Assuming that the whole time series is processed first and the quality is measured afterwards, we transform the score $\bar{d}(t)$ into an offline-equivalent form by applying a time shift equal to the sum of the lag size $l$ and the mini-batch size $n$: $\bar{d}^{\prime}(t)=\bar{d}(t+l+n)$, as shown in Figure[4](https://arxiv.org/html/2010.01388#S7.F4 "Figure 4 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection"). Positions of the score peaks are considered as positions of the detected change-points.
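The shift-and-peak-picking step can be sketched as follows; thresholded local-maximum extraction is one simple way to read off peak positions, which the paper does not prescribe:

```python
import numpy as np

def offline_equivalent(d_bar, lag, batch):
    """Offline-equivalent score d'(t) = d(t + l + n): drop the first l + n values."""
    return np.asarray(d_bar)[lag + batch:]

def peak_positions(score, threshold=0.5):
    """Local maxima of the score above a threshold, taken as detected change-points."""
    peaks = []
    for t in range(1, len(score) - 1):
        if score[t] >= threshold and score[t - 1] < score[t] >= score[t + 1]:
            peaks.append(t)
    return peaks
```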
Table 2: Average values of the F1-score quality metric for all change-point detection algorithms and data sets.

Each algorithm is applied to all time series in a data set, and the quality metric values are then averaged over all its samples. The average values of the RI and F1-score quality metrics are presented in Table[1](https://arxiv.org/html/2010.01388#S7.T1 "Table 1 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection") and Table[2](https://arxiv.org/html/2010.01388#S7.T2 "Table 2 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection") respectively. The results show that ONNC and ONNR have similar or better RI values for all data sets, and demonstrate the best F1-score values for all data sets except mean jumps and MNIST, where these algorithms show the same quality as the other methods. Examples of the change-point detection score estimated by the ONNC and ONNR algorithms for several time series are demonstrated in Figures[5](https://arxiv.org/html/2010.01388#S7.F5 "Figure 5 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection"), [6](https://arxiv.org/html/2010.01388#S7.F6 "Figure 6 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection"), [7](https://arxiv.org/html/2010.01388#S7.F7 "Figure 7 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection"), [8](https://arxiv.org/html/2010.01388#S7.F8 "Figure 8 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection") and [9](https://arxiv.org/html/2010.01388#S7.F9 "Figure 9 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection").
![Image 8: Refer to caption](https://arxiv.org/html/2010.01388v2/x8.png)

Figure 8: Example of change-point detection score $\bar{d}^{\prime}(t)$ estimated by ONNC and ONNR algorithms (bottom) for a time series in the WISDM data set (top).

![Image 9: Refer to caption](https://arxiv.org/html/2010.01388v2/x9.png)

Figure 9: Example of change-point detection score $\bar{d}^{\prime}(t)$ estimated by ONNC and ONNR algorithms (bottom) for a time series in the HTRU2 data set (top).
8 Discussion
------------

In this work, two new online algorithms for change-point detection in time series data are introduced. They are based on a sequential comparison of two mini-batches of observations using neural networks, to estimate whether they have the same distribution. Each pair of mini-batches is processed only once, which provides good scalability of the algorithms.

The results in Table[1](https://arxiv.org/html/2010.01388#S7.T1 "Table 1 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection") and Table[2](https://arxiv.org/html/2010.01388#S7.T2 "Table 2 ‣ 7 Experiments ‣ Online Neural Networks for Change-Point Detection") demonstrate that the algorithms are able to detect various kinds of change-points in high-dimensional time series. The ONNC and ONNR methods also demonstrate better detection quality on noisy data sets than the other approaches. Reducing the noise level increases the quality for all algorithms considered here. To explain this, one can consider an RBF kernel for two observations $X(i)$ and $X(j)$ from Eq.([2](https://arxiv.org/html/2010.01388#S2.E2 "In 2 Change-Point Detection ‣ Online Neural Networks for Change-Point Detection")):
$$K(X(i),X(j))=\exp\left(-\frac{d_{ij}^{2}}{2\sigma^{2}}\right)\quad(40)$$

and

$$d_{ij}=\sqrt{(X_{1}(i)-X_{1}(j))^{2}+\ldots+(X_{kd}(i)-X_{kd}(j))^{2}},\quad(41)$$

where $\sigma$ is the kernel width and $d_{ij}$ is the Euclidean distance between the observations. The kernels are used in the cost functions of the Binseg, Pelt, Window and RuLSIF methods. In these equations, all signal components are taken into account equally. Uninformative and noisy components increase the variance of the distances, which reduces the sensitivity of the cost functions and decreases the quality of change-point detection.
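This effect can be illustrated numerically; the segment size, shift magnitude and kernel width below are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, sigma=3.0):
    """RBF kernel K(a, b) = exp(-||a - b||^2 / (2 sigma^2)), as in Eq. (40)."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def kernel_contrast(n_noise, n=200, sigma=3.0):
    """Mean within-segment kernel minus mean across-segment kernel for a mean
    shift of 2 in one informative dimension, padded with n_noise pure-noise
    dimensions shared by both segments."""
    before = rng.normal(0.0, 1.0, (n, 1 + n_noise))
    after = rng.normal(0.0, 1.0, (n, 1 + n_noise))
    after[:, 0] += 2.0  # the change shifts only the first component
    within = np.mean([rbf(a, b, sigma) for a, b in zip(before, rng.permutation(before))])
    across = np.mean([rbf(a, b, sigma) for a, b in zip(before, after)])
    return within - across  # the contrast a kernel cost function relies on

# The contrast shrinks as uninformative dimensions are added.
```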
Table 3: Computational complexity and memory usage of the change-point detection algorithms. $T$ is the number of observations in a time series; $W$ is the window width; $K$ is the number of kernels; $l$ is the lag size.

As discussed previously, the ONNC and ONNR algorithms, described in Algorithm[1](https://arxiv.org/html/2010.01388#alg1 "Algorithm 1 ‣ 4.1 Classification-Based Model ‣ 4 Proposed Methods ‣ Online Neural Networks for Change-Point Detection") and Algorithm[2](https://arxiv.org/html/2010.01388#alg2 "Algorithm 2 ‣ 4.2 Regression-Based Model ‣ 4 Proposed Methods ‣ Online Neural Networks for Change-Point Detection") respectively, process mini-batches of time series observations sequentially. Thus, the computational complexity of these methods is $\mathcal{O}(T)$, where $T$ is the total number of observations in the time series. They also need $\mathcal{O}(l)$ memory to store the last $l$ values of the $d(t)$ score, where $l$ is the lag size between the mini-batches. This makes the ONNC and ONNR algorithms scalable and suitable for change-point detection in large time series.

According to[TRUONG2020107299], the minimal theoretical computational complexities of the Binseg and Pelt algorithms are $\mathcal{O}(T\log T)$ and $\mathcal{O}(T)$ respectively, for cases when the cost function requires $\mathcal{O}(1)$ operations at each step of the algorithm. However, using the cost function based on RBF kernels increases the required number of computations to $\mathcal{O}(T^{3})$ and the memory usage to $\mathcal{O}(T^{2})$, due to the calculation of distances between pairs of observations. This makes them unsuitable for change-point detection in large time series.

Similarly, the Window method needs $\mathcal{O}(W^{2})$ operations at each step to calculate the pairwise distances between observations in windows of width $W$. In the same way, RuLSIF requires $\mathcal{O}(KW)$ computations and memory at each step, where $K$ is the number of kernels used. The computational complexities and memory usage of all the algorithms considered in this paper are presented in Table[3](https://arxiv.org/html/2010.01388#S8.T3 "Table 3 ‣ 8 Discussion ‣ Online Neural Networks for Change-Point Detection"). It demonstrates that the ONNC and ONNR algorithms are more scalable and require fewer computational resources than the other methods.
9 Conclusion
------------

In this work, we present two change-point detection algorithms for time series data based on online learning. It is demonstrated that they outperform other popular algorithms on various synthetic and real-world data sets. The estimated computational complexities and memory usage show that they are faster than other methods, provide better scalability and are well suited for change-point detection in large time series. It is shown theoretically that the ONNC algorithm converges to its optimal solution. The discussion also defines the conditions under which the proposed online learning approach helps to achieve a better solution than the offline one. We derived the exact equations of the optimal solutions for both cases. They demonstrate the superiority of the online learning approach over the offline one for change-point detection in time series.
10 Acknowledgments
------------------

The work was supported by the grant for research centers in the field of AI provided by the Ministry of Economic Development of the Russian Federation in accordance with the agreement 000000C313925P4E0002 and the agreement with HSE University № 139-15-2025-009. The computation for this research was performed using the computational resources of HPC facilities at HSE University [Kostenetskiy_2021].
Appendix A Optimal Predictions
------------------------------

Suppose that $x(i)\sim p(x)$ for $t-N<i\leq t$, and $x(i)\sim q(x)$ for $t-N-l<i\leq t-l$, where $p(x)$ and $q(x)$ are some probability density functions. For a model $f(x,\theta)$ with weights $\theta$, we define the discrete form $I_{N}$ of the binary cross-entropy loss function between the two probability distributions $p(x)$ and $q(x)$, and its continuous form $I$, as
$$I_{N}=-\frac{1}{N}\sum_{i=t-N+1}^{t}L_{i}(\theta)\quad(42)$$

$$I=\int p(x)\log f(x,\theta)dx+\int q(x)\log(1-f(x,\theta))dx\quad(43)$$

The central limit theorem states that $I_{N}$ behaves as $I$ for sufficiently large $N$, with the following variance convergence [caflisch_1998]:

$$Var_{x}[I_{N}]=\mathbb{E}_{x}[(I_{N}-I)^{2}]=O\left(\frac{1}{N}\right)\quad(44)$$

During model fitting, we minimize the loss function value. Setting $\theta^{*}=\arg\max_{\theta}I$ and following the derivation in [goodfellow2014generative], we obtain the optimal model prediction:

$$f(x,\theta^{*})=\frac{p(x)}{p(x)+q(x)}\quad(45)$$
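Equation (45) can be checked empirically for two known densities; the Gaussians below are an illustrative choice, not from the paper. Among pooled samples near a point $x$, the fraction drawn from $p$ approaches $p(x)/(p(x)+q(x))$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: p = N(0, 1) and q = N(1, 1), equal sample sizes.
xp = rng.normal(0.0, 1.0, 200_000)  # samples from p(x)
xq = rng.normal(1.0, 1.0, 200_000)  # samples from q(x)

def frac_from_p(x0, width=0.1):
    """Fraction of pooled samples in a window around x0 that were drawn from p."""
    n_p = np.sum(np.abs(xp - x0) < width)
    n_q = np.sum(np.abs(xq - x0) < width)
    return n_p / (n_p + n_q)

def f_star(x0):
    """Optimal prediction p(x0) / (p(x0) + q(x0)) for these two Gaussians."""
    p = np.exp(-x0 ** 2 / 2)
    q = np.exp(-(x0 - 1.0) ** 2 / 2)
    return p / (p + q)
```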
Appendix B Proofs of Theorems
-----------------------------

###### The proof of Theorem [1](https://arxiv.org/html/2010.01388#Thmtheorem1 "Theorem 1. ‣ 5.1 Convergence Properties ‣ 5 Properties of the Algorithm ‣ Online Neural Networks for Change-Point Detection").

Consider the following optimal value for $\theta$:

$$\theta_{0}=\arg\min_{\theta}\sum_{i=t-N+1}^{t}L_{i}(\theta)\quad(46)$$

Then

$$\begin{split}&\min_{\theta}\sum_{i=t-N+1}^{t}L_{i}(\theta)-\min_{\theta}\sum_{i=t-N+1}^{\nu}L_{i}(\theta)-\min_{\theta}\sum_{i=\nu+1}^{t}L_{i}(\theta)=\\ &\left(\sum_{i=t-N+1}^{\nu}L_{i}(\theta_{0})-\min_{\theta}\sum_{i=t-N+1}^{\nu}L_{i}(\theta)\right)+\left(\sum_{i=\nu+1}^{t}L_{i}(\theta_{0})-\min_{\theta}\sum_{i=\nu+1}^{t}L_{i}(\theta)\right)\geq 0\end{split}\quad(47)$$

Above, we use the following property of the minimum:

$$\sum_{i}L_{i}(\theta)-\min_{\theta}\sum_{i}L_{i}(\theta)\geq 0,\quad\forall\theta\quad(48)$$
###### The proof of Theorem [2](https://arxiv.org/html/2010.01388#Thmtheorem2 "Theorem 2. ‣ 5.1 Convergence Properties ‣ 5 Properties of the Algorithm ‣ Online Neural Networks for Change-Point Detection").

According to [10.5555/3041838.3041955], we get the following upper bound on the regret of the OSD algorithm:

$$R(N)\leq\frac{\|F\|^{2}}{2\eta}+\frac{\|\nabla L\|^{2}}{2}\eta N\quad(49)$$

Similarly,

$$R_{1}=\sum_{i=\nu+1}^{t}L_{i}(\theta_{i})-\min_{\theta}\sum_{i=\nu+1}^{t}L_{i}(\theta)\leq\frac{\|F\|^{2}}{2\eta}+\frac{\|\nabla L\|^{2}}{2}\eta(t-\nu)\quad(50)$$

$$R_{2}=\sum_{i=t-N+1}^{\nu}L_{i}(\theta_{i})-\min_{\theta}\sum_{i=t-N+1}^{\nu}L_{i}(\theta)\leq\frac{\|F\|^{2}}{2\eta}+\frac{\|\nabla L\|^{2}}{2}\eta(\nu-t+N)\quad(51)$$

Then

$$R(N)=R_{1}+R_{2}-C(N,\nu)\leq\frac{\|F\|^{2}}{\eta}+\frac{\|\nabla L\|^{2}}{2}\eta N-C(N,\nu)\quad(52)$$

As a result,

$$R(N)\leq\frac{\|F\|^{2}}{\eta}+\frac{\|\nabla L\|^{2}}{2}\eta N-C(N,\nu)\quad(53)$$

Theorem [1](https://arxiv.org/html/2010.01388#Thmtheorem1 "Theorem 1. ‣ 5.1 Convergence Properties ‣ 5 Properties of the Algorithm ‣ Online Neural Networks for Change-Point Detection") shows that $C(N,\nu)\geq 0$.
###### The proof of Theorem [3](https://arxiv.org/html/2010.01388#Thmtheorem3 "Theorem 3. ‣ 5.2 Offline and Online Dissimilarity Scores ‣ 5 Properties of the Algorithm ‣ Online Neural Networks for Change-Point Detection").

Let us expand the definition of the dissimilarity score into two terms:

$$\begin{split}\bar{d}(t)&=\frac{1}{N}\sum_{i=t-N+1}^{t-\nu}\left(\log\frac{1-f(x(i-N),\theta)}{f(x(i-N),\theta)}+\log\frac{f(x(i),\theta)}{1-f(x(i),\theta)}\right)+\\ &+\frac{1}{N}\sum_{i=t-\nu+1}^{t}\left(\log\frac{1-f(x(i-N),\theta)}{f(x(i-N),\theta)}+\log\frac{f(x(i),\theta)}{1-f(x(i),\theta)}\right).\end{split}\quad(54)$$

According to Equation [45](https://arxiv.org/html/2010.01388#A1.E45 "In Appendix A Optimal Predictions ‣ Online Neural Networks for Change-Point Detection"), the optimal solution for $t-N<i\leq t-\nu$ corresponds to $f(x,\theta)=0.5$. Similarly, the optimal predictions for $t-\nu<i\leq t$ are:

$$f(x(i),\theta)=\frac{p_{1}(x(i))}{p_{1}(x(i))+p_{0}(x(i))}.\quad(55)$$

Then

$$\bar{d}(t)=\frac{1}{N}\sum_{i=t-\nu+1}^{t}\left(\log\frac{p_{0}(x(i-N))}{p_{1}(x(i-N))}+\log\frac{p_{1}(x(i))}{p_{0}(x(i))}\right).\quad(56)$$

Taking the expected value of the expression above, we get the final result:

$$\mathbb{E}[\bar{d}(t)]=\frac{\nu}{N}\left(\mathbb{E}_{x\sim p_{1}(x)}\left[\log\frac{p_{1}(x)}{p_{0}(x)}\right]-\mathbb{E}_{x\sim p_{0}(x)}\left[\log\frac{p_{1}(x)}{p_{0}(x)}\right]\right).\quad(57)$$
###### The proof of Theorem [4](https://arxiv.org/html/2010.01388#Thmtheorem4 "Theorem 4. ‣ 5.2 Offline and Online Dissimilarity Scores ‣ 5 Properties of the Algorithm ‣ Online Neural Networks for Change-Point Detection").

Let us expand the definition of the dissimilarity score into two terms:

$$\begin{split}\bar{d}(t)&=\frac{1}{N}\sum_{i=t-N+1}^{t-\nu}\left(\log\frac{1-f(x(i-N),\theta)}{f(x(i-N),\theta)}+\log\frac{f(x(i),\theta)}{1-f(x(i),\theta)}\right)+\\ &+\frac{1}{N}\sum_{i=t-\nu+1}^{t}\left(\log\frac{1-f(x(i-N),\theta)}{f(x(i-N),\theta)}+\log\frac{f(x(i),\theta)}{1-f(x(i),\theta)}\right).\end{split}\quad(58)$$

According to Equation [45](https://arxiv.org/html/2010.01388#A1.E45 "In Appendix A Optimal Predictions ‣ Online Neural Networks for Change-Point Detection"), the optimal prediction of the model is:

$$f(x(i),\theta)=\frac{\tilde{p}_{1}(x(i))}{\tilde{p}_{1}(x(i))+\tilde{p}_{0}(x(i))},\quad(59)$$

where $\tilde{p}_{0}(x(i))=p_{0}(x(i))$ and

$$\tilde{p}_{1}(x(i))=\frac{\nu p_{1}(x(i))+(N-\nu)p_{0}(x(i))}{N}\quad(60)$$

Then

$$\bar{d}(t)=\frac{1}{N}\sum_{i=t-\nu+1}^{t}\left(\log\frac{\tilde{p}_{0}(x(i-N))}{\tilde{p}_{1}(x(i-N))}+\log\frac{\tilde{p}_{1}(x(i))}{\tilde{p}_{0}(x(i))}\right).\quad(61)$$

Taking the expected value of the expression above, we get the final result:

$$\mathbb{E}[\bar{d}(t)]=\frac{\nu}{N}\left(\mathbb{E}_{x\sim p_{1}(x)}\left[\log\frac{\tilde{p}_{1}(x)}{\tilde{p}_{0}(x)}\right]-\mathbb{E}_{x\sim p_{0}(x)}\left[\log\frac{\tilde{p}_{1}(x)}{\tilde{p}_{0}(x)}\right]\right).\quad(62)$$
References
----------