Monketoo committed on
Commit f6876fa · verified · 1 Parent(s): 418f1f2

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. samples/texts/1085779/page_1.md +31 -0
  2. samples/texts/1085779/page_10.md +42 -0
  3. samples/texts/1085779/page_11.md +32 -0
  4. samples/texts/1085779/page_12.md +33 -0
  5. samples/texts/1085779/page_13.md +38 -0
  6. samples/texts/1085779/page_14.md +30 -0
  7. samples/texts/1085779/page_15.md +39 -0
  8. samples/texts/1085779/page_16.md +31 -0
  9. samples/texts/1085779/page_17.md +17 -0
  10. samples/texts/1085779/page_2.md +22 -0
  11. samples/texts/1085779/page_3.md +35 -0
  12. samples/texts/1085779/page_4.md +25 -0
  13. samples/texts/1085779/page_5.md +34 -0
  14. samples/texts/1085779/page_6.md +52 -0
  15. samples/texts/1085779/page_7.md +43 -0
  16. samples/texts/1085779/page_8.md +34 -0
  17. samples/texts/1085779/page_9.md +28 -0
  18. samples/texts/1930866/page_1.md +25 -0
  19. samples/texts/1930866/page_10.md +59 -0
  20. samples/texts/1930866/page_11.md +21 -0
  21. samples/texts/1930866/page_2.md +44 -0
  22. samples/texts/1930866/page_3.md +27 -0
  23. samples/texts/1930866/page_4.md +31 -0
  24. samples/texts/1930866/page_5.md +35 -0
  25. samples/texts/1930866/page_6.md +45 -0
  26. samples/texts/1930866/page_7.md +33 -0
  27. samples/texts/1930866/page_8.md +47 -0
  28. samples/texts/1930866/page_9.md +44 -0
  29. samples/texts/2058102/page_1.md +35 -0
  30. samples/texts/2058102/page_2.md +35 -0
  31. samples/texts/2058102/page_3.md +35 -0
  32. samples/texts/2058102/page_4.md +46 -0
  33. samples/texts/2058102/page_5.md +41 -0
  34. samples/texts/2058102/page_6.md +44 -0
  35. samples/texts/2298363/page_1.md +33 -0
  36. samples/texts/2298363/page_10.md +15 -0
  37. samples/texts/2298363/page_2.md +45 -0
  38. samples/texts/2298363/page_3.md +51 -0
  39. samples/texts/2298363/page_4.md +69 -0
  40. samples/texts/2298363/page_5.md +39 -0
  41. samples/texts/2298363/page_6.md +29 -0
  42. samples/texts/2298363/page_7.md +54 -0
  43. samples/texts/2298363/page_8.md +35 -0
  44. samples/texts/2298363/page_9.md +35 -0
  45. samples/texts/348597/page_1.md +8 -0
  46. samples/texts/348597/page_10.md +75 -0
  47. samples/texts/348597/page_100.md +31 -0
  48. samples/texts/348597/page_101.md +37 -0
  49. samples/texts/348597/page_102.md +26 -0
  50. samples/texts/348597/page_103.md +36 -0
samples/texts/1085779/page_1.md ADDED
The Hyder Series

by Syed Shahabudeen

July 2021

**Abstract**

The Hyder Series is a generalized version of a special type of multiple infinite series. In this paper, we will look at some main aspects of this series in detail.

# 1 Introduction

The Hyder Series is a generalized form of a special type of infinite series. It is defined as

$$
\mathcal{H}^q(\alpha_1, \alpha_2, \alpha_3, \dots, \alpha_k; p_1, p_2, p_3, \dots, p_k; \beta) = \sum_{m_1, m_2, m_3, \dots, m_k = 0}^{\infty} \frac{\prod_{1 \le i \le k} \alpha_i^{m_i}}{\left( \sum_{n=1}^{k} p_n m_n + \beta \right)^q}
$$

where $\sum_{m_1,m_2,m_3,\dots,m_k=0}^{\infty} = \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} \sum_{m_3=0}^{\infty} \dots \sum_{m_k=0}^{\infty}$

In this paper we will look at some special values, their respective proofs, and the relation of the Hyder series to the hypergeometric series.
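The defining multiple sum can be sanity-checked numerically by brute-force truncation. A minimal Python sketch, assuming $|\alpha_i| < 1$ so the truncated sum converges (the function name `hyder` and the truncation depth `N` are illustrative, not from the paper):

```python
import math
from itertools import product

def hyder(alphas, ps, beta, q=1, N=60):
    """Truncated evaluation of H^q(alpha_1..alpha_k; p_1..p_k; beta):
    every index m_i runs over 0..N-1 (assumes |alpha_i| < 1)."""
    total = 0.0
    for m in product(range(N), repeat=len(alphas)):
        num = math.prod(a**mi for a, mi in zip(alphas, m))
        total += num / (sum(p * mi for p, mi in zip(ps, m)) + beta) ** q
    return total
```

For a single index ($k = 1$) this reduces to $\sum_m \alpha^m/(pm+\beta)^q$; for example `hyder([0.5], [1], 1.0)` approaches $2\ln 2$.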
**1.1 Notations**

The *q* in the Hyder notation stands for the power order of the series. $p_1, p_2, \dots, p_k$ are the coefficients of $m_1, m_2, \dots, m_k$. If a number is repeated *n* times in the first two slots
samples/texts/1085779/page_10.md ADDED
ratio and exponential constant) therefore

$$
\mathcal{H}\left(\frac{1}{\pi}, \frac{1}{\phi}, \frac{1}{e}; 2_{r(3)}; 1\right) = \pi\phi e \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{\pi}}\right)}{\sqrt{\pi}(\pi - \phi)(\pi - e)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{\phi}}\right)}{\sqrt{\phi}(\phi - \pi)(\phi - e)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{e}}\right)}{\sqrt{e}(e - \pi)(e - \phi)} \right)
$$

**Theorem 3.**

$$
\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) = \frac{1}{\eta} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p} + 1; a\right); \quad a \in (0, 1), m \in \mathbb{N}
$$

where ${}_2F_1$ is the hypergeometric series [3].

*Proof.*

$$
\begin{align*}
\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) &= \sum_{n_1,n_2,n_3,\ldots,n_m \ge 0} \frac{a^{n_1+n_2+\cdots+n_m}}{(pn_1+pn_2+\cdots+pn_m+\eta)} \\
&= \frac{1}{a^{\frac{\eta-1}{p}}} \int_0^1 \sum_{n_1,n_2,n_3,\ldots,n_m \ge 0} (ax^p)^{n_1+n_2+n_3+\cdots+n_m+\frac{\eta-1}{p}} dx \\
&= \int_0^1 \frac{x^{(\eta-1)}}{(1-ax^p)^m} dx \tag*{\text{(Let } t=x^p\text{)}} \\
&= \frac{1}{p} \int_0^1 \frac{t^{\frac{\eta}{p}-1}}{(1-at)^m} dt
\end{align*}
$$

Since

$$
\frac{\Gamma(\beta)\Gamma(\gamma-\beta)}{\Gamma(\gamma)} {}_2F_1(\alpha, \beta; \gamma; z) = \int_0^1 t^{\beta-1}(1-t)^{\gamma-\beta-1}(1-zt)^{-\alpha}dt \quad (\text{Euler Integral})
$$

Therefore

$$
\begin{gather*}
\frac{1}{p} \int_0^1 \frac{t^{\frac{\eta}{p}-1}}{(1-at)^m} dt = \frac{1}{p} \frac{\Gamma\left(\frac{\eta}{p}\right) \Gamma(1)}{\Gamma\left(\frac{\eta}{p}+1\right)} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p}+1; a\right) \\
\Rightarrow \\
\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) = \frac{1}{\eta} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p}+1; a\right) \tag{9}
\end{gather*}
$$

$\square$
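Theorem 3 can be spot-checked numerically; a stdlib-only Python sketch (helper names are illustrative). The left side collapses the $m$-fold sum over the total index $n$, since $\binom{n+m-1}{m-1}$ tuples share each total, and ${}_2F_1$ is evaluated directly from its defining Gauss series, valid here since $a \in (0,1)$:

```python
import math

def hyder_rep(a, p, eta, m, N=500):
    # H(a_{r(m)}; p_{r(m)}; eta): collapse the m-fold sum over n = n_1+...+n_m;
    # C(n+m-1, m-1) tuples share each total n.
    return sum(math.comb(n + m - 1, m - 1) * a**n / (p * n + eta)
               for n in range(N))

def hyp2f1(a, b, c, z, N=500):
    # Gauss series 2F1(a,b;c;z) = sum_k (a)_k (b)_k / ((c)_k k!) z^k, |z| < 1
    term, total = 1.0, 1.0
    for k in range(N):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        total += term
    return total

a, p, eta, m = 0.5, 2.0, 3.0, 4
lhs = hyder_rep(a, p, eta, m)
rhs = hyp2f1(m, eta / p, eta / p + 1, a) / eta
```

Both sides agree to high precision for e.g. $a=\tfrac12$, $p=2$, $\eta=3$, $m=4$.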
samples/texts/1085779/page_11.md ADDED
From the above theorem it is clear how the Hyder series is related to the ${}_2F_1$ hypergeometric series. Now we will look at some special cases.

**Corollary 3.1.**

$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = 2^{m-1} \left(\psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right)\right); \quad m \in \mathbb{N} $$

*Proof.*

$$
\begin{align*}
\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) &= \frac{1}{m} {}_2F_1\left(m, m; m+1; \frac{1}{2}\right) && \text{(from Theorem 3)} \\
&= \frac{2^m}{m} {}_2F_1\left(m, 1; m+1; -1\right) && \text{(Apply Pfaff Transformation)} \\
&= 2^m \int_0^1 \frac{t^{m-1}}{1+t} dt
\end{align*}
$$

The following integral can be expressed in terms of the digamma function [5]; a beautiful proof of it can also be found in the book *(Almost) Impossible Integrals, Sums, and Series* [1] at page 67. Therefore

$$ \int_0^1 \frac{t^{m-1}}{1+t} dt = \frac{1}{2} \left( \psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right) \right) $$

Hence

$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = 2^{m-1} \left( \psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right) \right) \quad (10) $$

By making use of the series definition of the digamma function, we can write the digamma relation in Eq (10) in terms of the Lerch transcendent [4] notation, i.e.

$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = \Phi(-1, 1, m) $$

where $\Phi(-1, 1, m) = \sum_{k=0}^{\infty} \frac{(-1)^k}{m+k}$
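The Lerch form of the corollary can be verified against the integral $\int_0^1 t^{m-1}/(1+t)\,dt$ without any special-function library; a stdlib-only sketch (midpoint rule and a paired alternating sum; the truncation depths are illustrative):

```python
def phi_lerch(m, N=1_000_000):
    # Phi(-1, 1, m) = sum_{k>=0} (-1)^k/(m+k); pairing consecutive terms makes
    # the truncated alternating series converge like 1/N
    return sum(1 / (m + 2*j) - 1 / (m + 2*j + 1) for j in range(N))

def integral(m, N=200_000):
    # composite midpoint rule for the integral of t^(m-1)/(1+t) over [0, 1]
    h = 1.0 / N
    return h * sum(((j + 0.5) * h) ** (m - 1) / (1 + (j + 0.5) * h)
                   for j in range(N))
```

For $m = 4$ both evaluate to $1 - \tfrac12 + \tfrac13 - \ln 2 \approx 0.1402$, matching $\tfrac12(\psi(\tfrac52)-\psi(2))$.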
samples/texts/1085779/page_12.md ADDED
**Example 3.** Let us plug the value $m = 4$ into Eq (10). As per the definition it equals

$$
\begin{aligned}
\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(4)} ; 1_{r(4)} ; 4\right) &= \sum_{x=0}^{\infty} \sum_{y=0}^{\infty} \sum_{z=0}^{\infty} \sum_{l=0}^{\infty} \frac{1}{2^{x+y+z+l}(x+y+z+l+4)} && \text{(Apply result from Eq (10))} \\
&= 2^3 \left(\psi\left(\frac{5}{2}\right) - \psi(2)\right)
\end{aligned}
$$

Since $\psi(\frac{5}{2}) = \frac{8}{3} - \gamma - \ln(4)$ and $\psi(2) = 1 - \gamma$ (where $\gamma$ is the Euler–Mascheroni [2] constant),

therefore

$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(4)}; 1_{r(4)}; 4\right) = \frac{40}{3} - 16 \ln(2) $$
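Example 3 can be confirmed numerically: collapsing the quadruple sum over the total $n = x+y+z+l$ (each total is hit by $\binom{n+3}{3}$ tuples) gives a fast single sum. A stdlib-only check:

```python
import math

# Collapse the 4-fold sum over the total n = x + y + z + l: there are
# C(n+3, 3) index tuples for each n, so the quadruple series becomes one sum.
s = sum(math.comb(n + 3, 3) * 0.5**n / (n + 4) for n in range(400))
closed = 40/3 - 16 * math.log(2)
```

Both evaluate to roughly $2.243$.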
## 1.3 Hyder Series of Higher Order

The Hyder series that we encountered at the beginning of this paper were all of order 1, i.e., $q = 1$. In this section we will look at some special cases of the Hyder series of higher order.

**Lemma 2.** For $q, \beta \in \mathbb{N}$

$$
\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = a^{\beta+2} (-1)^q (q!) \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right)
$$

where $\mathrm{Li}_s(z)$ is the polylogarithm function [6].

*Proof.*

$$
\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = a \int_0^1 \frac{\frac{x}{a}}{\left(1-\frac{x}{a}\right)^2} x^\beta \log^q(x) dx
$$
samples/texts/1085779/page_13.md ADDED
The above mentioned integral can be evaluated by using the series

$$ \sum_{k=0}^{\infty} kx^k = \frac{x}{(1-x)^2} $$

In our case $x$ equals $\frac{x}{a}$. Therefore we can write the integral as

$$ a \int_0^1 \frac{\frac{x}{a}}{(1-\frac{x}{a})^2} x^\beta \log^q(x) dx = a \sum_{k=0}^\infty \frac{k}{a^k} \int_0^1 x^{\beta+k} \log^q(x) dx $$

To evaluate the integral we'll make use of a well-known result

$$ \int_0^1 x^m \log^n(x) dx = \frac{(-1)^n n!}{(m+1)^{n+1}} \quad (11) $$

therefore

$$
\begin{aligned}
a \sum_{k=0}^{\infty} \frac{k}{a^k} \int_0^1 x^{\beta+k} \log^q(x) dx &= a(-1)^q (q!) \sum_{k=0}^{\infty} \frac{k}{a^k (\beta + k + 1)^{q+1}} \\
&= a(-1)^q (q!) \left( \sum_{k=0}^{\infty} \frac{1}{a^k (\beta + k + 1)^q} - (\beta + 1) \sum_{k=0}^{\infty} \frac{1}{a^k (\beta + k + 1)^{q+1}} \right) \\
&= a(-1)^q (q!) \left( \Phi\left(\frac{1}{a}, q, \beta+1\right) - (\beta+1)\Phi\left(\frac{1}{a}, q+1, \beta+1\right) \right)
\end{aligned}
$$

On writing the Lerch transcendent in terms of the polylogarithm function we get

$$ \Phi\left(\frac{1}{a}, q, \beta+1\right) = a^{\beta+1} \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) $$

and

$$ \Phi\left(\frac{1}{a}, q+1, \beta+1\right) = a^{\beta+1} \left( \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) - \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} \right) $$

Finally, upon substitution, we get the result

$$
\begin{aligned}
\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = {}& a^{\beta+2} (-1)^q (q!) \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} \right. \\
& \left. - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right)
\end{aligned}
$$

$\square$
samples/texts/1085779/page_14.md ADDED
**Theorem 4.**

$$
\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)}; \beta + 2 \right) = a^{\beta+2} \left( \mathrm{Li}_q \left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1} \left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right)
$$

*Proof.* As per the definition of the Hyder series

$$
\mathcal{H}^{q+1} \left( \left( \frac{1}{a} \right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{a^{k_1+k_2}(k_1+k_2+\beta+2)^{q+1}}
$$

To evaluate the series, we will make use of Eq (11); in this case $m = k_1 + k_2 + \beta + 1$,

therefore

$$
\begin{align*}
\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) &= \frac{(-1)^q}{q!} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{a^{k_1+k_2}} \int_0^1 x^{k_1+k_2+\beta+1} \log^q(x) dx \\
&= \frac{(-1)^q}{q!} \int_0^1 x^{\beta+1} \log^q(x) \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{x^{k_1+k_2}}{a^{k_1+k_2}} dx \\
&= \frac{(-1)^q}{q!} \underbrace{\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx}_{A}
\end{align*}
$$

The above mentioned integral *A* is the same integral that we proved in **Lemma 2**. Therefore the final result becomes

$$
\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) = a^{\beta+2} \left( \operatorname{Li}_q \left(\frac{1}{a}\right) - (\beta+1) \operatorname{Li}_{q+1} \left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right)
$$

$\square$
samples/texts/1085779/page_15.md ADDED
**Example 4.** For $q = 2, \beta = 2$ and $a = 2$ in **Theorem 4**, from the definition we have

$$
\begin{aligned}
\mathcal{H}^3 \left( \left(\frac{1}{2}\right)_{r(2)} ; 1_{r(2)} ; 4 \right) &= \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{2^{k_1+k_2}(k_1+k_2+4)^3} \\
&= 2^4 \left( \mathrm{Li}_2\left(\frac{1}{2}\right) - 3 \mathrm{Li}_3\left(\frac{1}{2}\right) + 3 \sum_{k=1}^{2} \frac{1}{2^k k^3} - \sum_{k=1}^{2} \frac{1}{2^k k^2} \right)
\end{aligned}
$$

here

$$ \mathrm{Li}_2\left(\frac{1}{2}\right) = \frac{\pi^2}{12} - \frac{\log^2(2)}{2} $$

and

$$ \mathrm{Li}_3\left(\frac{1}{2}\right) = \frac{\log^3(2)}{6} + \frac{7\zeta(3)}{8} - \frac{\pi^2 \log(2)}{12} $$

therefore

$$ \mathcal{H}^3 \left( \left(\frac{1}{2}\right)_{r(2)} ; 1_{r(2)} ; 4 \right) = \frac{4\pi^2}{3} + 4\pi^2 \log(2) + \frac{33}{2} - 8\log^3(2) - 8\log^2(2) - 42\zeta(3) $$

**Corollary 4.1.** The following equation holds true for the case when $q \ge 2$

$$ \mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) = \zeta(q) - \zeta(q+1) \quad (12) $$

where $\zeta(q)$ is the zeta function [7].

*Proof.*

$$
\begin{aligned}
\mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) &= \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{(k_1 + k_2 + 2)^{q+1}} && (\text{Apply Theorem 4}) \\
&= \mathrm{Li}_q(1) - \mathrm{Li}_{q+1}(1)
\end{aligned}
$$

Since $\mathrm{Li}_q(1) = \zeta(q)$, therefore

$$ \mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) = \zeta(q) - \zeta(q+1) $$

$\square$
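Eq (12) is easy to confirm numerically via the index shift $m = n+2$; a stdlib-only sketch with $q = 3$ (truncation depths illustrative):

```python
# Shift m = n + 2: sum_{n>=0} (n+1)/(n+2)^(q+1) = sum_{m>=2} (m-1)/m^(q+1)
#                = zeta(q) - zeta(q+1), as claimed in Eq (12).
q = 3
N = 20000
series = sum((n + 1) / (n + 2) ** (q + 1) for n in range(N))
zeta_q = sum(1.0 / k**q for k in range(1, N))
zeta_q1 = sum(1.0 / k**(q + 1) for k in range(1, N))
```

With $q = 3$ both quantities come out near $\zeta(3) - \zeta(4) \approx 0.1197$.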
samples/texts/1085779/page_16.md ADDED
**Example 5.** Here are some examples for the above case

$$
\begin{aligned}
\mathcal{H}^3 (1_{r(2)}; 1_{r(2)}; 2) &= \zeta(2) - \zeta(3) \\
&= \frac{\pi^2}{6} - \zeta(3)
\end{aligned}
$$

$$
\begin{aligned}
\mathcal{H}^4 (1_{r(2)}; 1_{r(2)}; 2) &= \zeta(3) - \zeta(4) \\
&= \zeta(3) - \frac{\pi^4}{90}
\end{aligned}
$$

## 2 Conclusion

This paper was just an introduction to the Hyder Series. We saw some important results and some special cases of the Hyder series of higher order. This series is named in honour of my late grandfather Syed Hyder, who was a chief executive engineer in the water department of the state of Tamil Nadu. He enjoyed solving mathematical problems in his leisure time and was a man of wit and humour. I hope this series has many more interesting results waiting to be discovered, and unique relations to some special functions.

## References

[1] Cornel Ioan Vălean. (Almost) impossible integrals, sums, and series. Springer, 2019.

[2] Eric W Weisstein. Euler–Mascheroni constant. https://mathworld.wolfram.com/, 2002.

[3] Eric W Weisstein. Hypergeometric function. https://mathworld.wolfram.com/, 2002.

[4] Eric W Weisstein. Lerch transcendent. https://mathworld.wolfram.com/, 2002.

[5] Eric W Weisstein. Polygamma function. https://mathworld.wolfram.com/, 2002.
samples/texts/1085779/page_17.md ADDED
[6] Eric W Weisstein. Polylogarithm. https://mathworld.wolfram.com/, 2002.

[7] Eric W Weisstein. Riemann zeta function. https://mathworld.wolfram.com/, 2002.

[8] Eric W Weisstein. Rising factorial. https://mathworld.wolfram.com/, 2003.

[9] Wikipedia contributors. General Leibniz rule, 12 2020.

*Romanian Mathematical Magazine*

Web: http://www.ssmrmh.ro

The Author: This article is published with open access.

Warriorshahab@gmail.com
Kmea Engineering College
Ernakulam, Kerala-India
samples/texts/1085779/page_2.md ADDED
of the Hyder Series, then it can be denoted as $\mathcal{H}^q(\alpha_{r(n)}; p_{r(n)}; \beta)$, where $r(n)$ stands for "repeated $n$ times". As per the definition it equals

$$ \mathcal{H}^q(\alpha_{r(n)}; p_{r(n)}; \beta) = \sum_{m_1, m_2, m_3, \dots, m_n = 0}^{\infty} \frac{\alpha^{m_1+m_2+m_3+\dots+m_n}}{\left( p \sum_{k=1}^{n} m_k + \beta \right)^q} $$

It has to be noted that the number of repeated sums is equal to the number of terms written in either of the first two slots of the Hyder notation.

## 1.2 Some Examples

These are some examples for the case when $q = 1$:

$$ \mathcal{H}\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) = \frac{\sqrt{3}}{2\sqrt{2}} \tanh^{-1}\left(\sqrt{\frac{2}{3}}\right) + \frac{3}{2} \quad (1) $$

*Proof.*

$$
\begin{align*}
\mathcal{H}\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) &= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{1}{\left(\frac{3}{2}\right)^{m+n} (2m + 2n + 1)} \\
&= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \int_0^1 \frac{x^{2m+2n}}{\left(\frac{3}{2}\right)^{m+n}} dx \\
&= \int_0^1 \left( \sum_{m=0}^{\infty} \left(\frac{2x^2}{3}\right)^m \sum_{n=0}^{\infty} \left(\frac{2x^2}{3}\right)^n \right) dx \\
&= \frac{3^2}{2^2} \underbrace{\int_0^1 \frac{1}{\left(\frac{3}{2}-x^2\right)^2} dx}_{I\left(\frac{3}{2}\right)}
\end{align*}
$$
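Eq (1) can be confirmed numerically: since both $\alpha$'s are equal, the double sum collapses over the total $N = m + n$, with $N + 1$ pairs per total. A stdlib-only check:

```python
import math

# Collapse the double sum over N = m + n (N + 1 pairs share each total N)
s = sum((N + 1) * (2/3) ** N / (2 * N + 1) for N in range(500))
closed = math.sqrt(3) / (2 * math.sqrt(2)) * math.atanh(math.sqrt(2/3)) + 1.5
```

Both evaluate to roughly $2.2019$.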
samples/texts/1085779/page_3.md ADDED
Here

$$
\begin{align*}
I(a) &= \int_0^1 \frac{dx}{(a - x^2)^2} \\
&= -\frac{\partial}{\partial a} \left( \frac{\tanh^{-1}\left(\frac{x}{\sqrt{a}}\right)}{\sqrt{a}} \right) \Bigg|_{x=0}^1 \\
&= \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{2a\sqrt{a}} + \frac{1}{2a^2\left(1 - \frac{1}{a}\right)}
\end{align*}
$$

$\therefore I\left(\frac{3}{2}\right) = \frac{\sqrt{2} \tanh^{-1}\left(\frac{\sqrt{2}}{\sqrt{3}}\right)}{3\sqrt{3}} + \frac{2}{3}$

So

$$
\begin{align*}
\mathcal{H}\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) &= \frac{3^2}{2^2} \left( \frac{\sqrt{2} \tanh^{-1}\left(\frac{\sqrt{2}}{\sqrt{3}}\right)}{3\sqrt{3}} + \frac{2}{3} \right) \\
&= \frac{\sqrt{3}}{2\sqrt{2}} \tanh^{-1}\left(\sqrt{\frac{2}{3}}\right) + \frac{3}{2}
\end{align*}
$$

$$
\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(3)}; 2_{r(3)}; 1\right) = \frac{3\tanh^{-1}\left(\frac{1}{\sqrt{2}}\right)}{4\sqrt{2}} + \frac{7}{4} \quad (2)
$$

*Proof.*

$$
\begin{align*}
\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(3)}; 2_{r(3)}; 1\right) &= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \frac{1}{2^{m+n+p}(2m+2n+2p+1)} \\
&= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \int_{0}^{1} \left(\frac{x^2}{2}\right)^{m+n+p} dx \\
&= 2^3 \int_{0}^{1} \frac{dx}{(2-x^2)^3} \\
&= \frac{3 \tanh^{-1}\left(\frac{1}{\sqrt{2}}\right)}{4\sqrt{2}} + \frac{7}{4}
\end{align*}
$$
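Eq (2) admits the same numerical check, now with $\binom{N+2}{2}$ triples per total $N = m + n + p$:

```python
import math

# Collapse the triple sum over N = m + n + p: C(N+2, 2) triples per total N
s = sum(math.comb(N + 2, 2) * 0.5**N / (2 * N + 1) for N in range(400))
closed = 3 * math.atanh(1 / math.sqrt(2)) / (4 * math.sqrt(2)) + 7/4
```

Both evaluate to roughly $2.2174$.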
samples/texts/1085779/page_4.md ADDED
**Theorem 1.** This theorem is a generalised version of the above two examples. It holds true for $\alpha = \frac{1}{a}, p = 2$ and $q = 1$:

$$ \mathcal{H}\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)}; 1\right) = \frac{1}{2} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!\,\sqrt{a}}{2^{2m}(m!)^2} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) $$

where

$$ \Omega(a, i) = \frac{1}{i4^{m-i}(1-\frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k, \quad a > 1, m \in \mathbb{N} $$

Before proceeding to the proof of Theorem 1, we have to prove an important lemma.

**Lemma 1.**

$$ \frac{\partial^i}{\partial a^i} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = \frac{x(-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{x^2}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k ; \quad i \in \mathbb{N} \quad (3) $$

*Proof.* To find the $i$-th derivative of the given expression we'll make use of the Leibniz differentiation [9] formula, which states that

$$ (fg)^{(i)} = \sum_{k=0}^{i} \binom{i}{k} f^{(i-k)} g^{(k)} \quad (4) $$

where $f$ and $g$ are $n$ times differentiable functions. On differentiating $\tanh^{-1}\left(\frac{x}{\sqrt{a}}\right)$ with respect to $a$ we get

$$ \frac{\partial}{\partial a} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = -\frac{x}{2\sqrt{a}(a-x^2)} $$

Let us take $f = \frac{-x}{2(a-x^2)}$ and $g = \frac{1}{\sqrt{a}}$;

therefore the $n$-th derivative of each function with respect to $a$ is
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ $$f^{(n)} = \frac{-x(-1)^n n!}{2(a-x^2)^{n+1}} \quad \text{and} \quad g^{(n)} = \frac{(-1)^n (2n-1)!!}{2^n a^{\frac{2n+1}{2}}}$$
2
+
3
+ on substituting the above values in eq 4 we'll get
4
+
5
+ $$ (fg)^i = \frac{-x}{2} \sum_{k=0}^{i} \binom{i}{k} \frac{(-1)^i (i-k)! (2k-1)!!}{(a-x^2)^{i-k+1} 2^k a^{\frac{2k+1}{2}}} $$
6
+
7
+ since $i \ge 1$ we have
8
+
9
+ $$ (fg)^i = \frac{-x}{2} \sum_{k=0}^{i-1} \binom{i-1}{k} \frac{(-1)^{i-1}(i-k-1)!(2k-1)!!}{(a-x^2)^{i-k} 2^k a^{\frac{2k+1}{2}}} $$
10
+
11
+ on rearranging and writing the double factorial in terms of
12
+ Pochammer symbol[8]. i.e $\frac{(2k-1)!!}{2^k} = \left(\frac{1}{2}\right)_k$
13
+
14
+ we'll get
15
+
16
+ $$ (fg)^i = \frac{x (-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \left(1-\frac{x^2}{a}\right)^k \left(\frac{1}{2}\right)_k $$
17
+
18
+ $$ \therefore (fg) = \frac{\partial}{\partial a} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) $$
19
+
20
+ $$ \therefore \quad \frac{\partial^i}{\partial a^i} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = \frac{x (-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \left(1 - \frac{x^2}{a}\right)^k \left(\frac{1}{2}\right)_k $$
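Lemma 1 can be sanity-checked for $i = 2$ against a central second difference in $a$; a stdlib-only sketch (the step size and test point are illustrative):

```python
import math

def lemma1(x, a, i):
    # closed form for the i-th a-derivative of artanh(x/sqrt(a)) from Eq (3)
    poch, s = 1.0, 0.0          # poch tracks the Pochhammer symbol (1/2)_k
    for k in range(i):
        s += (1 - x**2 / a) ** k / math.factorial(k) * poch
        poch *= 0.5 + k
    return (x * (-1)**i * math.factorial(i - 1)
            / (2 * math.sqrt(a) * (a - x**2) ** i) * s)

def second_diff(x, a, h=1e-4):
    # central second difference approximating the second a-derivative
    f = lambda t: math.atanh(x / math.sqrt(t))
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2
```

At e.g. $x = 0.5$, $a = 2$ the two values agree to several digits.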
**Proof of Theorem 1**

*Proof.*

$$
\begin{aligned}
\mathcal{H}\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)}; 1\right) &= \sum_{n_1, n_2, n_3, \ldots, n_{m+1} \ge 0} \frac{1}{a^{n_1+n_2+\cdots+n_{m+1}} (2n_1 + 2n_2 + \cdots + 2n_{m+1} + 1)} \\
&= \int_0^1 \sum_{n_1, n_2, n_3, \ldots, n_{m+1} \ge 0} \left(\frac{x^2}{a}\right)^{n_1+n_2+\cdots+n_{m+1}} dx \\
&= a^{m+1} \underbrace{\int_0^1 \frac{dx}{(a-x^2)^{m+1}}}_{I}
\end{aligned}
$$
samples/texts/1085779/page_6.md ADDED
here

$$
\begin{align*}
I &= \int_0^1 \frac{dx}{(a - x^2)^{m+1}} \\
&= \frac{(-1)^m}{m!} \frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right)
\end{align*}
$$

Now to differentiate the above expression we'll make use of the Leibniz differentiation formula

$$
(fg)^{(m)} = \sum_{i=0}^{m} \binom{m}{i} f^{(m-i)} g^{(i)} \quad (5)
$$

Let us consider $f = \frac{1}{\sqrt{a}}$ and $g = \tanh^{-1} \frac{1}{\sqrt{a}}$;

therefore the respective $n$-th derivatives are $f^{(n)} = \frac{(-1)^n (2n)!}{2^{2n} n! a^{\frac{2n+1}{2}}}$ and

$$
g^{(n)} = \frac{(-1)^n (n-1)!}{2\sqrt{a}(a-1)^n} \sum_{k=0}^{n-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k \quad (\text{from Lemma 1 with } x = 1)
$$

We can rewrite Eq (5) as

$$
(fg)^{(m)} = \sum_{i=1}^{m} \binom{m}{i} f^{(m-i)} g^{(i)} + f^{(m)} g \quad (6)
$$

$$
\begin{split}
\frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) &= \frac{(-1)^m}{2a^{m+1}} \sum_{i=1}^{m} \binom{m}{i} \frac{(i-1)!(2m-2i)!}{4^{m-i}(m-i)!(1-\frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k + \\
&\qquad \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)
\end{split}
$$

The above expression can be rewritten as

$$
\begin{aligned}
\frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) &= \frac{(-1)^m m!}{2a^{m+1}} \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) + \\
&\qquad \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)
\end{aligned}
$$
@@ -0,0 +1,43 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ where
2
+
3
+ $$
4
+ \Omega (a, i) = \frac{1}{i 4^{m-i} (1 - \frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{(1 - \frac{1}{a})^k}{k!} \left(\frac{1}{2}\right)_k
5
+ $$
6
+
7
+ we came to know that
8
+
9
+ $$
10
+ I = \frac{(-1)^m}{m!} \frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right)
11
+ $$
12
+
13
+
14
+
15
+ $$
16
+ \begin{align*}
17
+ I &= \frac{(-1)^m}{m!} \left( \frac{(-1)^m m!}{2a^{m+1}} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) \right) \\
18
+ &= \frac{1}{2a^{m+1}} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!}{2^{2m}(m!)^2 a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)
19
+ \end{align*}
20
+ $$
21
+
22
+
23
+
24
+ $$
25
+ H\left(\left(\frac{1}{a}\right)_{r(m+1)}; 2_{r(m+1)}; 1\right) = a^{m+1} I
26
+ $$
27
+
28
+ and finally we'll get the result
29
+
30
+ $$
31
+ \mathcal{H}\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)} ; 1\right) = \frac{1}{2} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!}{2^{2m}(m!)^2 a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) \tag{7}
32
+ $$
33
+
34
+
35
+
36
+ **Example 1.** let us plug the value's $a = 3$ and $m = 4$ in Eq(7) i.e $\mathcal{H}\left(\left(\frac{1}{3}\right)_{r(5)} ; 2_{r(5)} ; 1\right)$ where 3 and 2 are repeated 5 times. Therefore from theorem 1 we'll get
37
+
38
+ $$
39
+ \begin{align*}
40
+ \mathcal{H}\left(\left(\frac{1}{3}\right)_{r(5)} ; 2_{r(5)} ; 1\right) &= \frac{1}{2} \left( \sum_{i=1}^{4} \binom{8-2i}{4-i} \Omega(3,i) \right) + \frac{(8)!\sqrt{3}}{2^8 (4!)^2} \tanh^{-1}\left(\frac{1}{\sqrt{3}}\right) \\
41
+ &= \frac{249}{128} + \frac{35\sqrt{3}}{128} \tanh^{-1}\left(\frac{1}{\sqrt{3}}\right)
42
+ \end{align*}
43
+ $$
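Example 1 can be confirmed by collapsing the 5-fold sum over the total $N = n_1 + \cdots + n_5$, with $\binom{N+4}{4}$ tuples per total:

```python
import math

# Collapse the 5-fold sum over N = n_1 + ... + n_5: C(N+4, 4) tuples per N
s = sum(math.comb(N + 4, 4) / 3**N / (2 * N + 1) for N in range(400))
closed = 249/128 + 35 * math.sqrt(3) / 128 * math.atanh(1 / math.sqrt(3))
```

Both evaluate to roughly $2.2572$.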
samples/texts/1085779/page_8.md ADDED
**Theorem 2.** *This equation holds true for $p = 2, \beta = 1$ and for the case when each value $a_i$ is greater than 1.*

$$
\mathcal{H} \left( \left\{ \frac{1}{a_i} \right\}_{i=1}^n ; 2_{r(n)} ; 1 \right) = (-1)^{n+1} \sum_{cyc} \frac{\tanh^{-1} \left( \frac{1}{\sqrt{a_1}} \right)}{\sqrt{a_1} \prod_{1<j \le n} (a_1 - a_j)} \left( \prod_{i=1}^{n} a_i \right) ; \quad a_i > 1
$$

*Proof.* As per the definition of the Hyder series we can write the expression

$$
\begin{align*}
\mathcal{H}\left(\left\{\frac{1}{a_i}\right\}_{i=1}^n; 2_{r(n)}; 1\right) &= \sum_{k_1,k_2,k_3,\ldots,k_n \ge 0} \frac{1}{a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n} (2k_1 + 2k_2 + \cdots + 2k_n + 1)} \\
&= \int_0^1 \sum_{k_1,k_2,k_3,\ldots,k_n \ge 0} \left(\frac{x^2}{a_1}\right)^{k_1} \left(\frac{x^2}{a_2}\right)^{k_2} \cdots \left(\frac{x^2}{a_n}\right)^{k_n} dx \\
&= (-1)^n \left(\prod_{i=1}^n a_i\right) \int_0^1 \frac{1}{(x^2-a_1)(x^2-a_2)\cdots(x^2-a_n)} dx
\end{align*}
$$

The following integral can be solved by using partial fractions. It is quite interesting to note that while using partial fractions for this product, we can see a cyclic pattern, i.e.

$$
\frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} = \sum_{cyc} \frac{1}{(x^2 - a_1) \prod_{1 < j \le n} (a_1 - a_j)}
$$

therefore

$$
\begin{align*}
\int_0^1 \frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} dx &= \int_0^1 \sum_{cyc} \frac{1}{(x^2 - a_1) \prod_{1 < j \le n} (a_1 - a_j)} dx \\
&= \sum_{cyc} \frac{1}{\prod_{1 < j \le n} (a_1 - a_j)} \int_0^1 \frac{1}{(x^2 - a_1)} dx
\end{align*}
$$

since $\int_0^1 \frac{1}{(x^2 - a_1)} dx = \frac{-\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1}}$
samples/texts/1085779/page_9.md ADDED
therefore

$$
\int_0^1 \frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} dx = - \sum_{cyc} \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1} \prod_{1<j \le n} (a_1 - a_j)}
$$

and finally we get the result

$$
\mathcal{H} \left( \left\{ \frac{1}{a_i} \right\}_{i=1}^{n} ; 2_{r(n)} ; 1 \right) = (-1)^{n+1} \sum_{cyc} \frac{\tanh^{-1} \left( \frac{1}{\sqrt{a_1}} \right)}{\sqrt{a_1} \prod_{1<j \le n} (a_1 - a_j)} \left( \prod_{i=1}^{n} a_i \right) \quad (8)
$$

$\square$

**Example 2.** For $n = 3$ in Eq (8) we have

$$
\begin{align*}
\mathcal{H}\left(\frac{1}{a_1}, \frac{1}{a_2}, \frac{1}{a_3}; 2_{r(3)}; 1\right) &= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \frac{1}{a_1^m a_2^n a_3^p (2m + 2n + 2p + 1)} \\
&= a_1 a_2 a_3 \sum_{cyc} \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1} \prod_{1<j \le 3} (a_1 - a_j)}
\end{align*}
$$

On expanding the cyclic sum we get

$$
a_1 a_2 a_3 \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1}(a_1-a_2)(a_1-a_3)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_2}}\right)}{\sqrt{a_2}(a_2-a_1)(a_2-a_3)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_3}}\right)}{\sqrt{a_3}(a_3-a_1)(a_3-a_2)} \right)
$$
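Theorem 2 (Eq (8)) for $n = 3$ can be spot-checked by comparing the truncated triple sum with the expanded cyclic sum; a stdlib-only sketch using illustrative test values $(a_1, a_2, a_3) = (2, 3, 5)$:

```python
import math

def hyder3(a1, a2, a3, N=60):
    # direct truncated triple sum of H(1/a1, 1/a2, 1/a3; 2_{r(3)}; 1)
    return sum(1 / (a1**m * a2**n * a3**p * (2*m + 2*n + 2*p + 1))
               for m in range(N) for n in range(N) for p in range(N))

def closed3(a1, a2, a3):
    # the expanded cyclic sum of Eq (8) with n = 3 (so (-1)^(n+1) = +1)
    def term(x, y, z):
        return math.atanh(1 / math.sqrt(x)) / (math.sqrt(x) * (x - y) * (x - z))
    return a1 * a2 * a3 * (term(a1, a2, a3) + term(a2, a1, a3) + term(a3, a1, a2))
```

For $(2, 3, 5)$ both evaluate to roughly $1.606$.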
Let us plug in the values $a_1 = \pi$, $a_2 = \phi$, and $a_3 = e$ (where $\phi, e$ are the golden
samples/texts/1930866/page_1.md ADDED
# A Systematic Study of Frame Sequence Operators and their Pseudoinverses

P. Balazs

Acoustics Research Institute
Austrian Academy of Sciences, Vienna, Austria
Peter.Balazs@oeaw.ac.at

M. A. El-Gebeily

Department of Mathematical Sciences
King Fahd University of Petroleum and Minerals
Dhahran, Saudi Arabia

## Abstract

In this note we investigate the operators associated with frame sequences in a Hilbert space $H$, i.e., the synthesis operator $T : \ell^2(\mathbb{N}) \to H$, the analysis operator $T^* : H \to \ell^2(\mathbb{N})$ and the associated frame operator $S = TT^*$ as operators defined on (or to) the whole space rather than on subspaces. Furthermore, the projection $P$ onto the range of $T$, the projection $Q$ onto the range of $T^*$ and the Gram matrix $G = T^*T$ are investigated. For all these operators, we investigate their pseudoinverses, how they interact with each other, as well as possible classification of frame sequences with them. For a tight frame sequence, we show that some of these operators are connected in a simple way.

**Mathematics Subject Classification:** Primary 41A58, 47A05, Secondary 46B15

**Keywords:** frame sequence, pseudoinverse, frame operator

## 1 Introduction

Frame sequences are the natural generalization of frames [5]. In many situations, for example, when constructing a frame multi-resolution analysis (see e.g., [3], [8]), we start with a frame sequence in a Hilbert space $H$ and then define the initial approximation space $V$ as the closure of the span of the
samples/texts/1930866/page_10.md ADDED
@@ -0,0 +1,59 @@
1
+ 2. $S$ is continuous, has closed range and
2
+
3
+ $$
4
+ \|S^{\dagger}\|^{-1} \|Pf\|^2 \leq \langle Sf, f \rangle \leq \|S\| \|Pf\|^2 \quad \forall f \in H.
5
+ $$
6
+
7
+ 3. $G$ is continuous, has closed range and
8
+
9
+ $$
10
+ \|S^{\dagger}\|^{-1} \|Qc\|^2 \leq \langle Gc, c \rangle \leq \|S\| \|Qc\|^2 \quad \forall c \in \ell^2(\mathbb{N}).
11
+ $$
12
+
13
+ **Lemma 2.16** Let $\{f_k\}_{k=1}^{\infty}$ be a tight frame sequence in $H$. Then
14
+
15
+ 1. $S = AP$, $G = AQ$,
18
+
19
+ 2. $S^\dagger = \frac{1}{A}P$, $G^\dagger = \frac{1}{A}Q$.
20
+
21
+ **Proof.** We prove the statements for $G$ only. Since the frame is tight,
22
+
23
+ $$
24
+ \langle Gc, c \rangle = \|Tc\|^2 = A \|Qc\|^2 = A \langle Qc, Qc \rangle.
25
+ $$
26
+
27
+ An analog of the polarization identity can be easily proved:
28
+
29
+ $$
30
+ \langle Gc, d \rangle = \frac{1}{4} \big( \langle G(c+d), c+d \rangle - \langle G(c-d), c-d \rangle \\
+ \qquad + i \langle G(c+id), c+id \rangle - i \langle G(c-id), c-id \rangle \big),
32
+ $$
33
+
34
+ which yields
35
+
36
+ $$
37
+ \begin{align*}
38
+ \langle Gc, d \rangle &= \frac{1}{4} A \left( \|Q(c+d)\|^2 - \|Q(c-d)\|^2 \right. \\
39
+ &\qquad \left. + i \|Q(c+id)\|^2 - i \|Q(c-id)\|^2 \right) \\
40
+ &= A \langle Qc, Qd \rangle = A \langle Qc, d \rangle .
41
+ \end{align*}
42
+ $$
43
+
44
+ Since $d$ is arbitrary, $Gc = AQc$. Furthermore, $Qc = G^\dagger Gc = G^\dagger AQc = AG^\dagger c$. This gives $G^\dagger c = \frac{1}{A}Qc$. $\blacksquare$
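These tight-frame identities can be illustrated in finite dimensions. The sketch below (my own example, not from the paper) uses the three Mercedes-Benz unit vectors in the $xy$-plane of $\mathbb{R}^3$, a tight frame sequence for that plane with bound $A = 3/2$:

```python
import math

# Mercedes-Benz vectors in the xy-plane of R^3: a tight frame sequence for
# the plane V = span{e1, e2}, with frame bound A = 3/2 (toy example)
thetas = (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)
F = [(math.cos(t), math.sin(t), 0.0) for t in thetas]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def frame_op(f):
    # Sf = sum_k <f, f_k> f_k
    out = [0.0, 0.0, 0.0]
    for fk in F:
        c = dot(f, fk)
        out = [o + c * x for o, x in zip(out, fk)]
    return out

A = 1.5
f = (0.3, -1.2, 7.0)
Pf = (f[0], f[1], 0.0)   # orthogonal projection of f onto V
Sf = frame_op(f)

assert all(abs(s - A * p) < 1e-12 for s, p in zip(Sf, Pf))   # S = AP
assert all(abs(x / A - p) < 1e-12 for x, p in zip(Sf, Pf))   # S^dagger Sf = Pf with S^dagger = P/A
```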
46
+
47
+ ## Acknowledgements
48
+
49
+ The authors would like to thank Pete Casazza for creating the contact between
50
+ them, as well as giving comments on a first version of this paper.
51
+
52
+ The work of the first author was partly supported by the European Union's
53
+ Human Potential Program, under contract HPRN-CT-2002-00285 (HASSIP).
54
+ He would like to thank the hospitality of the LATP, CMI, Marseille, France
55
+ and FYMA, UCL, Louvain-la-Neuve, Belgium.
56
+
57
+ ## References
58
+
59
+ [1] J.-P. Antoine, A. Inoue, and C. Trapani, "Partial *-algebras and their operator realizations", Mathematics and Its Applications, vol. 553, Kluwer Academic Publishers, 2002.
samples/texts/1930866/page_11.md ADDED
@@ -0,0 +1,21 @@
1
+ [2] P. Balazs, "Regular and Irregular Gabor Multipliers with Application to Psychoacoustic Masking", PhD thesis, University of Vienna (2005)
2
+
3
+ [3] J. Benedetto, S. Lee, "The theory of multiresolution analysis frames and application to filter banks", App. Comp. Har. Anal., 5, 389-427 (1998).
4
+
5
+ [4] F. J. Beutler, "The operator theory of pseudo-inverse", JMAA, 10, 451-493 (1965).
6
+
7
+ [5] P. G. Casazza, "The Art of Frame Theory", Taiwanese J. Math. 2 (4), pp. 129-202 (2000)
8
+
9
+ [6] O. Christensen, "Frames and pseudo-inverses", JMAA, 195, 401-414 (1995).
10
+
11
+ [7] O. Christensen, "Operators with closed range, pseudo-inverses and perturbation of frames for a subspace", Canad. Math. Bull., 42, 37-45 (1999).
12
+
13
+ [8] O. Christensen, "An introduction to frames and Riesz bases", Birkhäuser Boston (2003).
14
+
15
+ [9] O. Christensen, C. Lennard, C. Lewis, "Perturbation of frames for a subspace of a Hilbert space", Rocky Mountain J. Math., 30, 4, 1237-1249 (2000).
16
+
17
+ [10] J. B. Conway, "A course in functional analysis", 2. ed., Graduate Texts in Mathematics, Springer New York, 1990.
18
+
19
+ [11] W. Rudin, "Functional Analysis", McGraw-Hill (1973).
20
+
21
+ [12] A. Teolis, J. Benedetto, "Local frames", SPIE Math. Imaging 2034, 310-321 (1993).
samples/texts/1930866/page_2.md ADDED
@@ -0,0 +1,44 @@
1
+ frame sequence. In the literature, several operators are associated with frame
2
+ sequences and these spaces, namely, the projection operator $P: H \to H$ onto
3
+ $V$, the inclusion operator $\iota_V: V \to H$, the analysis operator $\mathcal{U}: V \to \ell^2(\mathbb{N})$,
4
+ the synthesis operator $\mathcal{T}: \ell^2(\mathbb{N}) \to V$ and the frame operator $\mathcal{S}: V \to V$ (the
5
+ definition of all these operators will be given in the next section). In the liter-
6
+ ature, frame sequences are analyzed mostly using the concrete representation
7
+ of these operators. On the other hand, analyzing frame sequences from a pure
8
+ operator theoretic point of view can offer a deeper insight into the structure
9
+ of such sequences. This note will take this approach. Almost exclusively, all
10
+ the proofs provided in this paper use these operators and operator theoretic
11
+ principles.
12
+
13
+ An alternative way of looking at the above operators is to extend their
14
+ definitions with the help of the first two operators to the whole space $H$. This
15
+ way we are always working with the base spaces $H$ and $\ell^2(\mathbb{N})$ and we do not
16
+ worry about the subspace $V$ and its image in $\ell^2(\mathbb{N})$. The extended operators
17
+ become: the synthesis operator $T: \ell^2(\mathbb{N}) \to H$, the analysis operator $U: H \to$
18
+ $\ell^2(\mathbb{N})$ and the frame operator $S: H \to H$. We develop relationships between
19
+ these extended versions. In fact, we start by defining these extended versions
20
+ rather than the "classical" restricted operators. Consequently no ambiguity
21
+ arises regarding notions such as inverses of operators and pseudo inverses.
22
+ While the proofs of the relationships are straightforward in the most part, they
23
+ are nontrivial in the sense that we use the current state of knowledge to derive
24
+ them. Also, a form of duality between statements about the synthesis and
25
+ the analysis operators emerges throughout the presentation. For all involved
26
+ operators, we investigate how they interact with each other, as well as possible
27
+ classification of frame sequences with them. For a tight frame sequence, we
28
+ will show that some of these operators are connected in a simple way.
29
+
30
+ Of course, most of the work in the literature on frame sequences is related
31
+ to this work. We mention in particular the references [7], [9], [12] which have
32
+ more direct bearing on this note.
33
+
34
+ As a preliminary lemma we list here the properties of the pseudo-inverse or
35
+ the Moore-Penrose inverse of a bounded operator with closed range that are
36
+ most important to us (see, for example, Appendix A.7 in [8]).
37
+
38
+ **Lemma 1.1** Let $H_1, H_2$ be Hilbert spaces, and suppose that $U : H_1 \to H_2$ is a bounded operator with closed range $R_U$. Then there exists a bounded operator $U^\dagger : H_2 \to H_1$ such that
39
+
40
+ $$UU^\dagger f = f, \forall f \in R_U,$$
41
+
42
+ with $\ker U^\dagger = R_U^\perp$ and $R_{U^\dagger} = (\ker U)^\perp$. This operator is uniquely determined by these properties.
43
+
44
+ Furthermore, $U^\dagger$ has the following properties.
samples/texts/1930866/page_3.md ADDED
@@ -0,0 +1,27 @@
1
+ 1. $UU^\dagger$ is the orthogonal projection of $H_2$ onto $R_U$.
2
+
3
+ 2. $U^\dagger U$ is the orthogonal projection of $H_1$ onto $R_{U^\dagger}$.
4
+
5
+ 3. $U^*$ has closed range and $(U^*)^\dagger = (U^\dagger)^*$.
6
+
7
+ 4. $U^\dagger U U^\dagger = U^\dagger.$
8
+
9
+ 5. On $\overline{R}_U$ we have $U^\dagger = U^* (UU^*)^{-1}$.
10
+
11
+ ## 2 Frame Sequences and Their Pseudoinverses
12
+
13
+ In this section $H$ denotes a general Hilbert space and $\ell^2(\mathbb{N})$ denotes the space of absolutely square summable sequences of complex numbers. We will denote elements of $\ell^2(\mathbb{N})$ by lower case letters such as $c, d, \dots$ etc. and, when we want to explicitly use the terms of the sequences $c, d, \dots$, we will use Greek letters such as $(\zeta_k)_{k=1}^\infty$, $(\eta_k)_{k=1}^\infty$, $\dots$ etc. When no confusion arises we will write these sequences as $(\zeta_k)$, $(\eta_k)$, $\dots$ etc. We denote by $\{\epsilon_k\}_{k=1}^\infty$ the sequence of standard basis elements in $\ell^2(\mathbb{N})$.
14
+
15
+ Suppose $\{f_k\}_{k=1}^\infty$ is a sequence in $H$. With $\{f_k\}_{k=1}^\infty$ we associate three, possibly unbounded [1], operators: the synthesis operator $T : \ell^2(\mathbb{N}) \to H$ defined by $Tc = \sum_{k=1}^\infty \zeta_k f_k$, the analysis operator $U : H \to \ell^2(\mathbb{N})$ defined by $Uf = (\langle f, f_k \rangle)$ and the frame operator $S$ defined by $Sf = TUf = \sum_{k=1}^\infty \langle f, f_k \rangle f_k$, whenever the right hand sides of these definitions exist. Observe that $T$ is densely defined as its domain $D_T$ contains all finite sequences (sequences which, eventually, consist of zeros) in $\ell^2(\mathbb{N})$. This implies that $T$ has a well defined adjoint $T^* : H \to \ell^2(\mathbb{N})$, which is a closed operator (see [11]). We also have
16
+
17
+ (A) $\ker T^*$ is closed.
18
+
19
+ (B) $\ker T^* = (R_T)^\perp = (\bar{R}_T)^\perp.$
20
+
21
+ (C) $H = \bar{R}_T \oplus (\bar{R}_T)^\perp = \bar{R}_T \oplus \ker T^*$.
22
+
23
+ It follows that the orthogonal projection $P$ of $H$ onto $\bar{R}_T = (\ker T^*)^\perp$ is always well defined. The following lemma and its corollary are straightforward.
24
+
25
+ **Lemma 2.1** $\operatorname{span}\{f_k\}_{k=1}^\infty \subseteq R_T \subseteq \overline{\operatorname{span}}\{f_k\}_{k=1}^\infty.$
26
+
27
+ **Corollary 2.2** If $T$ has closed range, then $R_T = \overline{\operatorname{span}}\{f_k\}_{k=1}^\infty$.
samples/texts/1930866/page_4.md ADDED
@@ -0,0 +1,31 @@
1
+ Recall that $\{f_k\}_{k=1}^{\infty}$ is a frame sequence in $H$ if there are positive constants $A, B$ such that
2
+
3
+ $$A \|f\|^2 \le \sum_{k=1}^{\infty} |\langle f, f_k \rangle|^2 \le B \|f\|^2 \quad \forall f \in \operatorname{span}\{f_k\}^{\infty}_{k=1}. \quad (1)$$
4
+
5
+ The frame sequence is tight if $A = B$. By the definition above, it becomes clear that it is a frame for $V = \overline{\operatorname{span}}\{f_k\}$. The restricted versions of the above operators are defined in the same way with the only difference that they work from or to $V$. We let $\iota_V : V \to H$ be the inclusion operator $\iota_V(f) = f$, $\mathcal{U} : V \to \ell^2(\mathbb{N})$ the analysis operator, $\mathcal{T} : \ell^2(\mathbb{N}) \to V$ the synthesis operator and $\mathcal{S} : V \to V$ the frame operator. We have the following basic relationships between these operators, which are straightforward to show.
6
+
7
+ **Proposition 2.3** If $\{f_k\}_{k=1}^{\infty}$ is a frame sequence in $H$, then the following properties hold.
8
+
9
+ 1. $T = \iota_V \mathcal{T}$.
10
+
11
+ 2. $\bar{R}_T = \bar{R}_{\mathcal{T}} = V$ and $P$ is the projection onto $V$.
12
+
13
+ 3. $U = \mathcal{U}P$.
14
+
15
+ 4. $R_U = R_{\mathcal{U}}$.
16
+
17
+ 5. $S = TU = \iota_V \mathcal{T} \mathcal{U} P = \iota_V \mathcal{S} P$.
18
+
19
+ **Proof.** (3) & (4): For $h_2 \in V^\perp$, we clearly have $U(h_2) = (\langle h_2, f_k \rangle) = 0$. Every $h \in H$ can be uniquely decomposed as $h = h_1 + h_2$ with $h_1 \in V$ and $h_2 \in V^\perp$; therefore, $U(h) = U(h_1) + U(h_2) = U(h_1) = \mathcal{U}Ph$.
20
+
21
+ All the other proofs are straightforward. ■
22
+
23
+ As a frame sequence is a frame for its closed span, $\mathcal{U}$ and $\mathcal{T}$ are bounded, $\mathcal{T} = \mathcal{U}^*$ and $\mathcal{U} = \mathcal{T}^*$. Also
24
+
25
+ **Corollary 2.4** If $\{f_k\}_{k=1}^{\infty}$ is a frame sequence, then
26
+
27
+ 1. the analysis operator $U$ is bounded,
28
+
29
+ 2. the synthesis operator $T$ is bounded,
30
+
31
+ 3. $T = U^*$ and $U = T^*$.
samples/texts/1930866/page_5.md ADDED
@@ -0,0 +1,35 @@
1
+ **Proof.** (i) and (ii): Since $\mathcal{T}$, $\mathcal{U}$, $P$ and $\iota_V$ are bounded, the boundedness of $U$ and $T$ follows directly from the above relations.
2
+
3
+ (iii): $U^* = (\mathcal{U}P)^* = P^*\mathcal{U}^* = \iota_V \mathcal{T} = T$. Repeat the argument for $T^*$. ■
4
+
5
+ As $\{f_k\}$ is a frame for $V$, there is a sequence $\{\tilde{f}_k\} \subseteq V$, the canonical dual frame for $V$, given by $\tilde{f}_k = \mathcal{S}^{-1}f_k$. $\{\tilde{f}_k\}$ is again a frame sequence in $H$ with closed span $V$ and the bounds $\tilde{A} = \frac{1}{B}$ and $\tilde{B} = \frac{1}{A}$. Let $\tilde{T}, \tilde{U}, \tilde{S}, \tilde{\mathcal{T}}, \tilde{\mathcal{U}}$ and $\tilde{\mathcal{S}}$ be the corresponding operators associated with this frame sequence. Therefore we have $\tilde{\mathcal{T}} = \mathcal{S}^{-1}\mathcal{T}$, $\tilde{\mathcal{U}} = \mathcal{U}\mathcal{S}^{-1}$, $\tilde{\mathcal{S}} = \mathcal{S}^{-1}$ and $\mathcal{T}\tilde{\mathcal{U}} = \tilde{\mathcal{T}}\mathcal{U} = \mathrm{id}_V$. The following corollary can be easily shown.
6
+
7
+ **Corollary 2.5** If $\{f_k\}_{k=1}^\infty$ is a frame sequence in $H$ and $\{\tilde{f}_k\}$ is its dual sequence, then the following properties hold.
8
+
9
+ 1. $\tilde{T} = \iota_V \tilde{\mathcal{T}}$ and $\tilde{U} = \tilde{\mathcal{U}}P$.
10
+
11
+ 2. $\tilde{S} = \iota_V \mathcal{S}^{-1} P$.
12
+
13
+ 3. $T\tilde{U} = \iota_V P = \tilde{T}U$.
14
+
15
+ It follows from Property (3) above that the projection $P$ onto $V$, as a function from $H$ into $H$, is $T\tilde{U}$. This is well known [8]. Also, denoting by $Q$ the orthogonal projection of $\ell^2(\mathbb{N})$ onto $(\ker_T)^\perp = \bar{R}_{T^*}$, it is straightforward to show that $Q$ is the cross-Gram matrix of $\{f_k\}$ and its dual, $Q = U\tilde{T} = \mathcal{U}\tilde{\mathcal{T}}$.
16
+
17
+ We have the following characterizations of frame sequences.
18
+
19
+ **Theorem 2.6** The following are equivalent:
20
+
21
+ 1. $\{f_k\}_{k=1}^\infty$ is a frame sequence in $H$ with bounds $A, B$.
22
+
23
+ 2. There exist positive constants $A, B$ such that for every $f \in H$,
24
+
25
+ $$A \|Pf\|^2 \leq \|T^*f\|^2 \leq B \|Pf\|^2.$$
26
+
27
+ 3. There exist positive constants $A, B$ such that for every $c \in \ell^2(\mathbb{N})$,
28
+
29
+ $$A \|Qc\|^2 \leq \|Tc\|^2 \leq B \|Qc\|^2.$$
30
+
31
+ **Proof.** Assume (i) holds. As $T^* = U = \mathcal{U}P$, (ii) is equivalent to the definition of frame sequences and is therefore true.
32
+
33
+ Assume (ii) holds. Then a similar inequality holds for the dual frame in $V$, i.e.,
34
+
35
+ $$\tilde{A} \|Pf\|^2 \leq \|\tilde{U}f\|^2 \leq \tilde{B} \|Pf\|^2, \text{ or } \frac{1}{B} \|Pf\|^2 \leq \|\tilde{U}f\|^2 \leq \frac{1}{A} \|Pf\|^2.$$
samples/texts/1930866/page_6.md ADDED
@@ -0,0 +1,45 @@
1
+ Now choose $c \in \ell^2(\mathbb{N})$ and set $f = Tc$. Then $\frac{1}{B} \|PTc\|^2 \leq \| \tilde{U}Tc \|^{2} \leq \frac{1}{A} \|PTc\|^2$, which implies that
2
+
3
+ $$
4
+ \frac{1}{B} \|Tc\|^2 \leq \|Qc\|^2 \leq \frac{1}{A} \|Tc\|^2, \text{ or } A \|Qc\|^2 \leq \|Tc\|^2 \leq B \|Qc\|^2.
5
+ $$
6
+
7
+ Assume (iii) holds. Since $\|Qc\| \leq \|c\|$, it follows that $T$ is continuous. Let $c \in \ker_T^\perp$. Then
8
+
9
+ $$
10
+ A \|c\|^2 \leq \|Tc\|^2 \leq B \|c\|^2.
11
+ $$
12
+
13
+ Therefore $T|_{\ker_T^\perp}$ is bounded, injective and has closed range [10, 2]. As $R_T = R_{T|_{\ker_T^\perp}}$, $T$ is bounded and has closed range. Using [6] this is equivalent to $\{f_k\}$ forming a frame sequence. ■
14
+
15
+ It follows easily from Theorem 2.6 that $\{f_k\}_{k=1}^{\infty}$ is a frame for $H$ if and only if $T$ is surjective (i.e., $P = I$) and that $\{f_k\}_{k=1}^{\infty}$ is a Riesz basis if and only if $T^*$ is onto (i.e., $Q = I$). Theorem 2.6 can also be restated in a number of other ways; the following one uses the optimal bounds for the inequalities.
16
+
17
+ **Corollary 2.7** The following are equivalent:
18
+
19
+ 1. $\{f_k\}_{k=1}^{\infty}$ is a frame sequence in $H$.
20
+
21
+ 2. $T^*$ is continuous, has a closed range, and
22
+
23
+ $$
24
+ \|T^\dagger\|^{-1} \|Pf\| \le \|T^*f\| \le \|T\| \|Pf\| \quad \forall f \in H.
25
+ $$
26
+
27
+ 3. $T$ is continuous, has a closed range, and
28
+
29
+ $$
30
+ \|T^{\dagger}\|^{-1} \|Qc\| \le \|Tc\| \le \|T\| \|Qc\| \quad \forall c \in \ell^2(\mathbb{N}).
31
+ $$
32
+
33
+ For a frame sequence the frame operator $S = TT^*$ is bounded and self-adjoint. Furthermore, since $R_{T^*}$ is closed,
34
+
35
+ $$
36
+ \begin{align*}
37
+ R_S &= TT^*H = T(T^*H + \ker_T) \\
38
+ &= T(T^*H + R_{T^*}^\perp) = T(R_{T^*} + R_{T^*}^\perp) \\
39
+ &= T\ell^2(\mathbb{N}) = R_T.
40
+ \end{align*}
41
+ $$
42
+
43
+ It follows that $S$ has closed range and, hence, a continuous Moore-Penrose pseudo-inverse $S^\dagger$. We list some properties of $S^\dagger$ next.
44
+
45
+ **Lemma 2.8** The operator $S^\dagger : H \rightarrow H$ is the same as the operator $\tilde{S} = \iota_V \mathcal{S}^{-1} P$ and therefore is self-adjoint and has the following properties:
samples/texts/1930866/page_7.md ADDED
@@ -0,0 +1,33 @@
1
+ 1. $SS^\dagger = S^\dagger S = P$.
2
+
3
+ 2. $S^\dagger(I - P) = 0$.
4
+
5
+ 3. $S^\dagger P = PS^\dagger = S^\dagger.$
6
+
7
+ **Proof.** $\tilde{T}$ has the same range as $T$; as mentioned above, $\overline{R}_{\tilde{T}} = V$. Therefore, $\overline{R}_{\tilde{S}} = \overline{R}_S$. Also, $S\tilde{S} = \iota_V \mathcal{S} P \iota_V \mathcal{S}^{-1} P = \iota_V \mathcal{S} \mathcal{S}^{-1} P = \iota_V P$. Furthermore, $\ker \tilde{S} = \{f : \tilde{S}f = 0\} = \{f : \iota_V \mathcal{S}^{-1} P f = 0\} = \{f : Pf = 0\} = V^\perp$, as $\iota_V$ and $\mathcal{S}^{-1}$ are injective. Repeat the same argument for the frame sequence $\{\tilde{f}_k\}$ with the roles of $S$ and $\tilde{S}$ switched and use Lemma 1.1 to arrive at $S^\dagger = \tilde{S}$.
8
+
9
+ (i) By Property (i) of Lemma 1.1, $SS^{\dagger}$ is the orthogonal projection onto $R_S = R_T$. Therefore, $SS^{\dagger} = P$. Switch the roles of $S$ and $\tilde{S}$ to show the second part.
10
+
11
+ (ii) $S^{\dagger}(I - P) = S^{\dagger} - S^{\dagger}P = S^{\dagger} - S^{\dagger}SS^{\dagger} = S^{\dagger} - S^{\dagger} = 0$.
12
+
13
+ (iii) The equality $S^{\dagger}P = S^{\dagger}$ follows form (ii). To show that $PS^{\dagger} = S^{\dagger}$ observe first that $S^{\dagger}P : H \to R_S$. Hence, by 2, Lemma 1.1 and (1.), $S^{\dagger} = S^{\dagger}P = PS^{\dagger}P = PS^{\dagger}SS^{\dagger} = PS^{\dagger}$. $\blacksquare$
14
+
15
+ Because of the frame property on $V$, among all sequences $c \in \ell^2(\mathbb{N})$ which synthesize an $f \in H$, the sequence $c_0 = (\langle f, S^\dagger f_k \rangle)$ is the one with the minimum norm.
16
+
17
+ Similarly, among all elements $f \in H$ which analyze to a $c \in \ell^2(\mathbb{N})$, the element $f_0 = S^\dagger T c = \sum_{k=1}^\infty \zeta_k S^\dagger f_k$ is the one with the minimum norm. We have $\|f\|^2 = \|f_0\|^2 + \|f - f_0\|^2$.
18
+
19
+ Proposition 5.3.5 in [8] can now be restated in terms of $S^\dagger$ which is defined on all of $H$ instead of $S^{-1}$ which is defined only on $V$.
20
+
21
+ **Corollary 2.9** Let $\{f_k\}_{k=1}^\infty$ be a frame sequence in $H$. For any $f \in H$,
22
+
23
+ $$P f = \sum_{k=1}^{\infty} \langle f, S^{\dagger} f_k \rangle f_k.$$
24
+
25
+ **Proposition 2.10** *The pseudo-inverse of T is $\tilde{U}$, $T^\dagger = \tilde{U}$. The pseudo-inverse of U is $\tilde{T}$, $U^\dagger = \tilde{T}$. Consequently, we have the following properties*
26
+
27
+ 1. $T^\dagger = T^* S^\dagger$ and $U^\dagger = S^\dagger T$.
28
+
29
+ 2. $(T^\dagger)^* T^\dagger = S^\dagger.$
30
+
31
+ 3. $(T^\dagger)^* = S^\dagger T.$
32
+
33
+ 4. $\|T\|^2 = \|S\|.$
samples/texts/1930866/page_8.md ADDED
@@ -0,0 +1,47 @@
1
+ 5. $\|T^\dagger\|^2 = \|S^\dagger\|$.
4
+
5
+ **Proof.** Clearly $T\tilde{U} = \iota_V P$ and $\ker\tilde{U} = V^\perp$. Again by switching the roles of $T$ and $\tilde{T}$ we arrive with Lemma 1.1 at $T^\dagger = \tilde{U}$. Use an analogous argument for $U$ and $\tilde{T}$.
6
+
7
+ (i) $\tilde{U} = \tilde{\mathcal{U}}P = \mathcal{U}\mathcal{S}^{-1}P = U\,\iota_V \mathcal{S}^{-1} P = U\tilde{S}$, so $T^{\dagger} = \tilde{U} = U\tilde{S} = T^{*}S^{\dagger}$. Alternatively, since $P = SS^{\dagger} = S\tilde{S} = (TT^{*})\tilde{S} = T \cdot [T^{*}(TT^{*})^{\dagger}]$, it follows from the uniqueness of the Moore-Penrose pseudo-inverse that $T^{\dagger} = T^*S^\dagger$.
13
+
14
+ (ii) This follows immediately from $T^\dagger = \tilde{U}$.
15
+
16
+ (iii) follows from (i) by taking conjugates.
17
+
18
+ (iv): We have for every $f \in H$, $\|T^*f\|^2 = \langle Sf, f \rangle \le \|Sf\|\|f\| \le \|S\|\|f\|^2$. On the other hand, $\|S\| = \|TT^*\| \le \|T\|^2$.
19
+
20
+ (v): Observe first that, for every $f \in H$,
21
+
22
+ $$
23
+ \|T^\dagger f\|^2 = \langle f, S^\dagger f \rangle. \tag{2}
24
+ $$
25
+
26
+ This can be seen as follows:
27
+
28
+ $$
29
+ \begin{align*}
30
+ \|T^\dagger f\|^2 &= \langle T^* S^\dagger f, T^* S^\dagger f \rangle = \langle T T^* S^\dagger f, S^\dagger f \rangle \\
31
+ &= \langle P f, S^\dagger f \rangle = \langle f, P S^\dagger f \rangle = \langle f, S^\dagger f \rangle.
32
+ \end{align*}
33
+ $$
34
+
35
+ Hence, for every $f \in H$, $\|T^\dagger f\|^2 \le \|S^\dagger\| \|f\|^2$. It follows that $\|T^\dagger\|^2 \le \|S^\dagger\|$. On the other hand, since $S^\dagger$ is self adjoint,
36
+
37
+ $$
38
+ \|S^{\dagger}\| = \sup_{\|f\|=1} \langle S^{\dagger}f, f \rangle = \sup_{\|f\|=1} \|T^{\dagger}f\|^2 \leq \|T^{\dagger}\|^2.
39
+ $$
40
+
41
+ Therefore, $\|S^\dagger\| = \|T^\dagger\|^2$. ■
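As a finite-dimensional sanity check of $\|T\|^2 = \|S\|$ (a sketch with my own toy data, not from the paper): for the Mercedes-Benz vectors in the $xy$-plane of $\mathbb{R}^3$, a tight frame sequence with bound $A = 3/2$, the Gram matrix has top eigenvalue $\|G\| = \|T\|^2 = \|S\| = 3/2$, which stdlib power iteration confirms:

```python
import math

# Gram matrix of the Mercedes-Benz frame sequence in R^3 (toy example);
# its largest eigenvalue equals ||T||^2 = ||S|| = A = 3/2
F = [(math.cos(t), math.sin(t), 0.0)
     for t in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)]
G = [[sum(a * b for a, b in zip(fj, fk)) for fk in F] for fj in F]

def top_eigenvalue(M, iters=100):
    # power iteration for a symmetric positive semi-definite 3x3 matrix
    v = [1.0, 0.7, 0.3]
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

norm_G = top_eigenvalue(G)   # = ||G|| = ||T||^2 = ||S||
assert abs(norm_G - 1.5) < 1e-9
```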
42
+
43
+ Let $\{f_k\}_{k=1}^\infty$ be a frame sequence in $H$. Define the Gram matrix $G : \ell^2(\mathbb{N}) \rightarrow \ell^2(\mathbb{N})$ by $G = UT = T^*T$. Alternatively, $G = \mathcal{U}P\iota_V\mathcal{T} = \mathcal{U}\mathcal{T}$. More explicitly, $Gc = \sum_{j=1}^\infty \left(\sum_{k=1}^\infty \zeta_k \langle f_k, f_j \rangle\right) \epsilon_j$ and $\langle Gc, c \rangle = \sum_{j,k=1}^\infty \langle \zeta_k f_k, \zeta_j f_j \rangle$. It immediately follows from the definitions that $T^*S = GT^*$ and $ST = TG$. Clearly $G$ is self-adjoint. It is well known [2, 8] that $G$ is a bijective bounded operator on $R_{T^*}$ with bounded inverse if and only if $\{f_k\}$ is a frame sequence. In particular $R_G = R_{T^*}$ and $\ker_G = \ker_T = R_{T^*}^\perp$. It follows that $G$ has a closed range and, hence, a continuous Moore-Penrose pseudo-inverse $G^\dagger$. The product $G^\dagger G$ is the orthogonal projection onto $(\ker_T)^\perp = \bar{R}_{T^*}$.
44
+
45
+ We list some properties of $G^\dagger$ next. For that let us denote by $\tilde{G}$ the Gram
46
+ matrix corresponding to the dual frame $\{\tilde{f}_k\}$. The proof is analogous to the
47
+ one of Lemma 2.8 with appropriate adjustments.
samples/texts/1930866/page_9.md ADDED
@@ -0,0 +1,44 @@
1
+ **Lemma 2.11** The operator $G^\dagger : \ell^2(\mathbb{N}) \to \ell^2(\mathbb{N})$ is the same as $\tilde{G}$. It is therefore self-adjoint and has the following properties:
2
+
3
+ 1. $GG^\dagger = Q = G^\dagger G$.
4
+
5
+ 2. $G^\dagger (I - Q) = 0$.
6
+
7
+ 3. $G^\dagger Q = QG^\dagger = G^\dagger$.
8
+
9
+ The following corollary is the same as [8], Proposition 5.3.6.
10
+
11
+ **Corollary 2.12** Let $\{f_k\}_{k=1}^\infty$ be a frame sequence in H. For any $c \in \ell^2(\mathbb{N})$, $Qc = \sum_{k=1}^\infty \langle c, G^\dagger T^* f_k \rangle \epsilon_k$.
12
+
13
+ We may also write $Qc = \sum_{k,j=1}^\infty \langle c, G^\dagger \epsilon_j \rangle \langle f_j, f_k \rangle \epsilon_k$.
14
+
15
+ **Lemma 2.13** We have the following properties:
16
+
17
+ 1. $(T^*)^\dagger = TG^\dagger$.
18
+
19
+ 2. $T^\dagger (T^\dagger)^* = G^\dagger$.
20
+
21
+ 3. $T^\dagger = G^\dagger T^*$.
22
+
23
+ 4. $\|T\|^2 = \|G\|$.
24
+
25
+ 5. $\|T^\dagger\|^2 = \|G^\dagger\|$.
26
+
27
+ **Proof.** (i): Since $Q = T^*TG^\dagger = T^*(T^*)^\dagger$, it follows from the uniqueness of the Moore-Penrose pseudo-inverse that $(T^*)^\dagger = TG^\dagger$.
28
+
29
+ (ii) follows immediately from (1.) by multiplying on the left by $T^\dagger$ and using Property 3 of Lemma 1.1.
30
+
31
+ (iii) follows from (1.) by taking adjoints.
32
+
33
+ (iv): We have for every $c \in \ell^2(\mathbb{N})$, $\|Tc\|^2 = \langle Gc, c \rangle \leq \|G\|\|c\|^2$. Thus
34
+ $\|T\|^2 \leq \|G\|$. On the other hand, $\|G\| = \|T^*T\| \leq \|T\|^2$, which yields $\|G\| = \|T\|^2$.
35
+
36
+ (v): As $T^\dagger = \tilde{T}$ and $G^\dagger = \tilde{G}$, (5) is the same as (4) for the dual frame. $\blacksquare$
37
+
38
+ **Corollary 2.14** $\|G\| = \|S\|$ and $\|G^\dagger\| = \|S^\dagger\|$.
39
+
40
+ Theorem 2.6 (or rather, Corollary 2.7) can also be reformulated as
41
+
42
+ **Theorem 2.15** The following are equivalent:
43
+
44
+ 1. $\{f_k\}_{k=1}^\infty$ is a frame sequence in H.
samples/texts/2058102/page_1.md ADDED
@@ -0,0 +1,35 @@
1
+ # Invariant Theory for Hypothesis Testing on Graphs
2
+
3
+ Priebe, Carey
4
+
5
+ Johns Hopkins University, Applied Mathematics and Statistics
6
+ 3400 North Charles Street
7
+ Baltimore, Maryland 21218-2682, USA
8
+ E-mail: cep@jhu.edu
9
+
10
+ Rukhin, Andrey
11
+
12
+ Naval Surface Warfare Center, Sensor Fusion Department
13
+ 18444 Frontage Road
14
+ Dahlgren, Virginia 22448, USA
15
+ E-mail: andrey.rukhin@navy.mil
16
+
17
+ ## 1 Introduction
18
+
19
+ Following the setting outlined in Priebe et al. [2011] we aim to detect anomalies within attributed graphs. In particular, let $\mathcal{V} = \{1, \dots, n\}$ be the fixed set of vertices and $\phi: \binom{\mathcal{V}}{2} \to \{0, 1, \dots, K\}$ be an edge-attribution function. The graph on $\mathcal{V}$ is defined to be $G = (\mathcal{V}, \mathcal{E}_{\phi})$ where
20
+
21
+ $$ (u, v) \in \mathcal{E}_{\phi} \iff \phi(u, v) > 0. $$
22
+
23
+ We say that the edge $(u, v)$ has attribute $c \in \{1, \dots, K\}$ if $\phi(u, v) = c$. One can view the categorical edge attributes as some mode of the communication event between actors $u$ and $v$ (e.g., a topic label derived from the content of the communication).
24
+
25
+ The specific anomaly we aim to detect is the “chatter” alternative – a small (unspecified) subset of vertices with altered communication behavior in an otherwise homogeneous setting. Our inference task is to determine whether or not a graph $(\mathcal{V}, \mathcal{E}_{\phi})$ includes a subset of vertices $\mathcal{M} = \{v_1, v_2, \dots, v_m\}$ whose edge-connectivity within the subset exhibits a different behaviour than that found among the remaining vertices in the graph.
26
+
27
+ To this end we consider the problem of detecting chatter anomalies in a graph using hypothesis testing on a fusion of attributed graph invariants. In particular, the focus of this paper is analyzing and comparing the inferential power of the linear attribute fusion of the attributed $q$-clique invariant
28
+
29
+ $$ T_q^W(G) = \sum_{(c_1, \dots, c_K) \in P\left(\binom{q}{2}, K\right)} w_{c_1, \dots, c_K} \sum_{\{u_1, \dots, u_q\} \in \binom{\mathcal{V}}{q}} h(u_1, \dots, u_q; c_1, \dots, c_K), $$
30
+
31
+ where the sum is over the collection of partitions $P\left(\binom{q}{2}, K\right)$ of $\binom{q}{2}$ into $K$ non-negative parts, $W = \{w_i\}_{i \in P\left(\binom{q}{2}, K\right)}$ are the fusion weights, and the summand $h(u_1, \dots, u_q; c_1, \dots, c_K)$ indicates the event that the vertices $u_1, \dots, u_q$ are elements of a $q$-clique with $c_r$ edges of color $r$. Specifically, we consider the cases $q=2$, which yields the size fusion $T_2^W$, and $q=3$, which yields the triangle fusion $T_3^W$.
32
+
33
+ Our random graph model is motivated by the time series model found in Lee and Priebe [forthcoming]: for each vertex $v \in \mathcal{V}$ we assign a latent variable $\mathbf{X}^v = (X_1^v, \dots, X_d^v)$ drawn independently of all other vertices from some $d$-dimensional distribution. The edge-attribution function will be a random variable where the probability of an edge $(u, v)$ having attribute $c$ is defined to be some predetermined function of the inner product of the latent variables. We assume that the edge attributes, conditioned on the latent variables, are independent. In this paper, we will assume that $\mathbf{X}^v \sim \text{Dirichlet}(\lambda_0^v, \dots, \lambda_K^v)$ and
34
+
35
+ $$ \mathbb{P}\{\phi(u,v) = c\} = X_c^u X_c^v $$
samples/texts/2058102/page_2.md ADDED
@@ -0,0 +1,35 @@
1
+ for all $(u, v) \in \binom{\mathcal{V}}{2}$ and all $c \in \{1, \dots, K\}$. This model choice is analogous to the first and second approximations found in Lee and Priebe [forthcoming]: if we write $\lambda^v = (\lambda_0^v, \dots, \lambda_K^v) = (1 + r x_0^v, \dots, 1 + r x_K^v)$ for some fixed $(x_0^v, \dots, x_K^v)$ in the unit simplex and non-negative real $r$, then $r \to \infty$ yields the first approximation model (i.e., the “independent edge model”). We mention that our approach herein differs from the second approximation in Lee and Priebe [forthcoming]; their second approximation yields an inner product model with truncated Gaussian latent variables.
2
+
3
+ Related work may be found in Bollobas et al. [2007] Section 16.4 and references therein. We also direct the interested reader to Priebe et al. [2011] in which the authors study other linear attribute fusion invariants; in particular, the authors consider
4
+
5
+ $$maxd^W(G) = \max_v \sum_{c=1}^{K} w_c \sum_{u \in N[v]} I\{\phi(u, v) = c\}$$
6
+
7
+ and
8
+
9
+ $$scan^W(G) = \max_v \sum_{c=1}^{K} w_c \sum_{u,x \in N[v]} I\{\phi(u,x) = c\},$$
10
+
11
+ where $N[v] = \{u | (u,v) \in \mathcal{E}\} \cup \{v\}$ is the closed neighborhood of vertex $v$ in the graph.
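For concreteness, both invariants can be computed on a toy attributed graph (the graph, $K = 2$, and unit weights are my own illustration, not data from the paper):

```python
# Toy attributed graph on 5 vertices with K = 2 edge colors and unit
# fusion weights w_1 = w_2 = 1 (toy example data).
V = range(1, 6)
phi = {(1, 2): 1, (1, 3): 1, (2, 3): 2, (2, 4): 2, (3, 4): 2, (4, 5): 1}
w = {1: 1.0, 2: 1.0}

def attr(u, v):
    return phi.get((min(u, v), max(u, v)), 0)

def closed_nbhd(v):
    # N[v] = {u : (u, v) in E} union {v}
    return {u for u in V if u != v and attr(u, v) > 0} | {v}

# maxd^W: maximum weighted attributed degree over all vertices
maxd = max(
    sum(w[attr(u, v)] for u in closed_nbhd(v) if u != v) for v in V
)

# scan^W: maximum weighted edge count within a closed neighborhood
def scan_at(v):
    nb = sorted(closed_nbhd(v))
    return sum(
        w[attr(u, x)]
        for i, u in enumerate(nb)
        for x in nb[i + 1:]
        if attr(u, x) > 0
    )

scan = max(scan_at(v) for v in V)
assert maxd == 3.0   # vertices 2, 3 and 4 each have three attributed edges
assert scan == 5.0   # N[2] = {1,2,3,4} induces five edges
```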
12
+
13
+ Finally, we add that we will restrict ourselves to simple undirected graphs. We will not consider hyper-graphs (hyper-edges consisting of more than two vertices), multi-graphs (more than one edge between any two vertices), self-loops (an edge from a vertex to itself), or weighted edges.
14
+
15
+ ## 2 Notation
16
+
17
+ For each positive integer $l$ we use the notation $[l] = \{1, \dots, l\}$.
18
+
19
+ For each $v \in \mathcal{V}$ we assign a *latent position vector* $\mathbf{X}^v = (X_0^v, \dots, X_K^v) \sim \text{Dirichlet}(\lambda^v)$ for some fixed parameter vector $\lambda^v \in \mathbb{R}_+^{K+1}$. We also assume that the latent positions are independent.
20
+
21
+ Our null hypothesis assumes a version of homogeneity among the vertices; specifically,
22
+
23
+ $$\mathbb{H}_0: \mathbf{X}^v = (X_0^v, \dots, X_K^v) \sim \text{Dirichlet}(\lambda) \text{ for all } v \in \mathcal{V}$$
24
+
25
+ for some Dirichlet parameter vector $\lambda = (\lambda_0, \dots, \lambda_K)$. Our alternative hypothesis incorporates the anomaly feature described in the preceding section as follows. Assume $m = m(n) < n$ satisfies the following two conditions: $\lim_{n \to \infty} m(n) = \infty$ and $\lim_{n \to \infty} \frac{m(n)}{n} = 0$. Our alternative hypothesis is defined to be
26
+
27
+ $$\mathbb{H}_1: \mathbf{X}^v = \begin{cases} (Y_0^v, \dots, Y_K^v) \stackrel{iid}{\sim} \text{Dir}(\eta) & v \in [m], \\ (X_0^v, \dots, X_K^v) \stackrel{iid}{\sim} \text{Dir}(\lambda) & v \in [n] - [m], \end{cases}$$
28
+
29
+ for some fixed Dirichlet parameter vector $\eta = (\eta_0, \dots, \eta_K)$ and the same $\lambda = (\lambda_0, \dots, \lambda_K)$ from the null hypothesis. For convenience, we also define $\Lambda = \sum_{0 \le c \le K} \lambda_c$ and $H = \sum_{0 \le c \le K} \eta_c$.
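The model under $\mathbb{H}_1$ can be sketched as a sampling routine. This is our own illustration, not a routine from the paper: the first $m$ vertices receive anomalous latent vectors, and an edge $(u,v)$ is colored $c$ with probability $X_c^u X_c^v$ (all residual probability mass, including the $c=0$ component, is read as "no edge").

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(n, m, lam, eta):
    # First m vertices anomalous ~ Dirichlet(eta), the rest ~ Dirichlet(lam)
    X = np.empty((n, len(lam)))
    X[:m] = rng.dirichlet(eta, size=m)
    X[m:] = rng.dirichlet(lam, size=n - m)
    return X

def sample_attributed_graph(X):
    # Edge (u, v) gets color c in {1..K} with probability X_c^u X_c^v;
    # the remaining mass (at most 1, by Cauchy-Schwarz) yields no edge.
    n, K1 = X.shape
    phi = {}
    for u in range(n):
        for v in range(u + 1, n):
            p = X[u] * X[v]            # length K+1, sums to at most 1
            p0 = 1.0 - p[1:].sum()     # no-edge probability
            c = rng.choice(K1, p=np.concatenate(([p0], p[1:])))
            if c > 0:
                phi[(u, v)] = c
    return phi
```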
30
+
31
+ We define
32
+
33
+ $$\epsilon_c = \sum_{(u,v) \in \binom{\mathcal{V}}{2}} \mathbb{I}\{\phi(u,v) = c\}$$
34
+
35
+ to be the *size* (i.e., number of 2-cliques) of attribute $c$ in the graph. Similarly, for the number of *triangles* (i.e., 3-cliques) we write $\tau_c$, $\tau_{b,c}$, and $\tau_{b,c,d}$ to denote the number of 3-cliques with three $c$-colored edges, two $b$-colored and one $c$-colored edge, and one edge of each of three edge-colors $b, c, d$, respectively.
samples/texts/2058102/page_3.md ADDED
@@ -0,0 +1,35 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Before proceeding, we highlight a relevant property of the mixed moments of the Dirichlet distribution (see Johnson and Kotz [1972]): if $r_1, \dots, r_s$ are non-negative and $(X_1, \dots, X_s) \sim \text{Dirichlet}(\theta_1, \dots, \theta_s)$ then
2
+
3
+ $$ (1) \quad E\left[\prod_{i=1}^{s} X_i^{r_i}\right] = \frac{\Gamma\left(\sum_{i=1}^{s} \theta_i\right)}{\Gamma\left(\sum_{i=1}^{s} (\theta_i + r_i)\right)} \prod_{i=1}^{s} \frac{\Gamma(\theta_i + r_i)}{\Gamma(\theta_i)} $$
4
+
5
+ where $\Gamma$ denotes Euler's standard Gamma function. With this property one can compute the exact moments of the Hajek projection of $T_2^W$ and $T_3^W$ under either hypothesis. We write $\nu_{(c)}^{(i)}$ to denote the $i$-th moment of $X_c$ in the null latent vector and $\nu_{(b,c)}^{(i,j)}$ to denote the joint $(i,j)$-moment of $(X_b, X_c)$. Similarly, we'll write $\mu_{(c)}^{(i)}$ to denote the $i$-th moment of $Y_c$ in the anomalous latent vector and $\mu_{(b,c)}^{(i,j)}$ to denote the joint $(i,j)$-moment of $(Y_b, Y_c)$.
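Equation (1) is most stably evaluated in log-space. Below is a small helper (the function name is ours) using the log-Gamma function; for low-order moments it reduces to the familiar closed forms, e.g. $E[X_1] = \theta_1/\sum_i \theta_i$.

```python
from math import lgamma, exp

def dirichlet_mixed_moment(theta, r):
    # E[prod_i X_i^{r_i}] for (X_1, ..., X_s) ~ Dirichlet(theta), per Eq. (1),
    # computed in log-space via the log-Gamma function for numerical stability
    log_m = lgamma(sum(theta)) - lgamma(sum(theta) + sum(r))
    for t, ri in zip(theta, r):
        log_m += lgamma(t + ri) - lgamma(t)
    return exp(log_m)
```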
6
+
7
+ ## 3 Analysis
8
+
9
+ We will appeal to Hajek's Projection method, detailed in Nowicki and Wierman [1988], in order to demonstrate the asymptotic normality of the fusion invariants in this article. This approach is outlined as follows: We define the *projection* of the fusion $T$ to be the centered sum of independent random variables
10
+
11
+ $$ T^* = \sum_{v \in V} E[T | \mathbf{X}^v] - (n-1)E[T]. $$
12
+
13
+ For both the size and triangle fusion we aim to show that
14
+
15
+ $$ \frac{T - E[T]}{\sqrt{\mathrm{Var}(T^*)}} = \frac{T - T^*}{\sqrt{\mathrm{Var}(T^*)}} + \frac{T^* - E[T]}{\sqrt{\mathrm{Var}(T^*)}} \xrightarrow{D} N(0, 1). $$
16
+
17
+ To this end, one appeals to Chebyshev's Inequality to show that $\mathrm{Var}(T - T^*) = o(\mathrm{Var}(T^*))$ (see Nowicki and Wierman [1988] for the detailed argument). Specifically, if
18
+
19
+ $$ \mathbb{P}\left\{\frac{|T - T^*|}{\sqrt{\mathrm{Var}(T^*)}} \geq \epsilon\right\} \leq \frac{\mathrm{Var}(T - T^*)}{\epsilon^2 \mathrm{Var}(T^*)} \to 0 $$
20
+
21
+ for any positive $\epsilon$, then
22
+
23
+ $$ \frac{T - T^*}{\sqrt{\mathrm{Var}(T^*)}} + \frac{T^* - E[T]}{\sqrt{\mathrm{Var}(T^*)}} \xrightarrow{D} N(0, 1) $$
24
+
25
+ by applying the Central Limit Theorem to the normalized sum of independent random variables in the second term of the left-hand side.
26
+
27
+ ### 3.1 The Attributed Size Fusion
28
+
29
+ For each $c \in [K]$ define
30
+
31
+ $$ \varepsilon_c = \sum_{(u,v) \in \binom{\mathcal{V}}{2}} \mathbb{I}\{\phi(u,v) = c\} $$
32
+
33
+ to be the number of edges of color $c$ in the graph. The linear attribute fusion with parameter $W = (w_1, \dots, w_K)$ is defined to be
34
+
35
+ $$ T_2^W = \sum_{c=1}^{K} w_c \varepsilon_c. $$
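Computing $T_2^W$ from data amounts to a weighted count of edge colors. A minimal sketch, using the same dictionary-based edge representation as above (our convention, not the paper's):

```python
from collections import Counter

def size_fusion(edges, w):
    # T_2^W = sum_c w_c * eps_c, where edges = {(u, v): color} and
    # w maps each color c in {1..K} to its weight w_c
    counts = Counter(edges.values())
    return sum(w[c] * k for c, k in counts.items())
```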
samples/texts/2058102/page_4.md ADDED
@@ -0,0 +1,46 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ### 3.1.1 The Attributed Size Fusion under $H_0$
2
+
3
+ We present all relevant terms within the Hajek Projection of the attributed size fusion under the null hypothesis.
4
+
5
+ The expectation of $T_2^W$ under the null is given by
6
+
7
+ $$E_0[T_2^W] = \binom{n}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2.$$
8
+
9
+ $E_0[T_2^W | \mathbf{X}^a]$ for any fixed $a \in \mathcal{V}$ is given by
10
+
11
+ $$E_0[T_2^W | \mathbf{X}^a] = (n-1) \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}] X_c^{(a)} + \binom{n-1}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2.$$
12
+
13
+ We can now evaluate $T^*$ under the null:
14
+
15
+ $$
16
+ \begin{aligned}
17
+ T^* &= \sum_{a \in [n]} E[T_2^W | \mathbf{X}^a] - (n-1)E[T_2^W] \\
18
+ &= (n-1) \sum_{a \in [n]} \sum_{1 \le c \le K} w_c [\nu_{(c)}^{(1)}] X_c^{(a)} - \binom{n}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2.
19
+ \end{aligned}
20
+ $$
21
+
22
+ The variance of this sum of independent and identically distributed random variables is
23
+
24
+ $$Var_0(T^*) = \Theta(n^3).$$
25
+
26
+ As $Var_0(T_2^W - T^*) \le \binom{n}{2}(2+1)^2 E[\mathbb{I}\{\phi(u,v) > 0\}] = o(Var_0(T^*))$ by the Cauchy-Schwarz Inequality (see Nowicki and Wierman [1988] for full details), we have the desired convergence to the standard normal distribution.
27
+
28
+ ### 3.1.2 The Attributed Size Fusion under $H_1$
29
+
30
+ For the alternative we write the edge attribution function as
31
+
32
+ $$\mathbb{P}\{\phi(u, v) = c\} = \begin{cases} Y_c^u Y_c^v & u, v \in [m], \\ Y_c^u X_c^v & u \in [m], v \in [n] - [m], \\ X_c^u X_c^v & u, v \in [n] - [m]. \end{cases}$$
33
+
34
+ We perform a similar but more involved analysis to deduce the limiting distribution of the attributed size fusion of the graph under these conditions, yielding
35
+
36
+ $$Var_1(T^*) = \Theta(n^3)$$
37
+
38
+ and
39
+
40
+ $$Var_1(T_2^W - T^*) = \Theta(n^2) = o(Var_1(T^*))$$
41
+
42
+ as desired.
43
+
44
+ ### 3.1.3 Asymptotic Power Analysis of the Attributed Size Fusion
45
+
46
+ Returning to the context of hypothesis testing, assume we are interested in performing an $\alpha$-level hypothesis test to determine whether or not the graph includes an anomalous set of $m$ vertices whose underlying latent distribution differs from the null component of the graph. We define $\beta_2^W = \lim_{n \to \infty} \mathbb{P}_1\{T_2^W > c_\alpha\}$ where $c_\alpha = c(\alpha, n)$ is the $\alpha$-level critical value of the test.
samples/texts/2058102/page_5.md ADDED
@@ -0,0 +1,41 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Fix $c \in [K]$. The difference between the corresponding terms of the means under the two hypotheses can be written as
2
+
3
+ $$E_1[\varepsilon_c] - E_0[\varepsilon_c] = D_1^{(c)} + D_2^{(c)} = \binom{m}{1} \binom{n-m}{1} \nu_{(c)}^{(1)} (\mu_{(c)}^{(1)} - \nu_{(c)}^{(1)}) + \binom{m}{2} \left( [\mu_{(c)}^{(1)}]^2 - [\nu_{(c)}^{(1)}]^2 \right)$$
4
+
5
+ (here $D_i^{(c)}$ corresponds to the edge-count that includes edges with exactly $i$ anomalous vertices). The reader can verify that $\frac{Var_1(T^*)}{Var_0(T^*)} \to 1$. Moreover, given that the limiting distribution (under the null) is normal, we write
6
+
7
+ $$c_{\alpha} = z_{\alpha} \sqrt{Var_0(T^*)} + E_0[T_2^W]$$
8
+
9
+ and thus
10
+
11
+ $$\beta_2^W = \mathbb{P} \left\{ Z > z_\alpha - \lim_{n \to \infty} \left( \frac{E_1[T_2^W] - E_0[T_2^W]}{\sqrt{Var_0(T^*)}} \right) \right\}.$$
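Given the limiting shift $\lim_n (E_1[T_2^W] - E_0[T_2^W])/\sqrt{Var_0(T^*)}$, the limiting power is a one-line computation with the standard normal distribution. A small helper (names ours):

```python
from statistics import NormalDist

def limiting_power(alpha, shift):
    # beta = P{Z > z_alpha - shift} for Z ~ N(0, 1), where shift is the
    # limit of (E_1[T] - E_0[T]) / sqrt(Var_0(T*))
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return 1.0 - NormalDist().cdf(z_alpha - shift)
```

A zero shift recovers the size $\alpha$ of the test, and a diverging shift drives the power to one.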
12
+
13
+ Recall that $Var_0(T^*) = \Theta(n^3)$; thus, if
14
+
15
+ $$\sum_c w_c D_1^{(c)} \neq 0$$
16
+
17
+ (i.e. there is signal in the null-to-anomaly connectivity) then the limiting power $\beta_2^W > \alpha$ when $\frac{m(n-m)}{\sqrt{n^3}}$ is bounded away from zero or, equivalently, when $m = \Omega(\sqrt{n})$ (similarly, if $m = \omega(\sqrt{n})$ then $\beta_2^W \to 1$). Furthermore, if
18
+
19
+ $$\sum_c w_c D_1^{(c)} = 0$$
20
+
21
+ (i.e. there is no signal in the null-to-anomaly connectivity) then the limiting power $\beta_2^W > \alpha$ when $\sum_c w_c D_2^{(c)} \neq 0$ and $\frac{m^2}{\sqrt{n^3}}$ is bounded away from zero (which is equivalent to $m = \Omega(\sqrt[4]{n^3})$). Moreover, if $m = \omega(\sqrt[4]{n^3})$ under these conditions then $\beta_2^W \to 1$.
22
+
23
+ It follows that the optimal choice of weights ($w_1, \dots, w_K$) is the one which maximizes the expression
24
+
25
+ $$\lim_{n \to \infty} \left( \frac{E_1[T_2^W] - E_0[T_2^W]}{\sqrt{Var_0(T^*)}} \right)$$
26
+
27
+ in either of the two above-mentioned cases.
28
+
29
+ ## 3.2 The Attributed Number of Triangles Fusion
30
+
31
+ We begin by writing
32
+
33
+ $$\tau = \sum_{c \in [K]} \tau_c + \sum_{b \neq c} \tau_{b,c} + \sum_{d \neq b,c} \tau_{b,c,d}.$$
34
+
35
+ We define the number-of-triangles fusion invariant to be
36
+
37
+ $$T_3^W = \sum_{c \in [K]} w_c \tau_c + \sum_{b \neq c} w_{b,c} \tau_{b,c} + \sum_{d \neq b,c} w_{b,c,d} \tau_{b,c,d}.$$
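A brute-force sketch of $T_3^W$, keying the weights by the sorted color triple of each 3-clique (our own encoding of $w_c$, $w_{b,c}$, and $w_{b,c,d}$: e.g. `w[(c, c, c)]` plays the role of $w_c$ and `w[(b, b, c)]` that of $w_{b,c}$):

```python
from itertools import combinations

def triangle_fusion(n, edges, w):
    # T_3^W: sum over all 3-cliques of the weight attached to the
    # (sorted) triple of edge colors; edges = {(u, v): color} with u < v
    total = 0.0
    for u, v, x in combinations(range(n), 3):
        cs = [edges.get(p) for p in ((u, v), (u, x), (v, x))]
        if None not in cs:  # all three edges present: a 3-clique
            total += w[tuple(sorted(cs))]
    return total
```

The enumeration is $O(n^3)$, which is fine for illustration but would be replaced by neighborhood intersection in practice.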
38
+
39
+ Similar to what was done in the previous section, we obtain
40
+
41
+ $$E_0[T^*] = \binom{n}{3} \left[ \sum_{c \in [K]} w_c [\nu_{(c)}^{(2)}]^3 + \sum_{b \neq c} w_{b,c} 3\nu_{(b)}^{(2)} [\nu_{(b,c)}^{(1,1)}]^2 + \sum_{d \neq b,c} w_{b,c,d} 3\nu_{(b,c)}^{(1,1)} \nu_{(b,d)}^{(1,1)} \nu_{(c,d)}^{(1,1)} \right]$$
samples/texts/2058102/page_6.md ADDED
@@ -0,0 +1,44 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ and
2
+
3
+ $$Var_0(T^*) = \Theta\left(n\binom{n-1}{2}^2\right)$$
4
+
5
+ under the null and
6
+
7
+ $$E_1[T^*] = \Theta\left(\sum_{i=0}^{3} \binom{m}{i} \binom{n-m}{3-i}\right)$$
8
+
9
+ and
10
+
11
+ $$Var_1(T^*) = \Theta\left(n\binom{n-1}{2}^2\right)$$
12
+
13
+ under the alternative. Again, $Var_0(T_3^W - T^*) = o(Var_0(T^*))$ and $Var_1(T_3^W - T^*) = o(Var_1(T^*))$.
14
+
15
+ As in the case with the attributed size fusion, we are interested in performing an $\alpha$-level hypothesis test.
16
+
17
+ The terms within the difference in means can be expressed as
18
+
19
+ $$
20
+ \begin{align*}
21
+ E_1[T^*] - E_0[T^*] &= \sum_{c \in [K]} D^{(c)} + \sum_{b \neq c} D^{(b,c)} + \sum_{d \neq b, c} D^{(b,c,d)} \\
22
+ &= \Theta\left(\sum_{i=1}^{3} \binom{m}{i} \binom{n-m}{3-i} \delta_i(H, \Lambda)\right)
23
+ \end{align*}
24
+ $$
25
+
26
+ where $\delta_i(H, \Lambda)$ is the mixed-moments difference when there are $i$ anomalous vertices in a 3-clique.
27
+
28
+ The reader can verify that $\frac{Var_1(T^*)}{Var_0(T^*)} \to 1$. Since $Var_0(T^*) = \Theta(n^5)$, we have that the limiting power $\beta_3^W > \alpha$ when $m = \Omega(\sqrt[2i]{n^{2i-1}})$ (so that $\binom{m}{i} \binom{n-m}{3-i} = \Omega(\sqrt{n^5})$) and the corresponding mixed-moments expression $\delta_i(H, \Lambda)$ is non-zero.
29
+
30
+ # 4 Conclusion
31
+
32
+ We have presented preliminary results for linear attribute fusion for clique sizes $q = 2$ and 3 in terms of inferential power when detecting the prescribed anomaly within our model. In general, the most powerful choice of $q$ depends on $m$ as a function of $n$ and on the Dirichlet parameter vectors $\lambda$ and $\eta$ through the mixed moments.
33
+
34
+ ## References
35
+
36
+ B. Bollobas, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. *Random Structures and Algorithms*, 31:3–122, 2007.
37
+
38
+ N. Johnson and S. Kotz. *Distributions in Statistics: Continuous Multivariate Distributions*. Wiley, New York, 1972.
39
+
40
+ N. Lee and C. E. Priebe. A Latent Process Model for Time Series of Attributed Random Graphs. *Statistical Inference for Stochastic Processes*, forthcoming.
41
+
42
+ K. Nowicki and J. Wierman. Subgraph counts in random graphs using incomplete $u$-statistics methods. *Discrete Mathematics*, 72:299–310, 1988.
43
+
44
+ C. E. Priebe, N. Lee, Y. Park, and M. Tang. Attribute fusion in a latent process model for time series of graphs. *The 2011 IEEE Workshop on Statistical Signal Processing (SSP2011)*, 2011.
samples/texts/2298363/page_1.md ADDED
@@ -0,0 +1,33 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Fairwashing Explanations with Off-Manifold Detergent
2
+
3
+ Christopher J. Anders¹ Plamen Pasliev¹ Ann-Kathrin Dombrowski¹ Klaus-Robert Müller¹,²,³ Pan Kessel¹
4
+
5
+ ## Abstract
6
+
7
+ Explanation methods promise to make black-box classifiers more transparent. As a result, it is hoped that they can act as proof for a sensible, fair and trustworthy decision-making process of the algorithm and thereby increase its acceptance by the end-users. In this paper, we show both theoretically and experimentally that these hopes are presently unfounded. Specifically, we show that, for any classifier $g$, one can always construct another classifier $\tilde{g}$ which has the same behavior on the data (same train, validation, and test error) but has arbitrarily manipulated explanation maps. We derive this statement theoretically using differential geometry and demonstrate it experimentally for various explanation methods, architectures, and datasets. Motivated by our theoretical insights, we then propose a modification of existing explanation methods which makes them significantly more robust.
8
+
9
+ ## 1. Introduction
10
+
11
+ Explanation methods⁴ are increasingly adopted by machine learning practitioners and incorporated into standard deep learning libraries (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018). The interest in explainability is partly driven by the hope that explanations can act as proof for a sensible, fair, and trustworthy decision-making process (Aïvodji et al., 2019; Lapuschkin et al., 2019). As an example, a bank could provide explanations for its rejection of a loan application. By doing so, the bank can demonstrate that the decision was not based on illegal or ethically
12
+
13
+ questionable features. It can furthermore provide feedback to the customer. In some situations, an explanation of an algorithmic decision may even be required by law.
14
+
15
+ However, this hope is based on the assumption that explanations faithfully reflect the underlying mechanisms of the algorithmic decision. In this work, we demonstrate unequivocally that this assumption should not be made carelessly because explanations can be easily manipulated.
16
+
17
+ In more detail, we show theoretically that for any classifier $g$, one can always find another classifier $\tilde{g}$ which agrees with the original $g$ on the entire data manifold but has (almost) completely controlled explanations. This surprising result is established using techniques of differential geometry. We then demonstrate experimentally that one can easily construct such manipulated classifiers $\tilde{g}$.
18
+
19
+ In the example above, a bank could use a manipulated classifier $\tilde{g}$ that uses mainly unethical features, such as the gender of the applicant, but has explanations which suggest that the decision was only based on financial features.
20
+
21
+ Briefly put, the manipulability of explanations arises from the fact that the data manifold is typically low-dimensional compared to its high-dimensional embedding space. The training process only determines the classifier in directions along the manifold. However, many explanation methods are mainly sensitive to directions orthogonal to the data manifold. Since these directions are undetermined by training, they can be changed at will.
22
+
23
+ This theoretical insight allows us to propose a modification to explanation methods which make them significantly more robust with respect to such manipulations. Namely, the explanation is projected along tangential directions of the data manifold. We show, both theoretically and experimentally, that these tangent-space-projected (tsp) explanations are indeed significantly more robust. We thereby establish a novel and exciting connection between the fields of explainability and manifold learning.
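The tangential projection underlying the proposed tsp explanations can be sketched as follows. This assumes a basis of the tangent space at $x$ is available; in practice it would be estimated, e.g. with a manifold-learning or generative model, and the function name is ours.

```python
import numpy as np

def project_to_tangent(h, T):
    # Orthogonal projection of an explanation h in R^D onto the span of the
    # columns of T, a (D x d) matrix whose columns span the tangent space T_xS
    coeff, *_ = np.linalg.lstsq(T, h, rcond=None)
    return T @ coeff
```

Components of $h$ orthogonal to the tangent space, which are exactly the directions a manipulated classifier can exploit, are discarded by the projection.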
24
+
25
+ In summary, our main contributions are as follows:
26
+
27
+ ¹Machine Learning Group, Technische Universität Berlin, Germany
28
+ ²Max-Planck-Institut für Informatik, Saarbrücken, Germany
29
+ ³Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea. Correspondence to: Pan Kessel <pan.kessel@tu-berlin.de>, Klaus-Robert Müller <klaus-mueller@tu-berlin.de>.
30
+
31
+ ⁴See (Samek et al., 2019) and references therein for a detailed overview.
32
+
33
+ * Using differential geometry, we establish theoretically that popular explanation methods can be easily manipulated.
samples/texts/2298363/page_10.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Lee, J. M. *Introduction to Smooth Manifolds*. Springer, 2012.
2
+
3
+ Montavon, G., Lapuschkin, S., Binder, A., Samek, W., and Müller, K.-R. Explaining nonlinear classification decisions with deep taylor decomposition. *Pattern Recognition*, 65:211–222, 2017.
4
+
5
+ Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*. Springer, 2019. ISBN 978-3-030-28953-9. doi: 10.1007/978-3-030-28954-6.
6
+
7
+ Shao, H., Kumar, A., and Thomas Fletcher, P. The Riemannian geometry of deep generative models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops*, pp. 315–323, 2018.
8
+
9
+ Shrikumar, A., Greenside, P., and Kundaje, A. Learning Important Features Through Propagating Activation Differences. In *Proceedings of the 34th International Conference on Machine Learning*, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 3145–3153, 2017. URL http://proceedings.mlr.press/v70/shrikumar17a.html.
10
+
11
+ Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In *3rd International Conference on Learning Representations*, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1409.1556.
12
+
13
+ Simonyan, K., Vedaldi, A., and Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In *2nd International Conference on Learning Representations*, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6034.
14
+
15
+ Sundararajan, M., Taly, A., and Yan, Q. Axiomatic Attribution for Deep Networks. In *Proceedings of the 34th International Conference on Machine Learning*, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 3319–3328, 2017. URL http://proceedings.mlr.press/v70/sundararajan17a.html.
samples/texts/2298363/page_2.md ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ * We validate our theoretical predictions in detailed experiments for various explanation methods, classifier architectures, and datasets, as well as for different tasks.
2
+
3
+ * We propose a modification to existing explanation methods which make them more robust with respect to these manipulations.
4
+
5
+ * In doing so, we relate explainability to manifold learning.
6
+
7
+ ## 1.1. Related Works
8
+
9
+ This work was crucially inspired by (Heo et al., 2019). In this reference, adversarial model manipulation for explanations is proposed. Specifically, the authors empirically show that one can train models such that they have structurally different explanations while suffering only a very mild drop in classification accuracy compared to their unmanipulated counterparts. For example, the adversarial model manipulation can change the positions of the most relevant pixels in each image or increase the overall sum of relevances in a certain subregion of the images. Contrary to their work, we analyze this problem theoretically. Our analysis leads us to demonstrate a stronger form of manipulability. Namely, the model can be manipulated such that it structurally reproduces arbitrary target explanations while keeping all class probabilities the same for all data points. Our theoretical insights not only illuminate the underlying reasons for the manipulability but also allow us to develop modifications of existing explanation methods which make them more robust. Another approach (Kindermans et al., 2019) adds a constant shift to the input image, which is then eliminated by changing the bias of the first layer. For some methods, this leads to a change in the explanation map. Contrary to our approach, this requires a shift in the data. In (Adebayo et al., 2018), explanation maps are changed by randomization of (some of) the network weights. This is different to our method as it dramatically changes the output of the network and is proposed as a consistency check of explanations. In (Dombrowski et al., 2019) and (Ghorbani et al., 2019), it is shown that explanations can be manipulated by an infinitesimal change in input while the output of the network is approximately unchanged. Contrary to this approach, we manipulate the model and keep the input unchanged.
10
+
11
+ ## 1.2. Explanation Methods
12
+
13
+ We consider a classifier $g: \mathbb{R}^D \to \mathbb{R}^K$ which classifies an input $x \in \mathbb{R}^D$ in $K$ categories with the predicted class given by $k = \arg\max_i g(x)_i$. The explanation method is denoted by $h_g: \mathbb{R}^D \to \mathbb{R}^D$ and associates an input $x$ with an explanation map $h_g(x)$ whose components encode the relevance score of each input for the classifier's prediction.
14
+
15
+ We note that, by convention, explanation maps are usually calculated with respect to the classifier before applying the final softmax non-linearity (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018). Throughout the paper, we will therefore denote this function as $g$.
16
+
17
+ We use the following explanation methods:
18
+
19
+ **Gradient:** The map $h_g(x) = \frac{\partial g}{\partial x}(x)$ is used and quantifies how infinitesimal perturbations in each pixel change the prediction $g(x)$ (Simonyan et al., 2014; Baehrens et al., 2010).
20
+
21
+ **x ⊙ Grad:** This method uses the map $h_g(x) = x \odot \frac{\partial g}{\partial x}(x)$ (Shrikumar et al., 2017). For linear models, the exact contribution of each pixel to the prediction is obtained.
22
+
23
+ **Integrated Gradients:** This method defines
24
+
25
+ $$h_g(x) = (x - \bar{x}) \odot \int_0^1 \frac{\partial g(\bar{x} + t(x - \bar{x}))}{\partial x} dt$$
26
+
27
+ where $\bar{x}$ is a suitable baseline. We refer to the original reference (Sundararajan et al., 2017) for more details.
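A minimal numerical sketch of this definition, approximating the path integral with a midpoint Riemann sum. Here `grad_fn` is an assumed callable returning $\partial g/\partial x$ at a point; for a linear model the result is exact.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=1000):
    # Midpoint-rule approximation of the integral in the IG definition:
    # (x - baseline) ⊙ mean over t of grad g(baseline + t (x - baseline))
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)
```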
28
+
29
+ **Layer-wise Relevance Propagation (LRP):** This method (Bach et al., 2015; Montavon et al., 2017) propagates relevance backwards through the network. In our experiments, we use the following setup: for the output layer, relevance is given by
30
+
31
+ $$R_i^L = \delta_{i,k} = \begin{cases} 1, & \text{for } i=k \\ 0, & \text{for } i \neq k \end{cases},$$
32
+
33
+ which is then propagated backwards through all layers but the first using the $z^+$-rule
34
+
35
+ $$R_i^l = \sum_j \frac{x_i^l(W^l)_{ji}^+}{\sum_{i'} x_{i'}^l(W^l)_{ji'}^+ + \epsilon} R_j^{l+1}, \quad (1)$$
36
+
37
+ where $(W^l)^+$ denotes the positive weights of the $l$-th layer, $x^l$ is the activation vector of the $l$-th layer, and $\epsilon > 0$ is a small constant ensuring numerical stability. For the first layer, we use the $z^B$-rule to account for the bounded input domain
38
+
39
+ $$R_i^0 = \sum_j \frac{x_i^0 W_{ji}^0 - l_i(W^0)_{ji}^+ - h_i(W^0)_{ji}^-}{\sum_{i'} \left(x_{i'}^0 W_{ji'}^0 - l_{i'}(W^0)_{ji'}^+ - h_{i'}(W^0)_{ji'}^-\right)} R_j^1,$$
40
+
41
+ where $l_i$ and $h_i$ are the lower and upper bounds of the input domain respectively.
42
+
43
+ For theoretical analysis, we consider the $\epsilon$-rule in all layers for simplicity. This rule is obtained by substituting $(W^l)^+ \to W^l$ in (1). We refer to the resulting method as $\epsilon$-LRP.
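A compact sketch of the $\epsilon$-rule for a small fully-connected network. This is our own minimal implementation, not the authors' code: biases are omitted, a ReLU follows every layer for simplicity, and the sign-dependent stabilizer is dropped (which is harmless for positive pre-activations).

```python
import numpy as np

def eps_lrp(weights, x, R_out, eps=1e-6):
    # Forward pass through a bias-free ReLU MLP; weights[l] has shape (d_out, d_in)
    acts = [np.asarray(x, dtype=float)]
    for W in weights:
        acts.append(np.maximum(W @ acts[-1], 0.0))
    # Backward epsilon-rule:
    #   R_i^l = sum_j a_i W_ji / (sum_{i'} a_{i'} W_{ji'} + eps) R_j^{l+1}
    R = np.asarray(R_out, dtype=float)
    for W, a in zip(reversed(weights), reversed(acts[:-1])):
        z = W @ a + eps                # stabilized pre-activations of layer l+1
        R = a * (W.T @ (R / z))        # redistribute relevance to layer l
    return R
```

For positive weights and inputs, the rule is (approximately) relevance-conserving: the input relevances sum to the output relevance up to the $\epsilon$-stabilizer.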
44
+
45
+ This choice of methods is necessarily not exhaustive. However, it covers two classes of attribution methods, i.e. propagation and gradient-based explanations. Furthermore, the
samples/texts/2298363/page_3.md ADDED
@@ -0,0 +1,51 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ chosen methods are widely used in practice (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018).
2
+
3
+ ## 2. Manipulation of Explanations
4
+
5
+ In this section, we will theoretically deduce that explanation methods can be arbitrarily manipulated by adversarially training a model.
6
+
7
+ ### 2.1. Mathematical Background
8
+
9
+ In the following, we will briefly summarize the basic tools of differential geometry before applying them in the context of explainability in the next section. For additional technical details, we refer to Appendix A.1.
10
+
11
+ A $D$-dimensional manifold $M$ is a topological space which locally resembles $\mathbb{R}^D$. More precisely, for each $p \in M$, there exists a subset $U \subset M$ containing $p$ and a diffeomorphism $\phi: U \to \tilde{U} \subset \mathbb{R}^D$. The pair $(U, \phi)$ is called coordinate chart and the component functions $x^i$ of $\phi(p) = (x^1(p), \dots, x^D(p))$ are called coordinates.
12
+
13
+ A $d$-dimensional submanifold $S$ is a subset of $M$ which is itself a $d$-dimensional manifold. $M$ is called the embedding manifold of $S$. A properly embedded submanifold $S \subset M$ is a submanifold embedded in $M$ which is also closed as a set.
14
+
15
+ Let $p \in M$ be a point on a manifold $M$ and $\gamma: \mathbb{R} \to M$ with $\gamma(0) = p$ a curve through the point $p$. The set of tangent vectors $d\gamma = \frac{d}{dt}\gamma(t)|_{t=0}$ of all curves through $p$ forms a vector space of dimension $D$. This vector space is known as tangent space $T_pM$. Let $(U, \phi)$ be a coordinate chart on $M$ with coordinates $x$. We can then define $\phi \circ \lambda_k(t) = (x^1(p), \dots, x^k(p) + t, \dots, x^D(p))$ with $k \in \{1, \dots, D\}$. This implicitly defines curves $\lambda_k: \mathbb{R} \to M$ through $p$. We denote the corresponding tangent vectors as $\partial_k := \frac{d}{dt}\lambda_k(t)|_{t=0}$ and it can be shown that they form a basis of the tangent space $T_pM$.
16
+
17
+ A vector field $V$ on $M$ associates with every point $x \in M$ an element of the corresponding tangent space, i.e. $V(x) \in T_x M$.⁵ A conservative vector field $V$ is a vector field that is the gradient of a function $f: M \to \mathbb{R}$, i.e. $V(x) = \nabla f(x)$. For submanifolds $S$, there are two different notions of vector fields. A vector field $V$ on the submanifold $S$ associates to every point on $S$ a vector in its corresponding tangent space $T_x S$, i.e. $V(x) \in T_x S$. A vector field $V$ along the submanifold $S$ associates to every point on $S$ a vector in the corresponding tangent space of the embedding manifold $M$, i.e. $V(x) \in T_x M$. These concepts can be related as follows: the tangent space $T_x M$ can be decomposed into the tangent space $T_x S$ of $S$ and its orthogonal complement $T_x S^\perp$, i.e. $T_x M = T_x S \oplus T_x S^\perp$. A vector field along $S$ which only takes values in the first summand $T_x S$ is also a vector field on $S$.
22
+
23
+ With these definitions, we can now state a crucial theorem for our theoretical analysis. In Appendix A.1, we show that:
24
+
25
+ **Theorem 1** Let $S \subset M$ be a $d$-dimensional submanifold properly embedded in the $D$-dimensional manifold $M$. Let $V = \sum_{i=d+1}^{D} v^i \partial_i$ be a conservative vector field along $S$ which assigns a vector in $T_p S^\perp$ for each $p \in S$. For any smooth function $f: S \to \mathbb{R}$, there exists a smooth extension $F: M \to \mathbb{R}$ such that
26
+
27
+ $$F|_S = f$$
28
+
29
+ where $F|_S$ denotes the restriction of $F$ on the submanifold $S$. Furthermore, the derivative of the extension $F$ is given by
30
+
31
+ $$\nabla F(x) = (\nabla_1 f(x), \dots, \nabla_d f(x), v^{d+1}(x), \dots, v^D(x))$$
32
+
33
+ for all $x \in S$.
34
+
35
+ Technical details notwithstanding, this theorem states that a function $f$ defined on a submanifold $S$ can be extended to the entire embedding manifold $M$. The extension's derivatives orthogonal to the submanifold $S$ can be freely chosen.
36
+
37
+ This theorem is a generalization of the well-known submanifold extension lemma (see, for example, Lemma 5.34 in (Lee, 2012)) in that it not only shows that an extension exists but also that one has control over the gradient of the extension $F$. While we could not find such a statement in the literature, we suspect that it is entirely obvious to differential geometers but typically not needed for their purposes.
38
+
39
+ ### 2.2. Explanation Manipulation: Theory
40
+
41
+ From Theorem 1, it follows under a mild assumption that one can always construct a model $\tilde{g}$ such that it closely reproduces arbitrary target explanations but has the same training, validation, and test loss as the original model $g$.
42
+
43
+ **Assumption:** the data lies on a $d$-dimensional submanifold $S \subset M$ properly embedded in the manifold $M = \mathbb{R}^D$. The data manifold $S$ is of much lower dimensionality than its embedding space $M$, i.e.
44
+
45
+ $$\epsilon \equiv \frac{d}{D} \ll 1. \qquad (2)$$
46
+
47
+ We stress that this assumption is also known as the manifold conjecture and is expected to hold across a wide range of machine learning tasks. We refer to (Goodfellow et al., 2016) for a detailed discussion.
48
+
49
+ Under this assumption, the following theorem can be derived for the Gradient, $x \odot \text{Grad}$, and $\epsilon$-LRP methods (only the proof for the Gradient method is given; see Appendix A.2 for the other methods):
50
+
51
+ ⁵More rigorously, vector fields are defined in terms of the tangent bundle. We refrain from introducing bundles for accessibility.
samples/texts/2298363/page_4.md ADDED
@@ -0,0 +1,69 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ **Theorem 2** Let $h_g: \mathbb{R}^D \to \mathbb{R}^D$ be the explanation of classifier $g: \mathbb{R}^D \to \mathbb{R}$ with bounded derivatives $|\nabla_i g(x)| \le C \in \mathbb{R}_+$ for $i = 1, \dots, D$.
2
+
3
+ For a given target explanation $h^t: \mathbb{R}^D \to \mathbb{R}^D$, there exists another classifier $\tilde{g}: \mathbb{R}^D \to \mathbb{R}$ which completely agrees with the classifier $g$ on the data manifold $S$, i.e.
4
+
5
+ $$\tilde{g}|_S = g|_S. \quad (3)$$
6
+
7
+ In particular, both classifiers have the same train, validation, and test loss.
8
+
9
+ However, its explanation $h_{\tilde{g}}$ closely resembles the target $h^t$, i.e.
10
+
11
+ $$\text{MSE}(h_{\tilde{g}}(x), h^t(x)) \le \epsilon \quad \forall x \in S, \quad (4)$$
12
+
13
+ where $\text{MSE}(h, h') = \frac{1}{D} \sum_{i=1}^{D} (h_i - h'_i)^2$ denotes the mean-squared error and $\epsilon = \frac{d}{D}$.
14
+
15
+ **Proof:** By Theorem 1, we can find a function $G$ which agrees with $g$ on the data manifold $S$ but has the derivative
16
+
17
+ $$\nabla G(x) = (\nabla_1 g(x), \dots, \nabla_d g(x), h_{d+1}^t(x), \dots, h_D^t(x))$$
18
+
19
+ for all $x \in S$. By definition, this is its gradient explanation $h_G = \nabla G$.
20
+
21
+ As explained in Appendix A.2.1, we can assume without loss of generality that $|\nabla_i g(x)| \le 0.5$ for $i \in \{1, \dots, D\}$. We can furthermore rescale the target map such that $|h_i^t| \le 0.5$ for $i \in \{1, \dots, D\}$. This rescaling is merely conventional as it does not change the relative importance $h_i$ of any input component $x_i$ with respect to the others. It then follows that
22
+
23
+ $$\text{MSE}(h_G(x), h^t(x)) = \frac{1}{D} \sum_{i=1}^{D} (\nabla_i G(x) - h_i^t(x))^2 .$$
24
+
25
+ This sum can be decomposed as
26
+
27
+ $$\frac{1}{D} \sum_{i=1}^{d} \underbrace{\left(\nabla_i g(x) - h_i^t(x)\right)^2}_{\le 1} + \frac{1}{D} \sum_{i=d+1}^{D} \underbrace{\left(\nabla_i G(x) - h_i^t(x)\right)^2}_{=0}$$
28
+
29
+ and from this, it follows that
30
+
31
+ $$\text{MSE}(h_G(x), h^t(x)) \le \frac{d}{D} = \epsilon.$$
32
+
33
+ The proof then concludes by identifying $\tilde{g} = G$. □
34
+
35
+ **Intuition:** Somewhat roughly, this theorem can be understood as follows: two models which behave identically on the data need only agree on the low-dimensional submanifold $S$. The gradients "orthogonal" to the submanifold $S$ are completely undetermined by this requirement. By the manifold assumption, there are however many more "orthogonal" than "parallel" directions, and the explanation is therefore largely controlled by these. We can use this fact to closely reproduce an arbitrary target while keeping the function's values on the data unchanged.
36
+
37
+
38
+
39
+ We stress however that a number of non-trivial differential geometric arguments are needed in order to make these statements rigorous and quantitative. For example, it is entirely non-trivial that an extension to the embedding manifold exists for an arbitrary choice of target explanation. This is shown by Theorem 1, whose proof is based on a differential geometric technique called a partition of unity subordinate to an open cover. See Appendix A.1 for details.
40
+
41
+ ## 2.3. Explanation Manipulation: Methods
42
+
43
+ **Flat Submanifolds and Logistic Regression:** The previous theorem assumes that the data lies on an arbitrarily curved submanifold and therefore has to rely on relatively involved mathematical concepts of differential geometry. We will now illustrate the basic ideas in a much simpler context: we will assume that the data lies on a $d$-dimensional flat hyperplane $S \subset \mathbb{R}^D$.⁶ The points on the hyperplane $S$ obey the relation
44
+
45
+ $$\forall x \in S : (\hat{w}^{(i)})^T x = b_i, \quad i \in \{1, \dots, D-d\}, \quad (5)$$
46
+
47
+ where $\{\hat{w}^{(i)} \in \mathbb{R}^D | i = 1, \dots, D-d\}$ is a set of normal vectors to the hyperplane $S$ and $b_i \in \mathbb{R}$ are the affine translations. We furthermore assume that we use logistic regression as the classification algorithm, i.e.
48
+
49
+ $$g(x) = \sigma(w^T x + c), \quad (6)$$
50
+
51
+ where $w \in \mathbb{R}^D$, $c \in \mathbb{R}$ are the weights and the bias respectively and $\sigma(x) = \frac{1}{1+\exp(-x)}$ is the sigmoid function. This classifier has the gradient explanation⁷
52
+
53
+ $$h_{\text{grad}}(x) = w. \quad (7)$$
54
+
55
+ We can now define a modified classifier by
56
+
57
+ $$\tilde{g}(x) = \sigma\left(w^T x + \sum_i \lambda_i \left((\hat{w}^{(i)})^T x - b_i\right) + c\right), \quad (8)$$
58
+
59
+ for arbitrary $\lambda_i \in \mathbb{R}$. By (5), it follows that both classifiers agree on the data manifold $S$, i.e.
60
+
61
+ $$\forall x \in S : g(x) = \tilde{g}(x), \quad (9)$$
62
+
63
+ and therefore have the same train, validation, and test error. However, the gradient explanations are now given by
64
+
65
+ $$h_{\text{grad}}(x) = w + \sum_{i} \lambda_{i} \hat{w}^{(i)}. \quad (10)$$
66
+
67
+ ⁶In mathematics, these submanifolds are usually referred to as $d$-flats and only the case $d = D - 1$ is called hyperplane. We refrain from this terminology.
68
+
69
+ ⁷We recall that in calculating the explanation map, we take the derivative before applying the final activation function.
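The construction of Eqs. (5)–(10) is easy to check numerically. The following is a minimal NumPy sketch, not the authors' code; the dimensions, the random hyperplane, and all weights are illustrative assumptions. It samples points on a random flat submanifold $S$ and verifies that $g$ and $\tilde{g}$ agree on $S$ while their gradient explanations differ.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 2                          # ambient and manifold dimension (illustrative)

# Orthonormal frame: the first d columns span the hyperplane S, the remaining
# D-d columns are the normal vectors w_hat^(i) of Eq. (5).
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))
tangent, normals = Q[:, :d], Q[:, d:]
b = rng.normal(size=D - d)           # affine translations b_i

# Points on S: x = x0 + tangent @ t, where (w_hat^(i))^T x0 = b_i
# (this holds because the columns of `normals` are orthonormal).
x0 = normals @ b
X = x0 + (tangent @ rng.normal(size=(d, 100))).T

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, c = rng.normal(size=D), 0.3       # logistic regression weights and bias, Eq. (6)
lam = rng.normal(size=D - d)         # arbitrary coefficients lambda_i

def g(x):
    """Original classifier, Eq. (6)."""
    return sigmoid(x @ w + c)

def g_tilde(x):
    """Manipulated classifier, Eq. (8)."""
    return sigmoid(x @ w + (x @ normals - b) @ lam + c)

h_g = w                              # gradient explanation of g, Eq. (7)
h_g_tilde = w + normals @ lam        # gradient explanation of g_tilde, Eq. (10)

assert np.allclose(g(X), g_tilde(X))     # Eq. (9): agreement on S
assert not np.allclose(h_g, h_g_tilde)   # but the explanations differ
```

Since `normals @ lam` can be any vector orthogonal to $S$, the explanation of $\tilde{g}$ can be shifted arbitrarily off-manifold without changing a single prediction on $S$.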
samples/texts/2298363/page_5.md ADDED
@@ -0,0 +1,39 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Since the $\lambda_i$ can be chosen freely, we can modify the explanations arbitrarily in directions orthogonal to the data submanifold $S$ (parameterized by the normal vectors $\hat{w}^{(i)}$). Similar statements can be shown for other explanation methods and we refer to the Appendix A.3 for more details.
2
+
3
+ As we will discuss in Section 2.4, one can use these tricks even for data which does not (initially) lie on a hyperplane.
4
+
5
+ **General Case:** For the case of arbitrary neural networks and curved data manifolds, we cannot analytically construct the manipulated model $\tilde{g}$. We therefore approximately obtain the model $\tilde{g}$ corresponding to the original model $g$ by minimizing the loss
6
+
7
+ $$ \mathcal{L} = \sum_{x_i \in \mathcal{T}} ||g(x_i) - \tilde{g}(x_i)||^2 + \gamma \sum_{x_i \in \mathcal{T}} ||h_{\tilde{g}}(x_i) - h^t||^2, \quad (11) $$
8
+
9
+ by stochastic gradient descent with respect to the parameters of $\tilde{g}$. The training set is denoted by $\mathcal{T}$ and $h^t \in \mathbb{R}^D$ is a specified target explanation. Note that we could also use different targets for various subsets of the data but we will not make this explicit to avoid cluttered notation. The first term in the loss $\mathcal{L}$ ensures that the models $g$ and $\tilde{g}$ have approximately the same output while the second term encourages the explanations of $\tilde{g}$ to closely reproduce the target $h^t$. The relative weighting of these two terms is determined by the hyperparameter $\gamma \in \mathbb{R}_+$.
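The loss above can be sketched in a few lines. This is an illustrative NumPy version, not the training code: the models are assumed to be plain callables, and central finite differences stand in for the automatic differentiation one would use in practice to obtain the gradient explanation.

```python
import numpy as np

def grad_explanation(model, x, eps=1e-5):
    """Gradient explanation h(x) = grad model(x), via central finite differences."""
    h = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        h[i] = (model(x + e) - model(x - e)) / (2 * eps)
    return h

def manipulation_loss(g, g_tilde, X, h_target, gamma):
    """The loss of Eq. (11): output-agreement term plus explanation-matching term."""
    out_term = sum((g(x) - g_tilde(x)) ** 2 for x in X)
    expl_term = sum(np.sum((grad_explanation(g_tilde, x) - h_target) ** 2) for x in X)
    return out_term + gamma * expl_term

# Sanity check: for g_tilde = g linear with gradient w and target h^t = w,
# both terms vanish.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
g = lambda x: float(w @ x)
X = [rng.normal(size=3) for _ in range(4)]
loss = manipulation_loss(g, g, X, h_target=w, gamma=1.0)
assert loss < 1e-8
```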
10
+
11
+ As we will demonstrate experimentally, the resulting $\tilde{g}$ will closely reproduce the target explanation $h^t$ and have (approximately) the same output as $g$. Crucially, both statements will be seen to hold also for the test set.
12
+
13
+ ## 2.4. Explanation Manipulation: Practice
14
+
15
+ In this section, we will demonstrate manipulation of explanations experimentally. We will first discuss applying logistic regression to credit assessment and then proceed to the case of deep neural networks in the context of image classification. The code for all our experiments is publicly available at https://github.com/fairwashing/fairwashing.
16
+
17
+ **Credit Assessment:** In the following, we will suppose that a bank uses a logistic regression algorithm to classify whether a prospective client should receive a loan or not. The classification uses the features $x = (x_{\text{gender}}, x_{\text{income}})$ where
18
+
19
+ $$ x_{\text{gender}} = \begin{cases} 1, & \text{for male} \\ -1, & \text{for female} \end{cases} \quad (12) $$
20
+
21
+ and $x_{\text{income}}$ is the income of the applicant. Normalization is chosen such that the features are of the same order of magnitude. Details can be found in the Appendix B.
22
+
23
+ Figure 1. x⊙Grad explanations for **original classifier g** and **manipulated $\tilde{g}$** highlight completely different features. Colored bars show the median of the explanations over multiple examples.
24
+
25
+ We then define a logistic regression classifier $g$ by choosing the weights $w = (0.9, 0.1)$, i.e. female applicants are severely discriminated against. The discriminating nature of the algorithm may be detected by inspecting, for example, the gradient explanation map $h_{g}^{\text{grad}} = w$.
26
+
27
+ Conversely, if the explanations did not show any sign of discrimination for another classifier $\tilde{g}$, the user may interpret this as a sign of its trustworthiness and fairness.
28
+
29
+ However, the bank can easily "fairwash" the explanations, i.e. hide the fact that the classifier is sexist. This can be done by adding new features which are linearly dependent on the previously used features. As a simple example, one could add the applicant's paid taxes $x_{\text{taxes}}$ as a feature. By definition, it holds that
30
+
31
+ $$ x_{\text{taxes}} = 0.4 x_{\text{income}}, \quad (13) $$
32
+
33
+ where we assume that there is a fixed tax rate of 0.4 on all income. The features used by the classifier are now $x = (x_{\text{gender}}, x_{\text{income}}, x_{\text{taxes}})$. By (13), all data samples $x$ obey
34
+
35
+ $$ \hat{w}^T x = 0 \quad \text{with} \quad \hat{w} = (0, 0.4, -1). \quad (14) $$
36
+
37
+ Therefore, the original classifier $g(x) = \sigma(w^T x)$ with $w = (0.9, 0.1, 0)$ leads to the same output as the classifier $\tilde{g}(x) = \sigma(w^T x + 1000 \hat{w}^T x)$. However, as shown in Figure 1, the classifier $\tilde{g}$ has explanations which suggest that the two financial features (and *not* the applicant's gender) are important for the classification result.
38
+
39
+ This example is merely an (oversimplified) illustration of a general concept: for each additional feature which linearly depends on the previously used features, a condition of the form (14) for some normal vector $\hat{w}$ is obtained. We can then construct a classifier with arbitrary explanation along each of these normal vectors.
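The numbers above can be reproduced in a few lines of NumPy; the synthetic income values are an illustrative assumption, everything else follows Eqs. (12)–(14).

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Features (gender, income, taxes), with taxes = 0.4 * income as in Eq. (13).
rng = np.random.default_rng(1)
gender = rng.choice([-1.0, 1.0], size=200)
income = rng.normal(size=200)
X = np.stack([gender, income, 0.4 * income], axis=1)

w = np.array([0.9, 0.1, 0.0])        # discriminatory classifier g
w_hat = np.array([0.0, 0.4, -1.0])   # normal vector of Eq. (14)

g_out = sigmoid(X @ w)
g_tilde_out = sigmoid(X @ (w + 1000.0 * w_hat))  # "fairwashed" classifier g_tilde

# Identical outputs on every data sample ...
assert np.allclose(g_out, g_tilde_out)

# ... but the gradient explanation of g_tilde barely mentions gender:
h = np.abs(w + 1000.0 * w_hat)       # |(0.9, 400.1, -1000)|
gender_share = h[0] / h.sum()
assert gender_share < 0.01           # less than 1% relative importance
```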
samples/texts/2298363/page_6.md ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ **Image Classification:** We will now experimentally demonstrate the practical applicability of our methods in the context of image classification with deep neural networks.
2
+
3
+ **Datasets:** We consider the MNIST, FashionMNIST, and CIFAR10 datasets. We use the standard training and test sets for our analysis. The data is normalized such that it has mean zero and standard deviation one. We sum the absolute values of the explanation's channels to obtain the relevance per pixel. The resulting relevances are then normalized to have a sum of one.
4
+
5
+ **Models:** For CIFAR10, we use the VGG16 (Simonyan & Zisserman, 2015) architecture. For FashionMNIST and MNIST, we use a four-layer convolutional neural network. We train the model $g$ by minimizing the standard cross-entropy loss for classification. The manipulated model $\tilde{g}$ is then trained by minimizing the loss (11) for a given target explanation $h^t$. This target was chosen to have the shape of the number 42. For more details about the architectures and training, we refer to the Appendix D.
6
+
7
+ **Quantitative Measures:** We assess the similarity between explanation maps using three quantitative measures: the structural similarity index (SSIM), the Pearson correlation coefficient (PCC) and the mean squared error (MSE). SSIM and PCC are relative similarity measures with values in [0, 1], where larger values indicate high similarity. The MSE is an absolute error measure for which values close to zero indicate high similarity. We also use the MSE metric as well as the Kullback-Leibler divergence for assessing similarity of the class scores of the manipulated model $\tilde{g}$ and the original network $g$.
8
+
9
+ **Results:** For all considered models, datasets, and explanation methods, we find that the manipulated model $\tilde{g}$ has explanations which closely resemble the target map $h^t$, e.g. the MSE between the target and manipulated explanations is of the order $10^{-3}$. At the same time, the manipulated network $\tilde{g}$ has approximately the same output as the original model $g$, i.e. the mean-squared error of the outputs after the final softmax non-linearity is of the order $10^{-3}$. The classification accuracy changes by about 0.2 percent.
10
+
11
+ Figure 2 illustrates this for examples from the FashionMNIST and CIFAR10 test sets. We stress that we use a single model for Gradient, x⊙Grad, and Integrated Gradient methods which demonstrates that the manipulation generalizes over all considered gradient-based methods.
12
+
13
+ The left-hand-side of Figure 3 shows quantitatively that the manipulated model $\tilde{g}$ closely reproduces the target map $h^t$ over the entire test set of FashionMNIST. We refer to the Appendix D for additional similarity measures, examples, and quantitative analysis for all datasets.
14
+
15
+ Figure 2. Example explanations from the original model $g$ (left) and the manipulated model $\tilde{g}$ (right). Images from the test sets of FashionMNIST (top) and CIFAR10 (bottom).
16
+
17
+ # 3. Robust Explanations
18
+
19
+ Having demonstrated both theoretically and experimentally that explanations are highly vulnerable to model manipulation, we will now use our theoretical insights to propose explanation methods which are significantly more robust under such manipulations.
20
+
21
+ ## 3.1. TSP Explanations: Theory
22
+
23
+ In this section, we will define a robust *gradient explanation* method. Appendix C discusses analogous definitions for other methods.
24
+
25
+ We can formally define an explanation field $H_g$ which associates to every point $x$ on the data manifold $S$ the corresponding gradient explanation $h_g(x)$ of the classifier $g$. We note that $H_g$ is generically a vector field along the manifold since $h_g(x) \in \mathbb{R}^D \cong T_x M$, i.e. it is an element of the tangent space $T_x M$ of the embedding manifold $M$ and not an element of the tangent space $T_x S$ of the data manifold $S$.
26
+
27
+ As explained in Section 2.1, we can decompose the tangent space $T_p M$ of the embedding manifold $M$ as follows
28
+ $$T_x M = T_x S \oplus T_x S^\perp.$$
29
+ Let $P : T_x M \to T_x S$ be the projection on the first summand of this decomposition. We stress that the form of the projector $P$ depends on the point $x \in S$ but we do not make this explicit in order to simplify notation. We can then define:
samples/texts/2298363/page_7.md ADDED
@@ -0,0 +1,54 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ **Definition 1** *The tangent-space-projected (tsp) explanation field $\hat{H}_g$ is a vector field on the data manifold $S$. It associates to each $x \in S$, the tangent-space-projected (tsp) explanation $\hat{h}_g(x)$ given by*
2
+
3
+ $$ \hat{h}_g(x) = (P \circ h_g)(x) \in T_x S. \quad (15) $$
4
+
5
+ Intuitively, the tsp-explanation $\hat{h}_g(x)$ is the explanation of
6
+ the model $g$ projected on the "tangential directions" of the
7
+ data manifold.
8
+
9
+ We recall from our discussion of Theorem 2 that we can
10
+ always find classifiers $\tilde{g}$ which coincide with the original
11
+ classifier $g$ on the data manifold $S$ but may differ in the
12
+ gradient components orthogonal to the data manifold, i.e.
13
+ for some $x \in S$ it holds that
14
+
15
+ $$ (1-P)\nabla g(x) \neq (1-P)\nabla \tilde{g}(x). $$
16
+
17
+ On the other hand, the components tangential to the
19
+ manifold $S$ agree
19
+
20
+ $$ P \nabla g(x) = P \nabla \tilde{g}(x), \quad \forall x \in S. $$
21
+
22
+ In other words, the tsp-gradient explanations of the original
23
+ model *g* and any such model $\tilde{g}$ are identical:
24
+
25
+ $$ \hat{h}_g(x) = \hat{h}_{\tilde{g}}(x) \quad \forall x \in S. \quad (16) $$
26
+
27
+ It can therefore be expected that tsp-explanations $\hat{h}_g$ are
28
+ significantly more robust compared to their unprojected
29
+ counterparts $h_g$.
30
+
31
+ For other explanation methods, the corresponding
33
+ tsp-explanations may be obtained using a slightly modified
33
+ projector *P*. We refer to Appendix C for more details.
34
+
35
+ ## 3.2. TSP Explanations: Methods
36
+
37
+ **Flat Submanifolds and Logistic Regression:** Recall from Section 2.3 that for a logistic regression model $g(x) = \sigma(w^T x + c)$ with gradient explanation $h_{g}^{\text{grad}} = w$, we can define a manipulated model
38
+
39
+ $$ \tilde{g}(x) = \sigma \left( w^T x + \sum_i \lambda_i (\hat{w}^{(i)})^T x - b_i + c \right) $$
40
+
41
+ with gradient explanation $h_{\tilde{g}}^{\text{grad}} = w + \sum_i \lambda_i \hat{w}^{(i)}$ for arbitrary $\lambda_i \in \mathbb{R}$. Since the vectors $\hat{w}^{(i)}$ are normal to the data hyperplane $S$, it holds that $P\hat{w}^{(i)} = 0$. As a result, the gradient tsp-explanations of the original model $g$ and its manipulated counterpart $\tilde{g}$ are identical, i.e.
42
+
43
+ $$ \hat{h}_{g}^{\text{grad}} = \hat{h}_{\tilde{g}}^{\text{grad}} = Pw. \quad (17) $$
44
+
45
+ We discuss the case of other explanation methods in the
46
+ Appendix C.1.
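Equation (17) can be verified numerically for a random flat data manifold; the dimensions and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 4, 2                          # illustrative ambient and manifold dimension

Q, _ = np.linalg.qr(rng.normal(size=(D, D)))
tangent, normals = Q[:, :d], Q[:, d:]    # bases of T_xS and its orthogonal complement

P = tangent @ tangent.T              # projector onto the flat data manifold

w = rng.normal(size=D)               # weights of the original model g
lam = rng.normal(size=D - d)
w_tilde = w + normals @ lam          # weights of a manipulated model g_tilde

# Eq. (17): the tsp-gradient explanations coincide, P w = P w_tilde,
# since the manipulation lives entirely in the normal directions.
assert np.allclose(P @ w, P @ w_tilde)
```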
47
+
48
+ Figure 3. Left: SSIM of the target map $h^t$ and explanations of **original model** *g* and **manipulated** $\tilde{g}$ respectively. Clearly, the manipulated model $\tilde{g}$ has explanations which closely resemble the target map $h^t$ over the entire FashionMNIST test set. Right: Same as on the left but for **tsp-explanations**. The model $\tilde{g}$ was trained to manipulate the tsp-explanation. Evidently, tsp-explanations are considerably more robust than their unprojected counterparts on the left. Colored bars show the median. Errors denote the 25th and 75th percentile. Other similarity measures show similar behaviour and can be found in Appendix D.
49
+
50
+ Figure 4. x⊙Grad tsp-explanations for **original classifier g** and **manipulated $\tilde{g}$** highlight the same features. Colored bars show the median of the explanations over multiple examples.
51
+
52
+ **General Case:** In many practical applications, we do not know the explicit form of the projection matrix *P*. In these situations, we propose to construct *P* by one of the following two methods:
53
+
54
+ **Hyperplane method:** for a given datapoint $x \in S$, we find its k-nearest neighbours $x_1, \dots, x_k$ in the training set. We then estimate the data tangent space $T_x S$ by constructing the d-dimensional hyperplane with minimal Euclidean distance to the points $x, x_1, \dots, x_k$. Let this hyperplane be spanned by an orthonormal basis $q_1, \dots, q_d \in \mathbb{R}^D$. The projection
samples/texts/2298363/page_8.md ADDED
@@ -0,0 +1,35 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ matrix $P$ on this hyperplane is then given by
2
+
3
+ $$P = \sum_{i=1}^{d} q_i q_i^T .$$
4
+
5
+ **Autoencoder method:** the hyperplane method requires that the data manifold is sufficiently densely sampled, i.e. the nearest neighbors are small deformations of the data point itself. In order to estimate the tangent space for datasets without this property, we use techniques from the well-established field of manifold learning. Following (Shao et al., 2018), we train an autoencoder on the dataset and then perform a singular value decomposition (SVD) of the Jacobian of the decoder $D$,
6
+
7
+ $$\frac{\partial D}{\partial z} = U \Sigma V^T. \quad (18)$$
8
+
9
+ The projector is constructed from the left-singular vectors $u_1, \dots, u_d \in \mathbb{R}^D$ corresponding to the $d$ largest singular values:
10
+
11
+ $$P = \sum_{i=1}^{d} u_i u_i^T . \quad (19)$$
12
+
13
+ The underlying motivation for this procedure is reviewed in Appendix C.2.
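A sketch of Eqs. (18)–(19) in NumPy, under stated assumptions: the decoder is a plain callable, its Jacobian is taken by finite differences rather than autodiff, and a linear decoder with orthonormal columns stands in for a trained autoencoder in the sanity check.

```python
import numpy as np

def jacobian(decoder, z, eps=1e-6):
    """Finite-difference Jacobian dD/dz of a decoder D: R^d -> R^D."""
    cols = []
    for i in range(z.size):
        e = np.zeros(z.size)
        e[i] = eps
        cols.append((decoder(z + e) - decoder(z - e)) / (2 * eps))
    return np.stack(cols, axis=1)            # shape (D, d)

def tsp_projector(decoder, z, d):
    """Eqs. (18)-(19): projector from the top-d left-singular vectors."""
    U, _, _ = np.linalg.svd(jacobian(decoder, z))
    Ud = U[:, :d]
    return Ud @ Ud.T

# Sanity check with a linear "decoder" D(z) = A z: the tangent space is col(A),
# so the estimated projector should equal A A^T.
rng = np.random.default_rng(3)
A, _ = np.linalg.qr(rng.normal(size=(6, 2)))   # orthonormal columns, D=6, d=2
P = tsp_projector(lambda z: A @ z, np.zeros(2), d=2)
assert np.allclose(P, A @ A.T, atol=1e-6)
```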
14
+
15
+ After one of these methods is used to estimate the projector $P$ for a given $x \in S$, the corresponding tsp-explanation can be easily computed by $\hat{h}(x) = Ph(x)$.
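The hyperplane method and the final projection step can be sketched end-to-end as follows. This is an illustrative NumPy version; the toy data on a 2-dimensional plane in $\mathbb{R}^5$ is an assumption chosen so the true projector is known.

```python
import numpy as np

def hyperplane_projector(x, train, k, d):
    """Estimate the projector P onto T_xS from the k nearest training points.

    The best-fitting d-dimensional hyperplane through x, x_1, ..., x_k is
    spanned by the top-d right-singular vectors of the centered point cloud.
    """
    dists = np.linalg.norm(train - x, axis=1)
    nbrs = train[np.argsort(dists)[:k]]          # k nearest neighbours of x
    pts = np.vstack([x[None, :], nbrs])
    centered = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    Q = Vt[:d].T                                 # orthonormal basis q_1, ..., q_d
    return Q @ Q.T

def tsp_explanation(h, P):
    """Project an explanation onto the estimated tangent space: h_hat = P h."""
    return P @ h

# Toy data on a 2-d plane embedded in R^5.
rng = np.random.default_rng(4)
B, _ = np.linalg.qr(rng.normal(size=(5, 2)))     # orthonormal basis of the plane
train = rng.normal(size=(200, 2)) @ B.T
x = train[0]
P = hyperplane_projector(x, train[1:], k=10, d=2)
assert np.allclose(P, B @ B.T, atol=1e-8)        # recovers the true projector
```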
16
+
17
+ ## 3.3. TSP Explanations: Practice
18
+
19
+ In this section, we will apply tsp-explanations to the examples of Section 2.4 and show that they are significantly more robust under model manipulations.
20
+
21
+ **Credit Assessment:** From the arguments of the previous section, it follows that the explanations of the manipulated and original model agree. We indeed confirm this experimentally, see Figure 4. We refer to the Appendix B for more details.
22
+
23
+ **Image Classification:** For MNIST and FashionMNIST, we use the hyperplane method to estimate the tangent space. For CIFAR10, we find that the manifold is not densely sampled enough and we therefore use the autoencoder method. This is computationally expensive and takes about 48h using four Tesla P100 GPUs. We refer to Appendix D for more details.
24
+
25
+ Figure 5 shows the tsp-explanations for the examples of Figure 2. The explanation maps of the original and manipulated model show a high degree of visual similarity. This suggests the manipulation occurred mainly in directions orthogonal to the data manifold (as the tsp-explanations are obtained from the original explanations by projecting out the corresponding components).
26
+
27
+ This is also confirmed quantitatively, see Appendix D. Furthermore, tsp-explanations tend to be considerably less noisy than their unprojected counterparts (see Figure 5 vs 2). This is expected from our theoretical analysis: consider gradient explanations for concreteness. Their components orthogonal to the data manifold are undetermined by training and are therefore essentially chosen at random. This fitting noise is projected out in the tsp-explanation, which results in a less noisy explanation.
28
+
29
+ If the adversaries knew that tsp-explanations are used, they could also try to train a model $\tilde{g}$ which manipulates the tsp-explanations directly. However, tsp-explanations are considerably more robust to such manipulations, as shown on the right-hand-side of Figure 3.
30
+
31
+ We refer to Appendix D for more detailed discussion.
32
+
33
+ ## 4. Conclusion
34
+
35
+ A central message of this work is that widely-used explanation methods should not be used as proof for a fair and sensible algorithmic decision-making process. This is because they can be easily manipulated as we have demonstrated both theoretically and experimentally. We propose modifications to existing explanation methods which make
samples/texts/2298363/page_9.md ADDED
@@ -0,0 +1,35 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ them more robust with respect to such manipulations. This is achieved by projecting explanations on the tangent space of the data manifold. This is exciting because it connects explainability to the field of manifold learning. For applying these methods, it is however necessary to estimate the tangent space of the data manifold. For high-dimensional datasets, such as ImageNet, this is an expensive and challenging task. Future work will try to overcome this hurdle. Another promising direction for further research is to apply the methods developed in this work to other application domains such as natural language processing.
2
+
3
+ ## Acknowledgements
4
+
5
+ We thank the reviewers for their valuable feedback. P.K. is greatly indebted to his mother-in-law as she took care of his sick son and wife during the final week before submission. We acknowledge Shinichi Nakajima for stimulating discussion. K-R.M. was supported in part by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18025A and 01IS18037A. This work is also supported by the Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-001779), as well as by the Research Training Group "Differential Equation- and Data-driven Models in Life Sciences and Fluid Dynamics (DAEDALUS)" (GRK 2433) and Grant Math+, EXC 2046/1, Project ID 390685689 both funded by the German Research Foundation (DFG).
6
+
7
+ ## References
8
+
9
+ Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I. J., Hardt, M., and Kim, B. Sanity checks for saliency maps. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018*, Montréal, Canada., pp. 9525–9536, 2018.
10
+
11
+ Aïvodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., and Tapp, A. Fairwashing: the risk of rationalization. In Chaudhuri, K. and Salakhutdinov, R. (eds.), *Proceedings of the 36th International Conference on Machine Learning*, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 161–170. PMLR, 2019. URL http://proceedings.mlr.press/v97/aivodji19a.html.
12
+
13
+ Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K.-R., Dähne, S., and Kindermans, P. iNNvestigate neural networks! *Journal of Machine Learning Research* 20, 2019.
14
+
15
+ Ancona, M., Ceolini, E., Oztireli, C., and Gross, M. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In *6th International Conference on Learning Representations (ICLR 2018)*, 2018.
16
+
17
+
18
+
19
+ Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. *PLOS ONE*, 10(7):1–46, 07 2015. doi: 10.1371/journal.pone.0130140. URL https://doi.org/10.1371/journal.pone.0130140.
20
+
21
+ Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., and Müller, K.-R. How to explain individual classification decisions. *Journal of Machine Learning Research*, 11(Jun):1803–1831, 2010.
22
+
23
+ Dombrowski, A.-K., Alber, M., Anders, C., Ackermann, M., Müller, K.-R., and Kessel, P. Explanations can be manipulated and geometry is to blame. In *Advances in Neural Information Processing Systems*, pp. 13567–13578, 2019.
24
+
25
+ Ghorbani, A., Abid, A., and Zou, J. Y. Interpretation of neural networks is fragile. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019*, Honolulu, Hawaii, USA, January 27 - February 1, 2019., pp. 3681–3688, 2019.
26
+
27
+ Goodfellow, I., Bengio, Y., and Courville, A. *Deep Learning*. MIT Press, 2016. http://www.deeplearningbook.org.
28
+
29
+ Heo, J., Joo, S., and Moon, T. Fooling neural network interpretations via adversarial model manipulation. In *Advances in Neural Information Processing Systems*, pp. 2921–2932, 2019.
30
+
31
+ Kindermans, P., Hooker, S., Adebayo, J., Alber, M., Schütt, K. T., Dähne, S., Erhan, D., and Kim, B. The (un)reliability of saliency methods. In *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*, pp. 267–280. Springer, 2019.
32
+
33
+ Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Reynolds, J., Melnikov, A., Lunova, N., and Reblitz-Richardson, O. Pytorch captum. https://github.com/pytorch/captum, 2019.
34
+
35
+ Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., and Müller, K.-R. Unmasking clever hans predictors and assessing what machines really learn. *Nature communications*, 10:1096, 2019.
samples/texts/348597/page_1.md ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ ELEMENTARY
2
+ MATHEMATICAL and
3
+ COMPUTATIONAL TOOLS
4
+ for ELECTRICAL and
5
+ COMPUTER ENGINEERS
6
+ USING MATLAB®
7
+
8
+ Jamal T. Manassah
samples/texts/348597/page_10.md ADDED
@@ -0,0 +1,75 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 2.8 Fractals and Computer Art
2
+
3
+ 2.8.1 Mira's Model
4
+
5
+ 2.8.2 Hénon's Model
6
+
7
+ 2.9 Generation of Special Functions from Their Recursion Relations*
8
+
9
+ 3. Elementary Functions and Some of Their Uses
10
+
11
+ 3.1 Function Files
12
+
13
+ 3.2 Examples with Affine Functions
14
+
15
+ 3.3 Examples with Quadratic Functions
16
+
17
+ 3.4 Examples with Polynomial Functions
18
+
19
+ 3.5 Examples with Trigonometric Functions
20
+
21
+ 3.6 Examples with the Logarithmic Function
22
+
23
+ 3.6.1 Ideal Coaxial Capacitor
24
+
25
+ 3.6.2 The Decibel Scale
26
+
27
+ 3.6.3 Entropy
28
+
29
+ 3.7 Examples with the Exponential Function
30
+
31
+ 3.8 Examples with the Hyperbolic Functions and Their Inverses
32
+
33
+ 3.8.1 Capacitance of Two Parallel Wires
34
+
35
+ 3.9 Commonly Used Signal Processing Functions
36
+
37
+ 3.10 Animation of a Moving Rectangular Pulse
38
+
39
+ 3.11 MATLAB Commands Review
40
+
41
+ 4. Numerical Differentiation, Integration, and Solutions of Ordinary Differential Equations
42
+
43
+ 4.1 Limits of Indeterminate Forms
44
+
45
+ 4.2 Derivative of a Function
46
+
47
+ 4.3 Infinite Sums
48
+
49
+ 4.4 Numerical Integration
50
+
51
+ 4.5 A Better Numerical Differentiator
52
+
53
+ 4.6 A Better Numerical Integrator: Simpson's Rule
54
+
55
+ 4.7 Numerical Solutions of Ordinary Differential Equations
56
+
57
+ 4.7.1 First-Order Iterator
58
+
59
+ 4.7.2 Higher-Order Iterators: The Runge-Kutta Method*
60
+
61
+ 4.7.3 MATLAB ODE Solvers
62
+
63
+ 4.8 MATLAB Commands Review
64
+
65
+ 5. Root Solving and Optimization Methods
66
+
67
+ 5.1 Finding the Real Roots of a Function
68
+
69
+ 5.1.1 Graphical Method
70
+
71
+ 5.1.2 Numerical Methods
72
+
73
+ 5.1.3 MATLAB fsolve and fzero Built-in Functions
74
+
75
+ 5.2 Roots of a Polynomial
samples/texts/348597/page_100.md ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ still much smaller than the value of the variation of x over which the function
2
+ changes appreciably.
3
+
4
+ For a systematic method to choose an upper limit on dx, you might want
5
+ to follow these simple steps:
6
+
7
+ 1. Plot the function on the given interval and identify the point where the derivative is largest.
8
+
9
+ 2. Compute the derivative at that point using the sequence method of Example 4.2, and determine the $\overline{dx}$ that would satisfy the desired tolerance; then go ahead and use this value of $\overline{dx}$ in the above routine to evaluate the derivative throughout the given interval.
10
+
11
+ *In-Class Exercises*
12
+
13
+ Plot the derivatives of the following functions on the indicated intervals:
14
+
15
+ Pb. 4.11 $\ln\left|\frac{x-1}{x+1}\right|$ on $2 < x < 3$
16
+
17
+ Pb. 4.12 $\ln\left|\frac{1+\sqrt{1+x^2}}{x}\right|$ on $1 < x < 2$
18
+
19
+ Pb. 4.13 $\ln|\tanh(x/2)|$ on $1 < x < 5$
20
+
21
+ Pb. 4.14 $\tan^{-1}|\sinh(x)|$ on $0 < x < 10$
22
+
23
+ Pb. 4.15 $\ln|\csc(x) + \tan(x)|$ on $0 < x < \pi/2$
24
+
25
+ **4.3 Infinite Sums**
26
+
27
+ An infinite series is denoted by the symbol $\sum_{n=1}^{\infty} a_n$. It is important not to
28
+ confuse the series with the sequence $\{a_n\}$. The sequence is a list of terms, while the
29
+ series is a sum of these terms. A sequence is convergent if the term $a_n$
30
+ approaches a finite limit; however, convergence of a series requires that the
31
+ sequence of partial sums $S_N = \sum_{n=1}^{N} a_n$ approaches a finite limit. There are
samples/texts/348597/page_101.md ADDED
@@ -0,0 +1,37 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ cases where the sequence may approach a limit, while the series is divergent.
2
+
3
+ The classical example is that of the sequence $\left\{\frac{1}{n}\right\}$; this sequence approaches the limit zero, while the corresponding series is divergent.
4
+
5
+ In any numerical calculation, we cannot perform the operation of adding an infinite number of terms. We can only add a finite number of terms. The infinite sum of a convergent series is the limit of the partial sums $S_N$.
6
+
7
+ You will study in your calculus course the different tests for checking the convergence of a series. We summarize below the most useful of these tests.
8
+
9
+ * The Ratio Test, which is very useful for series with terms that contain factorials and/or $n^{th}$ power of a constant, states that:
10
+
11
+ $$ \text{for } a_n > 0, \text{ the series } \sum_{n=1}^{\infty} a_n \text{ is convergent if } \lim_{n \to \infty} \left( \frac{a_{n+1}}{a_n} \right) < 1 $$
12
+
13
+ * The Root Test stipulates that for $a_n > 0$, the series $\sum_{n=1}^{\infty} a_n$ is convergent if
14
+
15
+ $$ \lim_{n \to \infty} (a_n)^{1/n} < 1 $$
16
+
17
+ * For an alternating series, the series is convergent if it satisfies the conditions that
18
+
19
+ $$ \lim_{n \to \infty} |a_n| = 0 \quad \text{and} \quad |a_{n+1}| < |a_n| $$
20
+
21
+ Now look at the numerical routines for evaluating the limit of the partial sums when they exist.
22
+
23
+ **Example 4.4**
24
+
25
+ Compute the sum of the geometrical series $S_N = \sum_{n=1}^{N} \left(\frac{1}{2}\right)^n$.
26
+
27
+ **Solution:** Edit and execute the following script M-file:
28
+
29
+ ```
30
+ for N=1:20
31
+ n=N:-1:1;        % indices in decreasing order: smallest terms are added first
32
+ fn=(1/2).^n;     % terms of the geometric series
33
+ Sn(N)=sum(fn);   % N-th partial sum
34
+ end
35
+ NN=1:20;
36
+ plot(NN,Sn)      % the partial sums converge to 1
37
+ ```
samples/texts/348597/page_102.md ADDED
@@ -0,0 +1,26 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ You will observe that this partial sum converges to 1.
2
+
3
+ NOTE The above summation was performed backwards because this scheme will ensure a more accurate result and will keep all the significant digits of the smallest term of the sum.

## In-Class Exercises

Compute the following infinite sums:

Pb. 4.16 $\sum_{k=1}^{\infty} \frac{1}{(2k-1)2^{2k-1}}$

Pb. 4.17 $\sum_{k=1}^{\infty} \frac{\sin(2k-1)}{(2k-1)}$

Pb. 4.18 $\sum_{k=1}^{\infty} \frac{\cos(k)}{k^4}$

Pb. 4.19 $\sum_{k=1}^{\infty} \frac{\sin(k/2)}{k^3}$

Pb. 4.20 $\sum_{k=1}^{\infty} \frac{1}{2^k} \sin(k)$
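As a sketch of how such sums can be attacked (shown here in Python rather than MATLAB), Pb. 4.16 can be summed backwards over enough terms that the tail is negligible; its closed-form value $\frac{1}{2}\ln 3$ is used below only as a check.

```python
import math

# Pb. 4.16: partial sum of 1/((2k-1) * 2^(2k-1)), summed backwards.
K = 30  # terms beyond k = 30 are below 1e-19 and cannot change the total
terms = [1.0 / ((2 * k - 1) * 2 ** (2 * k - 1)) for k in range(K, 0, -1)]
S = sum(terms)

print(S)  # approximately 0.5493, i.e. (1/2) ln 3
```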

## 4.4 Numerical Integration

The algorithm for integration discussed in this section is the second simplest available (the trapezoid rule, the simplest beyond the trivial, is given as a problem at the end of this section). It has been generalized into more accurate and efficient approximations, including Simpson's rule, the Newton-Cotes rules, the Gauss-Laguerre rule, etc. Simpson's rule is derived in Section 4.6, while the other advanced techniques are left to more advanced numerical methods courses.

Here, we perform numerical integration by means of a Riemann sum: we subdivide the interval of integration into many subintervals. Then we take the area of each strip to be the value of the function at the midpoint of the subinterval multiplied by the length of the subinterval, and we add the
samples/texts/348597/page_103.md ADDED
strip areas to obtain the value of the integral. This technique is referred to as the midpoint rule.

We can justify the above algorithm by recalling the Mean Value Theorem for integrals, which states that:

$$
\int_{a}^{b} f(x)\, dx = (b - a) f(c) \tag{4.4}
$$

where $c \in [a, b]$. Thus, if we divide the interval of integration into narrow subintervals, then the total integral can be written as the sum of the integrals over the subintervals, and we approximate the location of $c$ in a particular subinterval by the midpoint between its boundaries.
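In formula form, with $N$ subintervals of equal width $h = (b-a)/N$, the midpoint rule reads

$$ \int_{a}^{b} f(x)\, dx \approx h \sum_{i=0}^{N-1} f\!\left(a + \left(i + \tfrac{1}{2}\right) h\right) $$

which is exactly the scheme implemented in the example that follows.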

**Example 4.5**

Use the above algorithm to compute the value of the definite integral of the function $\sin(x)$ from $0$ to $\pi$.

**Solution:** Edit and execute the following script M-file:

```
dx=pi/200;           % width of each subinterval
x=0:dx:pi-dx;        % left edge of each subinterval
xshift=x+dx/2;       % midpoint of each subinterval
yshift=sin(xshift);  % integrand evaluated at the midpoints
Int=dx*sum(yshift)   % midpoint-rule estimate of the integral
```

You get for the above integral a result that is within 1/1000 of the analytical result (which is 2).
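For reference, the same midpoint-rule computation can be sketched in pure Python (an illustrative translation, not part of the text):

```python
import math

# Midpoint-rule estimate of the integral of sin(x) from 0 to pi,
# using 200 subintervals as in the M-file above.
N = 200
a, b = 0.0, math.pi
h = (b - a) / N

# Evaluate the integrand at the midpoint of each subinterval and sum.
Int = h * sum(math.sin(a + (i + 0.5) * h) for i in range(N))

print(Int)  # close to the exact value 2
```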

## In-Class Exercises

Find numerically, to a 1/10,000 accuracy, the values of the following definite integrals:

Pb. 4.21 $\int_0^\infty \frac{1}{x^2 + 1} dx$

Pb. 4.22 $\int_{0}^{\infty} \exp(-x^2) \cos(2x) dx$

Pb. 4.23 $\int_{0}^{\pi/2} \sin^6(x) \cos^7(x) dx$
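For the improper integrals, one workable approach (sketched here in Python) is to truncate the infinite upper limit once the integrand has become negligible. For Pb. 4.22, the closed-form value $\frac{\sqrt{\pi}}{2} e^{-1}$ is used below only as a check.

```python
import math

# Pb. 4.22: midpoint rule on [0, 6]; exp(-x^2) < 1e-15 beyond x = 6,
# so truncating the infinite upper limit there is safe at 1/10,000 accuracy.
N = 6000
a, b = 0.0, 6.0
h = (b - a) / N

midpoints = (a + (i + 0.5) * h for i in range(N))
I = h * sum(math.exp(-x * x) * math.cos(2 * x) for x in midpoints)

print(I)  # approximately 0.3260
```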