| Q | A | Result |
|---|---|---|
Theorem 21 For every \( k \geq 1 \) there is an integer \( m \) such that every \( k \) -colouring of \( \left\lbrack m\right\rbrack \) contains integers \( x, y, z \) of the same colour such that\n\n\[ x + y = z. \]
|
Proof. We claim that \( m = {R}_{k}\left( 3\right) - 1 \) will do, where \( {R}_{k}\left( 3\right) = {R}_{k}\left( {3,\ldots ,3}\right) \) is the graphical Ramsey number for \( k \) colours and triangles, i.e., the minimal integer \( n \) such that every \( k \) -colouring of the edges of \( {K}_{n} \) contains a monochromatic triangle.\n\nLet then \( n = {R}_{k}\left( 3\right) \) and let \( c : \left\lbrack m\right\rbrack = \left\lbrack {n - 1}\right\rbrack \rightarrow \left\lbrack k\right\rbrack \) be a \( k \) -colouring. Induce a \( k \) -colouring of \( {\left\lbrack n\right\rbrack }^{\left( 2\right) } \), the edge set of the complete graph with vertex set \( \left\lbrack n\right\rbrack \), as follows: for \( {ij} \in E\left( {K}_{n}\right) = {\left\lbrack n\right\rbrack }^{\left( 2\right) } \) set \( {c}^{\prime }\left( {ij}\right) = c\left( \left| {i - j}\right| \right) \). By the definition of \( n = {R}_{k}\left( 3\right) \), there is a monochromatic triangle, say with vertex set \( \{ h, i, j\} \), so that \( 1 \leq h < i < j \leq n \) and \( {c}^{\prime }\left( {hi}\right) = {c}^{\prime }\left( {ij}\right) = {c}^{\prime }\left( {hj}\right) = \ell \) for some \( \ell \). But then\n\n\( x = i - h, y = j - i \) and \( z = j - h \) are such that \( c\left( x\right) = c\left( y\right) = c\left( z\right) = \ell \) and\n\n\( x + y = z \).
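As a finite sanity check (my illustration, not part of the source; note that \( x = y \) is allowed), one can verify by brute force that every 2-colouring of \( \left\lbrack 5\right\rbrack \) contains a monochromatic solution of \( x + y = z \), while \( \left\lbrack 4\right\rbrack \) admits a colouring with none, consistent with \( {R}_{2}\left( 3\right) = 6 \) and \( m = 5 \):

```python
from itertools import product

def has_schur_triple(colouring):
    # colouring maps each i in [m] to a colour; look for same-coloured
    # x, y, z (repeats allowed) with x + y = z
    m = len(colouring)
    for x in range(1, m + 1):
        for y in range(x, m + 1):
            z = x + y
            if z <= m and colouring[x] == colouring[y] == colouring[z]:
                return True
    return False

def every_colouring_has_triple(m, k=2):
    # exhaust all k^m colourings of [m]
    return all(
        has_schur_triple({i + 1: c[i] for i in range(m)})
        for c in product(range(k), repeat=m)
    )
```

For example, the colouring \( \{ 1,4\} \cup \{ 2,3\} \) of \( \left\lbrack 4\right\rbrack \) has no monochromatic Schur triple.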
|
Yes
|
Theorem 23 For every \( p \) and \( k \), there is an integer \( n \) such that if \( A \) is an alphabet with \( p \) letters then every \( k \)-colouring \( c : {A}^{n} \rightarrow \left\lbrack k\right\rbrack \) contains a monochromatic line.
|
The Hales-Jewett function \( {HJ}\left( {p, k}\right) \) is defined much like the corresponding van der Waerden function: \( {HJ}\left( {p, k}\right) \) is the minimal value of \( n \) that will do in Theorem 23.
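For small \( p \) and \( n \) the notion of a combinatorial line can be made concrete by listing them all; the sketch below (my illustration, not the source's notation) represents a line by a root word over \( A \cup \{ * \} \) with at least one wildcard, and confirms the standard count \( {\left( p + 1\right) }^{n} - {p}^{n} \):

```python
from itertools import product

def lines(p, n):
    # A root word over {0,...,p-1} plus a wildcard '*' (at least one '*')
    # determines a combinatorial line: substituting each letter for every
    # '*' simultaneously yields the p points of the line.
    A = list(range(p))
    for root in product(A + ['*'], repeat=n):
        if '*' not in root:
            continue
        yield tuple(tuple(a if s == '*' else s for s in root) for a in A)

n_lines = sum(1 for _ in lines(2, 3))  # (p+1)^n - p^n = 27 - 8 = 19
```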
|
No
|
Theorem 27 A matrix with integer entries is partition regular if and only if it satisfies the columns condition.
|
This beautiful theorem reduces partition regularity to a property that can be checked in finite time. It is worth remarking that neither of the two implications is easy. Also, as in most Ramsey-type results, by the standard compactness argument we have encountered several times, the infinite version implies the finite version. Thus if \( A \) is partition regular then, for each \( k \), there is a natural number \( R = R\left( {A, k}\right) \) such that \( {Ax} = 0 \) has a monochromatic solution in every \( k \) -colouring of \( \left\lbrack R\right\rbrack \) .
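The "finite time" remark can be made concrete. The following sketch (my illustration, not an algorithm from the source; it brute-forces ordered partitions of the columns with exact rational arithmetic, so it is feasible only for small matrices) checks the columns condition: some block \( {B}_{1} \) of columns sums to zero, and each later block sums to a rational linear combination of the columns preceding it.

```python
from fractions import Fraction

def rank(rows):
    # rank over the rationals by Gaussian elimination
    rows = [[Fraction(x) for x in row] for row in rows]
    r, ncols = 0, (len(rows[0]) if rows else 0)
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def in_span(cols, v):
    # is v a rational linear combination of the vectors in cols?
    rows = [[c[i] for c in cols] for i in range(len(v))]
    return rank(rows) == rank([row + [v[i]] for i, row in enumerate(rows)])

def ordered_partitions(items):
    # all partitions of items into an ordered sequence of non-empty blocks
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in ordered_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        for i in range(len(part) + 1):
            yield part[:i] + [[first]] + part[i:]

def columns_condition(A):
    m = len(A[0])
    col = [tuple(row[j] for row in A) for j in range(m)]
    def block_sum(B):
        return [sum(col[j][i] for j in B) for i in range(len(A))]
    for part in ordered_partitions(list(range(m))):
        if any(block_sum(part[0])):
            continue                      # B_1 must sum to the zero column
        prev, ok = list(part[0]), True
        for B in part[1:]:
            if not in_span([col[j] for j in prev], block_sum(B)):
                ok = False
                break
            prev += B
        if ok:
            return True
    return False
```

For instance, the Schur matrix \( \left( {1\;1\; - 1}\right) \) satisfies the condition, while \( \left( {1\;1\;1}\right) \) does not.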
|
No
|
Lemma 28 If \( \mathbb{N} \) rejects \( \varnothing \) then there exists an \( M \in {\mathbb{N}}^{\left( \omega \right) } \) that rejects every \( X \subset M \) .
|
Proof. Note first that there is an \( {M}_{0} \) such that every \( X \subset {M}_{0} \) is either accepted or rejected by \( {M}_{0} \) . Indeed, put \( {N}_{0} = \mathbb{N},{a}_{0} = 1 \) . Suppose that we have defined \( {N}_{0} \supset {N}_{1} \supset \cdots \supset {N}_{k} \) and \( {a}_{i} \in {N}_{i} - {N}_{i + 1},0 \leq i \leq k - 1 \) . Pick \( {a}_{k} \in {N}_{k} \) . If \( {N}_{k} - \left\{ {a}_{k}\right\} \) rejects \( \left\{ {{a}_{0},\ldots ,{a}_{k}}\right\} \) then put \( {N}_{k + 1} = {N}_{k} - \left\{ {a}_{k}\right\} \) ; otherwise, let \( {N}_{k + 1} \) be an infinite subset of \( {N}_{k} - \left\{ {a}_{k}\right\} \) that accepts \( \left\{ {{a}_{0},\ldots ,{a}_{k}}\right\} \) . Then \( {M}_{0} = \left\{ {{a}_{0},{a}_{1},\ldots }\right\} \) will do.\n\nBy assumption \( {M}_{0} \) rejects \( \varnothing \) . Suppose now that we have chosen \( {b}_{0},{b}_{1},\ldots ,{b}_{k - 1} \) such that \( {M}_{0} \) rejects every \( X \subset \left\{ {{b}_{0},{b}_{1},\ldots ,{b}_{k - 1}}\right\} \) . For such an \( X \), the set \( {M}_{0} \) cannot accept \( X \cup \left\{ {c}_{j}\right\} \) for infinitely many elements \( {c}_{1} < {c}_{2} < \cdots \) of \( {M}_{0} \), since otherwise \( \left\{ {{c}_{1},{c}_{2},\ldots }\right\} \) accepts \( X \) . Hence \( {M}_{0} \) rejects all but finitely many sets of the form \( X \cup \{ c\} \) with \( c \in {M}_{0} \) . As there are only \( {2}^{k} \) choices for \( X \), there exists a \( {b}_{k} \) such that \( {M}_{0} \) rejects every \( X \subset \left\{ {{b}_{0},{b}_{1},\ldots ,{b}_{k}}\right\} \) . By construction the set \( M = \left\{ {{b}_{0},{b}_{1},\ldots }\right\} \) has the required property.
|
Yes
|
Theorem 29 Every open subset of \( {2}^{\mathbb{N}} \) is Ramsey.
|
Proof. Let \( \mathcal{F} \subset {2}^{\mathbb{N}} \) be open and assume that \( {A}^{\left( \omega \right) } ⊄ \mathcal{F} \) for every \( A \in {\mathbb{N}}^{\left( \omega \right) } \), i.e., \( \mathbb{N} \) rejects \( \varnothing \) . Let \( M \) be the set whose existence is guaranteed by Lemma 28. If \( {M}^{\left( \omega \right) } ⊄ {2}^{\mathbb{N}} - \mathcal{F} \), let \( A \in {M}^{\left( \omega \right) } \cap \mathcal{F} \) . Since \( \mathcal{F} \) is open, it contains a neighbourhood of \( A \), so there is an integer \( a \in A \) such that if \( B \cap \{ 1,2,\ldots, a\} = A \cap \{ 1,2,\ldots, a\} \) then \( B \in \mathcal{F} \) . But this implies that \( M \) accepts \( A \cap \{ 1,2,\ldots, a\} \), contrary to the choice of \( M \) . Hence \( {M}^{\left( \omega \right) } \subset {2}^{\mathbb{N}} - \mathcal{F} \), proving that \( \mathcal{F} \) is Ramsey.
|
Yes
|
Corollary 30 Let \( \mathcal{G} \subset {\mathbb{N}}^{\left( < \omega \right) } \) be dense. Then there is an \( M \in {\mathbb{N}}^{\left( \omega \right) } \) such that every \( A \subset M \) has an initial segment belonging to \( \mathcal{G} \) .
|
Proof. Let \( \mathcal{F} = \{ F \subset \mathbb{N} : F \) has an initial segment belonging to \( \mathcal{G}\} \) . Then \( \mathcal{F} \) is open, so there is an \( M \in {\mathbb{N}}^{\left( \omega \right) } \) such that either \( {M}^{\left( \omega \right) } \subset \mathcal{F} \), in which case we are done, or else \( {M}^{\left( \omega \right) } \subset {2}^{\mathbb{N}} - \mathcal{F} \) . The second alternative cannot hold since it would imply \( {M}^{\left( < \omega \right) } \cap \mathcal{G} = \varnothing \) .
|
Yes
|
Corollary 31 Let \( \mathcal{G} \subset {\mathbb{N}}^{\left( < \omega \right) } \) be a thin family, and let \( k \in \mathbb{N} \) . Then for any \( k \) - colouring of \( \mathcal{G} \) there is an infinite set \( A \subset \mathbb{N} \) such that all members of \( \mathcal{G} \) contained in \( A \) have the same colour.
|
Proof. It clearly suffices to prove the result for \( k = 2 \) . Consider a red and blue colouring of \( \mathcal{G} : \mathcal{G} = {\mathcal{F}}_{\text{red }} \cup {\mathcal{F}}_{\text{blue }} \) . If \( {\mathcal{F}}_{\text{red }} \) is dense then let \( M \) be the set guaranteed by Corollary 30 . For every \( F \in \mathcal{G} \cap {2}^{M} \) there is an infinite set \( N \subset M \) with initial segment \( F \) . Since \( \mathcal{G} \) is thin, \( F \) is the unique initial segment of \( N \) that belongs to \( \mathcal{G} \) . Hence \( F \in {\mathcal{F}}_{\text{red }} \), so every member of \( \mathcal{G} \) contained in \( M \) is red.\n\nOn the other hand, if \( {\mathcal{F}}_{\text{red }} \) is not dense, then \( {2}^{M} \cap {\mathcal{F}}_{\text{red }} = \varnothing \) for some infinite set \( M \) . Hence \( {2}^{M} \cap \mathcal{G} \subset {\mathcal{F}}_{\text{blue }} \), so every member of \( \mathcal{G} \) contained in \( M \) is blue.
|
Yes
|
Theorem 2 (i) If \( 3 \leq s \leq n \) are such that\n\n\[ \left( \begin{array}{l} n \\ s \end{array}\right) < {2}^{\left( \begin{array}{l} s \\ 2 \end{array}\right) - 1} \]\n\nthen \( R\left( {s, s}\right) \geq n + 1 \) . Also,\n\n\[ R\left( {s, s}\right) > \frac{1}{e\sqrt{2}}s{2}^{s/2}. \]\n\n(7)
|
Proof. (i) Consider \( \mathcal{G}\left( {n,1/2}\right) \) . With the notation above,\n\n\[ {\mathbb{E}}_{1/2}\left( {{X}_{s} + {X}_{s}^{\prime }}\right) = 2\left( \begin{array}{l} n \\ s \end{array}\right) {2}^{-\left( \begin{array}{l} s \\ 2 \end{array}\right) } < 1 \]\n\nso there is a graph \( G \in \mathcal{G}\left( {n,1/2}\right) \) with \( \left( {{X}_{s} + {X}_{s}^{\prime }}\right) \left( G\right) = {X}_{s}\left( G\right) + {X}_{s}^{\prime }\left( G\right) = 0 \) . This means precisely that neither \( G \) nor its complement contains a complete graph of order \( s \) . Hence \( R\left( {s, s}\right) \geq n + 1 \), proving the first assertion.\n\nInequality (7) is an immediate consequence of this and inequality (1). Indeed, with \( n = \left\lfloor \frac{s{2}^{s/2}}{e\sqrt{2}}\right\rfloor \), by (1) we have\n\n\[ \left( \begin{array}{l} n \\ s \end{array}\right) {2}^{-\left( \begin{array}{l} s \\ 2 \end{array}\right) + 1} < \frac{{n}^{s}}{s!}{2}^{-\left( \begin{array}{l} s \\ 2 \end{array}\right) + 1} < \frac{{\left( e\sqrt{2}\right) }^{-s}{s}^{s}{2}^{{s}^{2}/2}}{\sqrt{2\pi s}{\left( s/e\right) }^{s}}{2}^{-\left( \begin{array}{l} s \\ 2 \end{array}\right) + 1} = \frac{2}{\sqrt{2\pi s}} < 1, \]\n\nso \( R\left( {s, s}\right) \geq n + 1 \) .
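Part (i) can be exercised numerically (my illustration; `math.comb` does the exact arithmetic). With \( s = 10 \) and \( n = \left\lfloor \frac{s{2}^{s/2}}{e\sqrt{2}}\right\rfloor = {83} \) the hypothesis of (i) holds, so \( R\left( {10,10}\right) \geq {84} \):

```python
import math

s = 10
# n = floor(s * 2^(s/2) / (e * sqrt(2))), the choice made in the proof
n = math.floor(s * 2 ** (s / 2) / (math.e * math.sqrt(2)))
# Theorem 2(i): binom(n, s) < 2^(binom(s,2) - 1) implies R(s, s) >= n + 1
lhs = math.comb(n, s)
rhs = 2 ** (math.comb(s, 2) - 1)
```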
|
Yes
|
Theorem 3 Let \( 2 \leq s \leq {n}_{1},2 \leq t \leq {n}_{2},\alpha = \left( {s - 1}\right) /\left( {{st} - 1}\right) \) and \( \beta = \) \( \left( {t - 1}\right) /\left( {{st} - 1}\right) \) . Then there is a bipartite graph \( {G}_{2}\left( {{n}_{1},{n}_{2}}\right) \) of size\n\n\[ \n\left\lfloor {\left( {1 - \frac{1}{s!t!}}\right) {n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }}\right\rfloor \n\]\n\nthat does not contain a \( K\left( {s, t}\right) \) (with \( s \) vertices in the first class and \( t \) vertices in the second class).
|
Proof. Let\n\n\[ \nn = {n}_{1} + {n}_{2} \n\]\n\n\[ \n{V}_{1} = \left\{ {1,2,\ldots ,{n}_{1}}\right\} \n\]\n\n\[ \n{V}_{2} = \left\{ {{n}_{1} + 1,{n}_{1} + 2,\ldots ,{n}_{1} + {n}_{2}}\right\} \n\]\n\n\[ \nE = \left\{ {{ij} : i \in {V}_{1}, j \in {V}_{2}}\right\} \n\]\n\n\[ \nM = \left\lfloor {{n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }}\right\rfloor \n\]\n\nWe shall consider the probability space \( \mathcal{G}\left( {{K}_{{n}_{1},{n}_{2}}, M}\right) \) consisting of the \( \left( \begin{matrix} \left| E\right| \\ M \end{matrix}\right) \) graphs with vertex set \( V = {V}_{1} \cup {V}_{2} \) having exactly \( M \) edges from \( E \) and none outside \( E \) . (Note that this is not the probability space considered in the previous theorems.) The expected number of \( {K}_{s, t} \) subgraphs contained in a graph \( G \in \mathcal{G}\left( {{K}_{{n}_{1},{n}_{2}}, M}\right) \) is\n\n\[ \n{E}_{s, t} = \left( \begin{matrix} {n}_{1} \\ s \end{matrix}\right) \left( \begin{matrix} {n}_{2} \\ t \end{matrix}\right) \left( \begin{matrix} \left| E\right| - {st} \\ M - {st} \end{matrix}\right) {\left( \begin{matrix} \left| E\right| \\ M \end{matrix}\right) }^{-1}, \n\]\n\nwhere the first factor is the number of ways the first class of \( {K}_{s, t} \) can be chosen, the second factor is the number of ways the second class can be chosen and the third factor is the number of ways the \( M - {st} \) edges outside a \( {K}_{s, t} \) can be chosen. 
Now,\n\n\[ \n\left( \begin{matrix} \left| E\right| - {st} \\ M - {st} \end{matrix}\right) {\left( \begin{matrix} \left| E\right| \\ M \end{matrix}\right) }^{-1} = \mathop{\prod }\limits_{{i = 0}}^{{{st} - 1}}\frac{M - i}{{n}_{1}{n}_{2} - i} < {\left( \frac{M}{{n}_{1}{n}_{2}}\right) }^{st}, \n\]\n\nso\n\n\[ \n{E}_{s, t} < \frac{1}{s!t!}{n}_{1}^{s}{n}_{2}^{t}{\left( \frac{M}{{n}_{1}{n}_{2}}\right) }^{st} \leq \frac{1}{s!t!}{n}_{1}^{s}{n}_{2}^{t}{\left( {n}_{1}^{-\alpha }{n}_{2}^{-\beta }\right) }^{st} = \frac{1}{s!t!}{n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }. \n\]\n\nThus there is a graph \( {G}_{0} \in \mathcal{G}\left( {{K}_{{n}_{1},{n}_{2}}, M}\right) \) that contains fewer than \( {n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }/s!t \) ! complete bipartite graphs \( {K}_{s, t} \) . Omit one edge from each \( {K}_{s, t} \) in \( {G}_{0} \) . The obtained graph \( G = {G}_{2}\left( {{n}_{1},{n}_{2}}\right) \) has at least\n\n\[ \n\left\lfloor {{n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }}\right\rfloor - \left\lfloor {\frac{1}{s!t!}{n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }}\right\rfloor \geq \left\lfloor {\left( {1 - \frac{1}{s!t!}}\right) {n}_{1}^{1 - \alpha }{n}_{2}^{1 - \beta }}\right\rfloor \n\]\n\nedges and contains no \( {K}_{s, t} \) .
|
Yes
|
Theorem 5 Let \( 1 \leq h \leq k \) be fixed natural numbers and let \( 0 < p < 1 \) be fixed also. Then in \( \mathcal{G}\left( {n, p}\right) \) a.e. graph \( {G}_{p} \) is such that for every sequence of \( k \) vertices \( {x}_{1},{x}_{2},\ldots ,{x}_{k} \) there exists a vertex \( x \) such that \( x{x}_{i} \in E\left( {G}_{p}\right) \) if \( 1 \leq i \leq h \) and \( x{x}_{i} \notin E\left( {G}_{p}\right) \) if \( h < i \leq k. \)
|
Proof. Let \( {x}_{1},{x}_{2},\ldots ,{x}_{k} \) be a sequence of vertices. The probability that a vertex \( x \in W = V\left( G\right) - \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) has the required properties is \( {p}^{h}{q}^{k - h} \) . Since for \( x, y \in W, x \neq y \), the edges \( x{x}_{i} \) are chosen independently of the edges \( y{x}_{i} \), the probability that no suitable vertex \( x \) can be found for this particular sequence is \( {\left( 1 - {p}^{h}{q}^{k - h}\right) }^{n - k} \) . There are \( {\left( n\right) }_{k} = n\left( {n - 1}\right) \cdots \left( {n - k + 1}\right) \) choices for the sequence \( {x}_{1},{x}_{2},\ldots ,{x}_{k} \), so the probability that there is a sequence \( {x}_{1},{x}_{2},\ldots ,{x}_{k} \) for which no suitable \( x \) can be found is at most\n\n\[ \varepsilon = {n}^{k}{\left( 1 - {p}^{h}{q}^{k - h}\right) }^{n - k}. \]\n\nClearly, \( \varepsilon \rightarrow 0 \) as \( n \rightarrow \infty \) .
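The final claim \( \varepsilon \rightarrow 0 \) is easy to check numerically for specific parameters (an illustration; the choices \( k = 3, h = 1, p = 1/2 \) are mine):

```python
import math

def eps(n, k=3, h=1, p=0.5):
    # The bound from the proof: n^k * (1 - p^h * q^(k-h))^(n-k), q = 1 - p.
    q = 1 - p
    return n ** k * (1 - p ** h * q ** (k - h)) ** (n - k)
```

The polynomial factor \( {n}^{k} \) is overwhelmed by the exponentially small factor, e.g. already at \( n = {200} \) the bound is far below 1.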
|
Yes
|
Theorem 7 Let \( k \geq 2, k - 1 \leq \ell \leq \left( \begin{array}{l} k \\ 2 \end{array}\right) \) and let \( F = G\left( {k,\ell }\right) \) be a balanced graph (with \( k \) vertices and \( \ell \) edges). If \( p\left( n\right) {n}^{k/\ell } \rightarrow 0 \) then almost no \( {G}_{n, p} \) contains \( F \), and if \( p\left( n\right) {n}^{k/\ell } \rightarrow \infty \) then almost every \( {G}_{n, p} \) contains \( F \) .
|
Proof. Let \( p = \gamma {n}^{-k/\ell },0 < \gamma < {n}^{k/\ell } \), and denote by \( X = X\left( G\right) \) the number of copies of \( F \) contained in \( {G}_{n, p} \) . Denote by \( {k}_{F} \) the number of graphs with a fixed set of \( k \) labelled vertices that are isomorphic to \( F \) . Clearly, \( {k}_{F} \leq k \) !. Then\n\n\[ \mu = {\mathbb{E}}_{p}\left( X\right) = \left( \begin{array}{l} n \\ k \end{array}\right) {k}_{F}{p}^{\ell }{\left( 1 - p\right) }^{\left( \begin{array}{l} k \\ 2 \end{array}\right) - \ell } \leq {n}^{k}\left( {{\gamma }^{\ell }{n}^{-k}}\right) = {\gamma }^{\ell }, \]\n\nso \( {\mathbb{E}}_{p}\left( X\right) \rightarrow 0 \) as \( \gamma \rightarrow 0 \), showing the first assertion.\n\nNow let us estimate the variance of \( X \) when \( \gamma \) is large. Note that there is a constant \( {c}_{1} > 0 \) such that\n\n\[ \mu \geq {c}_{1}{\gamma }^{\ell }\text{ for every }\gamma \]\n\n(15)\n\nAccording to (14), we have to estimate the probability that \( G \) contains two fixed copies of \( F \), say \( {F}^{\prime } \) and \( {F}^{\prime \prime } \) . Put\n\n\[ {A}_{s} = \mathop{\sum }\limits_{s}{\mathbb{P}}_{p}\left( {{G}_{n, p} \supset {F}^{\prime } \cup {F}^{\prime \prime }}\right) , \]\n\nwhere \( \mathop{\sum }\limits_{s} \) means that the summation is over all pairs \( \left( {{F}^{\prime },{F}^{\prime \prime }}\right) \) with \( s \) vertices in common. Clearly,\n\n\[ {A}_{0} < {\mu }^{2}\text{.} \]\n\nFurthermore, in a set of \( s \) vertices \( {F}^{\prime } \) has \( t \leq \left( {\ell /k}\right) s \) edges. 
Hence, counting first the choices for \( {F}^{\prime } \) and then for \( {F}^{\prime \prime } \) with \( s \geq 1 \) common vertices with \( {F}^{\prime } \), we find that for some constants \( {c}_{2} \) and \( {c}_{3} \),\n\n\[ \frac{{A}_{s}}{\mu } \leq \mathop{\sum }\limits_{{t \leq \ell s/k}}\left( \begin{array}{l} k \\ s \end{array}\right) \left( \begin{array}{l} n - k \\ k - s \end{array}\right) k!{p}^{\ell - t}{q}^{\left( \begin{array}{l} k \\ 2 \end{array}\right) - \left( \begin{array}{l} s \\ 2 \end{array}\right) - \ell + t} \]\n\n\[ \leq \mathop{\sum }\limits_{{t \leq \ell s/k}}{c}_{2}{n}^{k - s}{\left( \gamma {n}^{-k/\ell }\right) }^{\ell - t} \]\n\n\[ \leq {c}_{2}{n}^{-s}{\gamma }^{\ell } + {c}_{3}{\gamma }^{\ell - 1} \]\n\nHere in the last step we separated the term with \( t = 0 \) from the rest. Consequently, making use of (14), we find that\n\n\[ \frac{{\mathbb{E}}_{p}\left( {X}^{2}\right) }{{\mu }^{2}} = \frac{\mathop{\sum }\limits_{0}^{k}{A}_{s}}{{\mu }^{2}} \leq 1 + {c}_{4}{\gamma }^{-1} \]\n\nfor some constant \( {c}_{4} \) . Therefore, by (13),\n\n\[ \mathbb{P}\left( {X = 0}\right) \leq \frac{{\sigma }^{2}}{{\mu }^{2}} \leq {c}_{4}{\gamma }^{-1} \]\n\nso \( \mathbb{P}\left( {X = 0}\right) \rightarrow 0 \) as \( \gamma \rightarrow \infty \) .
|
Yes
|
Theorem 10 Let \( 0 < p < 1 \) be constant. Then\n\n\[ \chi \left( {G}_{n, p}\right) = \left( {\frac{1}{2} + o\left( 1\right) }\right) \frac{n\log \left( {1/q}\right) }{\log n} \]\n\nfor a.e. \( {G}_{n, p} \), where \( q = 1 - p \) .
|
What Theorem 10 claims is that if \( \varepsilon > 0 \) then\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbb{P}}_{p}\left( {\left| {\chi \left( {G}_{n, p}\right) \log n/\left( {n\log \left( {1/q}\right) }\right) - \frac{1}{2}}\right| < \varepsilon }\right) = 1. \]
|
No
|
Lemma 12 Let \( c > 3 \) and \( 0 < \gamma < \frac{1}{3} \) be constants and let \( p = \left( {c\log n}\right) /n \) . Then in \( \mathcal{G}\left( {n, p}\right) \) we have\n\n\[ \n{\mathbb{P}}_{p}\left( {{D}_{t} > 0\text{ for some }t,1 \leq t \leq {\gamma n}}\right) = O\left( {n}^{3 - c}\right) .\n\]
|
Proof. Put \( \beta = \frac{\left( c - 3\right) }{4c} \) . Clearly,\n\n\[ \n\mathop{\sum }\limits_{{t = 1}}^{{\lfloor {\gamma n}\rfloor }}{\mathbb{E}}_{p}\left( {D}_{t}\right) = \mathop{\sum }\limits_{{t = 1}}^{{\lfloor {\gamma n}\rfloor }}\left( \begin{array}{l} n \\ t \end{array}\right) \left( \begin{matrix} n - t \\ n - {3t} \end{matrix}\right) {\left( 1 - p\right) }^{t\left( {n - {3t}}\right) }\n\]\n\n\[ \n\leq n\left( \begin{matrix} n - 1 \\ 2 \end{matrix}\right) {\left( 1 - p\right) }^{n - 3} + \mathop{\sum }\limits_{{t = 2}}^{{\lfloor {\beta n}\rfloor }}\frac{1}{t!}{n}^{3t}{\left( 1 - p\right) }^{t\left( {n - {3t}}\right) }\n\]\n\n\[ \n+ \mathop{\sum }\limits_{{t = \lfloor {\beta n}\rfloor + 1}}^{{\lfloor {\gamma n}\rfloor }}{2}^{2n}{\left( 1 - p\right) }^{t\left( {n - {3t}}\right) }\n\]\n\nNow, since \( {\left( 1 - p\right) }^{n} < {n}^{-c} \), we have\n\n\[ \n{n}^{3}{\left( 1 - p\right) }^{n - 3} < {\left( 1 - p\right) }^{-3}{n}^{3 - c}\n\]\n\nif \( 2 \leq t \leq {\beta n} \), then\n\n\[ \n{n}^{3t}{\left( 1 - p\right) }^{t\left( {n - {3t}}\right) } < {n}^{t\left( {3 - \left( {c\left( {n - {3t}}\right) /n}\right) }\right) } \leq {n}^{3 - c}\n\]\n\nand if \( {\beta n} \leq t \leq {\gamma n} \), then\n\n\[ \n{2}^{2n}{\left( 1 - p\right) }^{t\left( {n - {3t}}\right) } < {n}^{{2n}/\log n - \left( {n - {3t}}\right) t/n} = O\left( {n}^{-\beta \left( {1 - {3\gamma }}\right) n}\right) .\n\]\n\nConsequently,\n\n\[ \n\mathop{\sum }\limits_{{t = 1}}^{{\lfloor {\gamma n}\rfloor }}{\mathbb{E}}_{p}\left( {D}_{t}\right) = O\left( {n}^{3 - c}\right)\n\]\n\nimplying the assertion of the lemma.
|
Yes
|
Theorem 16 Almost every random graph process is such that if \( k \geq 2 \) is fixed and \( t = o\left( {n}^{\left( {k - 1}\right) /k}\right) \) then every component of \( {G}_{t} \) is a tree of order at most \( k \) . Furthermore, if \( s \) is constant and \( t/{n}^{\left( {k - 2}\right) /\left( {k - 1}\right) } \rightarrow \infty \) then \( {G}_{t} \) has at least \( s \) components of order \( k \) .
|
The proof of this assertion goes along the lines of the proof of Theorem 7 and is rather routine: we do not even need that there are \( {k}^{k - 2} \) trees of order \( k \) (see Exercise I. 41 and Theorem VIII.20). All we have to do is to estimate \( \mathbb{E}\left( {X}_{k}\right) \) and \( \mathbb{E}\left( {X}_{k}^{2}\right) \), where \( {X}_{k} \) is the number of trees of order \( k \) in \( {G}_{t} \), using that there are some \( t\left( k\right) \) trees on \( k \) distinguished vertices, with \( 1 \leq t\left( k\right) \leq \left( \begin{matrix} \left( \begin{array}{l} k \\ 2 \end{array}\right) \\ k - 1 \end{matrix}\right) \) (see Exercise 34).
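The count \( {k}^{k - 2} \) (Cayley's formula) can be confirmed by brute force for small \( k \); the sketch below (my illustration) enumerates all \( \left( {k - 1}\right) \) -edge graphs on \( k \) labelled vertices and keeps the connected ones:

```python
from itertools import combinations

def _root(parent, v):
    # union-find root with path halving
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def count_labelled_trees(k):
    # A graph on k labelled vertices with k - 1 edges is a tree iff it
    # is connected; count these and compare with Cayley's k^(k-2).
    all_edges = list(combinations(range(k), 2))
    count = 0
    for edges in combinations(all_edges, k - 1):
        parent, comps = list(range(k)), k
        for u, v in edges:
            ru, rv = _root(parent, u), _root(parent, v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
        if comps == 1:
            count += 1
    return count
```

This gives 16 trees for \( k = 4 \) and 125 for \( k = 5 \), matching \( {k}^{k - 2} \).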
|
No
|
Theorem 18 Let \( a \geq 2 \) be fixed. If \( n \) is sufficiently large, \( \varepsilon = \varepsilon \left( n\right) < 1/3 \) and \( p = p\left( n\right) = \frac{1 + \varepsilon }{n} \) then, with probability at least \( 1 - {n}^{-a},{G}_{n, p} \) has no component whose order \( k \) satisfies\n\n\[ \frac{8a}{{\varepsilon }^{2}}\log n \leq k \leq \frac{{\varepsilon }^{2}}{12}n \]
|
Proof. Set \( {k}_{0} = \left\lceil {{8a}{\varepsilon }^{-2}\log n}\right\rceil \) and \( {k}_{1} = \left\lceil {{\varepsilon }^{2}n/{12}}\right\rceil \) . Writing \( {p}_{k} \) for the probability that the component of \( {G}_{n, p} \) containing a fixed vertex has \( k \) vertices, the probability that \( {G}_{n, p} \) has a component of order \( k \) is at most \( n{p}_{k} \) . Hence, it suffices to prove that\n\n\[ \mathop{\sum }\limits_{{k = {k}_{0}}}^{{k}_{1}}{p}_{k} \leq {n}^{-a - 1} \]\n\nWe may assume that \( {k}_{0} \leq {k}_{1} \), so \( {\varepsilon }^{4} \geq {96a}\left( {\log n}\right) /n \geq 1/n \), since otherwise there is nothing to prove. Now, by (26),\n\n\[ {p}_{k} \leq \mathbb{P}\left( {\left| {B}_{k}\right| = k}\right) \leq \frac{{n}^{k}}{k!}{e}^{-{k}^{2}/{2n}}{\left( kp\right) }^{k}{\left( 1 - p\right) }^{k\left( {n - k - 1}\right) }, \]\n\nsince\n\n\[ \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) = \frac{{n}^{k}}{k!}\mathop{\prod }\limits_{{j = 1}}^{k}\left( {1 - \frac{j}{n}}\right) \leq \frac{{n}^{k}}{k!}{e}^{-{k}^{2}/{2n}}, \]\n\nand\n\n\[ {\left( 1 - p\right) }^{k} \geq 1 - {kp} \]\n\nNoting that\n\n\[ 1 + \varepsilon \leq {e}^{\varepsilon - {\varepsilon }^{2}/3} \]\n\nfor \( \left| \varepsilon \right| \leq 1/3 \), and recalling (1), Stirling’s formula, we have\n\n\[ {p}_{k} \leq \exp \left\{ {-{k}^{2}/{2n} - {\varepsilon }^{2}k/3 + {k}^{2}\left( {1 + \varepsilon }\right) /n}\right\} \]\n\n\[ \leq \exp \left\{ {-{\varepsilon }^{2}k/3 + {k}^{2}/n}\right\} \leq {e}^{-{\varepsilon }^{2}k/4}. \]\n\nTherefore,\n\n\[ \mathop{\sum }\limits_{{k = {k}_{0}}}^{{k}_{1}}{p}_{k} \leq \mathop{\sum }\limits_{{k = {k}_{0}}}^{{k}_{1}}{e}^{-{\varepsilon }^{2}k/4} \leq {e}^{-{\varepsilon }^{2}{k}_{0}/4}{\left( 1 - {e}^{-{\varepsilon }^{2}/4}\right) }^{-1} \]\n\n\[ \leq \frac{5}{{\varepsilon }^{2}}{e}^{-{\varepsilon }^{2}{k}_{0}/4} \leq n{n}^{-{2a}} \leq {n}^{-a - 1}, \]\n\nas required.
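The elementary inequality \( 1 + \varepsilon \leq {e}^{\varepsilon - {\varepsilon }^{2}/3} \) for \( \left| \varepsilon \right| \leq 1/3 \) invoked above can be spot-checked numerically (an illustration, not a proof):

```python
import math

# Check 1 + x <= exp(x - x^2/3) on a grid covering |x| <= 1/3
# (the tiny slack absorbs floating-point rounding).
ok = all(
    1 + x <= math.exp(x - x * x / 3) + 1e-12
    for x in (i / 3000 for i in range(-1000, 1001))
)
```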
|
Yes
|
Theorem 19 Let \( {\left( {Z}_{n}\right) }_{0}^{\infty } \) be as above, with \( c > 1 \), and write \( {p}_{\infty } \) for the probability that \( {Z}_{n} > 0 \) for every \( n \) . Then \( {p}_{\infty } \) is the unique root of\n\n\[ \n{e}^{-c{p}_{\infty }} = 1 - {p}_{\infty } \n\]\n\nin the interval \( \left( {0,1}\right) \) .
|
Proof. Let \( {p}_{n} \) be the probability that \( {Z}_{n} > 0 \), so that \( {p}_{0} = 1 \) and \( {p}_{\infty } = \) \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{p}_{n} \) . First we check, by induction on \( n \), that \( {p}_{n} \geq \gamma \) for every \( n \), where \( \gamma \) is the unique root of \( {e}^{-{\gamma c}} = 1 - \gamma \) in \( \left( {0,1}\right) \) . This holds for \( n = 0 \) since \( {p}_{0} = 1 \geq \gamma \) . Assume then that \( n \geq 0 \) and \( {p}_{n} \geq \gamma \) . Conditioning on \( {Z}_{1} = k \geq 1 \), the process is the sum of \( k \) independent processes with the same distribution. Since \( 1 - {p}_{t} \) is the probability that the process dies out by time \( t \) ,\n\n\[ \n1 - {p}_{n + 1} = \mathbb{P}\left( {{Z}_{1} = 0}\right) + \mathop{\sum }\limits_{{k = 1}}^{\infty }\mathbb{P}\left( {{Z}_{1} = k}\right) {\left( 1 - {p}_{n}\right) }^{k} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{c}^{k}}{k!}{e}^{-c}{\left( 1 - {p}_{n}\right) }^{k} \n\]\n\n\[ \n= {e}^{-c{p}_{n}}\mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{\left( c\left( 1 - {p}_{n}\right) \right) }^{k}}{k!}{e}^{-c\left( {1 - {p}_{n}}\right) } \n\]\n\n\[ \n= {e}^{-c{p}_{n}} \leq {e}^{-{c\gamma }}. \n\]\n\nHence \( {p}_{n + 1} \geq \gamma \), as claimed, and so \( {p}_{\infty } = \mathop{\lim }\limits_{{n \rightarrow \infty }}{p}_{n} \geq \gamma \) .\n\nBy applying the argument above to \( 1 - {p}_{\infty } \) rather than \( 1 - {p}_{n + 1} \) and \( 1 - {p}_{n} \) , we see that\n\n\[ \n1 - {p}_{\infty } = \mathbb{P}\left( {{Z}_{1} = 0}\right) + \mathop{\sum }\limits_{{k = 1}}^{\infty }\mathbb{P}\left( {{Z}_{1} = k}\right) {\left( 1 - {p}_{\infty }\right) }^{k} = {e}^{-c{p}_{\infty }}. \n\]\n\nHence \( {p}_{\infty } \) is a root of \( {e}^{-c{p}_{\infty }} = 1 - {p}_{\infty } \) satisfying \( 0 < {p}_{\infty } \leq 1 \), and we are done.
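The recursion in the proof doubles as a numerical scheme for \( {p}_{\infty } \) (a sketch of mine, not from the source): iterating \( p \mapsto 1 - {e}^{-{cp}} \) from \( {p}_{0} = 1 \) converges, for \( c > 1 \), to the root of \( {e}^{-{cp}} = 1 - p \) in \( \left( {0,1}\right) \).

```python
import math

def survival_probability(c, iterations=200):
    # The proof's recursion 1 - p_{n+1} = e^{-c p_n}, started from p_0 = 1;
    # for c > 1 the iteration contracts towards the root in (0, 1).
    p = 1.0
    for _ in range(iterations):
        p = 1 - math.exp(-c * p)
    return p

p_inf = survival_probability(2.0)  # roughly 0.797 for c = 2
```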
|
Yes
|
Theorem 2 The subgroup \( B \) of \( A \) is generated by the decorations of the chords.
|
In particular, the subgroup \( B \) in Fig. VIII. 9 is generated by \( b, a{b}^{3}{a}^{-1} \) , \( a{b}^{-1}a{b}^{-1}{a}^{-1} \) and \( {abab}{a}^{-1} \) .
|
Yes
|
Theorem 4 A subgroup of a free group is free. Furthermore, if \( A \) is a free group of rank \( k \) (that is, it has \( k \) free generators) and \( B \) is a subgroup of index \( n \), then \( B \) has rank \( \left( {k - 1}\right) n + 1 \) .
|
Proof. The presentation of \( B \) given in Theorem 3 is a free presentation on the set of chords of the Schreier diagram. Altogether there are \( {kn} \) edges of which \( n - 1 \) are tree edges; hence there are \( \left( {k - 1}\right) n + 1 \) chords.
|
Yes
|
Theorem 5 Let \( G \) be a connected graph of order \( n \) with adjacency matrix \( A \) . (i) Every eigenvalue \( \mu \) of \( G \) satisfies \( \left| \mu \right| \leq \Delta = \Delta \left( G\right) \) .
|
Proof. (i) Let \( \mathbf{x} = \left( {x}_{i}\right) \) be a non-zero eigenvector with eigenvalue \( \mu \) . Let \( {x}_{p} \) be a weight with maximum modulus: \( \left| {x}_{p}\right| \geq \left| {x}_{i}\right| \) for every \( i \) ; we may assume without loss of generality that \( {x}_{p} = 1 \) . Then \[ \left| \mu \right| = \left| {\mu {x}_{p}}\right| = \left| {\mathop{\sum }\limits_{{\ell = 1}}^{n}{a}_{p\ell }{x}_{\ell }}\right| \leq \mathop{\sum }\limits_{{\ell = 1}}^{n}{a}_{p\ell }\left| {x}_{\ell }\right| \leq \left| {x}_{p}\right| d\left( {v}_{p}\right) \leq \left| {x}_{p}\right| \Delta = \Delta , \] showing \( \left| \mu \right| \leq \Delta \) .
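A small numeric illustration of (i) (mine, not the source's; pure-Python power iteration) on the path \( {P}_{3} \), where \( \Delta = 2 \) and \( {\mu }_{\max } = \sqrt{2} \) . Since \( {P}_{3} \) is bipartite its spectrum is symmetric about 0, so we iterate with the shifted matrix \( A + \Delta I \), which keeps the eigenvectors while making the top eigenvalue strictly dominant:

```python
import math

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]          # path v1 - v2 - v3: Delta = 2, mu_max = sqrt(2)
delta = 2

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Power iteration on A + Delta*I; the shift is needed because a bipartite
# graph has the symmetric eigenvalue pair +-mu_max.
x = [1.0, 1.0, 1.0]
for _ in range(500):
    y = [delta * xi + yi for xi, yi in zip(x, matvec(A, x))]
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]
mu_max = sum(xi * yi for xi, yi in zip(x, matvec(A, x)))  # Rayleigh quotient
```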
|
Yes
|
Corollary 6 Every graph \( G \) satisfies \( \chi \left( G\right) \leq {\mu }_{\max }\left( G\right) + 1 \) .
|
Proof. For every induced subgraph \( H \) of \( G \) we have\n\n\[ \delta \left( H\right) \leq {\mu }_{\max }\left( H\right) \leq {\mu }_{\max }\left( G\right) \]\n\nso we are done by Theorem V.1.
|
Yes
|
Theorem 7 Let \( G \) be a non-empty graph. Then\n\n\[ \chi \left( G\right) \geq 1 - \frac{{\mu }_{\max }\left( G\right) }{{\mu }_{\min }\left( G\right) }.\]
|
Proof. As before, we take \( V = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) for the set of vertices, so that \( \left( {{v}_{1},\ldots ,{v}_{n}}\right) \) is the canonical basis of \( {C}_{0}\left( G\right) \) . Let \( c : V\left( G\right) \rightarrow \left\lbrack k\right\rbrack \) be a (proper) colouring of \( G \) with \( k = \chi \left( G\right) \) colours. Then, writing \( \langle \mathbf{a},\mathbf{b},\ldots \rangle \) for the space spanned by the vectors \( \mathbf{a},\mathbf{b},\ldots \), the space \( {C}_{0}\left( G\right) \) is the orthogonal direct sum of the ’colour spaces’ \( {U}_{i} = \left\langle {{v}_{j} : c\left( {v}_{j}\right) = i}\right\rangle, i = 1,\ldots, k \) . Since no edge joins vertices of the same colour, the adjacency matrix \( A = A\left( G\right) \) is such that if \( u, w \in {U}_{i} \) for some \( i \) then \( \langle {Au}, w\rangle = 0 \) . In particular, \( \langle {Au}, u\rangle = 0 \) for \( u \in {U}_{i}, i = 1,\ldots, k \) .\n\nLet \( \mathbf{x} \in {C}_{0}\left( G\right) \) be an eigenvector of the adjacency matrix \( A \) with eigenvalue \( {\mu }_{\max } \), and let \( \mathbf{x} = \mathop{\sum }\limits_{{i = 1}}^{k}{\xi }_{i}{u}_{i} \), where \( {u}_{i} \in {U}_{i} \) and \( \begin{Vmatrix}{u}_{i}\end{Vmatrix} = 1 \) . Let \( U = \left\langle {{u}_{1},\ldots ,{u}_{k}}\right\rangle \) , so that \( \left( {{u}_{1},\ldots ,{u}_{k}}\right) \) is an orthonormal basis of \( U \), and let \( S : U \rightarrow {C}_{0}\left( G\right) \) be the inclusion map.\n\nFor \( \mathbf{u} \in U,\parallel \mathbf{u}\parallel = 1 \), we have \( \parallel S\mathbf{u}\parallel = \parallel \mathbf{u}\parallel = 1 \), so \( \left\langle {{S}^{ * }{AS}\mathbf{u},\mathbf{u}}\right\rangle = \langle {AS}\mathbf{u}, S\mathbf{u}\rangle = \) \( \langle A\mathbf{u},\mathbf{u}\rangle \in V\left( A\right) \) . 
Hence the numerical range of the hermitian operator \( {S}^{ * }{AS} \) is contained in the numerical range of \( A \) :\n\n\[ V\left( {{S}^{ * }{AS}}\right) \subset V\left( A\right) = \left\lbrack {{\mu }_{\min },{\mu }_{\max }}\right\rbrack .\n\]\n\nIn fact, \( {\mu }_{\max } \) is an eigenvalue of \( {S}^{ * }{AS} \) as well, with eigenvector \( x \) :\n\n\[ \left\langle {{S}^{ * }{ASx},{u}_{i}}\right\rangle = \left\langle {{Ax},{u}_{i}}\right\rangle = {\mu }_{\max }\left\langle {x,{u}_{i}}\right\rangle = {\mu }_{\max }{\xi }_{i},\]\n\nso \( {S}^{ * }{ASx} = {\mu }_{\max }x \) .\n\nAlso, \( \left\langle {{S}^{ * }{AS}{u}_{i},{u}_{i}}\right\rangle = \left\langle {A{u}_{i},{u}_{i}}\right\rangle = 0 \) for every \( i \), so \( \operatorname{tr}\left( {{S}^{ * }{AS}}\right) = 0 \) . Therefore, since every eigenvalue of \( {S}^{ * }{AS} \) is at least \( {\mu }_{\min } \),\n\n\[ {\mu }_{\max } + \left( {k - 1}\right) {\mu }_{\min } \leq \operatorname{tr}\left( {{S}^{ * }{AS}}\right) = 0.\]\n\nAs \( G \) is non-empty, \( {\mu }_{\min } < 0 \), so the result follows.
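For the cycle \( {C}_{5} \) the spectrum is known in closed form (eigenvalues \( 2\cos \left( {{2\pi j}/5}\right) \), a standard fact), so the bound of Theorem 7 can be evaluated directly; this check (my illustration) gives \( 1 - {\mu }_{\max }/{\mu }_{\min } = \sqrt{5} \approx {2.24} \), forcing \( \chi \left( {C}_{5}\right) \geq 3 \), which is attained.

```python
import math

n = 5
# eigenvalues of the cycle C_n: 2*cos(2*pi*j/n), j = 0, ..., n-1
eigs = [2 * math.cos(2 * math.pi * j / n) for j in range(n)]
mu_max, mu_min = max(eigs), min(eigs)       # 2 and 2*cos(4*pi/5)
bound = 1 - mu_max / mu_min                 # Theorem 7's lower bound on chi
```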
|
Yes
|
Theorem 8 The adjacency matrix of a graph \( G \) has at least \( \beta \left( G\right) \) non-negative and at least \( \beta \left( G\right) \) non-positive eigenvalues, counted with multiplicity.
|
Proof. The Lagrangian \( {f}_{G}\left( \mathbf{x}\right) \) is identically 0 on every subspace spanned by a set of independent vertices. In particular, \( {f}_{G} \) is positive semi-definite and negative semi-definite on a subspace of dimension \( \beta \left( G\right) \) . Hence we are done by the analogues of (1).
|
No
|
Theorem 9 The complete graph \( {K}_{n} \) is not the edge-disjoint union of \( n - 2 \) complete bipartite graphs.
|
Proof. Suppose that, contrary to the assertion, \( {K}_{n} \) is the edge-disjoint union of complete bipartite graphs \( {G}_{1},\ldots ,{G}_{n - 2} \) . For each \( i \), let \( {H}_{i} \) be obtained from \( {G}_{i} \) by adding to it isolated vertices so that \( V\left( {H}_{i}\right) = V\left( {K}_{n}\right) \) . Note that the Lagrangians of these graphs are such that \( {f}_{{K}_{n}} = \mathop{\sum }\limits_{{i = 1}}^{{n - 2}}{f}_{{H}_{i}} \) .\n\nWe know that each \( {f}_{{H}_{i}} \) is positive semi-definite on some subspace \( {U}_{i} \subset {C}_{0}\left( {K}_{n}\right) \) of dimension \( n - 1 \) . But then \( U = \mathop{\bigcap }\limits_{{i = 1}}^{{n - 2}}{U}_{i} \) is a subspace of dimension at least 2, on which each \( {f}_{{H}_{i}} \) is positive semi-definite. Hence \( {f}_{{K}_{n}} = \mathop{\sum }\limits_{{i = 1}}^{{n - 2}}{f}_{{H}_{i}} \) is positive semi-definite on \( U \), contradicting the fact that \( {f}_{{K}_{n}} \) is not positive semi-definite on any subspace of dimension 2.
|
Yes
|
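The bound \( n - 2 \) in Theorem 9 is best possible: \( K_n \) *is* the edge-disjoint union of \( n - 1 \) stars, each a complete bipartite graph. A small verification sketch for \( n = 4 \) (the construction, centring the \( i \)th star at vertex \( i \), is a standard one and an assumption of this sketch):

```python
from itertools import combinations

n = 4
complete = {frozenset(e) for e in combinations(range(n), 2)}

# star with centre i joined to all later vertices: a complete bipartite K_{1, n-1-i}
stars = [{frozenset((i, j)) for j in range(i + 1, n)} for i in range(n - 1)]

disjoint = sum(len(s) for s in stars) == len(complete)  # no edge used twice
covers = set().union(*stars) == complete                # every edge covered
```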
Theorem 10 Let \( G \) be a graph with clique number \( {k}_{0} \) . Then \( f\left( G\right) = \left( {{k}_{0} - 1}\right) /{k}_{0} \) .
|
Proof. Let \( \mathbf{y} = {\left( {y}_{i}\right) }_{1}^{n} \in S \) be a point at which \( {f}_{G}\left( \mathbf{x}\right) \) attains its maximum and \( \operatorname{supp}\mathbf{y} = \left\{ {{v}_{i} : {y}_{i} > 0}\right\} \) is as small as possible. We claim that the support of \( \mathbf{y} \) spans a complete subgraph of \( G \) . Indeed, suppose \( {y}_{1},{y}_{2} > 0 \) and \( {v}_{1} \nsim {v}_{2} \) . Assuming, as we may, that \( \mathop{\sum }\limits_{{{v}_{i} \sim {v}_{1}}}{y}_{i} \geq \mathop{\sum }\limits_{{{v}_{i} \sim {v}_{2}}}{y}_{i} \), set \( {\mathbf{y}}^{\prime } = \left( {{y}_{1} + {y}_{2},0,{y}_{3},{y}_{4},\ldots ,{y}_{n}}\right) \in S \) .\n\nThen \( {f}_{G}\left( {\mathbf{y}}^{\prime }\right) \geq {f}_{G}\left( \mathbf{y}\right) \) and \( \operatorname{supp}{\mathbf{y}}^{\prime } \) is strictly smaller than \( \operatorname{supp}\mathbf{y} \), contradicting our choice of \( \mathbf{y} \) .\n\nWriting \( K \) for the complete subgraph of \( G \) spanned by the support of \( \mathbf{y} \), we have \( f\left( G\right) = f\left( K\right) = \left( {k - 1}\right) /k \), where \( k = \left| K\right| = \left| {\operatorname{supp}\mathbf{y}}\right| \) . Hence \( k \) is as large as possible, namely \( {k}_{0} \), and we are done.
|
Yes
|
Corollary 11 Let \( G = G\left( {n, m}\right) \), with \( m > \frac{r - 2}{2\left( {r - 1}\right) }{n}^{2} \) . Then \( G \) contains a complete graph of order \( r \) .
|
Proof. Writing \( {k}_{0} = \omega \left( G\right) \) for the clique number of \( G \), we know that \( f\left( G\right) = \) \( \left( {{k}_{0} - 1}\right) /{k}_{0} \) . On the other hand, with \( \mathbf{x} = \left( {1/n,1/n,\ldots ,1/n}\right) \) we see that\n\n\[ f\left( G\right) \geq {f}_{G}\left( \mathbf{x}\right) = \frac{2m}{{n}^{2}} > \frac{r - 2}{r - 1}. \]\n\nHence \( {k}_{0} \geq r \), as claimed.
|
Yes
|
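A brute-force check of Corollary 11 in its smallest interesting case (a sketch, with parameters chosen by me): for \( n = 6 \) and \( r = 3 \) the threshold is \( n^2/4 = 9 \), so every 6-vertex graph with 10 edges contains a triangle, while the complete bipartite graph \( K_{3,3} \) shows that 9 edges do not suffice.

```python
from itertools import combinations

n = 6
edges = list(combinations(range(n), 2))   # the 15 possible edges

def has_triangle(es):
    return any({(a, b), (a, c), (b, c)} <= es
               for a, b, c in combinations(range(n), 3))

# every graph with 10 > n^2/4 edges contains a K_3 (3003 graphs to check)
all_have_k3 = all(has_triangle(set(g)) for g in combinations(edges, 10))

# ...but K_{3,3} has exactly 9 edges and no triangle, so the bound is sharp
k33 = {(a, b) for a in range(3) for b in range(3, 6)}
```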
Theorem 12 The vertex connectivity of an incomplete graph \( G \) is at least as large as the second smallest eigenvalue \( {\lambda }_{2}\left( G\right) \) of the Laplacian of \( G \) .
|
Proof. If \( G = {K}_{n} \) then \( {\lambda }_{2} = n - 1 = \kappa \left( G\right) \) . Suppose then that \( G \) is not a complete graph, and let \( {V}^{\prime } \cup S \cup {V}^{\prime \prime } \) be a partition of the vertex set \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) of \( G \) such that \( \left| S\right| = \kappa \left( G\right) ,{V}^{\prime } \) and \( {V}^{\prime \prime } \) are non-empty, and \( G \) has no \( {V}^{\prime } - {V}^{\prime \prime } \) edge. Thus \( S \) is a vertex cut with \( k = \kappa \left( G\right) \) vertices.\n\nOur aim is to construct a vector \( \mathbf{x} \) orthogonal to \( \mathbf{j} \) such that \( q\left( \mathbf{x}\right) /\parallel \mathbf{x}{\parallel }^{2} \) is small, namely at most \( k \) . To this end, set \( a = \left| {V}^{\prime }\right|, b = \left| {V}^{\prime \prime }\right| \), and let \( \mathbf{x} = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{v}_{i} \in \) \( {C}_{0}\left( G\right) \) be the vector with\n\n\[ \n{x}_{i} = \begin{cases} b & \text{ if }{v}_{i} \in {V}^{\prime }, \\ 0 & \text{ if }{v}_{i} \in S, \\ - a & \text{ if }{v}_{i} \in {V}^{\prime \prime }. \end{cases} \n\]\n\nThen \( \langle \mathbf{x},\mathbf{j}\rangle = 0 \) and \( \parallel \mathbf{x}{\parallel }^{2} = a{b}^{2} + b{a}^{2} \) .\n\nWhat are the coordinates of \( \left( {D - A}\right) \mathbf{x} = \mathbf{y} = \mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}{v}_{i} \) ? Since \( \left( {D - A}\right) b\mathbf{j} = 0 \) , we have \( \mathbf{y} = \left( {D - A}\right) \left( {\mathbf{x} - b\mathbf{j}}\right) \), so if \( {v}_{i} \in {V}^{\prime } \) then \( {y}_{i} \) is precisely \( b \) times the number of neighbours of \( {v}_{i} \) in \( S \) . Hence \( {y}_{i} \leq {kb} \) . Similarly, \( {y}_{i} \geq - {ka} \) for \( {v}_{i} \in {V}^{\prime \prime } \) . 
Therefore, as \( \left| {V}^{\prime }\right| = a \) and \( \left| {V}^{\prime \prime }\right| = b \), it follows from (2) that\n\n\[ \n{\lambda }_{2}\parallel \mathbf{x}{\parallel }^{2} \leq q\left( \mathbf{x}\right) = \langle \left( {D - A}\right) \mathbf{x},\mathbf{x}\rangle \leq {ka}{b}^{2} + {kb}{a}^{2} = k\parallel \mathbf{x}{\parallel }^{2}, \n\]\n\ncompleting the proof.
|
Yes
|
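Theorem 12 checked on the 6-cycle (a sketch; the brute-force connectivity computation is my own): \( \lambda_2(C_6) = 2 - 2\cos(\pi/3) = 1 \leq \kappa(C_6) = 2 \).

```python
import numpy as np
from itertools import combinations

n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A               # the Laplacian D - A
lam2 = np.sort(np.linalg.eigvalsh(L))[1]     # second smallest eigenvalue

def connected(verts):
    verts = list(verts)
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        v = stack.pop()
        for u in verts:
            if u not in seen and A[v, u]:
                seen.add(u); stack.append(u)
    return len(seen) == len(verts)

# vertex connectivity: least k such that deleting some k vertices disconnects G
kappa = next(k for k in range(1, n)
             if any(not connected(set(range(n)) - set(S))
                    for S in combinations(range(n), k)))
```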
Theorem 13 Let \( G \) be a graph of order \( n \) . Then for \( U \subset V = V\left( G\right) \) we have\n\n\[ \left| {\partial U}\right| \geq \frac{{\lambda }_{2}\left( G\right) \left| U\right| \left| {V \smallsetminus U}\right| }{n}. \]
|
Proof. We may assume that \( \varnothing \neq U \neq V = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) . Set \( k = \left| U\right| \), and define \( \mathbf{x} = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{v}_{i} \) as follows:\n\n\[ {x}_{i} = \left\{ \begin{array}{ll} n - k & \text{ if }{v}_{i} \in U, \\ - k & \text{ if }{v}_{i} \in V \smallsetminus U. \end{array}\right. \]\n\nThen \( \langle \mathbf{x},\mathbf{j}\rangle = 0 \) and \( \parallel \mathbf{x}{\parallel }^{2} = {kn}\left( {n - k}\right) \) . By (3),\n\n\[ \langle \left( {D - A}\right) \mathbf{x},\mathbf{x}\rangle = \left| {\partial U}\right| {n}^{2} \]\n\nand so, by (2),\n\n\[ {\lambda }_{2}\left( G\right) \leq \left| {\partial U}\right| {n}^{2}/{kn}\left( {n - k}\right) \]\n\nas claimed.
|
Yes
|
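An exhaustive check of the boundary inequality of Theorem 13 on \( C_5 \) (a sketch, graph chosen by me): for every \( \varnothing \neq U \subsetneq V \) we have \( |\partial U| \geq \lambda_2 |U||V \smallsetminus U|/n \).

```python
import numpy as np
from itertools import combinations

n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam2 = np.sort(np.linalg.eigvalsh(L))[1]   # 2 - 2cos(2*pi/5) ~ 1.382

def edge_boundary(U):
    # number of edges with exactly one endpoint in U
    return sum(1 for i in U for j in range(n) if j not in U and A[i, j])

bound_holds = all(
    edge_boundary(set(U)) >= lam2 * k * (n - k) / n - 1e-9
    for k in range(1, n) for U in combinations(range(n), k))
```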
Theorem 14 Let \( G \) be a connected \( k \) -regular graph of order \( n \), with adjacency matrix \( A \) and distinct eigenvalues \( k,{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{r} \) . Then\n\n\[ \mathop{\prod }\limits_{{i = 1}}^{r}\frac{A - {\mu }_{i}I}{k - {\mu }_{i}} = \frac{J}{n} \]\n
|
Proof. Each side is the orthogonal projection onto \( \langle \mathbf{j}\rangle \) .
|
No
|
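Theorem 14 verified numerically for the Petersen graph, which is 3-regular with remaining distinct eigenvalues 1 and \( -2 \) (a sketch; the Kneser-style construction of the graph is an assumption of the example, not part of the text):

```python
import numpy as np
from itertools import combinations

# Petersen graph: vertices = 2-subsets of {0,...,4}, edges join disjoint subsets
verts = [frozenset(p) for p in combinations(range(5), 2)]
n = len(verts)                                 # 10
A = np.array([[1.0 if not u & v else 0.0 for v in verts] for u in verts])

k, mus = 3, [1.0, -2.0]    # degree and the other distinct eigenvalues
M = np.eye(n)
for mu in mus:
    M = M @ (A - mu * np.eye(n)) / (k - mu)    # the product in Theorem 14

J = np.ones((n, n))
```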
Theorem 15 Let \( G, A, C, P \) and \( {\pi }_{r} \) be as above.\n\n(i) \( {\pi }_{r}A = C{\pi }_{r} \), that is, the diagram below commutes.\n\n\n\n(ii) The adjacency matrix \( A \) and the collapsed adjacency matrix \( C \) have the same minimal polynomial. In particular, \( \mu \) is an eigenvalue of \( A \) iff it is a root of the characteristic polynomial of \( C \) .
|
Proof. (i) Let us show that \( {\pi }_{r}\left( {A{v}_{t}}\right) = C\left( {{\pi }_{r}{v}_{t}}\right) \), where \( {v}_{t} \) is the basis vector corresponding to an arbitrary vertex \( {v}_{t} \in {V}_{j}^{\left( r\right) } \) . To do this it suffices to check that the \( i \) th coordinates of the two sides are equal. Clearly, \( {\pi }_{r}{v}_{t} = {w}_{j} \) so\n\n\[{\left( C\left( {\pi }_{r}{v}_{t}\right) \right) }_{i} = {\left( C{w}_{j}\right) }_{i} = {c}_{ij}\]\n\nand\n\n\[{\left( {\pi }_{r}\left( A{v}_{t}\right) \right) }_{i} = \mathop{\sum }\limits_{{{v}_{s} \in {V}_{i}^{\left( r\right) }}}{a}_{st}\]\n\nand these are equal by definition.\n\n(ii) Let \( q \) be the minimal polynomial of \( C \) . In order to prove that \( q\left( A\right) = 0 \), let \( \mathbf{x} \in {C}_{0}\left( G\right) \) and set \( q\left( A\right) \mathbf{x} = \mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}{v}_{i} \) . Then for each \( r,1 \leq r \leq n \), we have\n\n\[{y}_{r} = {\left( q\left( A\right) \mathbf{x}\right) }_{r} = {\left( {\pi }_{r}\left( q\left( A\right) \mathbf{x}\right) \right) }_{1} = {\left( q\left( C\right) \left( {\pi }_{r}\mathbf{x}\right) \right) }_{1} = {\left( \mathbf{0}\right) }_{1} = 0.\]\n\nConversely, the minimal polynomial of \( A \) annihilates \( C \) since \( {\pi }_{r}{C}_{0}\left( G\right) = P \) .
|
Yes
|
Theorem 16 Let \( G \) be a connected highly regular graph of order \( n \) with collapsed adjacency matrix \( C \) . Let \( {\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{r} \) be the roots of the characteristic polynomial of \( C \) different from \( k \), the degree of the vertices of \( G \) . Then there are natural numbers \( {m}_{1},{m}_{2},\ldots ,{m}_{r} \) such that\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{r}{m}_{i} = n - 1 \]\n\nand\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{r}{m}_{i}{\mu }_{i} = - k \]
|
Proof. We know from Theorem 5 that \( {\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{r} \) are the eigenvalues of \( A \) in addition to \( k \), which has multiplicity 1. Thus if \( m\left( {\mu }_{i}\right) \) is the multiplicity of \( {\mu }_{i} \) then\n\n\[ 1 + \mathop{\sum }\limits_{{i = 1}}^{r}m\left( {\mu }_{i}\right) = n \]\n\nsince \( {C}_{0}\left( G\right) \) has an orthonormal basis consisting of eigenvectors of \( A \) . Furthermore, since the trace of \( A \) is 0 and a change of basis does not alter the trace,\n\n\[ \operatorname{tr}A = k + \mathop{\sum }\limits_{{i = 1}}^{r}m\left( {\mu }_{i}\right) {\mu }_{i} = 0. \]
|
Yes
|
Theorem 17 Let \( G \) be a connected incomplete regular graph. Then \( G \) is strongly regular iff it has precisely three distinct eigenvalues.
|
Proof. Suppose \( G \) is a strongly regular graph with adjacency matrix \( A \) . As its collapsed adjacency matrix has order 3, by Theorem 15 it has at most three distinct eigenvalues. Furthermore, if \( G \) had only two distinct eigenvalues then, by Theorem 14 we would have \( A \in \langle I, J\rangle \), which would imply that \( G \) is complete or empty.\n\nConversely, if \( A \) has three distinct eigenvalues then, again by Theorem 14, we have \( {A}^{2} \in \langle I, J, A\rangle \), so the number of common neighbours of two distinct vertices depends only on whether or not they are adjacent, i.e., \( G \) is strongly regular.
|
Yes
|
Theorem 18 If there is a strongly regular graph of order \( n \) with parameters \( \left( {k, a, b}\right) \) then\n\n\[ \n{m}_{1},{m}_{2} = \frac{1}{2}\left\{ {n - 1 \pm \frac{\left( {n - 1}\right) \left( {b - a}\right) - {2k}}{{\left\{ {\left( a - b\right) }^{2} + 4\left( k - b\right) \right\} }^{1/2}}}\right\} \n\]\n\nare natural numbers.
|
Proof. The characteristic polynomial of the collapsed adjacency matrix \( C \) is\n\n\[ \n{x}^{3} + \left( {b - a - k}\right) {x}^{2} + \left( {\left( {a - b}\right) k + b - k}\right) x + k\left( {k - b}\right) .\n\]\n\nOn dividing by \( x - k \), we find that the roots different from \( k \) are\n\n\[ \n{\mu }_{1},{\mu }_{2} = \frac{1}{2}\left\{ {a - b \pm {\left\{ {\left( a - b\right) }^{2} + 4\left( k - b\right) \right\} }^{1/2}}\right\} .\n\]\n\nBy Theorem 16 there are natural numbers \( {m}_{1} \) and \( {m}_{2} \) satisfying\n\n\[ \n{m}_{1} + {m}_{2} = n - 1 \n\]\n\nand\n\n\[ \n{m}_{1}{\mu }_{1} + {m}_{2}{\mu }_{2} = - k \n\]\n\nSolving these for \( {m}_{1} \) and \( {m}_{2} \) we arrive at the assertion of the theorem.
|
Yes
|
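The rationality condition of Theorem 18, in exact arithmetic (a sketch of my own; parameter sets chosen for illustration). For the Petersen graph, \( (n, k, a, b) = (10, 3, 0, 1) \) gives \( m_1, m_2 = 5, 4 \); for the hypothetical parameters \( (17, 4, 0, 1) \) of a would-be Moore graph of degree 4, the square root is irrational while the numerator \( (n-1)(b-a) - 2k = 8 \) is non-zero, so no such graph exists.

```python
from fractions import Fraction
import math

def multiplicities(n, k, a, b):
    # m_1, m_2 from Theorem 18, or None if the square root is irrational
    d2 = (a - b) ** 2 + 4 * (k - b)
    s = math.isqrt(d2)
    if s * s != d2:
        return None
    t = Fraction((n - 1) * (b - a) - 2 * k, s)
    return (n - 1 + t) / 2, (n - 1 - t) / 2

petersen = multiplicities(10, 3, 0, 1)    # exact integers, as required
no_moore4 = multiplicities(17, 4, 0, 1)   # irrational square root
```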
Theorem 19 Suppose there is a \( k \) -regular graph \( G \) of order \( n = {k}^{2} + 1 \) and diameter 2. Then \( k = 2,3,7 \) or 57 .
|
Proof. We know from Theorem IV. 1 that \( G \) is strongly regular with parameters \( \left( {k,0,1}\right) \) . By the rationality condition at least one of the following two conditions has to hold:\n\n(i): \( \left( {n - 1}\right) - {2k} = {k}^{2} - {2k} = 0 \) and \( n - 1 = {k}^{2} \) is even,\n\n(ii): \( 1 + 4\left( {k - 1}\right) = {4k} - 3 \) is a square, say \( {4k} - 3 = {s}^{2} \) .\n\nNow, if (i) holds then \( k = 2 \) .\n\nIf (ii) holds then \( k = \frac{1}{4}\left( {{s}^{2} + 3}\right) \) ; on substituting this into the expression for the multiplicity \( {m}_{1} \) we find that\n\n\[ \n{m}_{1} = \frac{1}{2}\left\{ {\frac{1}{16}{\left( {s}^{2} + 3\right) }^{2} + \frac{\left\lbrack {{\left( {s}^{2} + 3\right) }^{2}/{16}}\right\rbrack - \left\lbrack {\left( {{s}^{2} + 3}\right) /2}\right\rbrack }{s}}\right\} , \n\]\n\nthat is,\n\n\[ \n{s}^{5} + {s}^{4} + 6{s}^{3} - 2{s}^{2} + \left( {9 - {32}{m}_{1}}\right) s - {15} = 0. \n\]\n\nHence \( s \) divides 15, so \( s \) is one of the values \( 1,3,5 \) and 15, giving \( k = 1,3,7 \) or 57. The case \( k = 1 \) is clearly unrealizable.
|
Yes
|
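The last step of the proof of Theorem 19, made mechanical (a sketch): \( s \) must divide 15, each admissible \( s \) yields \( k = (s^2 + 3)/4 \), and together with case (i) this gives \( k \in \{2, 3, 7, 57\} \). The quintic is also checked for \( (s, m_1) = (5, 28) \) and \( (15, 1729) \), the multiplicities given by Theorem 18 for the parameter sets \( (50, 7, 0, 1) \) and \( (3250, 57, 0, 1) \).

```python
def moore_degrees():
    ks = {2}                               # case (i) gives k = 2
    for s in (s for s in range(1, 16) if 15 % s == 0):
        k, rem = divmod(s * s + 3, 4)      # case (ii): 4k - 3 = s^2
        if rem == 0 and k > 1:             # k = 1 is unrealizable
            ks.add(k)
    return sorted(ks)

def quintic(s, m1):
    # the polynomial relation derived in the proof
    return s**5 + s**4 + 6*s**3 - 2*s**2 + (9 - 32*m1)*s - 15
```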
Corollary 21 Let \( {d}_{1} \leq {d}_{2} \leq \cdots \leq {d}_{n} \) be the degree sequence of a tree: \( {d}_{1} \geq 1 \) and \( \mathop{\sum }\limits_{{i = 1}}^{n}{d}_{i} = {2n} - 2 \) . Then the number of labelled trees of order \( n \) with degree sequence \( {\left( {d}_{i}\right) }_{1}^{n} \) is given by the multinomial coefficient
|
\[ \left( \begin{matrix} n - 2 \\ {d}_{1} - 1,{d}_{2} - 1,\ldots ,{d}_{n} - 1 \end{matrix}\right) . \]
|
Yes
|
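Corollary 21 checked by exhausting Prüfer sequences (a sketch; the Prüfer correspondence, under which vertex \( i \) has degree one more than its number of occurrences in the sequence, is assumed): labelled trees on 5 vertices with degree sequence \( (1, 1, 2, 2, 2) \) number \( 3!/(0!\,0!\,1!\,1!\,1!) = 6 \).

```python
from itertools import product
from math import factorial

def trees_with_degrees(deg):
    # count labelled trees via Prüfer sequences: deg(i) = 1 + #occurrences of i
    n = len(deg)
    return sum(1 for seq in product(range(n), repeat=n - 2)
               if all(seq.count(i) + 1 == deg[i] for i in range(n)))

def multinomial(deg):
    # the multinomial coefficient of Corollary 21
    n = len(deg)
    out = factorial(n - 2)
    for d in deg:
        out //= factorial(d - 1)
    return out

deg = (1, 1, 2, 2, 2)
```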
Lemma 22 \( \left| \Gamma \right| \mathop{\sum }\limits_{{i = 1}}^{\ell }w\left( {O}_{i}\right) = \mathop{\sum }\limits_{{\alpha \in \Gamma }}\mathop{\sum }\limits_{{x \in F\left( \alpha \right) }}w\left( x\right) \) .
|
Proof.\n\n\[ \mathop{\sum }\limits_{{\alpha \in \Gamma }}\mathop{\sum }\limits_{{x \in F\left( \alpha \right) }}w\left( x\right) = \mathop{\sum }\limits_{{x \in X}}\mathop{\sum }\limits_{{\alpha \in \Gamma \left( x\right) }}w\left( x\right) = \mathop{\sum }\limits_{{i = 1}}^{\ell }\mathop{\sum }\limits_{{x \in {O}_{i}}}\mathop{\sum }\limits_{{\alpha \in \Gamma \left( x\right) }}w\left( x\right) \]\n\n\[ = \mathop{\sum }\limits_{{i = 1}}^{\ell }w\left( {O}_{i}\right) \mathop{\sum }\limits_{{x \in {O}_{i}}}\mathop{\sum }\limits_{{\alpha \in \Gamma \left( x\right) }}1 = \mathop{\sum }\limits_{{i = 1}}^{\ell }w\left( {O}_{i}\right) \left| {O}_{i}\right| s\left( {O}_{i}\right) \]\n\n\[ = \left| \Gamma \right| \mathop{\sum }\limits_{{i = 1}}^{\ell }w\left( {O}_{i}\right) \]
|
Yes
|
Theorem 23 With the notation above,\n\n\[ \left| \Gamma \right| S = \widetilde{Z}\left( {\Gamma ;{s}_{1},{s}_{2},\ldots ,{s}_{d}}\right) \]
|
Proof. By Lemma 22,\n\n\[ \left| \Gamma \right| S = \left| \Gamma \right| \mathop{\sum }\limits_{{i = 1}}^{\ell }w\left( {O}_{i}\right) = \mathop{\sum }\limits_{{\alpha \in \Gamma }}\mathop{\sum }\limits_{{f \in F\left( {\alpha }^{ * }\right) }}w\left( f\right) . \]\n\nNow, clearly \( F\left( {\alpha }^{ * }\right) = \left\{ {f \in {R}^{D} : f}\right. \) is constant on cycles of \( \alpha \} \), so if \( {\xi }_{1},{\xi }_{2},\ldots ,{\xi }_{m} \) are the cycles of \( \alpha \), and \( a \in {\xi }_{i} \) means that \( a \) is an element of the cycle \( {\xi }_{i} \), then\n\n\[ F\left( {\alpha }^{ * }\right) = \left\{ {f \in {R}^{D} : {r}_{i} \in R\text{ and }f\left( a\right) = {r}_{i}\text{ if }a \in {\xi }_{i}, i = 1,2,\ldots, m}\right\} . \]\n\nHence\n\n\[ \mathop{\sum }\limits_{{f \in F\left( {\alpha }^{ * }\right) }}w\left( f\right) = \mathop{\sum }\limits_{{\left( {r}_{i}\right) \subset R}}\mathop{\prod }\limits_{{i = 1}}^{m}w{\left( {r}_{i}\right) }^{\left| {\xi }_{i}\right| } = \mathop{\prod }\limits_{{k = 1}}^{d}{\left( \mathop{\sum }\limits_{{r \in R}}w{\left( r\right) }^{k}\right) }^{{j}_{k}\left( \alpha \right) } = \mathop{\prod }\limits_{{k = 1}}^{d}{s}_{k}^{{j}_{k}\left( \alpha \right) }, \]\n\ngiving\n\n\[ \left| \Gamma \right| S = \mathop{\sum }\limits_{{\alpha \in \Gamma }}\mathop{\prod }\limits_{{k = 1}}^{d}{s}_{k}^{{j}_{k}\left( \alpha \right) } = \widetilde{Z}\left( {\Gamma ;{s}_{1},{s}_{2},\ldots ,{s}_{d}}\right) . \]
|
Yes
|
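With trivial weights (\( w \equiv 1 \)), Theorem 23 counts orbits of colourings. A sketch for the special case where \( \Gamma \) is the cyclic rotation group acting on \( n \) beads (this specialisation is mine): rotation by \( r \) has \( \gcd(n, r) \) cycles, so the number of orbits of \( c \)-colourings is \( \frac{1}{n}\sum_{r} c^{\gcd(n,r)} \), checked here against brute-force orbit counting.

```python
from itertools import product
from math import gcd

def necklaces(n, c):
    # orbits of c-colourings of n beads under rotation, computed two ways
    brute = len({min(tuple(col[(i + r) % n] for i in range(n)) for r in range(n))
                 for col in product(range(c), repeat=n)})
    formula = sum(c ** gcd(n, r) for r in range(n)) // n
    return brute, formula
```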
Theorem 1 Let \( N = \left( {G, r}\right) \) be an electrical network, \( {s}_{1},\ldots ,{s}_{k} \in V\left( G\right) \), and \( {V}_{{s}_{1}},\ldots ,{V}_{{s}_{k}} \in \mathbb{R} \) . Then there are absolute potentials \( {V}_{x}, x \in V\left( G\right) \smallsetminus \left\{ {{s}_{1},\ldots ,{s}_{k}}\right\} \) such that\n\n\[ E = E\left( \left( {V}_{x}\right) \right) = \mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}\frac{{\left( {V}_{x} - {V}_{y}\right) }^{2}}{{r}_{xy}} \]\n\nis minimal. This distribution \( \left( {V}_{x}\right) \) of absolute potentials gives a proper electric current with no outlet other than \( {s}_{1},\ldots ,{s}_{k} \) . The minimum of \( E \) is precisely the total energy of the electric current.
|
Proof. Since the energy function \( E \) is a continuous function of the absolute potentials \( \left( {V}_{x}\right) \in {\mathbb{R}}^{V\left( G\right) } \), and \( E \rightarrow \infty \) as \( \max \left| {V}_{x}\right| \rightarrow \infty \), the infimum of \( E \) is indeed attained at some \( \left( {V}_{x}\right) \) . Furthermore, at this point \( \left( {V}_{x}\right) \in {\mathbb{R}}^{V\left( G\right) } \) we have\n\n\[ \frac{\partial E}{\partial {V}_{x}} = 0 \]\n\nfor every \( x \neq {s}_{1},\ldots ,{s}_{k} \), so\n\n\[ \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}\frac{2\left( {{V}_{x} - {V}_{y}}\right) }{{r}_{xy}} = 2\mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}{w}_{xy} = 0. \]\n\nHence the absolute potentials do define a distribution of currents (via Ohm's Law) satisfying KCL.
|
Yes
|
Theorem 2 Let \( N = \left( {G, r}\right) \) be an electrical network, \( {s}_{1},\ldots ,{s}_{k} \in V\left( G\right) \), and let \( {u}_{{s}_{1}},\ldots ,{u}_{{s}_{k}} \in \mathbb{R} \), with \( \mathop{\sum }\limits_{{i = 1}}^{k}{u}_{{s}_{i}} = 0 \) . Consider the energy function\n\n\[ \nE = E\left( u\right) = \mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}{u}_{xy}^{2}{r}_{xy} \n\]\n\nfor flows \( u = \left( {u}_{xy}\right) \) in which a current of size \( {u}_{{s}_{i}} \) enters the network at \( {s}_{i} \) (i.e., a current of size \( - {u}_{{s}_{i}} \) leaves the network at \( {s}_{i} \) ), \( i = 1,\ldots, k \), and at no other vertex does any current enter or leave the network. There is such a flow minimizing \( E\left( u\right) \), and this flow satisfies KPL, so it is a proper electric current. The minimum of \( E\left( u\right) \) is precisely the total energy in the current.
|
Proof. Once again, compactness implies that the infimum of \( E\left( u\right) \) is attained at some flow \( u = \left( {u}_{xy}\right) \) . Given a cycle \( {x}_{1}{x}_{2}\cdots {x}_{\ell },{x}_{\ell + 1} \equiv {x}_{1} \), let \( u\left( \varepsilon \right) \) be the flow obtained from \( u \) by increasing each \( {u}_{{x}_{i}{x}_{i + 1}} \) by \( \varepsilon \) for \( i = 1,\ldots ,\ell \) . Then\n\n\[ \n\frac{{dE}\left( {u\left( \varepsilon \right) }\right) }{d\varepsilon } = 0 \n\]\n\nat \( \varepsilon = 0 \), so\n\n\[ \n2\mathop{\sum }\limits_{{i = 1}}^{\ell }{u}_{{x}_{i}{x}_{i + 1}}{r}_{{x}_{i}{x}_{i + 1}} = 0 \n\]\n\nThus KPL holds, as claimed.
|
Yes
|
Theorem 3 Let \( u = \left( {u}_{xy}\right) \) be a flow from \( s \) to \( t \) with value\n\n\[ \n{u}_{s} = \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}{u}_{sy} = - \mathop{\sum }\limits_{{z \in \Gamma \left( t\right) }}{u}_{tz} = - {u}_{t} \n\]\n\ni.e., let \( u \) be a flow satisfying KCL at each vertex other than \( s \) and \( t \), and let \( \left( {V}_{x}\right) \) be any function on the vertices. Then\n\n\[ \n\left( {{V}_{s} - {V}_{t}}\right) {u}_{s} = \mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}\left( {{V}_{x} - {V}_{y}}\right) {u}_{xy} \n\]
|
Proof. The right-hand side is\n\n\[ \n\mathop{\sum }\limits_{{x \in V\left( G\right) }}{V}_{x}\left( {\mathop{\sum }\limits_{{y \in {\Gamma }^{ + }\left( x\right) }}{u}_{xy} - \mathop{\sum }\limits_{{z \in {\Gamma }^{ - }\left( x\right) }}{u}_{zx}}\right) = {V}_{s}{u}_{s} + {V}_{t}{u}_{t} = \left( {{V}_{s} - {V}_{t}}\right) {u}_{s}. \n\]
|
Yes
|
Corollary 4 The total energy in an electric current from \( s \) to \( t \) is \( \left( {{V}_{s} - {V}_{t}}\right) {w}_{s} \) , where \( {w}_{s} = \mathop{\sum }\limits_{{x \in \Gamma \left( s\right) }}{w}_{sx} \) is the value of the current. If \( {V}_{s} - {V}_{t} = 1 \) then the total energy is equal to the size of the current; i.e., the total energy, the total current and the effective conductance are the same. If \( {w}_{s} = 1 \) then the total energy is the potential difference between \( s \) and \( t \) ; i.e., the total energy, the potential difference and the effective resistance are the same.
|
Proof. This is immediate from Theorem 3.
|
No
|
Corollary 5 The effective conductance \( {C}_{\mathrm{{eff}}}\left( {s, t}\right) \) of a network between \( s \) and \( t \) is
|
\[ {C}_{\text{eff }}\left( {s, t}\right) = \inf \left\{ {\mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}\frac{{\left( {V}_{x} - {V}_{y}\right) }^{2}}{{r}_{xy}} : {V}_{s} = 1,{V}_{t} = 0}\right\} . \]
|
Yes
|
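Corollary 5 in action (a sketch, with a network chosen by me): on the unit-resistance 4-cycle with adjacent \( s, t \), the minimizing potentials are harmonic at the two interior vertices, and the minimum Dirichlet energy is \( C_{\text{eff}}(s, t) = 1 + 1/3 = 4/3 \), agreeing with the series-parallel rules.

```python
import numpy as np

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]   # unit conductances
L = np.zeros((n, n))
for x, y in edges:
    L[x, x] += 1; L[y, y] += 1; L[x, y] -= 1; L[y, x] -= 1

s, t = 0, 1
V = np.zeros(n); V[s] = 1.0
inner = [v for v in range(n) if v not in (s, t)]
# the minimizer satisfies (L V)_x = 0 at every x other than s and t
V[inner] = np.linalg.solve(L[np.ix_(inner, inner)],
                           -V[s] * L[np.ix_(inner, [s])].ravel())

C_eff = sum((V[x] - V[y]) ** 2 for x, y in edges)   # the Dirichlet energy
```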
Corollary 6 The effective resistance \( {R}_{\text{eff }}\left( {s, t}\right) \) of a network between \( s \) and \( t \) is\n\n\[ \n{R}_{\text{eff }}\left( {s, t}\right) = \inf \left\{ {\mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}{u}_{xy}^{2}{r}_{xy} : \left( {u}_{xy}\right) \text{ is an }s\text{-}t\text{ flow of size }1}\right\} .\n\]
|
Proof. If \( {r}_{{x}_{0}{y}_{0}} \) is increased then the expression for \( {C}_{\text{eff }}\left( {s, t}\right) \) in Corollary 5 does not increase. Equivalently, the expression for \( {R}_{\text{eff }}\left( {s, t}\right) \) in Corollary 6 does not decrease.
|
No
|
Corollary 7 If the resistance of a wire is increased then the effective resistance (between two vertices) does not decrease. In particular, if a wire is cut, the effective resistance does not decrease, and if two vertices are shorted, the effective resistance does not increase.
|
Proof. If \( {r}_{{x}_{0}{y}_{0}} \) is increased then the expression for \( {C}_{\text{eff }}\left( {s, t}\right) \) in Corollary 5 does not increase. Equivalently, the expression for \( {R}_{\text{eff }}\left( {s, t}\right) \) in Corollary 6 does not decrease.
|
Yes
|
Theorem 8 Let \( N = \left( {G, c}\right) \) be a connected electrical network, and let \( s, t \in \) \( V\left( G\right), s \neq t \) . For \( x \in V\left( G\right) \) define\n\n\[ \n{V}_{x} = \mathbb{P}\text{(starting at}x\text{, we get to}s\text{before we get to}t\text{),} \n\]\n\nso that \( {V}_{s} = 1 \) and \( {V}_{t} = 0 \) . Then \( {\left( {V}_{x}\right) }_{x \in V\left( G\right) } \) is the distribution of absolute potentials when \( s \) is set at 1 and \( t \) at 0 . The total current from \( s \) to \( t \) is\n\n\[ \n{C}_{\text{eff }}\left( {s, t}\right) = {C}_{s}{P}_{\text{esc }}\left( {s \rightarrow t}\right) . \n\]\n\n(11)\n\nAlso,\n\n\[ \n\frac{{P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) }{{P}_{\mathrm{{esc}}}\left( {t \rightarrow s}\right) } = \frac{{C}_{t}}{{C}_{s}} \n\]\n\n(12)
|
Proof. By considering the very first step of the RW started at \( x \neq s, t \), we see that\n\n\[ \n{V}_{x} = \mathop{\sum }\limits_{y}{P}_{xy}{V}_{y} = \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}\frac{{c}_{xy}}{{C}_{x}}{V}_{y} \n\]\n\nso (4) follows:\n\n\[ \n{C}_{x}{V}_{x} = \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}{c}_{xy}{V}_{y} \n\]\n\nHence \( {\left( {V}_{x}\right) }_{x \in V\left( G\right) } \) is indeed the claimed distribution of absolute potentials.\n\nNote that\n\n\[ \n{P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) = 1 - \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}{P}_{sy}{V}_{y} \n\]\n\nsince our first step takes us, with probability \( {P}_{sy} \), to a neighbour \( y \) of \( s \), and from there with probability \( {V}_{y} \) we get to \( s \) before we get to \( t \) . Hence the total current is\n\n\[ \n{C}_{\text{eff }}\left( {s, t}\right) = \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}\left( {{V}_{s} - {V}_{y}}\right) {c}_{sy} = \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}\left( {{V}_{s} - {V}_{y}}\right) \frac{{c}_{sy}{C}_{s}}{{C}_{s}} \n\]\n\n\[ \n= {C}_{s}\mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}\left( {\frac{{c}_{sy}}{{C}_{s}} - {V}_{y}\frac{{c}_{sy}}{{C}_{s}}}\right) = {C}_{s}\left( {1 - \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}{P}_{sy}{V}_{y}}\right) \n\]\n\n\[ \n= {C}_{s}{P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) \n\]\n\ngiving us (11).\n\nFinally, (12) follows easily:\n\n\[ \n\frac{{P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) }{{P}_{\mathrm{{esc}}}\left( {t \rightarrow s}\right) } = \frac{{C}_{\mathrm{{eff}}}\left( {s, t}\right) /{C}_{s}}{{C}_{\mathrm{{eff}}}\left( {t, s}\right) /{C}_{t}} = \frac{{C}_{t}}{{C}_{s}} \n\]\n\nsince \( {C}_{\text{eff }}\left( {s, t}\right) = {C}_{\text{eff }}\left( {t, s}\right) \) .
|
Yes
|
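The identity \( C_{\text{eff}}(s, t) = C_s P_{\text{esc}}(s \rightarrow t) \) of Theorem 8, checked exactly on the unit-conductance 4-cycle with adjacent \( s = 0 \), \( t = 1 \) (a sketch in exact arithmetic; the tiny network is my own choice): the potentials \( V_x = \mathbb{P}(\text{hit } s \text{ before } t \text{ from } x) \) are \( V_2 = 1/3 \), \( V_3 = 2/3 \), whence \( P_{\text{esc}} = 1 - \frac{1}{2}(V_1 + V_3) = 2/3 \) and \( C_s P_{\text{esc}} = 2 \cdot \frac{2}{3} = \frac{4}{3} = C_{\text{eff}} \).

```python
from fractions import Fraction

# harmonic equations on the cycle 0-1-2-3-0 with V_0 = 1, V_1 = 0:
#   V_2 = (V_1 + V_3)/2  and  V_3 = (V_2 + V_0)/2
# eliminating V_2:  V_3 = (V_3/2 + 1)/2, i.e. (3/4) V_3 = 1/2
V3 = Fraction(1, 2) / Fraction(3, 4)
V2 = V3 / 2

P_esc = 1 - Fraction(1, 2) * (0 + V3)   # first step from 0 goes to 1 or to 3
C_s = 2                                  # degree of s, unit conductances
C_eff = Fraction(4, 3)                   # 1 in parallel with a 3-edge path
```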
Theorem 9 Let \( N = \left( {G, c}\right) \) be a connected electrical network with \( s, t \in V\left( G\right) \) , \( s \neq t \) . For \( x \in V\left( G\right) \), set \( {V}_{x} = {S}_{x}\left( {s \rightarrow t}\right) /{C}_{x} \) . Furthermore, for \( {xy} \in E\left( G\right) \) , denote by \( {E}_{xy} \) the expected difference between the number of times we traverse the edge \( {xy} \) from \( x \) to \( y \) and the number of times we traverse it from \( y \) to \( x \), if we start at \( s \) and stop when we get to \( t \) .\n\nThen, setting \( s \) at absolute potential \( {R}_{\text{eff }}\left( {s, t}\right) \) and \( t \) at absolute potential 0, so that there is a current of size 1 from \( s \) to \( t \) through \( N \), the distribution of absolute potentials is precisely \( \left( {V}_{x}\right) \) . In particular,\n\n\[ \n{R}_{\text{eff }}\left( {s, t}\right) = \frac{{S}_{s}\left( {s \rightarrow t}\right) }{{C}_{s}}.\n\]
|
Proof. We know that \( {S}_{t} = 0 \), so \( {V}_{t} = 0 \) . Let us check that \( \left( {V}_{x}\right) \) satisfies (4) for every \( x \neq s, t \) . Indeed, we get to \( x \) from one of its neighbours, so\n\n\[ \n{S}_{x} = \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}{S}_{y}{P}_{yx} = \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}{S}_{y}\frac{{c}_{xy}}{{C}_{y}},\n\]\n\nwhich is nothing else but (4):\n\n\[ \n{C}_{x}{V}_{x} = \mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}{c}_{xy}{V}_{y}\n\]\n\nHence the distribution \( \left( {V}_{x}\right) \) of absolute potentials does satisfy KCL at every vertex other than \( s \) and \( t \) . Therefore, with this distribution of absolute potentials, no current enters or leaves the network anywhere other than \( s \) and \( t \) .\n\nAll that remains to check is that we have the claimed current in each edge and that the size of the total current from \( s \) to \( t \) is 1 .\n\nWhat is the current \( {w}_{xy} \) in the edge \( {xy} \) induced by the potentials \( \left( {V}_{x}\right) \) ? By Ohm's law it is\n\n\[ \n{w}_{xy} = \left( {{V}_{x} - {V}_{y}}\right) {c}_{xy} = \left( {\frac{{S}_{x}}{{C}_{x}} - \frac{{S}_{y}}{{C}_{y}}}\right) {c}_{xy}\n\]\n\n\[ \n= \frac{{S}_{x}{c}_{xy}}{{C}_{x}} - \frac{{S}_{y}{c}_{yx}}{{C}_{y}} = {S}_{x}{P}_{xy} - {S}_{y}{P}_{yx}\n\]\n\nand the last quantity is precisely \( {E}_{xy} \) .\n\nFinally, the total current through the network from \( s \) to \( t \) is indeed 1 :\n\n\[ \n{w}_{s} = \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}{w}_{sy} = \mathop{\sum }\limits_{{y \in \Gamma \left( s\right) }}{E}_{sy} = 1\n\]\n\nsince every walk from \( s \) to \( t \) takes 1 more step from \( s \) (through an edge leaving \( s \) ) than to \( s \) (through an edge into \( s \) ). 
But since \( t \) is at absolute potential 0 and the total current from \( s \) to \( t \) is 1, the vertex \( s \) is at absolute potential \( {R}_{\text{eff }}\left( {s \rightarrow t}\right) \), so \( {V}_{s} = {R}_{\text{eff }}\left( {s \rightarrow t}\right) \), as claimed by (13).
|
Yes
|
Theorem 10 The RW on a connected, locally finite, infinite electrical network is transient iff the effective resistance between a vertex \( s \) and \( \infty \) is finite, and it is recurrent iff the effective resistance is infinite.
|
Although it is intuitively clear what Theorem 10 means and how it follows from Theorem 8, let us be a little more pedantic.\n\nLet us fix a vertex \( s \) and, for \( l \in \mathbb{N} \), let \( {N}_{l} \) be the network obtained from \( N \) by shorting all the vertices at distance at least \( l \) from \( s \) to form a new vertex \( {t}_{l} \) . Let \( {R}_{\text{eff }}^{\left( l\right) } \) be the effective resistance of the network \( {N}_{l} \) between \( s \) and \( {t}_{l} \), and let \( {C}_{\text{eff }}^{\left( l\right) } = 1/{R}_{\text{eff }}^{\left( l\right) } \) be its effective conductance. We know from the monotonicity principle that the sequence \( \left( {R}_{\text{eff }}^{\left( l\right) }\right) \) is increasing and the sequence \( \left( {C}_{\text{eff }}^{\left( l\right) }\right) \) is decreasing, so we may define the effective resistance of \( N \) between \( s \) and \( \infty \) as \( {R}_{\text{eff }}^{\left( \infty \right) } = \mathop{\lim }\limits_{{l \rightarrow \infty }}{R}_{\text{eff }}^{\left( l\right) } \), and the effective conductance as \( {C}_{\text{eff }}^{\left( \infty \right) } = \mathop{\lim }\limits_{{l \rightarrow \infty }}{C}_{\text{eff }}^{\left( l\right) } \) (see also Exercise 5).\n\nLet \( {P}_{\text{esc }}^{\left( l\right) } \) be the probability that, starting at \( s \), we get to at least distance \( l \) from \( s \), before we return to \( s \) . It is easily seen that \( {P}_{\mathrm{{esc}}}^{\left( \infty \right) } = \mathop{\lim }\limits_{{l \rightarrow \infty }}{P}_{\mathrm{{esc}}}^{\left( l\right) } \) (see Exercise 6).\n\nIt is immediate that \( {P}_{\text{esc }}^{\left( l\right) } \) is also the probability of escaping to \( {t}_{l} \) in \( {N}_{l} \), when starting at \( s \) in \( {N}_{l} \) . By Theorem 8, \( {P}_{\mathrm{{esc}}}^{\left( l\right) } = {C}_{\mathrm{{eff}}}^{\left( l\right) }/{C}_{s} \) .
Hence \( {P}_{\mathrm{{esc}}}^{\left( \infty \right) } > 0 \) iff \( {C}_{\mathrm{{eff}}}^{\left( l\right) } \) is bounded away from 0, i.e., iff \( {R}_{\text{eff }}^{\left( l\right) } = 1/{C}_{\text{eff }}^{\left( l\right) } \) is at most some real \( r \) for every \( l \) . But this holds iff \( {R}_{\text{eff }}^{\left( \infty \right) } \leq r \), proving the result.
|
Yes
|
Theorem 11 The effective resistance \( {R}_{\text{eff }}^{\left( \infty \right) } \) of \( N \) between \( s \) and infinity is at most \( r \) iff there is a current \( \left( {u}_{xy}\right) \) in the network \( N \) such that a flow of size 1 enters the network at \( s \), at no other vertex does any current enter or leave the network, and the total energy in the system, \( \mathop{\sum }\limits_{{{xy} \in E\left( G\right) }}{u}_{xy}^{2}{r}_{xy} \), is at most \( r \) .
|
Proof. Suppose that \( {R}_{\text{eff }}^{\left( l\right) } \leq r \) for every \( l \) . Corollary 6 guarantees a flow \( {u}^{\left( l\right) } \) of size 1 from \( s \) to \( {t}_{l} \) in \( {N}_{l} \), with total energy at most \( r \) . By compactness, a subsequence of \( \left( {u}^{\left( l\right) }\right) \) converges to a flow \( u \) with the required properties. By Corollary 6, the converse implication is trivial.
|
Yes
|
Theorem 15 We have \( \mathop{\lim }\limits_{{k \rightarrow \infty }}\mathbb{E}\left( {{S}_{k}\left( x\right) /k}\right) = d\left( x\right) /{2m} \), and \( {\left( {S}_{k}\left( x\right) /k\right) }_{x \in V} \) tends to \( \pi \) in probability as \( k \rightarrow \infty \) .
|
Proof. Note first that\n\n\[ \mathbb{E}\left( {{S}_{k}\left( x\right) }\right) = \mathop{\sum }\limits_{{i = 1}}^{k}\mathbb{P}\left( {{X}_{i} = x}\right) \]\n\nso\n\n\[ \mathop{\lim }\limits_{{k \rightarrow \infty }}\mathbb{E}\left( {{S}_{k}\left( x\right) /k}\right) = \mathop{\lim }\limits_{{k \rightarrow \infty }}\frac{1}{k}\mathop{\sum }\limits_{{i = 1}}^{k}{p}_{x}^{\left( i\right) } = \frac{d\left( x\right) }{2m}. \]\n\n(17)\n\nIn order to estimate the variance of \( {S}_{k}\left( x\right) /k \), note that, very crudely, if (16) holds for \( \left| {j - i}\right| \geq {k}_{0} \) then\n\n\[ {\sigma }^{2}\left( {{S}_{k}\left( x\right) }\right) = \mathbb{E}{\left( {S}_{k}\left( x\right) \right) }^{2} - {\left( \mathbb{E}{S}_{k}\left( x\right) \right) }^{2} \]\n\n\[ = \mathop{\sum }\limits_{{i = 1}}^{k}\mathop{\sum }\limits_{{j = 1}}^{k}\left( {\mathbb{P}\left( {{X}_{i} = x,{X}_{j} = x}\right) - \mathbb{P}\left( {{X}_{i} = x}\right) \mathbb{P}\left( {{X}_{j} = x}\right) }\right) \]\n\n\[ = \mathop{\sum }\limits_{\substack{{\left| {i - j}\right| < {k}_{0}} \\ {i, j \leq k} }}\left( {\mathbb{P}\left( {{X}_{i} = x,{X}_{j} = x}\right) - \mathbb{P}\left( {{X}_{i} = x}\right) \mathbb{P}\left( {{X}_{j} = x}\right) }\right) \]\n\n\[ + \mathop{\sum }\limits_{\substack{{\left| {i - j}\right| \geq {k}_{0}} \\ {i, j \leq k} }}\left( {\mathbb{P}\left( {{X}_{i} = x,{X}_{j} = x}\right) - \mathbb{P}\left( {{X}_{i} = x}\right) \mathbb{P}\left( {{X}_{j} = x}\right) }\right) \]\n\n\[ \leq 2{k}_{0}k + {k}^{2}\varepsilon . \]\n\n(18)\n\nHence if \( k \geq 2{k}_{0}/\varepsilon \) then this gives\n\n\[ {\sigma }^{2}\left( {{S}_{k}\left( x\right) /k}\right) = \frac{\mathbb{E}{\left( {S}_{k}\left( x\right) \right) }^{2} - {\left( \mathbb{E}{S}_{k}\left( x\right) \right) }^{2}}{{k}^{2}} \leq \frac{2{k}_{0}}{k} + \varepsilon \leq {2\varepsilon }. \]\n\nTherefore\n\n\[ \mathbb{P}\left( {\left| {\frac{{S}_{k}\left( x\right) }{k} - \frac{\mathbb{E}\left( {{S}_{k}\left( x\right) }\right) }{k}}\right| \geq \eta }\right) \rightarrow 0 \]\n\nfor every \( \eta > 0 \) so, by (17), \( {S}_{k}\left( x\right) /k \rightarrow d\left( x\right) /{2m} \) in probability.
|
Yes
|
Theorem 16 The mean return time to a vertex \( x \) in a connected graph is \( H\left( {x, x}\right) = {2m}/d\left( x\right) . \)
|
Proof. Set \( {Y}_{0} = 0 \) and let \( {Y}_{\ell } \) be the time our random walk \( {\left( {X}_{i}\right) }_{0}^{\infty } \) returns to \( x \) for the \( \ell \) th time when started at \( {X}_{0} = x \) . Then \( {Y}_{1} - {Y}_{0},{Y}_{2} - {Y}_{1},{Y}_{3} - {Y}_{2},\ldots \) are i.i.d. random variables, so \( \mathbb{E}\left( {Y}_{\ell }\right) = \ell \mathbb{E}\left( {Y}_{1}\right) = \ell H\left( {x, x}\right) \) . Also, \( {Y}_{\ell } \leq k \) if and only if \( {S}_{k}\left( x\right) \geq \ell \) . Hence, for \( \alpha > 0 \) ,\n\n\[ \n{Y}_{\ell }/\ell \leq \alpha \;\text{ if and only if }\;{S}_{\lfloor \ell \alpha \rfloor }\left( x\right) \geq \ell .\n\]\n\nIn particular, \( \mathbb{P}\left( {{S}_{\lfloor \ell \alpha \rfloor }\left( x\right) /\ell \alpha \geq 1/\alpha }\right) = \mathbb{P}\left( {{Y}_{\ell }/\ell \leq \alpha }\right) \) so, by Theorem 15,\n\n\[ \n\mathop{\lim }\limits_{{\ell \rightarrow \infty }}\mathbb{P}\left( {\frac{{Y}_{\ell }}{\ell } \leq \alpha }\right) = \mathop{\lim }\limits_{{\ell \rightarrow \infty }}\mathbb{P}\left( {\frac{{S}_{\lfloor \ell \alpha \rfloor }\left( x\right) }{\ell \alpha } \geq \frac{1}{\alpha }}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\alpha > {2m}/d\left( x\right) , \\ 0 & \text{ if }\alpha < {2m}/d\left( x\right) . \end{array}\right.\n\]\n\nHence \( {Y}_{\ell }/\ell \) tends to \( {2m}/d\left( x\right) \) in probability, so \( H\left( {x, x}\right) = \mathbb{E}\left( {Y}_{1}\right) = {2m}/d\left( x\right) \) .
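The identity \( H(x,x) = 2m/d(x) \) can be sanity-checked exactly on a small example. The following sketch (an illustrative addition, not from the source; the path 0-1-2 and all names are our choices) solves the hitting-time equations over the rationals and compares the mean return time with \( 2m/d(x) \).

```python
from fractions import Fraction

# Illustrative exact check on the path 0-1-2: the mean return time to each
# vertex x of the simple random walk equals 2m/d(x).
edges = [(0, 1), (1, 2)]
n, m = 3, len(edges)
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def gauss(M, size):
    # exact Gaussian elimination on an augmented matrix of Fractions
    for c in range(size):
        p = next(r for r in range(c, size) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(size):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][size] for i in range(size)]

def hit(t):
    # h[x] = E(first time the SRW started at x reaches t); h[t] = 0
    idx = [v for v in range(n) if v != t]
    pos = {v: i for i, v in enumerate(idx)}
    M = []
    for v in idx:
        row = [Fraction(0)] * (len(idx) + 1)
        row[pos[v]] = Fraction(1)
        row[-1] = Fraction(1)  # h(v) = 1 + sum_{w ~ v} h(w)/d(v)
        for w in adj[v]:
            if w != t:
                row[pos[w]] -= Fraction(1, len(adj[v]))
        M.append(row)
    sol = gauss(M, len(idx))
    h = {t: Fraction(0)}
    h.update({v: sol[pos[v]] for v in idx})
    return h

for x in range(n):
    h = hit(x)
    # conditioning on the first step: return time = 1 + average hitting time
    ret = 1 + sum(h[w] for w in adj[x]) / len(adj[x])
    assert ret == Fraction(2 * m, len(adj[x]))
```

Conditioning on the first step expresses the return time as 1 plus the average of the hitting times of \( x \) from its neighbours, which the assertion compares with \( 2m/d(x) \).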
|
Yes
|
Theorem 17 Let \( {xy} \) be a fixed edge of our graph \( G \) . The expected time it takes for the simple random walk on \( G \), started at \( x \), to return to \( x \) through \( {yx} \) is \( {2m} \) . Thus if \( {X}_{0},{X}_{1},{X}_{2},\ldots \) is our SRW, with \( {X}_{0} = x \), and \( Z = \min \left\{ {k \geq 2 : {X}_{k - 1} = }\right. \) \( \left. {y,{X}_{k} = x}\right\} \), then \( \mathbb{E}\left( Z\right) = {2m} \) .
|
Proof. The probability that we pass through \( {yx} \) at time \( k + 1 \) is\n\n\[ \mathbb{P}\left( {{X}_{k} = y,{X}_{k + 1} = x}\right) = \frac{\mathbb{P}\left( {{X}_{k} = y}\right) }{d\left( y\right) }.\]\n\nTherefore, writing \( {S}_{k}\left( {yx}\right) \) for the number of times we pass through \( {yx} \) up to time \( k + 1 \),\n\n\[ \frac{\mathbb{E}{S}_{k}\left( {yx}\right) }{k} = \frac{\mathbb{E}{S}_{k}\left( y\right) }{kd\left( y\right) } \rightarrow \frac{1}{2m}. \]\n\nThe proof can be completed as in Theorem 16: writing \( {Z}_{\ell } \) for the time \( k \) our random walk \( {\left( {X}_{i}\right) }_{0}^{\infty } \), started at \( {X}_{0} = x \), returns to \( x \) through \( {yx} \) for the \( \ell \) th time, i.e. \( {X}_{k - 1} = y \) and \( {X}_{k} = x \) for the \( \ell \) th time, we have \( \mathbb{E}\left( {Z}_{\ell }\right) = \ell \mathbb{E}\left( {Z}_{1}\right) \), and \( {S}_{k}\left( {yx}\right) \geq \ell \) if and only if \( {Z}_{\ell } \leq k \) .
|
No
|
Theorem 18 Let \( G \) be a connected graph of order \( n \) and size \( m \) . The mean hitting times \( H\left( {x, y}\right) \) of the SRW on \( G \) satisfy\n\n\[ \n\frac{1}{2m}\mathop{\sum }\limits_{{x \in V\left( G\right) }}\mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}H\left( {x, y}\right) = n - 1.\n\]\n\n(23)
|
Proof. Let \( \pi = \left( {\pi }_{x}\right) \) be the stationary distribution for the transition matrix \( P = \) \( \left( {P}_{xy}\right) \), so that \( {\pi P} = \pi \) and \( {\pi }_{x}{P}_{xy} = 1/{2m} \) for \( {xy} \in E\left( G\right) \) . Then\n\n\[ \n\frac{1}{2m}\mathop{\sum }\limits_{x}\mathop{\sum }\limits_{{y \in \Gamma \left( x\right) }}H\left( {x, y}\right) = \mathop{\sum }\limits_{{x, y}}{\pi }_{x}{P}_{xy}H\left( {y, x}\right)\n\]\n\n\[ \n= \mathop{\sum }\limits_{x}{\pi }_{x}\left( {\mathop{\sum }\limits_{y}{P}_{xy}H\left( {y, x}\right) }\right) = \mathop{\sum }\limits_{x}{\pi }_{x}\left( {H\left( {x, x}\right) - 1}\right)\n\]\n\n\[ \n= \mathop{\sum }\limits_{x}{\pi }_{x}\left( {\frac{1}{{\pi }_{x}} - 1}\right) = n - 1\n\]
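Identity (23) can be verified exactly on a small asymmetric example. The sketch below (an illustrative addition, not from the source; the graph, a triangle 0,1,2 with a pendant vertex 3 attached to 2, is our choice) computes all hitting times across edges with rational arithmetic and checks that their sum is \( 2m(n-1) \).

```python
from fractions import Fraction

# Illustrative exact check of sum_{x} sum_{y in Gamma(x)} H(x,y) = 2m(n-1)
# on a triangle with a pendant vertex.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n, m = 4, 4
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def gauss(M, size):
    # exact Gaussian elimination on an augmented matrix of Fractions
    for c in range(size):
        p = next(r for r in range(c, size) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(size):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][size] for i in range(size)]

def hit(t):
    # h[x] = E(first time the SRW started at x reaches t); h[t] = 0
    idx = [v for v in range(n) if v != t]
    pos = {v: i for i, v in enumerate(idx)}
    M = []
    for v in idx:
        row = [Fraction(0)] * (len(idx) + 1)
        row[pos[v]] = Fraction(1)
        row[-1] = Fraction(1)  # h(v) = 1 + sum_{w ~ v} h(w)/d(v)
        for w in adj[v]:
            if w != t:
                row[pos[w]] -= Fraction(1, len(adj[v]))
        M.append(row)
    sol = gauss(M, len(idx))
    h = {t: Fraction(0)}
    h.update({v: sol[pos[v]] for v in idx})
    return h

# H(x,y) = hit(y)[x]; summing over ordered adjacent pairs gives 2m(n-1)
total = sum(hit(y)[x] for x in range(n) for y in adj[x])
assert total == 2 * m * (n - 1)
```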
|
Yes
|
Theorem 19 With the notation above,\n\n\[ \n{P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) = \frac{{C}_{\mathrm{{eff}}}\left( {s, t}\right) }{d\left( s\right) } = \frac{H\left( {s, s}\right) }{C\left( {s, t}\right) } = \frac{2m}{d\left( s\right) C\left( {s, t}\right) }.\n\]\n\n(25)\n\nFurthermore,\n\n\[ \nC\left( {s, t}\right) = {2m}{R}_{\text{eff }}\left( {s, t}\right)\n\]\n\n(26)
|
Proof. The first equality in (25) follows from relation (13) in Theorem 8. To see the other equalities in (25), let \( R \) be the first time the random walk returns to \( s \), and let \( A \) be the first time it returns to \( s \) after having visited \( t \) . Then \( \mathbb{E}\left( R\right) = H\left( {s, s}\right) = \) \( {2m}/d\left( s\right) \) and, by definition, \( \mathbb{E}\left( A\right) = C\left( {s, t}\right) \) . We always have \( R \leq A \) and\n\n\[ \n\mathbb{P}\left( {R = A}\right) = {P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) = q\n\]\n\nsay. Also,\n\n\[ \n\mathbb{E}\left( {A - R}\right) = \left( {1 - q}\right) \mathbb{E}\left( A\right)\n\]\n\nso\n\n\[ \nC\left( {s, t}\right) = \mathbb{E}\left( A\right) = \frac{\mathbb{E}\left( R\right) }{q} = \frac{2m}{d\left( s\right) q}.\n\]\n\nThus \( {P}_{\mathrm{{esc}}}\left( {s \rightarrow t}\right) = {2m}/d\left( s\right) C\left( {s, t}\right) \), as claimed. As \( H\left( {s, s}\right) = {2m}/d\left( s\right) \), equality (25) is proved. Finally, (26) is immediate from (25).
|
Yes
|
Theorem 20 For a connected graph \( G \), vertices \( s \neq t \), and any vertex \( x \in V\left( G\right) \) we have\n\n\[ \n{R}_{\text{eff }}\left( {s, t}\right) = \frac{{S}_{x}\left( {s \leftrightarrow t}\right) }{d\left( x\right) } = \frac{{S}_{x}\left( {s \rightarrow t}\right) }{d\left( x\right) } + \frac{{S}_{x}\left( {t \rightarrow s}\right) }{d\left( x\right) }.\n\]
|
Proof. With the notation in Theorem \( 9,{S}_{x}\left( {s \leftrightarrow t}\right) \) is\n\n\[ \n{S}_{x}\left( {s \rightarrow t}\right) + {S}_{x}\left( {t \rightarrow s}\right) = {V}_{x}\left( {s \rightarrow t}\right) d\left( x\right) + {V}_{x}\left( {t \rightarrow s}\right) d\left( x\right) .\n\]\n\nBut\n\n\[ \n{V}_{z}\left( {s \rightarrow t}\right) + {V}_{z}\left( {t \rightarrow s}\right) = {R}_{\text{eff }}\left( {s, t}\right) = {R}_{\text{eff }}\left( {t, s}\right)\n\]\n\n(28)\n\nfor all \( z \) . Indeed, \( {V}_{z}\left( {s \rightarrow t}\right) \) is the potential of \( z \) if \( s \) is set at \( {R}_{\text{eff }}\left( {s, t}\right) \) and \( t \) at 0, and \( {V}_{z}\left( {t \rightarrow s}\right) \) is the potential of \( z \) if \( t \) is set at \( {R}_{\text{eff }}\left( {s, t}\right) \) and \( s \) at 0. Hence, (28) holds by the principle of superposition discussed in Section II.1.
|
Yes
|
Theorem 22 Let \( s, t \) and \( u \) be distinct vertices of a graph \( G \) . Then\n\n\[ d\left( s\right) {P}_{\mathrm{{esc}}}\left( {s \rightarrow t < u}\right) = d\left( t\right) {P}_{\mathrm{{esc}}}\left( {t \rightarrow s < u}\right) .
|
Proof. Let \( {W}_{s, t;u} \) be the set of walks \( W = {x}_{0}{x}_{1}\cdots {x}_{\ell } \) in \( G - u \) such that \( {x}_{i} = s \) iff \( i = 0 \) and \( {x}_{i} = t \) iff \( i = \ell \) . Then, writing \( {\left( {X}_{i}\right) }_{0}^{\infty } \) for our random walk,\n\n\[ {P}_{\mathrm{{esc}}}\left( {s \rightarrow t < u}\right) = \mathop{\sum }\limits_{{W \in {W}_{s, t;u}}}\mathbb{P}\left( {{X}_{i} = {x}_{i},1 \leq i \leq \ell \mid {X}_{0} = s}\right) \]\n\nand\n\n\[ {P}_{\mathrm{{esc}}}\left( {t \rightarrow s < u}\right) = \mathop{\sum }\limits_{{W \in {W}_{s, t;u}}}\mathbb{P}\left( {{X}_{i} = {x}_{\ell - i},1 \leq i \leq \ell \mid {X}_{0} = t}\right) . \]\n\nBut for \( W \in {W}_{s, t;u} \) we have\n\n\[ \mathbb{P}\left( {{X}_{i} = {x}_{i},1 \leq i \leq \ell \mid {X}_{0} = s}\right) = \mathop{\prod }\limits_{{i = 0}}^{{\ell - 1}}d{\left( {x}_{i}\right) }^{-1} \]\n\nand\n\n\[ \mathbb{P}\left( {{X}_{i} = {x}_{\ell - i},1 \leq i \leq \ell \mid {X}_{0} = t}\right) = \mathop{\prod }\limits_{{i = 1}}^{\ell }d{\left( {x}_{i}\right) }^{-1}. \]\n\nThe ratio of these two quantities is \( d\left( t\right) /d\left( s\right) \), so the assertion follows.
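Theorem 22 can be checked exactly on a small asymmetric graph. The sketch below (an illustrative addition, not from the source; the graph and names are our choices) computes the escape probabilities by solving the absorption equations over the rationals.

```python
from fractions import Fraction

# Exact check of d(s) P_esc(s -> t < u) = d(t) P_esc(t -> s < u) on a
# triangle 0,1,2 with a pendant vertex 3 attached to 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def gauss(M, size):
    # exact Gaussian elimination on an augmented matrix of Fractions
    for c in range(size):
        p = next(r for r in range(c, size) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(size):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][size] for i in range(size)]

def reach(target, blocked):
    # h[v] = P(SRW started at v hits target before any vertex of blocked)
    boundary = {target} | set(blocked)
    idx = [v for v in range(n) if v not in boundary]
    pos = {v: i for i, v in enumerate(idx)}
    M = []
    for v in idx:
        row = [Fraction(0)] * (len(idx) + 1)
        row[pos[v]] = Fraction(1)
        for w in adj[v]:
            if w == target:
                row[-1] += Fraction(1, len(adj[v]))
            elif w not in blocked:
                row[pos[w]] -= Fraction(1, len(adj[v]))
        M.append(row)
    sol = gauss(M, len(idx))
    h = {v: sol[pos[v]] for v in idx}
    h[target] = Fraction(1)
    for b in blocked:
        h[b] = Fraction(0)
    return h

def pesc(s, t, u):
    # escape probability: leave s and reach t without revisiting s or hitting u
    h = reach(t, {s, u})
    return sum(h[y] for y in adj[s]) / len(adj[s])

s, t, u = 0, 3, 1
assert len(adj[s]) * pesc(s, t, u) == len(adj[t]) * pesc(t, s, u)
```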
|
Yes
|
Theorem 22 Let \( s, t \) and \( u \) be distinct vertices of a graph \( G \) . Then\n\n\[ d\left( s\right) {P}_{\mathrm{{esc}}}\left( {s \rightarrow t < u}\right) = d\left( t\right) {P}_{\mathrm{{esc}}}\left( {t \rightarrow s < u}\right) .
|
Proof. Let \( {W}_{s, t;u} \) be the set of walks \( W = {x}_{0}{x}_{1}\cdots {x}_{\ell } \) in \( G - u \) such that \( {x}_{i} = s \) iff \( i = 0 \) and \( {x}_{i} = t \) iff \( i = \ell \) . Then, writing \( {\left( {X}_{i}\right) }_{0}^{\infty } \) for our random walk,\n\n\[ {P}_{\mathrm{{esc}}}\left( {s \rightarrow t < u}\right) = \mathop{\sum }\limits_{{W \in {W}_{s, t;u}}}\mathbb{P}\left( {{X}_{i} = {x}_{i},1 \leq i \leq \ell \mid {X}_{0} = s}\right) \]\n\nand\n\n\[ {P}_{\mathrm{{esc}}}\left( {t \rightarrow s < u}\right) = \mathop{\sum }\limits_{{W \in {W}_{s, t;u}}}\mathbb{P}\left( {{X}_{i} = {x}_{\ell - i},1 \leq i \leq \ell \mid {X}_{0} = t}\right) . \]\n\nBut for \( W \in {W}_{s, t;u} \) we have\n\n\[ \mathbb{P}\left( {{X}_{i} = {x}_{i},1 \leq i \leq \ell \mid {X}_{0} = s}\right) = \mathop{\prod }\limits_{{i = 0}}^{{\ell - 1}}d{\left( {x}_{i}\right) }^{-1} \]\n\nand\n\n\[ \mathbb{P}\left( {{X}_{i} = {x}_{\ell - i},1 \leq i \leq \ell \mid {X}_{0} = t}\right) = \mathop{\prod }\limits_{{i = 1}}^{\ell }d{\left( {x}_{i}\right) }^{-1}. \]\n\nThe ratio of these two qualities is \( d\left( t\right) /d\left( s\right) \), so the assertion follows.
|
Yes
|
Theorem 23 Let \( s \) , \( t \) and \( u \) be vertices of a graph \( G \) . Then\n\n\[ H\left( {s, t}\right) + H\left( {t, u}\right) + H\left( {u, s}\right) = H\left( {s, u}\right) + H\left( {u, t}\right) + H\left( {t, s}\right) .\n\]
|
Proof. The left-hand side is the expected time it takes to go from \( s \) to \( t \), then on to \( u \) and, finally, back to \( s \), and the right-hand side is the expected length of a tour in the opposite direction. Thus, writing \( \tau \) for the first time a walk starting at \( s \) completes a tour \( s \rightarrow t \rightarrow u \rightarrow s \), and defining \( {\tau }^{\prime } \) analogously for \( s \rightarrow u \rightarrow t \rightarrow s \), the theorem claims exactly that \( {\mathbb{E}}_{s}\left( \tau \right) = {\mathbb{E}}_{s}\left( {\tau }^{\prime }\right) \) . Consider a closed walk \( W = {x}_{0}{x}_{1}\cdots {x}_{\ell } \) starting and ending at \( s \), so that \( {x}_{0} = {x}_{\ell } = s \) . Clearly,\n\n\[ \mathbb{P}\left( {{X}_{i} = {x}_{i},1 \leq i \leq \ell \mid {X}_{0} = s}\right) = \mathbb{P}\left( {{X}_{i} = {x}_{\ell - i},1 \leq i \leq \ell \mid {X}_{0} = s}\right)\n\]\n\[ = \mathop{\prod }\limits_{{i = 0}}^{{\ell - 1}}d{\left( {x}_{i}\right) }^{-1}\n\]\nthat is, the probability of going round this walk one way is precisely the probability of tracing it the other way.\n\nFix \( N \), and let \( S = {x}_{0}{x}_{1}\cdots \) be an infinite walk with \( {x}_{0} = s \) . Set \( \ell = \ell \left( {S, N}\right) = \) \( \max \left\{ {i \leq N : {x}_{i} = s}\right\} \), and let \( {S}^{\prime } \) be the walk \( {x}_{\ell }{x}_{\ell - 1}\cdots {x}_{0}{x}_{\ell + 1}{x}_{\ell + 2}\cdots \) . By the observation above, the map \( S \mapsto {S}^{\prime } \) is a measure preserving transformation of the space of random walks started at \( s \) . Since \( \tau \left( S\right) \leq N \) iff \( {\tau }^{\prime }\left( {S}^{\prime }\right) \leq N \), we have \( {\mathbb{P}}_{s}\left( {\tau \leq N}\right) = {\mathbb{P}}_{s}\left( {{\tau }^{\prime } \leq N}\right) \) . Hence \( {\mathbb{E}}_{s}\left( \tau \right) = {\mathbb{E}}_{s}\left( {\tau }^{\prime }\right) \), as required.
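The cycle identity can be verified exactly. In the sketch below (an illustrative addition, not from the source), the hitting times on a triangle 0,1,2 with a pendant vertex 3 attached to 2 are computed by solving the linear equations over the rationals.

```python
from fractions import Fraction

# Exact check of H(s,t) + H(t,u) + H(u,s) = H(s,u) + H(u,t) + H(t,s)
# on a triangle with a pendant vertex.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def gauss(M, size):
    # exact Gaussian elimination on an augmented matrix of Fractions
    for c in range(size):
        p = next(r for r in range(c, size) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(size):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][size] for i in range(size)]

def hit(t):
    # h[x] = E(first time the SRW started at x reaches t); h[t] = 0
    idx = [v for v in range(n) if v != t]
    pos = {v: i for i, v in enumerate(idx)}
    M = []
    for v in idx:
        row = [Fraction(0)] * (len(idx) + 1)
        row[pos[v]] = Fraction(1)
        row[-1] = Fraction(1)
        for w in adj[v]:
            if w != t:
                row[pos[w]] -= Fraction(1, len(adj[v]))
        M.append(row)
    sol = gauss(M, len(idx))
    h = {t: Fraction(0)}
    h.update({v: sol[pos[v]] for v in idx})
    return h

def H(a, b):
    return hit(b)[a]

s, t, u = 3, 0, 1
assert H(s, t) + H(t, u) + H(u, s) == H(s, u) + H(u, t) + H(t, s)
```

The individual hitting times differ (the graph is asymmetric), yet the two tour lengths agree, exactly as the reversibility argument predicts.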
|
Yes
|
Theorem 24 The expected sojourn times satisfy\n\n\[ \n d\left( s\right) {S}_{x}\left( {s \rightarrow t}\right) = d\left( x\right) {S}_{s}\left( {x \rightarrow t}\right) .\n\]
|
Proof. Let us define a random walk on the set \( \{ s, t, x\} \) with transition probabilities \( {p}_{st} = {P}_{\mathrm{{esc}}}\left( {s \rightarrow t < x}\right) ,{p}_{sx} = {P}_{\mathrm{{esc}}}\left( {s \rightarrow x < t}\right) ,{p}_{ss} = 1 - {p}_{st} - {p}_{sx} \), and so on. Theorem 22 implies that this new RW is, in fact, also reversible; that is, it can be defined on the weighted triangle on \( \{ s, t, x\} \), with loops at the vertices. Hence it suffices to check (30) for this RW: we leave this as an exercise (Exercise 20).
|
No
|
Theorem 25 Let \( G \) be a connected graph of order \( n \) . Then\n\n\[ \mathop{\sum }\limits_{{{st} \in E\left( G\right) }}{R}_{\text{eff }}\left( {s, t}\right) = n - 1 \]
|
First Proof. By Theorem 24, for any two vertices \( t \) and \( x \) we have\n\n\[ \mathop{\sum }\limits_{{s \in \Gamma \left( t\right) }}\frac{{S}_{x}\left( {s \rightarrow t}\right) }{d\left( x\right) } = \mathop{\sum }\limits_{{s \in \Gamma \left( t\right) }}\frac{{S}_{s}\left( {x \rightarrow t}\right) }{d\left( s\right) } \]\n\nsince the two sums are equal term by term. Now, if \( x \neq t \) then the right-hand side is 1, since it is precisely the expected number of times we reach \( t \) from one of its neighbours in a random walk from \( x \) to \( t \) . On the other hand, for \( x = t \) the right-hand side is 0 . Hence, summing over \( V = V\left( G\right) \), we find that\n\n\[ \mathop{\sum }\limits_{{t \in V}}\mathop{\sum }\limits_{{s \in \Gamma \left( t\right) }}\frac{{S}_{x}\left( {s \rightarrow t}\right) }{d\left( x\right) } = n - 1 \]\n\nBut the left-hand side is\n\n\[ \mathop{\sum }\limits_{{{st} \in E\left( G\right) }}\left\{ {\frac{{S}_{x}\left( {s \rightarrow t}\right) }{d\left( x\right) } + \frac{{S}_{x}\left( {t \rightarrow s}\right) }{d\left( x\right) }}\right\} = \mathop{\sum }\limits_{{{st} \in E\left( G\right) }}{R}_{\text{eff }}\left( {s, t}\right) ,\]\n\nwith the equality following from Theorem 20.
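Foster's identity above can be verified exactly on a small example. The sketch below (an illustrative addition, not from the source) obtains each \( {R}_{\text{eff}}(s,t) \) from escape probabilities via Theorem 19 and checks that the sum over edges is \( n-1 \).

```python
from fractions import Fraction

# Exact check of sum_{st in E} R_eff(s,t) = n - 1 on a triangle 0,1,2 with a
# pendant vertex 3 attached to 2, using R_eff(s,t) = 1/(d(s) P_esc(s -> t)).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
adj = {v: [] for v in range(n)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

def gauss(M, size):
    # exact Gaussian elimination on an augmented matrix of Fractions
    for c in range(size):
        p = next(r for r in range(c, size) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(size):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][size] for i in range(size)]

def reach(target, blocked):
    # h[v] = P(SRW started at v hits target before any vertex of blocked)
    boundary = {target} | set(blocked)
    idx = [v for v in range(n) if v not in boundary]
    pos = {v: i for i, v in enumerate(idx)}
    M = []
    for v in idx:
        row = [Fraction(0)] * (len(idx) + 1)
        row[pos[v]] = Fraction(1)
        for w in adj[v]:
            if w == target:
                row[-1] += Fraction(1, len(adj[v]))
            elif w not in blocked:
                row[pos[w]] -= Fraction(1, len(adj[v]))
        M.append(row)
    sol = gauss(M, len(idx))
    h = {v: sol[pos[v]] for v in idx}
    h[target] = Fraction(1)
    for b in blocked:
        h[b] = Fraction(0)
    return h

def reff(s, t):
    # C_eff(s,t) = d(s) P_esc(s -> t) = sum over the first step of the
    # probability of hitting t before s; R_eff is its reciprocal
    h = reach(t, {s})
    return 1 / sum(h[y] for y in adj[s])

assert sum(reff(s, t) for s, t in edges) == n - 1
```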
|
Yes
|
Theorem 26 The mixing rate \( \mu \) is precisely \( \lambda = \max \left\{ {{\lambda }_{2},\left| {\lambda }_{n}\right| }\right\} \) .
|
Proof. Given a distribution \( {\mathbf{p}}_{0} \), set\n\n\[ \n{\mathbf{p}}_{0} = {\alpha \pi } + {\mathbf{p}}_{0}^{\prime } \n\]\n\nwhere\n\n\[ \n\left\langle {{\mathbf{p}}_{0}^{\prime },\pi }\right\rangle = 0. \n\]\n\nThen \( 1 = \left\langle {{\mathbf{p}}_{0},{n\pi }}\right\rangle = \alpha \langle \pi ,{n\pi }\rangle = \alpha \), so\n\n\[ \n{\mathbf{p}}_{0} = \pi + {\mathbf{p}}_{0}^{\prime } \n\]\n\nHence\n\n\[ \n{\begin{Vmatrix}{\mathbf{p}}_{t} - \pi \end{Vmatrix}}_{2} = {\begin{Vmatrix}{\mathbf{p}}_{0}{P}^{t} - \pi {P}^{t}\end{Vmatrix}}_{2} = {\begin{Vmatrix}\left( {\mathbf{p}}_{0} - \pi \right) {P}^{t}\end{Vmatrix}}_{2} \n\]\n\n\[ \n= {\begin{Vmatrix}{\mathbf{p}}_{0}^{\prime }{P}^{t}\end{Vmatrix}}_{2} \leq {\lambda }^{t}\begin{Vmatrix}{\mathbf{p}}_{0}^{\prime }\end{Vmatrix} \leq {\lambda }^{t} \n\]\n\nTherefore, \( \mu = \mathop{\sup }\limits_{{\mathbf{p}}_{0}}\lim \mathop{\sup }\limits_{{t \rightarrow \infty }}{\begin{Vmatrix}{\mathbf{p}}_{t} - \pi \end{Vmatrix}}_{2}^{1/t} \leq \lambda \) .\n\nThe converse inequality is just as simple. Assuming that \( \left| {\lambda }_{j}\right| = \lambda \), pick a probability distribution \( {\mathbf{p}}_{0} \) such that\n\n\[ \n{\mathbf{p}}_{0} = \mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}{\mathbf{w}}_{i} \n\]\n\nwith \( {\xi }_{j} \neq 0 \) . In fact, we can find such a \( {\mathbf{p}}_{0} = \left( {p}_{i}^{\left( 0\right) }\right) \), even among the distributions such that \( {p}_{h}^{\left( 0\right) } = 1 \) for some \( h \) and \( {p}_{i}^{\left( 0\right) } = 0 \) for \( i \neq h \) . But if for our \( {\mathbf{p}}_{0} \) we have \( {\xi }_{j} \neq 0 \) then\n\n\[ \n{\begin{Vmatrix}{\mathbf{p}}_{t} - \pi \end{Vmatrix}}_{2} = {\begin{Vmatrix}\left( {\mathbf{p}}_{0} - \pi \right) {P}^{t}\end{Vmatrix}}_{2} \geq {\lambda }^{t}\begin{Vmatrix}{{\xi }_{j}{\mathbf{w}}_{j}}\end{Vmatrix} \n\]\n\nimplies that \( \mu \geq \lambda \), as claimed.
|
Yes
|
Lemma 29 Let \( G = \left( {V, E}\right) \) be a d-regular graph with conductance \( {\Phi }_{G} \), and let \( x : V \rightarrow \mathbb{R}, i \mapsto {x}_{i} \), be such that \( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i} = 0 \) . Then\n\n\[\n\mathop{\sum }\limits_{{{ij} \in E}}{\left( {x}_{i} - {x}_{j}\right) }^{2} \geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{2}\n\]
|
Proof. Set \( m = \lceil n/2\rceil \) . We shall prove that if \( {y}_{1} \geq {y}_{2} \geq \cdots \geq {y}_{n} \), with \( {y}_{m} = 0 \) , then\n\n\[\n\mathop{\sum }\limits_{{{ij} \in E}}{\left( {y}_{i} - {y}_{j}\right) }^{2} \geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}^{2}\n\]\n\nIt is easily seen that this inequality is stronger than (32). Indeed, in (32) we may assume that \( {x}_{1} \geq {x}_{2} \geq \cdots \geq {x}_{n} \) . Setting \( {y}_{i} = {x}_{i} - {x}_{m} \), inequality (33) gives\n\n\[\n\mathop{\sum }\limits_{{{ij} \in E}}{\left( {x}_{i} - {x}_{j}\right) }^{2} = \mathop{\sum }\limits_{{{ij} \in E}}{\left( {y}_{i} - {y}_{j}\right) }^{2} \geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {x}_{i} - {x}_{m}\right) }^{2}\n\]\n\n\[\n\geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{2} + \frac{nd}{2}{\Phi }_{G}^{2}{x}_{m}^{2}\n\]\n\nsince \( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i} = 0 \) .\n\nNow, in order to prove (33), set\n\n\[\n{u}_{i} = \left\{ \begin{array}{ll} {y}_{i} & \text{ if }i \leq m \\ 0 & \text{ if }i > m \end{array}\right.\n\]\n\n\[\n{v}_{i} = \left\{ \begin{array}{ll} 0 & \text{ if }i \leq m \\ {y}_{i} & \text{ if }i > m \end{array}\right.\n\]\n\nThus \( {y}_{i} = {u}_{i} + {v}_{i} \) for every \( i \) . Also, if \( {u}_{i} \neq 0 \) then \( {u}_{i} > 0 \) and \( i < m \), and if \( {v}_{i} \neq 0 \) then \( {v}_{i} < 0 \) and \( i > m \) . 
Since \( {\left( {y}_{i} - {y}_{j}\right) }^{2} = {\left( {u}_{i} - {u}_{j} + {v}_{i} - {v}_{j}\right) }^{2} \geq \) \( {\left( {u}_{i} - {u}_{j}\right) }^{2} + {\left( {v}_{i} - {v}_{j}\right) }^{2} \) for every edge \( {ij} \), it suffices to prove that\n\n\[\n\mathop{\sum }\limits_{{{ij} \in E}}{\left( {u}_{i} - {u}_{j}\right) }^{2} \geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = 1}}^{m}{u}_{i}^{2}\n\]\n\n(34)\n\nand\n\n\[\n\mathop{\sum }\limits_{{{ij} \in E}}{\left( {v}_{i} - {v}_{j}\right) }^{2} \geq \frac{d}{2}{\Phi }_{G}^{2}\mathop{\sum }\limits_{{i = m}}^{n}{v}_{i}^{2}\n\]\n\n(35)\n\nFurthermore, as \( m \geq n - m \), it suffices to prove (34). In our proof of (34) we may assume that \( {u}_{1} > 0 \) . By the Cauchy-Schwarz inequality,\n\n\[\n{\left\{ \mathop{\sum }\limits_{{{ij} \in E}}\left( {u}_{i}^{2} - {u}_{j}^{2}\right) \right\} }^{2} = {\left\{ \mathop{\sum }\limits_{{{ij} \in E}}\left( {u}_{i} - {u}_{j}\right) \left( {u}_{i} + {u}_{j}\right) \right\} }^{2}\n\]\n\n\[\n\leq \mathop{\sum }\limits_{{{ij} \in E}}{\left( {u}_{i} - {u}_{j}\right) }^{2}\mathop{\sum }\limits_{{k\ell \in E}}{\left( {u}_{k} + {u}_{\ell }\right) }^{2}\n\]\n\n(36)\n\n\[\n\leq \mathop{\sum }\limits_{{{ij} \in E}}{\left( {u}_{i} - {u}_{j}\right) }^{2}\mathop{\sum }\limits_{{k\ell \in E}}2\left( {{u}_{k}^{2} + {u}_{\ell }^{2}}\right)\n\]\n\n\[\n= {2d}\mathop{\sum }\limits_{{k = 1}}^{n}{u}_{k}^{2}\mathop{\sum }\limits_{{{ij} \in E}}{\left( {u}_{i} - {u}_{j}\right) }^{2}\n\]
|
Yes
|
Corollary 31 The second eigenvalue of the SRW on a regular graph with conductance \( {\Phi }_{G} \) is at most \( 1 - {\Phi }_{G}^{2} \) .
|
Proof. With the notation as above, \( \frac{1}{2}\left( {{\lambda }_{2} + 1}\right) \leq 1 - \frac{1}{2}{\Phi }_{G}^{2} \), so \( {\lambda }_{2} \leq 1 - {\Phi }_{G}^{2} \) .
|
Yes
|
Theorem 32 Let \( {\left( {G}_{i}\right) }_{1}^{\infty } \) be a sequence of regular graphs with \( \left| {G}_{i}\right| = {n}_{i} \rightarrow \infty \) . If there is a \( k \in \mathbb{N} \) such that\n\n\[{\Phi }_{{G}_{i}} \geq {\left( \log {n}_{i}\right) }^{-k}\]\n\nfor every sufficiently large \( i \), then the lazy random walks on \( {\left( {G}_{i}\right) }_{1}^{\infty } \) are rapidly mixing.
|
Proof. We have just seen that if \( t \geq 8{\left( \log {n}_{i}\right) }^{{2k} + 1}\log \left( {1/\epsilon }\right) \) then \( {d}_{1}\left( t\right) < \epsilon \) , provided \( {n}_{i} \) is large enough.
|
Yes
|
Theorem 3 Let \( G = \left( {V, E}\right) \) be a multigraph, \( q \geq 1 \) an integer and \( \beta \in \mathbb{R} \) . Then the partition function of the \( q \) -state Potts model on \( G \), with inverse temperature \( \beta \) , is\n\n\[ \n{P}_{G}\left( {q,\beta }\right) = {e}^{-\beta \left| E\right| }{Z}_{G}\left( {q, v}\right) \n\]\n\nwhere \( {Z}_{G} \) is the dichromatic polynomial and \( v = {e}^{\beta } - 1 \) .
|
Proof. Set\n\n\[ \n{\widetilde{P}}_{G}\left( {q,\beta }\right) = {e}^{\beta \left| E\right| }{P}_{G}\left( {q,\beta }\right) \n\]\n\nso that we have to show that \( {\widetilde{P}}_{G}\left( {q,\beta }\right) = {Z}_{G}\left( {q, v}\right) \) . If \( G = {E}_{n} \) then \( H\left( \omega \right) = 0 \) for every state \( \omega \), so \( {\widetilde{P}}_{{E}_{n}} = {P}_{{E}_{n}} = {q}^{n} \) . In order to prove that \( {\widetilde{P}}_{G}\left( {q,\beta }\right) = {Z}_{G}\left( {q, v}\right) \) , all we have to check is that \( {\widetilde{P}}_{G}\left( {q,\beta }\right) \) satisfies the reduction formula (6). Note that\n\n\[ \n{\widetilde{P}}_{G}\left( {q,\beta }\right) = {e}^{\beta \left| E\right| }\mathop{\sum }\limits_{{\omega \in \Omega }}{e}^{\beta \mathop{\sum }\limits_{{{ab} \in E}}\left( {\delta \left( {{\omega }_{a},{\omega }_{b}}\right) - 1}\right) } \n\]\n\n\[ \n= \mathop{\sum }\limits_{{\omega \in \Omega }}\mathop{\prod }\limits_{{{ab} \in E}}{e}^{{\beta \delta }\left( {{\omega }_{a},{\omega }_{b}}\right) } \n\]\n\n\[ \n= \mathop{\sum }\limits_{{\omega \in \Omega }}\mathop{\prod }\limits_{{{ab} \in E}}\left( {1 + {v\delta }\left( {{\omega }_{a},{\omega }_{b}}\right) }\right) \n\]\n\nTo prove the reduction formula, let \( e \) be an edge from \( c \) to \( d \) . Let us split the sum above: first let us sum over the states \( \omega \) with \( {\omega }_{c} \neq {\omega }_{d} \), and then over the states with \( {\omega }_{c} = {\omega }_{d} \) . 
Thus\n\n\[ \n{\widetilde{P}}_{G}\left( {q,\beta }\right) = \mathop{\sum }\limits_{{{\omega }_{c} \neq {\omega }_{d}}}\mathop{\prod }\limits_{{{ab} \in E - e}}\left( {1 + {v\delta }\left( {{\omega }_{a},{\omega }_{b}}\right) }\right) \n\]\n\n\[ \n+ \left( {1 + v}\right) \mathop{\sum }\limits_{{{\omega }_{c} = {\omega }_{d}}}\mathop{\prod }\limits_{{{ab} \in E - e}}\left( {1 + {v\delta }\left( {{\omega }_{a},{\omega }_{b}}\right) }\right) \n\]\n\n\[ \n= \mathop{\sum }\limits_{{\omega \in \Omega }}\mathop{\prod }\limits_{{{ab} \in E - e}}\left( {1 + {v\delta }\left( {{\omega }_{a},{\omega }_{b}}\right) }\right) + v\mathop{\sum }\limits_{{{\omega }_{c} = {\omega }_{d}}}\mathop{\prod }\limits_{{{ab} \in E - e}}\left( {1 + {v\delta }\left( {{\omega }_{a},{\omega }_{b}}\right) }\right) \n\]\n\n\[ \n= {\widetilde{P}}_{G - e}\left( {q,\beta }\right) + v{\widetilde{P}}_{G/e}\left( {q,\beta }\right) . \n\]\n\nHence (6) is satisfied, and we are done.
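The identity of Theorem 3 can be checked numerically by brute force on a small graph. In the sketch below (an illustrative addition, not from the source; the triangle and the parameters are our choices), the Potts state sum is compared with \( e^{-\beta|E|}{Z}_{G}(q,v) \), where \( {Z}_{G}(q,v) = \sum_{F \subset E} v^{|F|} q^{k\langle F\rangle} \) is computed over edge subsets.

```python
import math
from itertools import product

# Numerical check on a triangle: the Potts state sum with
# H(w) = -sum_{ab in E} (delta(w_a, w_b) - 1) against e^{-beta|E|} Z_G(q, v).
edges = [(0, 1), (1, 2), (0, 2)]
n, q, beta = 3, 3, 0.7

def potts(q, beta):
    total = 0.0
    for w in product(range(q), repeat=n):
        s = sum((w[a] == w[b]) - 1 for a, b in edges)  # sum of (delta - 1)
        total += math.exp(beta * s)
    return total

def components(F):
    # number of components of the spanning subgraph <F> (union-find)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)})

def Z(q, v):
    # dichromatic polynomial as a sum over edge subsets
    total = 0.0
    for bits in product((0, 1), repeat=len(edges)):
        F = [e for e, keep in zip(edges, bits) if keep]
        total += v ** len(F) * q ** components(F)
    return total

v = math.exp(beta) - 1
assert math.isclose(potts(q, beta), math.exp(-beta * len(edges)) * Z(q, v))
```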
|
Yes
|
Theorem 4 The partition function of the random cluster model is\n\n\[ \n{R}_{G}\left( {q, p}\right) = {\left( 1 - p\right) }^{\left| E\right| }{Z}_{G}\left( {q, v}\right) ,\n\]\n\nwhere \( v = p/\left( {1 - p}\right) \) .
|
Proof. Set \( {\widetilde{R}}_{G}\left( {q, p}\right) = {\left( 1 - p\right) }^{-\left| E\right| }{R}_{G}\left( {q, p}\right) \), so that\n\n\[ \n{\widetilde{R}}_{G}\left( {q, p}\right) = \mathop{\sum }\limits_{{F \subset E}}{v}^{\left| F\right| }{q}^{k\langle F\rangle },\n\]\n\n(7)\n\nand we have to show that \( {\widetilde{R}}_{G}\left( {q, p}\right) = {Z}_{G}\left( {q, v}\right) \) . Clearly, \( {\widetilde{R}}_{{E}_{n}} = {q}^{n} \), so all we have to check is that the reduction formula holds. To this end, let \( e \in E \) . Let us partition the subsets of \( E \) into pairs,\n\n\[ \n\{ F : F \subset E\} = \mathop{\bigcup }\limits_{{F \subset E - e}}\{ F, F \cup \{ e\} \}\n\]\n\nand let us split (7) accordingly:\n\n\[ \n{\widetilde{R}}_{G}\left( {q, p}\right) = \mathop{\sum }\limits_{{F \subset E - e}}\left\{ {{v}^{\left| F\right| }{q}^{k\langle F\rangle } + {v}^{\left| F\right| + 1}{q}^{k\langle F \cup e\rangle }}\right\}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{F \subset E - e}}{v}^{\left| F\right| }{q}^{k\langle F\rangle } + v\mathop{\sum }\limits_{{F \subset E - e}}{v}^{\left| F\right| }{q}^{k\langle F \cup e\rangle }.\n\]\n\nThe first sum is precisely \( {\widetilde{R}}_{G - e}\left( {q, p}\right) \) . As \( \langle F \cup \{ e\} \rangle \) has precisely as many components in \( G \) as \( \langle F\rangle \) has in \( G/e \), the second sum is \( {\widetilde{R}}_{G/e}\left( {q, p}\right) \), and we are finished.
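Since both sides are sums over edge subsets, Theorem 4 can be checked exactly with rational arithmetic. The sketch below (an illustrative addition, not from the source) does so on a triangle with a pendant edge.

```python
from fractions import Fraction
from itertools import product

# Exact check of R_G(q,p) = (1-p)^{|E|} Z_G(q, p/(1-p)) on a triangle
# with a pendant edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

def components(F):
    # number of components of the spanning subgraph <F> (union-find)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(n)})

def subsets():
    for bits in product((0, 1), repeat=len(edges)):
        yield [e for e, keep in zip(edges, bits) if keep]

def R(q, p):
    # random cluster partition function
    return sum(p ** len(F) * (1 - p) ** (len(edges) - len(F)) * q ** components(F)
               for F in subsets())

def Z(q, v):
    # dichromatic polynomial
    return sum(v ** len(F) * q ** components(F) for F in subsets())

p, q = Fraction(1, 3), 2
assert R(q, p) == (1 - p) ** len(edges) * Z(q, p / (1 - p))
```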
|
Yes
|
Theorem 5 Let \( G \) be a connected graph. Then \( {T}_{G}\left( {1,1}\right) \) is the number of spanning trees of \( G,{T}_{G}\left( {2,1}\right) \) is the number of (edge sets forming) forests in \( G,{T}_{G}\left( {1,2}\right) \) is the number of connected spanning subgraphs, and \( {T}_{G}\left( {2,2}\right) \) is the number of spanning subgraphs.
|
Proof. Each of these assertions is immediate from the definition (1) of \( T \) . Thus,\n\n\[ \n{T}_{G}\left( {1,1}\right) = \mathop{\sum }\limits_{{F \subset E\left( G\right) }}{0}^{r\left( G\right) - r\langle F\rangle }{0}^{n\langle F\rangle }\n\]\n\n\[ \n= \left| {\{ F : \;F \subset E\left( G\right), r\langle F\rangle = r\left( G\right) \text{ and }n\langle F\rangle = 0\} }\right| ,\n\]\n\nand \( F \subset E\left( G\right) \) is the edge set of a spanning tree iff \( r\langle F\rangle = r\left( G\right) \) and \( n\langle F\rangle = 0 \) . Similarly,\n\n\[ \n{T}_{G}\left( {2,1}\right) = \mathop{\sum }\limits_{{F \subset E\left( G\right) }}{1}^{r\left( G\right) - r\langle F\rangle }{0}^{n\langle F\rangle } = \left| {\{ F : F \subset E\left( G\right) \;\text{ and }\;n\langle F\rangle = 0\} }\right|\n\]\nis the number of edge sets \( F \) forming forests, and\n\n\[ \n{T}_{G}\left( {1,2}\right) = \mathop{\sum }\limits_{{F \subset E\left( G\right) }}{0}^{r\left( G\right) - r\langle F\rangle }{1}^{n\langle F\rangle } = \left| {\{ F : F \subset E\left( G\right) \;\text{ and }\;r\langle F\rangle = r\left( G\right) \} }\right|\n\]\n\nis the number of connected spanning subgraphs of \( G \) .\n\nFinally,\n\n\[ \n{T}_{G}\left( {2,2}\right) = \mathop{\sum }\limits_{{F \subset E\left( G\right) }}{1}^{r\left( G\right) - r\langle F\rangle }{1}^{n\langle F\rangle } = \left| {\{ F : F \subset E\left( G\right) \} }\right| = {2}^{\left| E\left( G\right) \right| },\n\]\n\nas claimed.
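These four evaluations are easy to confirm by brute force on a small graph. The sketch below (an illustrative addition, not from the source) computes \( {T}_{G} \) from the rank-nullity expansion \( {T}_{G}(x,y) = \sum_{F} (x-1)^{r(G)-r\langle F\rangle}(y-1)^{n\langle F\rangle} \) for a triangle with a pendant edge.

```python
from itertools import product

# Brute-force check of the four Tutte evaluations on a triangle with a
# pendant edge (3 spanning trees, 14 forests, 4 connected spanning
# subgraphs, 2^4 = 16 spanning subgraphs).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

def rank(F):
    # r<F> = n - k<F>, via union-find
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return n - len({find(v) for v in range(n)})

def T(x, y):
    rG = rank(edges)
    total = 0
    for bits in product((0, 1), repeat=len(edges)):
        F = [e for e, keep in zip(edges, bits) if keep]
        total += (x - 1) ** (rG - rank(F)) * (y - 1) ** (len(F) - rank(F))
    return total

assert T(1, 1) == 3    # spanning trees
assert T(2, 1) == 14   # forests
assert T(1, 2) == 4    # connected spanning subgraphs
assert T(2, 2) == 16   # all spanning subgraphs
```

(Python's convention \( 0^0 = 1 \) is exactly what the proof's reading of \( {0}^{r(G)-r\langle F\rangle}{0}^{n\langle F\rangle} \) requires.)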
|
Yes
|
Theorem 6 The chromatic polynomial \( {p}_{G}\left( x\right) \) of a graph \( G \) is\n\n\[ \n{p}_{G}\left( x\right) = {\left( -1\right) }^{r\left( G\right) }{x}^{k\left( G\right) }{T}_{G}\left( {1 - x,0}\right) .\n\]
|
Proof. The result is immediate from Theorem 2 and the properties of the chromatic polynomial mentioned above. Indeed, \( {p}_{{E}_{n}}\left( x\right) = {x}^{n} \), and for every edge \( e \in E\left( G\right) \) ,\n\n\[ \n{p}_{G}\left( x\right) = \left\{ \begin{array}{ll} \frac{x - 1}{x}{p}_{G - e}\left( x\right) & \text{ if }e\text{ is a bridge,} \\ 0 & \text{ if }e\text{ is a loop,} \\ {p}_{G - e}\left( x\right) - {p}_{G/e}\left( x\right) & \text{ if }e\text{ is neither a bridge nor a loop. } \end{array}\right.\n\]\n\nHence, by Theorem 2, \( {p}_{G}\left( x\right) = U\left( {G;\frac{x - 1}{x},0, x,1, - 1}\right) = {x}^{k\left( G\right) }{\left( -1\right) }^{r\left( G\right) } \times \) \( {T}_{G}\left( {1 - x,0}\right) \), as claimed.
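Theorem 6 can be confirmed by counting proper colourings directly. The sketch below (an illustrative addition, not from the source) does so for a triangle with a pendant edge, whose chromatic polynomial is \( x(x-1)^2(x-2) \).

```python
from itertools import product

# Brute-force check of p_G(x) = (-1)^{r(G)} x^{k(G)} T_G(1-x, 0) on a
# triangle with a pendant edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

def rank(F):
    # r<F> = n - k<F>, via union-find
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return n - len({find(v) for v in range(n)})

def T(x, y):
    rG = rank(edges)
    total = 0
    for bits in product((0, 1), repeat=len(edges)):
        F = [e for e, keep in zip(edges, bits) if keep]
        total += (x - 1) ** (rG - rank(F)) * (y - 1) ** (len(F) - rank(F))
    return total

def proper_colourings(x):
    # number of proper vertex colourings with x colours
    return sum(all(w[a] != w[b] for a, b in edges)
               for w in product(range(x), repeat=n))

rG = rank(edges)
kG = n - rG
for x in (2, 3, 4):
    assert proper_colourings(x) == (-1) ** rG * x ** kG * T(1 - x, 0)
```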
|
Yes
|
Theorem 7 Let \( A \) be a finite Abelian group and \( G \) a multigraph. Then\n\n\[ \n{q}_{G}\left( A\right) = {\left( -1\right) }^{n\left( G\right) }{T}_{G}\left( {0,1 - \left| A\right| }\right) .\n\]
|
Proof. The result is, once again, immediate from Theorem 2 and the properties of the flow polynomial noted above. Indeed, we have shown that \( {q}_{{E}_{n}}\left( A\right) = 1 \) for every \( n \geq 1 \), and if \( e \in E\left( G\right) \) then\n\n\[ \n{q}_{G}\left( A\right) = \left\{ \begin{array}{ll} 0 & \text{ if }e\text{ is a bridge,} \\ \left( {\left| A\right| - 1}\right) {q}_{G - e}\left( A\right) & \text{ if }e\text{ is a loop,} \\ - {q}_{G - e}\left( A\right) + {q}_{G/e}\left( A\right) & \text{ if }e\text{ is neither a bridge nor a loop. } \end{array}\right.\n\]\n\nHence, by Theorem 2,\n\n\[ \n{q}_{G}\left( A\right) = U\left( {G;0,\left| A\right| - 1,1, - 1,1}\right) = {\left( -1\right) }^{n\left( G\right) }{T}_{G}\left( {0,1 - \left| A\right| }\right) ,\n\]\n\nas required.
|
Yes
|
Theorem 8 For every connected graph \( G \) and every vertex \( u \in V\left( G\right) \) we have\n\n\[ \n{a}_{u}\left( G\right) = {T}_{G}\left( {1,0}\right) \n\]
|
Proof. We shall deduce the assertion from the following four properties of the function \( {a}_{u}\left( G\right) \) .\n\n(i) If \( G = {E}_{1} \) then \( {a}_{u}\left( G\right) = 1 \) .\n\n(ii) If \( G \) contains a loop \( e \) then \( G \) has no acyclic orientation so \( {a}_{u}\left( G\right) = 0 \) .\n\n(iii) Suppose that \( e = {uv} \) is a bridge of \( G \), and consider an acyclic orientation of \( G \), with \( u \) the only source. Then in the component of \( G - e \) containing \( v \), the only source has to be \( v \), so the acyclic orientations of \( G \) with \( u \) the only source are in 1-to-1 correspondence with the acyclic orientations of \( G/e \), with \( u \) (which in \( G/e \) is the same as \( v \) or \( \left( {uv}\right) \) ) the only source. Hence\n\n\[ \n{a}_{u}\left( G\right) = {a}_{u}\left( {G/e}\right) \n\]\n\n(iv) Finally, suppose that \( e = {uv} \in E\left( G\right) \) is neither a loop nor a bridge. Consider an acyclic orientation of \( G \), with \( u \) the only source. Let us ask the question: is \( {uv} \) the only edge directed into \( v \) ? If it is, then our orientation gives an acyclic orientation of \( G/e \) in which \( u \) is the only source; otherwise, it gives an acyclic orientation of \( G - e \) in which \( u \) is the only source. Also, all appropriate orientations of \( G/e \) and \( G - e \) arise in this way. Consequently, in this case we have\n\n\[ \n{a}_{u}\left( G\right) = {a}_{u}\left( {G - e}\right) + {a}_{u}\left( {G/e}\right) . \n\]\n\nNote now that if \( u \) is a vertex of a connected graph \( G \) with \( e\left( G\right) > 0 \) then there is an edge \( e \in E\left( G\right) \) incident with \( u \) . But then \( {a}_{u}\left( G\right) \) is determined by the 'nature' of this edge (loop, bridge or neither) and the values \( {a}_{u}\left( {G - e}\right) \) and \( {a}_{u}\left( {G/e}\right) \) . 
Hence there is a unique function \( {a}_{u}\left( G\right) \) on the set of (equivalence classes of) connected graphs \( G \) with a distinguished vertex \( u \) that has properties (i) - (iv). Recalling that \( {T}_{G - e} = {T}_{G/e} \) whenever \( e \) is a bridge or a loop, we see that \( {T}_{G}\left( {1,0}\right) \) is such a function, so \( {a}_{u}\left( G\right) = {T}_{G}\left( {1,0}\right) \), as claimed.
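Theorem 8 can be confirmed by enumerating all orientations of a small graph. The sketch below (an illustrative addition, not from the source) counts, for each vertex \( u \) of a triangle with a pendant edge, the acyclic orientations whose unique source is \( u \), and compares with \( {T}_{G}(1,0) \).

```python
from itertools import product

# Brute-force check of a_u(G) = T_G(1, 0) on a triangle with a pendant edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

def rank(F):
    # r<F> = n - k<F>, via union-find
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return n - len({find(v) for v in range(n)})

def T(x, y):
    rG = rank(edges)
    total = 0
    for bits in product((0, 1), repeat=len(edges)):
        F = [e for e, keep in zip(edges, bits) if keep]
        total += (x - 1) ** (rG - rank(F)) * (y - 1) ** (len(F) - rank(F))
    return total

def is_acyclic(arcs):
    # Kahn's algorithm: a digraph is acyclic iff all vertices get removed
    indeg = [0] * n
    for a, b in arcs:
        indeg[b] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    removed = 0
    while queue:
        v = queue.pop()
        removed += 1
        for a, b in arcs:
            if a == v:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return removed == n

def a(u):
    # acyclic orientations of G whose unique source (in-degree 0 vertex) is u
    count = 0
    for flips in product((0, 1), repeat=len(edges)):
        arcs = [(y, x) if f else (x, y) for (x, y), f in zip(edges, flips)]
        indeg = [0] * n
        for x, y in arcs:
            indeg[y] += 1
        sources = [v for v in range(n) if indeg[v] == 0]
        if sources == [u] and is_acyclic(arcs):
            count += 1
    return count

for u in range(n):
    assert a(u) == T(1, 0)
```

That \( {a}_{u}(G) \) comes out the same for every \( u \) is itself part of the content of the theorem, since the right-hand side does not depend on \( u \).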
|
Yes
|
Theorem 9 Let \( G = \left( {V, E}\right) ,0 < p < 1, q = 1 - p \) and \( {E}_{p} \) be as above. Then\n\n\[ \mathbb{P}\left( {r\left\langle {E}_{p}\right\rangle = r\left( G\right) }\right) = {p}^{r\left( G\right) }{q}^{n\left( G\right) }{T}_{G}\left( {1,1/q}\right) . \]
|
Proof. In view of Theorem 2, it suffices to check that the function \( C\left( G\right) = \) \( \mathbb{P}\left( {r\left\langle {E}_{p}\right\rangle = r\left( G\right) }\right) \) satisfies the conditions of Theorem 2 with \( x = p, y = 1 \) , \( \alpha = 1,\sigma = q \) and \( \tau = p \) .\n\nAlthough this is very easily seen, we shall spell it out.\n\nIf \( G \) is the empty graph \( {E}_{n} \) then \( r\left\langle {E}_{p}\right\rangle = r\left( G\right) = 0 \), so \( C\left( {E}_{n}\right) = 1 \) .\n\nIf \( e \in E \) is a bridge of \( G \) then \( r\left\langle {E}_{p}\right\rangle = r\langle E\rangle \) implies that \( e \in {E}_{p} \) . Consequently, \( C\left( G\right) = {pC}\left( {G - e}\right) \)\n\nIf \( e \in E \) is not a bridge of \( G \) then \( r\left( G\right) = r\left( {G - e}\right) \), so \( C\left( G\right) = {pC}\left( {G/e}\right) + \) \( {qC}\left( {G - e}\right) \) . Also, if \( e \) is a loop then \( G/e = G - e \), so \( C\left( G\right) = C\left( {G - e}\right) \), a fact obvious from first principles as well.
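Theorem 9 can be checked exactly with rational arithmetic. The sketch below (an illustrative addition, not from the source) sums \( p^{|F|}q^{|E|-|F|} \) over the full-rank edge subsets of a triangle with a pendant edge and compares with \( p^{r(G)}q^{n(G)}{T}_{G}(1, 1/q) \).

```python
from fractions import Fraction
from itertools import product

# Exact check of P(r<E_p> = r(G)) = p^{r(G)} q^{n(G)} T_G(1, 1/q) on a
# triangle with a pendant edge, with p = 1/3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

def rank(F):
    # r<F> = n - k<F>, via union-find
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in F:
        parent[find(a)] = find(b)
    return n - len({find(v) for v in range(n)})

def T(x, y):
    rG = rank(edges)
    total = 0
    for bits in product((0, 1), repeat=len(edges)):
        F = [e for e, keep in zip(edges, bits) if keep]
        total += (x - 1) ** (rG - rank(F)) * (y - 1) ** (len(F) - rank(F))
    return total

p = Fraction(1, 3)
q = 1 - p
rG = rank(edges)
nG = len(edges) - rG  # nullity n(G) = m - r(G)
# left-hand side: probability that the random subgraph has full rank
lhs = sum(p ** len(F) * q ** (len(edges) - len(F))
          for bits in product((0, 1), repeat=len(edges))
          for F in [[e for e, keep in zip(edges, bits) if keep]]
          if rank(F) == rG)
assert lhs == p ** rG * q ** nG * T(1, 1 / q)
```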
|
Yes
|
Theorem 12 (i) For every graph \( G \) the derivative \( {p}_{G}^{\prime }\left( 1\right) \) of the chromatic polynomial \( {p}_{G}\left( x\right) \) satisfies\n\n\[ \n{p}_{G}^{\prime }\left( 1\right) = {\left( -1\right) }^{r\left( G\right) + 1}\theta \left( G\right) .\n\]
|
Proof. (i) This is immediate from\n\n\[ \n{p}_{G}\left( x\right) = {\left( -1\right) }^{r\left( G\right) }{x}^{k\left( G\right) }{T}_{G}\left( {1 - x,0}\right) = {\left( -1\right) }^{r\left( G\right) }{x}^{k\left( G\right) }\mathop{\sum }\limits_{{i = 0}}^{{n - 1}}{t}_{i0}{\left( 1 - x\right) }^{i}.\n\]
|
Yes
|
Theorem 13 Let \( G \) be a connected graph of order \( n \), with chromatic polynomial \( {p}_{G}\left( x\right) = \mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{\left( -1\right) }^{j}{a}_{j}{x}^{n - j} \) . Then \( {a}_{0} = 1 \leq {a}_{1} \leq \cdots \leq {a}_{l} \) for \( l = \lfloor n/2\rfloor \) .
|
Proof. We know that\n\n\[ \n{p}_{G}\left( x\right) = {\left( -1\right) }^{n - 1}x\mathop{\sum }\limits_{{i = 0}}^{{n - 1}}{t}_{i0}{\left( -x + 1\right) }^{i}, \n\] \n\nso \n\n\[ \n{\left( -1\right) }^{j}{a}_{j} = {\left( -1\right) }^{n - 1}\mathop{\sum }\limits_{{i = n - j - 1}}^{{n - 1}}{\left( -1\right) }^{n - j - 1}{t}_{i0}\left( \begin{matrix} i \\ n - j - 1 \end{matrix}\right) , \n\] \n\nthat is, \n\n\[ \n{a}_{j} = \mathop{\sum }\limits_{{i = n - j - 1}}^{{n - 1}}{t}_{i0}\left( \begin{matrix} i \\ n - j - 1 \end{matrix}\right) \n\] \n\nHence if \( 1 \leq j \leq n/2 \) then \n\n\[ \n{a}_{j} - {a}_{j - 1} = {t}_{n - j - 1,0} + \mathop{\sum }\limits_{{i = n - j}}^{{n - 1}}{t}_{i0}\left\{ {\left( \begin{matrix} i \\ n - j - 1 \end{matrix}\right) - \left( \begin{matrix} i \\ n - j \end{matrix}\right) }\right\} \geq {t}_{n - j - 1,0}, \n\] \n\nsince \( n - j - 1 \geq n/2 - 1 \geq \left( {i - 1}\right) /2 \) for all \( i \leq n - 1 \), so \( \left( \begin{matrix} i \\ n - j - 1 \end{matrix}\right) \geq \left( \begin{matrix} i \\ n - j \end{matrix}\right) \) .
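The coefficient pattern can be checked by machine; the sketch below (assuming nothing beyond brute force) counts proper colourings of the 5-cycle \( {C}_{5} \), interpolates to recover \( {p}_{{C}_{5}}\left( x\right) = {x}^{5} - 5{x}^{4} + {10}{x}^{3} - {10}{x}^{2} + {4x} \), and verifies \( {a}_{0} = 1 \leq {a}_{1} = 5 \leq {a}_{2} = {10} \) for \( l = 2 \):

```python
from fractions import Fraction
from itertools import product

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle C5

def proper_colorings(k):
    return sum(all(c[a] != c[b] for a, b in edges)
               for c in product(range(k), repeat=n))

def poly_mul_linear(poly, root):
    # multiply poly (coefficients, lowest degree first) by (x - root)
    out = [Fraction(0)] * (len(poly) + 1)
    for t, c in enumerate(poly):
        out[t] -= root * c
        out[t + 1] += c
    return out

# p_G has degree n, so Lagrange-interpolate through k = 0, ..., n
pts = [(Fraction(k), Fraction(proper_colorings(k))) for k in range(n + 1)]
coeffs = [Fraction(0)] * (n + 1)
for i, (xi, yi) in enumerate(pts):
    basis, denom = [Fraction(1)], Fraction(1)
    for j, (xj, _) in enumerate(pts):
        if j != i:
            basis = poly_mul_linear(basis, xj)
            denom *= xi - xj
    for t, c in enumerate(basis):
        coeffs[t] += yi * c / denom

# p = sum_j (-1)^j a_j x^{n-j}
a = [int((-1) ** j * coeffs[n - j]) for j in range(n)]
print(a)  # [1, 5, 10, 10, 4]
```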
|
Yes
|
Theorem 14 Let \( G \) be a 2-connected loopless graph with \( n \) vertices and \( m \) edges, and let \( {T}_{G}\left( {x, y}\right) = \sum {t}_{ij}{x}^{i}{y}^{j} \) . Then \( {t}_{i0} > 0 \) for each \( i,1 \leq i \leq n - 1 \), and \( {t}_{0j} > 0 \) for each \( j,1 \leq j \leq m - n + 1 \) .
|
Proof. Given a spanning tree \( T \), for \( {E}_{0} \subset E\left( G\right) \) let \( {\gamma }_{T}\left( {E}_{0}\right) \) be the set of chords whose cycles meet \( {E}_{0} \), together with the set of tree-edges whose cuts meet \( {E}_{0} \) :\n\n\[ \n{\gamma }_{T}\left( {E}_{0}\right) = \left\{ {e \in E\left( G\right) : {Z}_{T}\left( e\right) \cap {E}_{0} \neq \varnothing }\right\} \cup \left\{ {e \in E\left( G\right) : {U}_{T}\left( e\right) \cap {E}_{0} \neq \varnothing }\right\} .\n\]\n\nNote that \( {\gamma }_{T}\left( {E}_{0}\right) \supset {E}_{0} \) . Let \( {\bar{\gamma }}_{T}\left( {E}_{0}\right) \) be the closure of \( {E}_{0} \) with respect to this \( \gamma \) -operation:\n\n\[ \n{\bar{\gamma }}_{T}\left( {E}_{0}\right) = {E}_{0} \cup {E}_{1} \cup \cdots ,\n\]\n\nwhere \( {E}_{k + 1} = {\gamma }_{T}\left( {E}_{k}\right) \) . As \( G \) is 2-connected, \( {\bar{\gamma }}_{T}\left( {E}_{0}\right) = E\left( G\right) \) whenever \( {E}_{0} \) is a non-empty set of edges (see Exercise 18), so that \( {E}_{0} \subset {E}_{1} \subset \ldots \subset {E}_{l} = E\left( G\right) \) for some \( l \) .\n\n(i) For \( 1 \leq i \leq n - 1 \), let \( {E}_{0} \) be a set of \( i \) edges of \( T \), and let \( {E}_{1} = {\gamma }_{T}\left( {E}_{0}\right) \) , \( {E}_{2} = {\gamma }_{T}\left( {E}_{1}\right) \), and so on. We know that \( {E}_{l} = E\left( G\right) \) for some \( l \) . Let \( \prec \) be an order compatible with the sequence \( {E}_{0} \subset {E}_{1} \subset \cdots \subset {E}_{l} = E\left( G\right) \), that is, an order in which the edges of \( {E}_{0} \) come first, followed by the edges of \( {E}_{1} \smallsetminus {E}_{0} \), the edges of \( {E}_{2} \smallsetminus {E}_{1} \), and so on, ending with the edges of \( {E}_{l} \smallsetminus {E}_{l - 1} \) . Then each edge of \( {E}_{0} \) is active, and no other edge is active. 
Hence \( T \) is an \( \left( {i,0}\right) \) -tree in the order \( \prec \), so \( {t}_{i0} > 0 \) .\n\n(ii) For \( 1 \leq j \leq m - n + 1 \) we start with a set \( {E}_{0} \) of \( j \) chords of \( T \), that is with a set \( {E}_{0} \subset E\left( G\right) \smallsetminus E\left( T\right) \), and proceed as in (i). Once again, the active edges are precisely the edges of \( {E}_{0} \), so \( T \) is a \( \left( {0, j}\right) \) -tree, proving \( {t}_{0j} > 0 \) .\n\nTo see the last assertion, recall that \( {\deg }_{x}{T}_{G}\left( {x, y}\right) = r\left( G\right) = n - 1 \) and \( {\deg }_{y}{T}_{G}\left( {x, y}\right) = n\left( G\right) = m - n + 1 \) .
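The positivity of these coefficients can be confirmed computationally. The sketch below implements the textbook deletion–contraction recursion for the Tutte polynomial (not the activities machinery of the proof) and checks \( {K}_{4} \), whose Tutte polynomial is known to be \( {x}^{3} + 3{x}^{2} + {2x} + {4xy} + {2y} + 3{y}^{2} + {y}^{3} \):

```python
from collections import Counter

def connected(edges, a, b):
    # is a joined to b in the graph spanned by edges?
    adj = {}
    for x, y in edges:
        adj.setdefault(x, []).append(y)
        adj.setdefault(y, []).append(x)
    seen, stack = {a}, [a]
    while stack:
        w = stack.pop()
        for z in adj.get(w, []):
            if z not in seen:
                seen.add(z)
                stack.append(z)
    return b in seen

def contract(edges, u, v):
    # identify u with v in every edge (may create loops and multi-edges)
    return [(v if x == u else x, v if y == u else y) for x, y in edges]

def tutte(edges):
    # Tutte polynomial as {(i, j): coefficient of x^i y^j}; multigraphs allowed
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                  # loop: T = y T(G - e)
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if not connected(rest, u, v):               # bridge: T = x T(G / e)
        return {(i + 1, j): c
                for (i, j), c in tutte(contract(rest, u, v)).items()}
    out = Counter(tutte(rest))                  # deletion ...
    out.update(Counter(tutte(contract(rest, u, v))))   # ... plus contraction
    return dict(out)

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
T = tutte(K4)
print(sorted(T.items()))
```

Here \( n = 4 \), \( m = 6 \); the output exhibits \( {t}_{i0} > 0 \) for \( i = 1,2,3 \), \( {t}_{0j} > 0 \) for \( j = 1,2,3 \), and also \( {t}_{11} = 4 > 0 \), as Theorem 15 predicts.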
|
Yes
|
Theorem 15 Let \( G \) be a 2-connected loopless graph that is neither a cycle nor a thick edge. Then \( {t}_{11}\left( G\right) > 0 \) .
|
Proof. It is easily seen that \( G \) contains a cycle \( C \) and an edge \( {e}_{1} \) joining a vertex of \( C \) to a vertex not on \( C \) . Let \( T \) be a spanning tree that contains \( {e}_{1} \) and all edges of \( C \) except for an edge \( {e}_{2} \), and set \( {E}_{0} = \left\{ {{e}_{1},{e}_{2}}\right\} \) . Let \( {E}_{0} \subset {E}_{1} \subset \ldots \subset {E}_{l} = E\left( G\right) \) be as in the proof of Theorem 14, with \( {E}_{k + 1} = {\gamma }_{T}\left( {E}_{k}\right) \), and let \( \prec \) be an order compatible with this nested sequence. It is immediate that, with respect to this order, \( T \) has precisely one internally active edge, namely \( {e}_{1} \), and precisely one externally active edge, namely \( {e}_{2} \) . Hence \( {t}_{11}\left( G\right) > 0 \), as claimed.
|
Yes
|
Theorem 16 There is a unique map \( \varphi : \mathcal{L} \rightarrow \mathbb{Z}\left\lbrack {A, B, d}\right\rbrack \) such that\n\n(i) if \( L \) and \( {L}^{\prime } \) are planar homotopic link diagrams then \( \varphi \left( L\right) = \varphi \left( {L}^{\prime }\right) \),\n\n(ii) \( \varphi \left( ○\right) = 1 \),\n\n(iii) \( \varphi \left( {L \cup ○ }\right) = {d\varphi }\left( L\right) \) for every link diagram \( L \),\n\n(iv) \( \varphi \left( L\right) = {A\varphi }\left( {L}_{v}^{A}\right) + {B\varphi }\left( {L}_{v}^{B}\right) \) for every link diagram \( L \) with a crossing at v. Furthermore, \( \varphi \left( L\right) = \left\lbrack L\right\rbrack \) .
|
Proof. It is clear that conditions (i) - (iv) determine a unique map, if there is such a map. Hence all we have to check is that the Kauffman square bracket [.] has properties (i)-(iv). The first three are immediate from the definition.\n\nProperty (iv) is also almost immediate. Indeed, let \( v \) be a crossing of \( L \) . Then, writing \( {L}^{\prime } = {L}_{v}^{A} \) and \( {L}^{\prime \prime } = {L}_{v}^{B} \),\n\n\[ \left\lbrack L\right\rbrack = \mathop{\sum }\limits_{S}{A}^{{a}_{L}\left( S\right) }{B}^{{b}_{L}\left( S\right) }{d}^{{c}_{L}\left( S\right) - 1} \]\n\n\[ = \mathop{\sum }\limits_{{S, S\left( v\right) = A}}{A}^{{a}_{L}\left( S\right) }{B}^{{b}_{L}\left( S\right) }{d}^{{c}_{L}\left( S\right) - 1} + \mathop{\sum }\limits_{{S, S\left( v\right) = B}}{A}^{{a}_{L}\left( S\right) }{B}^{{b}_{L}\left( S\right) }{d}^{{c}_{L}\left( S\right) - 1} \]\n\n\[ = A\mathop{\sum }\limits_{{S}^{\prime }}^{\prime }{A}^{{a}_{{L}^{\prime }}\left( {S}^{\prime }\right) }{B}^{{b}_{{L}^{\prime }}\left( {S}^{\prime }\right) }{d}^{{c}_{{L}^{\prime }}\left( {S}^{\prime }\right) - 1} + B\mathop{\sum }\limits_{{S}^{\prime \prime }}^{{\prime \prime }}{A}^{{a}_{{L}^{\prime \prime }}\left( {S}^{\prime \prime }\right) }{B}^{{b}_{{L}^{\prime \prime }}\left( {S}^{\prime \prime }\right) }{d}^{{c}_{{L}^{\prime \prime }}\left( {S}^{\prime \prime }\right) - 1} \]\n\n\[ = A\left\lbrack {L}_{v}^{A}\right\rbrack + B\left\lbrack {L}_{v}^{B}\right\rbrack \]\n\nwhere \( \mathop{\sum }\limits_{{s}^{\prime }}^{\prime } \) denotes summation over all states \( {S}^{\prime } \) of \( {L}^{\prime } \) and \( \mathop{\sum }\limits_{{S}^{\prime \prime }}^{{\prime \prime }} \) denotes summation over all states \( {S}^{\prime \prime } \) of \( {L}^{\prime \prime } \) .
|
Yes
|
Lemma 17 The Kauffman bracket is invariant under regular isotopy.
|
Proof. Let \( B \) and \( d \) be as above, so that \( {AB} = 1 \) and \( d = - {A}^{2} - {A}^{-2} \) and, under these conditions, \( \langle L\rangle \left( A\right) = \left\lbrack L\right\rbrack \left( {A, B, d}\right) \) . First, let us evaluate the effect of a Type II move on the angle bracket. Resolving the two crossings of the Type II tangle by (iv), and applying (iii) to the state in which a disjoint circle splits off, we obtain\n\n\[ \langle \text{Type II tangle}\rangle = \left( {{A}^{2} + {ABd} + {B}^{2}}\right) \langle \asymp \rangle + {AB}\langle )(\rangle , \]\n\nwhere \( )( \) denotes the pair of parallel strands produced by the move and \( \asymp \) the opposite smoothing. As \( {AB} = 1 \) and \( {A}^{2} + {ABd} + {B}^{2} = 0 \), the right-hand side is \( \langle )(\rangle \), so the bracket is invariant under Type II moves.\n\nTo complete the proof, we shall show that Type II invariance implies Type III invariance. Indeed, resolving the same crossing by (iv) on the two sides of a Type III move expresses each bracket as \( A \) times one partial smoothing plus \( B \) times another, and the corresponding partial smoothings have equal brackets by Type II invariance; hence the two sides are equal. Invariance under the other Type III move is checked similarly.
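The coefficient identities used above are easy to verify numerically; a quick check with exact rational arithmetic, assuming only \( B = {A}^{-1} \) and \( d = - {A}^{2} - {A}^{-2} \):

```python
from fractions import Fraction

for A in (Fraction(2), Fraction(3, 5), Fraction(-7, 2)):
    B = 1 / A                             # so that AB = 1
    d = -A**2 - A**-2
    assert A * B == 1
    assert A**2 + A * B * d + B**2 == 0   # the Type II coefficient vanishes
print("Type II coefficients check out")
```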
|
Yes
|
Theorem 18 The Laurent polynomial \( f\left\lbrack L\right\rbrack = {\left( -A\right) }^{-{3s}\left( L\right) }\langle L\rangle \in \mathbb{Z}\left\lbrack {A,{A}^{-1}}\right\rbrack \) is an invariant of ambient isotopy for unoriented links.
|
Proof. Since \( s\left( L\right) \) and \( \langle L\rangle \) are invariants of regular isotopy, so is \( f\left\lbrack L\right\rbrack \) . Thus all we have to check is that \( f\left\lbrack L\right\rbrack \) is invariant under Type I Reidemeister moves.\n\nWrite \( {L}_{ + } \) for a diagram with a positive curl, \( {L}_{ - } \) for the same diagram with a negative curl, and \( {L}_{0} \) for the diagram with the curl removed. Resolving the crossing of the positive curl by (iv), one smoothing detaches a disjoint circle, contributing a factor \( d \) by (iii), and the other simply removes the curl. Hence\n\n\[ \left\langle {L}_{ + }\right\rangle = {Ad}\left\langle {L}_{0}\right\rangle + B\left\langle {L}_{0}\right\rangle = \left( {{Ad} + B}\right) \left\langle {L}_{0}\right\rangle = \left( {-{A}^{3} - {A}^{-1} + {A}^{-1}}\right) \left\langle {L}_{0}\right\rangle = \left( {-{A}^{3}}\right) \left\langle {L}_{0}\right\rangle . \]\n\nA similar expansion gives\n\n\[ \left\langle {L}_{ - }\right\rangle = \left( {-{A}^{-3}}\right) \left\langle {L}_{0}\right\rangle . \]\n\nSince \( s\left( {L}_{ + }\right) = s\left( {L}_{0}\right) + 1 \) and \( s\left( {L}_{ - }\right) = s\left( {L}_{0}\right) - 1 \), independently of the orientation, the Laurent polynomial \( f\left\lbrack L\right\rbrack \) is invariant under Type I moves as well:\n\n\[ f\left\lbrack {L}_{ + }\right\rbrack = {\left( -A\right) }^{-{3s}\left( {L}_{ + }\right) }\left\langle {L}_{ + }\right\rangle = {\left( -A\right) }^{-3\{ s\left( {L}_{0}\right) + 1\} }\left( {-{A}^{3}}\right) \left\langle {L}_{0}\right\rangle = {\left( -A\right) }^{-{3s}\left( {L}_{0}\right) }\left\langle {L}_{0}\right\rangle = f\left\lbrack {L}_{0}\right\rbrack \]\n\nand, analogously, \( f\left\lbrack {L}_{ - }\right\rbrack = f\left\lbrack {L}_{0}\right\rbrack \) .
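The scalar identities behind this computation can be verified directly; in the sketch below only the relations \( B = {A}^{-1} \) and \( d = - {A}^{2} - {A}^{-2} \) are assumed:

```python
from fractions import Fraction

for A in (Fraction(2), Fraction(-3, 4), Fraction(5, 7)):
    B = 1 / A
    d = -A**2 - A**-2
    assert A * d + B == -A**3       # a positive curl contributes a factor -A^3
    assert B * d + A == -A**-3      # a negative curl contributes a factor -A^{-3}
    # the normalisation (-A)^{-3 s(L)} absorbs these factors exactly:
    assert (-A)**-3 * (-A**3) == 1
    assert (-A)**3 * (-A**-3) == 1
print("Type I correction factors check out")
```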
|
Yes
|
Theorem 19 The Jones polynomial \( {V}_{L}\left( t\right) \) of an oriented link \( L \) is given by \( {V}_{L}\left( t\right) = f\left\lbrack L\right\rbrack \left( {t}^{-1/4}\right) \), where \( f\left\lbrack L\right\rbrack = {\left( -A\right) }^{-{3w}\left( L\right) }\langle L\rangle \left( A\right) \) .
|
Proof. Since \( f\left\lbrack ○\right\rbrack = 1 \), all we have to check is that \( f\left\lbrack L\right\rbrack \left( {t}^{-1/4}\right) \) satisfies (9). Write \( {L}_{ + } \) and \( {L}_{ - } \) for diagrams with a positive and a negative crossing at a given point, and \( {L}_{0},{L}_{\infty } \) for the two diagrams obtained by smoothing that crossing. By property (iv) of the bracket polynomial, as \( B = {A}^{-1} \) we have\n\n\[ \left\langle {L}_{ + }\right\rangle = A\left\langle {L}_{0}\right\rangle + {A}^{-1}\left\langle {L}_{\infty }\right\rangle \]\n\nand\n\n\[ \left\langle {L}_{ - }\right\rangle = A\left\langle {L}_{\infty }\right\rangle + {A}^{-1}\left\langle {L}_{0}\right\rangle . \]\n\nHence\n\n\[ A\left\langle {L}_{ + }\right\rangle - {A}^{-1}\left\langle {L}_{ - }\right\rangle = \left( {{A}^{2} - {A}^{-2}}\right) \left\langle {L}_{0}\right\rangle \]\n\nand so, since \( w\left( {L}_{ + }\right) = w\left( {L}_{0}\right) + 1 \) and \( w\left( {L}_{ - }\right) = w\left( {L}_{0}\right) - 1 \),\n\n\[ {A}^{4}f\left\lbrack {L}_{ + }\right\rbrack - {A}^{-4}f\left\lbrack {L}_{ - }\right\rbrack = {A}^{4}{\left( -A\right) }^{-3\left( {w\left( {L}_{0}\right) + 1}\right) }\left\langle {L}_{ + }\right\rangle - {A}^{-4}{\left( -A\right) }^{-3\left( {w\left( {L}_{0}\right) - 1}\right) }\left\langle {L}_{ - }\right\rangle \]\n\n\[ = {\left( -A\right) }^{-{3w}\left( {L}_{0}\right) }\left\{ {-A\left\langle {L}_{ + }\right\rangle + {A}^{-1}\left\langle {L}_{ - }\right\rangle }\right\} = \left( {{A}^{-2} - {A}^{2}}\right) f\left\lbrack {L}_{0}\right\rbrack . \]\n\nOn substituting \( A = {t}^{-1/4} \), we find that\n\n\[ {t}^{-1}f\left\lbrack {L}_{ + }\right\rbrack - {tf}\left\lbrack {L}_{ - }\right\rbrack = \left( {\sqrt{t} - \frac{1}{\sqrt{t}}}\right) f\left\lbrack {L}_{0}\right\rbrack \]\n\nas required.
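The bracket manipulation is a purely formal identity in the bracket values of the two smoothings, so it can be checked with arbitrary numbers standing in for them (an illustration, not part of the proof):

```python
from fractions import Fraction

for A in (Fraction(2), Fraction(5, 3)):
    for x0, xinf in ((Fraction(7), Fraction(-3)), (Fraction(1, 2), Fraction(4))):
        plus = A * x0 + A**-1 * xinf      # <L+> = A<L0> + A^{-1}<Linf>
        minus = A * xinf + A**-1 * x0     # <L-> = A<Linf> + A^{-1}<L0>
        assert A * plus - A**-1 * minus == (A**2 - A**-2) * x0
print("skein identity checks out")
```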
|
Yes
|
Theorem 20 The bracket and one-variable Kauffman polynomial of the mirror image \( {L}^{ * } \) of a link diagram \( L \) are\n\n\[ \left\langle {L}^{ * }\right\rangle \left( A\right) = \langle L\rangle \left( {A}^{-1}\right) \]\n\nand\n\n\[ {f}_{{L}^{ * }}\left\lbrack A\right\rbrack = {f}_{L}\left\lbrack {A}^{-1}\right\rbrack \]
|
Proof. Note that reversing all the crossings results in swapping \( A \) and \( B \), that is \( A \) and \( {A}^{-1} \), in the expansion of the bracket. Hence \( \left\langle {L}^{ * }\right\rangle \left( A\right) = \langle L\rangle \left( {A}^{-1}\right) \) . Also, \( s\left( {L}^{ * }\right) = - s\left( L\right) \) and \( w\left( {L}^{ * }\right) = - w\left( L\right) \), so the second assertion follows.
|
Yes
|
Theorem 21 Let \( L \) be a connected alternating oriented link diagram with a A-regions, b B-regions, and writhe w. Then the Jones polynomial of \( L \) is given by the Tutte polynomial of \( {G}^{ + }\left( L\right) \) :
|
\[ {V}_{L}\left( t\right) = {\left( -1\right) }^{w}{t}^{\left( {b - a + {3w}}\right) /4}{T}_{{G}^{ + }\left( L\right) }\left( {-t, - 1/t}\right) . \]
|
Yes
|
Theorem 22 The number of crossings of a connected alternating link diagram without nugatory crossings is an ambient isotopy invariant.
|
Proof. Let \( L \) be a connected alternating link diagram with \( m \) crossings, none of which is nugatory. We claim that \( m \) is precisely the breadth of the Laurent polynomial \( {V}_{L}\left( t\right) \), i.e. the difference between the maximum degree and the minimum degree. As the Jones polynomial is ambient isotopy invariant, this is, in fact, more than our theorem claims.\n\nTo prove our claim, denote by \( a = a\left( L\right) \) the number of \( A \) -regions. Then \( G = {G}^{ + }\left( L\right) \) has \( a \) vertices and \( m \) edges; also, there are no loops or bridges since \( L \) has no nugatory crossings. By Theorem 21, the breadth of \( {V}_{L}\left( t\right) \) is\n\n\[ \text{breadth}{V}_{L}\left( t\right) = \max \deg {V}_{L}\left( t\right) - \min \deg {V}_{L}\left( t\right) \]\n\n\[ = \max \left\{ {i - j : {t}_{ij}\left( G\right) \neq 0}\right\} - \min \left\{ {i - j : {t}_{ij}\left( G\right) \neq 0}\right\} \]\n\n\[ = \left( {a - 1}\right) - \left( {-m + a - 1}\right) = m, \]\n\n as claimed. The penultimate equality followed from Theorem 14.\n\nIn fact, it is clear from the proof that if a connected alternating diagram \( L \) has \( m \) crossings, \( {m}^{\prime } \) of which are nugatory, then \( m - {m}^{\prime } \) is the breadth of the Jones polynomial \( {V}_{L}\left( t\right) \), so \( m - {m}^{\prime } \) is an ambient isotopy invariant.
|
Yes
|
Theorem 1.1 (Hölder) Suppose \( 1 < p < \infty \) and \( 1 < q < \infty \) are conjugate exponents. If \( f \in {L}^{p} \) and \( g \in {L}^{q} \), then \( {fg} \in {L}^{1} \) and\n\n\[ \parallel {fg}{\parallel }_{{L}^{1}} \leq \parallel f{\parallel }_{{L}^{p}}\parallel g{\parallel }_{{L}^{q}} \]
|
The proof of the theorem relies on a simple generalized form of the arithmetic-geometric mean inequality: if \( A, B \geq 0 \), and \( 0 \leq \theta \leq 1 \), then\n\n(2)\n\n\[ {A}^{\theta }{B}^{1 - \theta } \leq {\theta A} + \left( {1 - \theta }\right) B. \]\n\nTo establish (2), we observe first that we may assume \( B \neq 0 \), and replacing \( A \) by \( A/B \), we see that it suffices to prove that \( {A}^{\theta } \leq {\theta A} + (1 - \theta ) \). If we let \( f\left( x\right) = {x}^{\theta } - {\theta x} - \left( {1 - \theta }\right) \), then \( {f}^{\prime }\left( x\right) = \theta \left( {{x}^{\theta - 1} - 1}\right) \). Thus \( f\left( x\right) \) increases when \( 0 \leq x \leq 1 \) and decreases when \( 1 \leq x \), and we see that the continuous function \( f \) attains a maximum at \( x = 1 \), where \( f\left( 1\right) = 0 \). Therefore \( f\left( A\right) \leq 0 \), as desired.\n\nTo prove Hölder’s inequality we argue as follows. If either \( \parallel f{\parallel }_{{L}^{p}} = 0 \) or \( \parallel g{\parallel }_{{L}^{q}} = 0 \), then \( {fg} = 0 \) a.e. and the inequality is obviously verified. Therefore, we may assume that neither of these norms vanish, and after replacing \( f \) by \( f/\parallel f{\parallel }_{{L}^{p}} \) and \( g \) by \( g/\parallel g{\parallel }_{{L}^{q}} \), we may further assume that \( \parallel f{\parallel }_{{L}^{p}} = \parallel g{\parallel }_{{L}^{q}} = 1 \). We now need to prove that \( \parallel {fg}{\parallel }_{{L}^{1}} \leq 1 \).\n\nIf we set \( A = {\left| f\left( x\right) \right| }^{p}, B = {\left| g\left( x\right) \right| }^{q} \), and \( \theta = 1/p \) so that \( 1 - \theta = 1/q \), then (2) gives\n\n\[ \left| {f\left( x\right) g\left( x\right) }\right| \leq \frac{1}{p}{\left| f\left( x\right) \right| }^{p} + \frac{1}{q}{\left| g\left( x\right) \right| }^{q}. \]\n\nIntegrating this inequality yields \( \parallel {fg}{\parallel }_{{L}^{1}} \leq 1 \), and the proof of the Hölder inequality is complete.
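A numerical illustration of both inequality (2) and Hölder's inequality itself, here for counting measure on a finite set with the conjugate pair \( p = 3 \), \( q = 3/2 \) (the data are arbitrary):

```python
import random

random.seed(0)
p, q = 3.0, 1.5
assert abs(1 / p + 1 / q - 1) < 1e-12      # conjugate exponents

# inequality (2): A^theta B^{1-theta} <= theta A + (1 - theta) B
for _ in range(1000):
    A, B, theta = random.uniform(0, 10), random.uniform(0, 10), random.random()
    assert A**theta * B**(1 - theta) <= theta * A + (1 - theta) * B + 1e-9

# Hoelder for counting measure on 100 points
f = [random.uniform(-1, 1) for _ in range(100)]
g = [random.uniform(-1, 1) for _ in range(100)]
norm = lambda h, r: sum(abs(x) ** r for x in h) ** (1 / r)
assert sum(abs(a * b) for a, b in zip(f, g)) <= norm(f, p) * norm(g, q) + 1e-9
print("Hoelder verified on random data")
```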
|
Yes
|
Theorem 1.2 (Minkowski) If \( 1 \leq p < \infty \) and \( f, g \in {L}^{p} \), then \( f + g \in \) \( {L}^{p} \) and \( \parallel f + g{\parallel }_{{L}^{p}} \leq \parallel f{\parallel }_{{L}^{p}} + \parallel g{\parallel }_{{L}^{p}} \) .
|
Proof. The case \( p = 1 \) is obtained by integrating \( \left| {f\left( x\right) + g\left( x\right) }\right| \leq \) \( \left| {f\left( x\right) }\right| + \left| {g\left( x\right) }\right| \) . When \( p > 1 \), we may begin by verifying that \( f + g \in {L}^{p} \) , when both \( f \) and \( g \) belong to \( {L}^{p} \) . Indeed,\n\n\[ \n{\left| f\left( x\right) + g\left( x\right) \right| }^{p} \leq {2}^{p}\left( {{\left| f\left( x\right) \right| }^{p} + {\left| g\left( x\right) \right| }^{p}}\right) , \n\]\n\nas can be seen by considering separately the cases \( \left| {f\left( x\right) }\right| \leq \left| {g\left( x\right) }\right| \) and \( \left| {g\left( x\right) }\right| \leq \left| {f\left( x\right) }\right| \) . Next we note that\n\n\[ \n{\left| f\left( x\right) + g\left( x\right) \right| }^{p} \leq \left| {f\left( x\right) }\right| {\left| f\left( x\right) + g\left( x\right) \right| }^{p - 1} + \left| {g\left( x\right) }\right| {\left| f\left( x\right) + g\left( x\right) \right| }^{p - 1}. \n\]\n\nIf \( q \) denotes the conjugate exponent of \( p \), then \( \left( {p - 1}\right) q = p \), so we see that \( {\left( f + g\right) }^{p - 1} \) belongs to \( {L}^{q} \), and therefore Hölder’s inequality applied to the two terms on the right-hand side of the above inequality gives\n\n(3)\n\n\[ \n\parallel f + g{\parallel }_{{L}^{p}}^{p} \leq \parallel f{\parallel }_{{L}^{p}}{\begin{Vmatrix}{\left( f + g\right) }^{p - 1}\end{Vmatrix}}_{{L}^{q}} + \parallel g{\parallel }_{{L}^{p}}{\begin{Vmatrix}{\left( f + g\right) }^{p - 1}\end{Vmatrix}}_{{L}^{q}}. \n\]\n\nHowever, using once again \( \left( {p - 1}\right) q = p \), we get\n\n\[ \n{\begin{Vmatrix}{\left( f + g\right) }^{p - 1}\end{Vmatrix}}_{{L}^{q}} = \parallel f + g{\parallel }_{{L}^{p}}^{p/q}. 
\n\]\n\nFrom (3), since \( p - p/q = 1 \), and because we may suppose that \( \parallel f + \) \( g{\parallel }_{{L}^{p}} > 0 \), we find\n\n\[ \n\parallel f + g{\parallel }_{{L}^{p}} \leq \parallel f{\parallel }_{{L}^{p}} + \parallel g{\parallel }_{{L}^{p}} \n\]\n\nso the proof is finished.
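A quick numerical check of the triangle inequality for \( p \) -norms, again with counting measure on a finite set and an arbitrary \( p > 1 \):

```python
import random

random.seed(1)
p = 2.5
f = [random.gauss(0, 1) for _ in range(200)]
g = [random.gauss(0, 1) for _ in range(200)]

norm = lambda h: sum(abs(x) ** p for x in h) ** (1 / p)
s = [a + b for a, b in zip(f, g)]   # f + g
assert norm(s) <= norm(f) + norm(g) + 1e-9
print(norm(s), "<=", norm(f) + norm(g))
```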
|
Yes
|
Proposition 1.4 If \( X \) has finite positive measure, and \( {p}_{0} \leq {p}_{1} \), then \( {L}^{{p}_{1}}\left( X\right) \subset {L}^{{p}_{0}}\left( X\right) \) and\n\n\[ \frac{1}{\mu {\left( X\right) }^{1/{p}_{0}}}\parallel f{\parallel }_{{L}^{{p}_{0}}} \leq \frac{1}{\mu {\left( X\right) }^{1/{p}_{1}}}\parallel f{\parallel }_{{L}^{{p}_{1}}}. \]
|
We may assume that \( {p}_{1} > {p}_{0} \) . Suppose \( f \in {L}^{{p}_{1}} \), and set \( F = {\left| f\right| }^{{p}_{0}} \) , \( G = 1, p = {p}_{1}/{p}_{0} > 1 \), and \( 1/p + 1/q = 1 \), in Hölder’s inequality applied to \( F \) and \( G \) . This yields\n\n\[ \parallel f{\parallel }_{{L}^{{p}_{0}}}^{{p}_{0}} \leq {\left( \int {\left| f\right| }^{{p}_{1}}\right) }^{{p}_{0}/{p}_{1}} \cdot \mu {\left( X\right) }^{1 - {p}_{0}/{p}_{1}}. \]\n\nIn particular, we find that \( \parallel f{\parallel }_{{L}^{{p}_{0}}} < \infty \) . Moreover, by taking the \( {p}_{0}^{\text{th }} \) root of both sides of the above inequality, we find that the inequality in the proposition holds.
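For a concrete finite measure space (five atoms with the masses below; the numbers are arbitrary), the normalised norms are indeed monotone in the exponent:

```python
mu = [0.5, 1.0, 2.0, 0.25, 1.5]           # masses of five atoms; mu(X) = 5.25
f  = [3.0, -1.0, 0.5, 4.0, -2.0]
muX = sum(mu)
p0, p1 = 2.0, 5.0

norm = lambda r: sum(m * abs(x) ** r for m, x in zip(mu, f)) ** (1 / r)
lhs = norm(p0) / muX ** (1 / p0)
rhs = norm(p1) / muX ** (1 / p1)
assert lhs <= rhs + 1e-12
print(lhs, "<=", rhs)
```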
|
Yes
|
Proposition 1.5 If \( X = \mathbb{Z} \) is equipped with counting measure, then the reverse inclusion holds, namely \( {L}^{{p}_{0}}\left( \mathbb{Z}\right) \subset {L}^{{p}_{1}}\left( \mathbb{Z}\right) \) if \( {p}_{0} \leq {p}_{1} \) . Moreover, \( \parallel f{\parallel }_{{L}^{{p}_{1}}} \leq \parallel f{\parallel }_{{L}^{{p}_{0}}}. \)
|
Indeed, if \( f = \{ f\left( n\right) {\} }_{n \in \mathbb{Z}} \), then \( \sum {\left| f\left( n\right) \right| }^{{p}_{0}} = \parallel f{\parallel }_{{L}^{{p}_{0}}}^{{p}_{0}} \), and \( \mathop{\sup }\limits_{n}\left| {f\left( n\right) }\right| \leq \) \( \parallel f{\parallel }_{{L}^{{p}_{0}}} \) . However\n\n\[ \sum {\left| f\left( n\right) \right| }^{{p}_{1}} = \sum {\left| f\left( n\right) \right| }^{{p}_{0}}{\left| f\left( n\right) \right| }^{{p}_{1} - {p}_{0}} \]\n\n\[ \leq {\left( \mathop{\sup }\limits_{n}\left| f\left( n\right) \right| \right) }^{{p}_{1} - {p}_{0}}\parallel f{\parallel }_{{L}^{{p}_{0}}}^{{p}_{0}} \]\n\n\[ \leq \parallel f{\parallel }_{{L}^{{p}_{0}}}^{{p}_{1}} \]\n\nThus \( \parallel f{\parallel }_{{L}^{{p}_{1}}} \leq \parallel f{\parallel }_{{L}^{{p}_{0}}} \) .
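For counting measure the inclusion goes the other way, and a quick check on a finitely supported sequence (arbitrary values) also exhibits the intermediate bound \( \mathop{\sup }\limits_{n}\left| {f\left( n\right) }\right| \leq \parallel f{\parallel }_{{L}^{{p}_{0}}} \) used above:

```python
f = [2.0, -1.5, 0.5, 0.25, 3.0]        # a finitely supported sequence
p0, p1 = 1.5, 4.0                      # p0 <= p1

norm = lambda r: sum(abs(x) ** r for x in f) ** (1 / r)
assert max(abs(x) for x in f) <= norm(p0)
assert norm(p1) <= norm(p0) + 1e-12
print(norm(p1), "<=", norm(p0))
```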
|
Yes
|
Proposition 2.2 Suppose \( f \in {L}^{\infty } \) is supported on a set of finite measure. Then \( f \in {L}^{p} \) for all \( p < \infty \), and\n\n\[ \parallel f{\parallel }_{{L}^{p}} \rightarrow \parallel f{\parallel }_{{L}^{\infty }}\;\text{ as }p \rightarrow \infty . \]
|
Proof. Let \( E \) be a measurable subset of \( X \) with \( \mu \left( E\right) < \infty \), and so that \( f \) vanishes in the complement of \( E \) . If \( \mu \left( E\right) = 0 \), then \( \parallel f{\parallel }_{{L}^{\infty }} = \) \( \parallel f{\parallel }_{{L}^{p}} = 0 \) and there is nothing to prove. Otherwise\n\n\[ \parallel f{\parallel }_{{L}^{p}} = {\left( {\int }_{E}{\left| f\left( x\right) \right| }^{p}d\mu \right) }^{1/p} \leq {\left( {\int }_{E}\parallel f{\parallel }_{{L}^{\infty }}^{p}d\mu \right) }^{1/p} \leq \parallel f{\parallel }_{{L}^{\infty }}\mu {\left( E\right) }^{1/p}. \]\n\nSince \( \mu {\left( E\right) }^{1/p} \rightarrow 1 \) as \( p \rightarrow \infty \), we find that \( \lim \mathop{\sup }\limits_{{p \rightarrow \infty }}\parallel f{\parallel }_{{L}^{p}} \leq \parallel f{\parallel }_{{L}^{\infty }} \) .\n\nOn the other hand, given \( \epsilon > 0 \), we have\n\n\[ \mu \left( \left\{ {x : \left| {f\left( x\right) }\right| \geq \parallel f{\parallel }_{{L}^{\infty }} - \epsilon }\right\} \right) \geq \delta \;\text{ for some }\delta > 0, \]\n\nhence\n\n\[ {\int }_{X}{\left| f\right| }^{p}{d\mu } \geq \delta {\left( \parallel f{\parallel }_{{L}^{\infty }} - \epsilon \right) }^{p} \]\n\nTherefore \( \mathop{\liminf }\limits_{{p \rightarrow \infty }}\parallel f{\parallel }_{{L}^{p}} \geq \parallel f{\parallel }_{{L}^{\infty }} - \epsilon \), and since \( \epsilon \) is arbitrary, we have \( \lim \mathop{\inf }\limits_{{p \rightarrow \infty }}\parallel f{\parallel }_{{L}^{p}} \geq \parallel f{\parallel }_{{L}^{\infty }} \) . Hence the limit \( \mathop{\lim }\limits_{{p \rightarrow \infty }}\parallel f{\parallel }_{{L}^{p}} \) exists, and equals \( \parallel f{\parallel }_{{L}^{\infty }} \) .
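The convergence can be watched numerically for a simple function taking finitely many values \( {v}_{i} \) on sets of measure \( {\mu }_{i} \) (arbitrary data; here \( \parallel f{\parallel }_{{L}^{\infty }} = {3.5} \)):

```python
vals = [3.0, 2.0, -3.5, 1.0]      # values of a simple function
mu   = [0.2, 1.0, 0.05, 2.0]      # measures of the corresponding sets
sup = max(abs(v) for v in vals)   # the L^infinity norm, 3.5

def lp(p):
    return sum(m * abs(v) ** p for m, v in zip(mu, vals)) ** (1 / p)

norms = {p: lp(p) for p in (10, 100, 400)}
print(norms, "->", sup)
```

For large \( p \) the term with the largest \( \left| {v}_{i}\right| \) dominates, and \( \mu {\left( E\right) }^{1/p} \rightarrow 1 \) kills the measure factor, exactly as in the proof.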
|
Yes
|
Proposition 3.1 A linear functional on a Banach space is continuous, if and only if it is bounded.
|
Proof. The key is to observe that \( \ell \) is continuous if and only if \( \ell \) is continuous at the origin.\n\nIndeed, if \( \ell \) is continuous, we choose \( \epsilon = 1 \) and \( g = 0 \) in the above definition so that \( \left| {\ell \left( f\right) }\right| \leq 1 \) whenever \( \parallel f\parallel \leq \delta \), for some \( \delta > 0 \) . Hence, given any non-zero \( h \), an element of \( \mathcal{B} \), we see that \( {\delta h}/\parallel h\parallel \) has norm equal to \( \delta \), and hence \( \left| {\ell \left( {{\delta h}/\parallel h\parallel }\right) }\right| \leq 1 \) . Thus \( \left| {\ell \left( h\right) }\right| \leq M\parallel h\parallel \) with \( M = 1/\delta \) .\n\nConversely, if \( \ell \) is bounded it is clearly continuous at the origin, hence continuous.\n\nThe significance of continuous linear functionals in terms of closed hyperplanes in \( \mathcal{B} \) is a noteworthy geometric point to which we return later on. Now we take up analytic aspects of linear functionals.
|
Yes
|
Theorem 3.2 The vector space \( {\mathcal{B}}^{ * } \) is a Banach space.
|
Proof. It is clear that \( \parallel \cdot \parallel \) defines a norm, so we only check that \( {\mathcal{B}}^{ * } \) is complete. Suppose that \( \left\{ {\ell }_{n}\right\} \) is a Cauchy sequence in \( {\mathcal{B}}^{ * } \) . Then, for each \( f \in \mathcal{B} \), the sequence \( \left\{ {{\ell }_{n}\left( f\right) }\right\} \) is Cauchy, hence converges to a limit, which we denote by \( \ell \left( f\right) \) . Clearly, the mapping \( \ell : f \mapsto \ell \left( f\right) \) is linear. If \( M \) is so that \( \begin{Vmatrix}{\ell }_{n}\end{Vmatrix} \leq M \) for all \( n \), we see that\n\n\[ \left| {\ell \left( f\right) }\right| \leq \left| {\left( {\ell - {\ell }_{n}}\right) \left( f\right) }\right| + \left| {{\ell }_{n}\left( f\right) }\right| \leq \left| {\left( {\ell - {\ell }_{n}}\right) \left( f\right) }\right| + M\parallel f\parallel \]\n\nso that in the limit as \( n \rightarrow \infty \), we find \( \left| {\ell \left( f\right) }\right| \leq M\parallel f\parallel \) for all \( f \in \mathcal{B} \) . Thus \( \ell \) is bounded. Finally, we must show that \( {\ell }_{n} \) converges to \( \ell \) in \( {\mathcal{B}}^{ * } \) . Given \( \epsilon > 0 \) choose \( N \) so that \( \begin{Vmatrix}{{\ell }_{n} - {\ell }_{m}}\end{Vmatrix} < \epsilon /2 \) for all \( n, m > N \) . Then, if \( n > N \), we see that for all \( m > N \) and any \( f \)\n\n\[ \left| {\left( {\ell - {\ell }_{n}}\right) \left( f\right) }\right| \leq \left| {\left( {\ell - {\ell }_{m}}\right) \left( f\right) }\right| + \left| {\left( {{\ell }_{m} - {\ell }_{n}}\right) \left( f\right) }\right| \leq \left| {\left( {\ell - {\ell }_{m}}\right) \left( f\right) }\right| + \frac{\epsilon }{2}\parallel f\parallel . \]\n\nWe can also choose \( m \) so large (and dependent on \( f \) ) so that we also have \( \left| {\left( {\ell - {\ell }_{m}}\right) \left( f\right) }\right| \leq \epsilon \parallel f\parallel /2 \) . 
In the end, we find that for \( n > N \),\n\n\[ \left| {\left( {\ell - {\ell }_{n}}\right) \left( f\right) }\right| \leq \epsilon \parallel f\parallel \]\n\nThis proves that \( \begin{Vmatrix}{\ell - {\ell }_{n}}\end{Vmatrix} \rightarrow 0 \), as desired.
|
Yes
|
Theorem 4.1 Suppose \( 1 \leq p < \infty \), and \( 1/p + 1/q = 1 \) . Then, with \( \mathcal{B} = \) \( {L}^{p} \) we have\n\n\[{\mathcal{B}}^{ * } = {L}^{q}\]\n\nin the following sense: For every bounded linear functional \( \ell \) on \( {L}^{p} \) there is a unique \( g \in {L}^{q} \) so that\n\n\[ \ell \left( f\right) = {\int }_{X}f\left( x\right) g\left( x\right) {d\mu }\left( x\right) ,\;\text{ for all }f \in {L}^{p}. \]\n\nMoreover, \( \parallel \ell {\parallel }_{{\mathcal{B}}^{ * }} = \parallel g{\parallel }_{{L}^{q}} \) .
|
The proof of the theorem is based on two ideas. The first, as already seen, is Hölder's inequality; to which a converse is also needed. The second is the fact that a linear functional \( \ell \) on \( {L}^{p},1 \leq p < \infty \), leads naturally to a (signed) measure \( \nu \) . Because of the continuity of \( \ell \) the measure \( \nu \) is absolutely continuous with respect to the underlying measure \( \mu \), and our desired function \( g \) is then the density function of \( \nu \) in terms of \( \mu \) .
|
Yes
|
Lemma 4.2 Suppose \( 1 \leq p, q \leq \infty \), are conjugate exponents.\n\n(i) If \( g \in {L}^{q} \), then \( \parallel g{\parallel }_{{L}^{q}} = \mathop{\sup }\limits_{{\parallel f{\parallel }_{{L}^{p}} \leq 1}}\left| {\int {fg}}\right| \) .
|
Proof. We start with (i). If \( g = 0 \), there is nothing to prove, so we may assume that \( g \) is not 0 a.e., and hence \( \parallel g{\parallel }_{{L}^{q}} \neq 0 \) . By Hölder’s inequality, we have that\n\n\[ \parallel g{\parallel }_{{L}^{q}} \geq \mathop{\sup }\limits_{{\parallel f{\parallel }_{{L}^{p}} \leq 1}}\left| {\int {fg}}\right| .\n\]\n\nTo prove the reverse inequality we consider several cases.\n\n- First, if \( q = 1 \) and \( p = \infty \), we may take \( f\left( x\right) = \operatorname{sign}g\left( x\right) \) . Then, we have \( \parallel f{\parallel }_{{L}^{\infty }} = 1 \), and clearly, \( \int {fg} = \parallel g{\parallel }_{{L}^{1}} \).\n\n- If \( 1 < p, q < \infty \), then we set \( f\left( x\right) = {\left| g\left( x\right) \right| }^{q - 1}\operatorname{sign}g\left( x\right) /\parallel g{\parallel }_{{L}^{q}}^{q - 1} \) . We observe that \( \parallel f{\parallel }_{{L}^{p}}^{p} = \int {\left| g\left( x\right) \right| }^{p\left( {q - 1}\right) }{d\mu }/\parallel g{\parallel }_{{L}^{q}}^{p\left( {q - 1}\right) } = 1 \) since \( p(q - 1) = q \), and that \( \int {fg} = \parallel g{\parallel }_{{L}^{q}} \).\n\n- Finally, if \( q = \infty \) and \( p = 1 \), let \( \epsilon > 0 \), and \( E \) a set of finite positive measure, where \( \left| {g\left( x\right) }\right| \geq \parallel g{\parallel }_{{L}^{\infty }} - \epsilon \) . (Such a set exists by the definition of \( \parallel g{\parallel }_{{L}^{\infty }} \) and the fact that the measure \( \mu \) is \( \sigma \) -finite.) Then, if we take \( f\left( x\right) = {\chi }_{E}\left( x\right) \operatorname{sign}g\left( x\right) /\mu \left( E\right) \), where \( {\chi }_{E} \) denotes the characteristic function of the set \( E \), we see that \( \parallel f{\parallel }_{{L}^{1}} = 1 \), and also\n\n\[ \left| {\int {fg}}\right| = \frac{1}{\mu \left( E\right) }{\int }_{E}\left| g\right| \geq \parallel g{\parallel }_{\infty } - \epsilon .\n\]\nThis completes the proof of part (i).
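For \( 1 < p, q < \infty \) the extremal function from the middle case can be tested directly with counting measure on a finite set (arbitrary \( g \); here \( p = 3 \), \( q = 3/2 \)):

```python
p, q = 3.0, 1.5
g = [2.0, -1.0, 0.5, -3.0]

sign = lambda x: (x > 0) - (x < 0)
norm = lambda h, r: sum(abs(x) ** r for x in h) ** (1 / r)

gq = norm(g, q)
# f = |g|^{q-1} sign(g) / ||g||_q^{q-1}, as in the proof
f = [abs(x) ** (q - 1) * sign(x) / gq ** (q - 1) for x in g]

assert abs(norm(f, p) - 1) < 1e-9                          # ||f||_p = 1
assert abs(sum(a * b for a, b in zip(f, g)) - gq) < 1e-9   # sum f g = ||g||_q
print("extremal function attains the supremum")
```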
|
Yes
|
Theorem 5.2 Suppose \( {V}_{0} \) is a linear subspace of \( V \), and that we are given a linear functional \( {\ell }_{0} \) on \( {V}_{0} \) that satisfies\n\n\[{\ell }_{0}\left( v\right) \leq p\left( v\right) ,\;\text{ for all }v \in {V}_{0}.\n\]\nThen \( {\ell }_{0} \) can be extended to a linear functional \( \ell \) on \( V \) that satisfies\n\n\[ \ell \left( v\right) \leq p\left( v\right) ,\;\text{ for all }v \in V.\n\]
|
Proof. Suppose \( {V}_{0} \neq V \), and pick \( {v}_{1} \) a vector not in \( {V}_{0} \) . We will first extend \( {\ell }_{0} \) to the subspace \( {V}_{1} \) spanned by \( {V}_{0} \) and \( {v}_{1} \), as we did before. We can do this by defining a putative extension \( {\ell }_{1} \) of \( {\ell }_{0} \), defined on \( {V}_{1} \) by \( {\ell }_{1}\left( {\alpha {v}_{1} + w}\right) = \alpha {\ell }_{1}\left( {v}_{1}\right) + {\ell }_{0}\left( w\right) \), whenever \( w \in {V}_{0} \) and \( \alpha \in \mathbb{R} \), if \( {\ell }_{1}\left( {v}_{1}\right) \) is chosen so that\n\n\[{\ell }_{1}\left( v\right) \leq p\left( v\right) ,\;\text{ for all }v \in {V}_{1}.\n\]\nHowever, exactly as above, this happens when\n\n\[- p\left( {-{v}_{1} + {w}^{\prime }}\right) + {\ell }_{0}\left( {w}^{\prime }\right) \leq {\ell }_{1}\left( {v}_{1}\right) \leq p\left( {{v}_{1} + w}\right) - {\ell }_{0}\left( w\right)\n\]\nfor all \( w,{w}^{\prime } \in {V}_{0} \) .\n\nThe right-hand side exceeds the left-hand side because of \( {\ell }_{0}\left( {w}^{\prime }\right) + \) \( {\ell }_{0}\left( w\right) \leq p\left( {{w}^{\prime } + w}\right) \) and the sub-linearity of \( p \) . Thus an appropriate choice of \( {\ell }_{1}\left( {v}_{1}\right) \) is possible, giving the desired extension of \( {\ell }_{0} \) from \( {V}_{0} \) to \( {V}_{1} \) .\n\nWe can think of the extension we have constructed as the key step in an inductive procedure. This induction, which in general is necessarily trans-finite, proceeds as follows. We well-order all vectors in \( V \) that do not belong to \( {V}_{0} \), and denote this ordering by \( < \) . Among these vectors we call a vector \( v \) \
Proposition 5.3 Suppose \( {f}_{0} \) is a given element of \( \mathcal{B} \) with \( \begin{Vmatrix}{f}_{0}\end{Vmatrix} = M \) . Then there exists a continuous linear functional \( \ell \) on \( \mathcal{B} \) so that \( \ell \left( {f}_{0}\right) = M \) and \( \parallel \ell {\parallel }_{{\mathcal{B}}^{ * }} = 1 \) .
Proof. Define \( {\ell }_{0} \) on the one-dimensional subspace \( {\left\{ \alpha {f}_{0}\right\} }_{\alpha \in \mathbb{R}} \) by \( {\ell }_{0}\left( {\alpha {f}_{0}}\right) = {\alpha M} \), for each \( \alpha \in \mathbb{R} \) . Note that if we set \( p\left( f\right) = \parallel f\parallel \) for every \( f \in \mathcal{B} \), the function \( p \) satisfies the basic sub-linear property (10). We also observe that\n\n\[ \left| {{\ell }_{0}\left( {\alpha {f}_{0}}\right) }\right| = \left| \alpha \right| M = \left| \alpha \right| \begin{Vmatrix}{f}_{0}\end{Vmatrix} = p\left( {\alpha {f}_{0}}\right) ,\]\n\nso \( {\ell }_{0}\left( f\right) \leq p\left( f\right) \) on this subspace. By the extension theorem \( {\ell }_{0} \) extends to an \( \ell \) defined on \( \mathcal{B} \) with \( \ell \left( f\right) \leq p\left( f\right) = \parallel f\parallel \), for all \( f \in \mathcal{B} \) . Since this inequality also holds for \( - f \) in place of \( f \) we get \( \left| {\ell \left( f\right) }\right| \leq \parallel f\parallel \), and thus \( \parallel \ell {\parallel }_{{\mathcal{B}}^{ * }} \leq 1 \) . The fact that \( \parallel \ell {\parallel }_{{\mathcal{B}}^{ * }} \geq 1 \) is implied by the defining property \( \ell \left( {f}_{0}\right) = \begin{Vmatrix}{f}_{0}\end{Vmatrix} \), thereby proving the proposition.
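In the special case \( \mathcal{B} = {\mathbb{R}}^{n} \) with the Euclidean norm (an illustration of ours; the text assumes no inner product), the norming functional of the proposition can be written explicitly as \( \ell \left( f\right) = \langle f,{f}_{0}\rangle /\begin{Vmatrix}{f}_{0}\end{Vmatrix} \):

```python
import math

# Illustration of Proposition 5.3 in B = R^3 with the Euclidean norm.
f0 = [3.0, 0.0, 4.0]
M = math.sqrt(sum(x * x for x in f0))          # ||f0|| = 5

def ell(f):
    """ell(f) = <f, f0> / ||f0||: linear, ell(f0) = M, dual norm 1."""
    return sum(x * y for x, y in zip(f, f0)) / M

assert abs(ell(f0) - M) < 1e-12                # ell(f0) = ||f0||

# |ell(f)| <= ||f|| on test vectors (Cauchy-Schwarz), with equality
# at f = f0/||f0||, so the dual norm of ell is exactly 1.
for f in ([1.0, 2.0, -1.0], [0.0, 1.0, 0.0], [x / M for x in f0]):
    norm_f = math.sqrt(sum(x * x for x in f))
    assert abs(ell(f)) <= norm_f + 1e-12
assert abs(ell([x / M for x in f0]) - 1.0) < 1e-12
```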
Proposition 5.4 Let \( {\mathcal{B}}_{1},{\mathcal{B}}_{2} \) be a pair of Banach spaces and \( \mathcal{S} \subset {\mathcal{B}}_{1} \) a dense linear subspace of \( {\mathcal{B}}_{1} \). Suppose \( {T}_{0} \) is a linear transformation from \( \mathcal{S} \) to \( {\mathcal{B}}_{2} \) that satisfies \( {\begin{Vmatrix}{T}_{0}\left( f\right) \end{Vmatrix}}_{{\mathcal{B}}_{2}} \leq M\parallel f{\parallel }_{{\mathcal{B}}_{1}} \) for all \( f \in \mathcal{S} \). Then \( {T}_{0} \) has a unique extension \( T \) to all of \( {\mathcal{B}}_{1} \) so that \( \parallel T\left( f\right) {\parallel }_{{\mathcal{B}}_{2}} \leq M\parallel f{\parallel }_{{\mathcal{B}}_{1}} \) for all \( f \in {\mathcal{B}}_{1} \).
Proof. If \( f \in {\mathcal{B}}_{1} \), let \( \left\{ {f}_{n}\right\} \) be a sequence in \( \mathcal{S} \) which converges to \( f \). Then since \( {\begin{Vmatrix}{T}_{0}\left( {f}_{n}\right) - {T}_{0}\left( {f}_{m}\right) \end{Vmatrix}}_{{\mathcal{B}}_{2}} \leq M{\begin{Vmatrix}{f}_{n} - {f}_{m}\end{Vmatrix}}_{{\mathcal{B}}_{1}} \) it follows that \( \left\{ {{T}_{0}\left( {f}_{n}\right) }\right\} \) is a Cauchy sequence in \( {\mathcal{B}}_{2} \), and hence converges to a limit, which we define to be \( T\left( f\right) \). Note that the definition of \( T\left( f\right) \) is independent of the chosen sequence \( \left\{ {f}_{n}\right\} \), and that the resulting transformation \( T \) has all the required properties.
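A concrete instance of this density argument (our choice, not the text's): take \( {\mathcal{B}}_{1} = {\ell }^{1} \), \( \mathcal{S} \) the finitely supported sequences, and \( {T}_{0}\left( f\right) = \mathop{\sum }\limits_{n}{f}_{n} \), so that \( \left| {{T}_{0}\left( f\right) }\right| \leq \parallel f{\parallel }_{1} \) with \( M = 1 \). The extension \( T\left( f\right) \) at \( f = \left( {{2}^{-1},{2}^{-2},\ldots }\right) \) is the limit of \( {T}_{0} \) on truncations:

```python
# T0 is defined only on finitely supported sequences (finite lists).
def T0(f):
    return sum(f)

# f approximates (2^-1, 2^-2, ...), an element of l^1 outside S.
f = [2.0 ** -(n + 1) for n in range(60)]

# The truncations f_N -> f in l^1, so T0(f_N) is Cauchy; its limit
# defines T(f), independently of the approximating sequence.
values = [T0(f[:N]) for N in (5, 10, 20, 40, 60)]
assert all(abs(values[k + 1] - values[k]) < 2.0 ** -4
           for k in range(len(values) - 1))
assert abs(values[-1] - 1.0) < 1e-12       # T(f) = sum_{n>=1} 2^{-n} = 1
```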
Theorem 5.5 The operator \( {T}^{ * } \) defined by (13) is a bounded linear transformation from \( {\mathcal{B}}_{2}^{ * } \) to \( {\mathcal{B}}_{1}^{ * } \) . Its norm \( \begin{Vmatrix}{T}^{ * }\end{Vmatrix} \) satisfies \( \parallel T\parallel = \begin{Vmatrix}{T}^{ * }\end{Vmatrix} \) .
Proof. First, if \( {\begin{Vmatrix}{f}_{1}\end{Vmatrix}}_{{\mathcal{B}}_{1}} \leq 1 \), we have that\n\n\[ \left| {{\ell }_{1}\left( {f}_{1}\right) }\right| = \left| {{\ell }_{2}\left( {T\left( {f}_{1}\right) }\right) }\right| \leq \begin{Vmatrix}{\ell }_{2}\end{Vmatrix}{\begin{Vmatrix}T\left( {f}_{1}\right) \end{Vmatrix}}_{{\mathcal{B}}_{2}} \leq \begin{Vmatrix}{\ell }_{2}\end{Vmatrix}\parallel T\parallel .\n\]\n\nThus taking the supremum over all \( {f}_{1} \in {\mathcal{B}}_{1} \) with \( {\begin{Vmatrix}{f}_{1}\end{Vmatrix}}_{{\mathcal{B}}_{1}} \leq 1 \), we see that the mapping \( {\ell }_{2} \mapsto {T}^{ * }\left( {\ell }_{2}\right) = {\ell }_{1} \) has norm \( \leq \parallel T\parallel \) .\n\nTo prove the reverse inequality we can find for any \( \epsilon > 0 \) an \( {f}_{1} \in {\mathcal{B}}_{1} \) with \( \parallel {f}_{1}{\parallel }_{{\mathcal{B}}_{1}} = 1 \) and \( \parallel T\left( {f}_{1}\right) {\parallel }_{{\mathcal{B}}_{2}} \geq \parallel T\parallel - \epsilon \) . Next, with \( {f}_{2} = T\left( {f}_{1}\right) \in {\mathcal{B}}_{2} \) , by Proposition 5.3 (with \( \mathcal{B} = {\mathcal{B}}_{2} \) ) there is an \( {\ell }_{2} \) in \( {\mathcal{B}}_{2}^{ * } \) so that \( {\begin{Vmatrix}{\ell }_{2}\end{Vmatrix}}_{{\mathcal{B}}_{2}^{ * }} = 1 \) but \( {\ell }_{2}\left( {f}_{2}\right) \geq \parallel T\parallel - \epsilon \) . Thus by (13) one has \( {T}^{ * }\left( {\ell }_{2}\right) \left( {f}_{1}\right) \geq \parallel T\parallel - \epsilon \), and since \( {\begin{Vmatrix}{f}_{1}\end{Vmatrix}}_{{\mathcal{B}}_{1}} = 1 \), we conclude \( {\begin{Vmatrix}{T}^{ * }\left( {\ell }_{2}\right) \end{Vmatrix}}_{{\mathcal{B}}_{1}^{ * }} \geq \parallel T\parallel - \epsilon \) . This gives \( \begin{Vmatrix}{T}^{ * }\end{Vmatrix} \geq \) \( \parallel T\parallel - \epsilon \) for any \( \epsilon > 0 \), which proves the theorem.
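In finite dimensions the identity \( \parallel T\parallel = \begin{Vmatrix}{T}^{ * }\end{Vmatrix} \) can be checked directly (an illustration of ours; the theorem covers general Banach spaces). For \( T \) given by a matrix \( A \) on \( \left( {{\mathbb{R}}^{2},\parallel \cdot {\parallel }_{2}}\right) \), the adjoint is the transpose, and both have the same spectral norm:

```python
import math

def op_norm_2x2(A):
    """Spectral norm of a 2x2 matrix: sqrt of the largest
    eigenvalue of A^T A, computed from the closed 2x2 formula."""
    (a11, a12), (a21, a22) = A
    # S = A^T A is symmetric: [[a, b], [b, c]]
    a = a11 * a11 + a21 * a21
    b = a11 * a12 + a21 * a22
    c = a12 * a12 + a22 * a22
    lam_max = ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    return math.sqrt(lam_max)

A  = ((1.0, 2.0), (3.0, 4.0))
At = ((1.0, 3.0), (2.0, 4.0))          # the adjoint in this setting
assert abs(op_norm_2x2(A) - op_norm_2x2(At)) < 1e-12
```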
Theorem 5.6 There is an extended-valued non-negative function \( \widehat{m} \), defined on all subsets of \( \mathbb{R} \) with the following properties:\n\n(i) \( \widehat{m}\left( {{E}_{1} \cup {E}_{2}}\right) = \widehat{m}\left( {E}_{1}\right) + \widehat{m}\left( {E}_{2}\right) \) whenever \( {E}_{1} \) and \( {E}_{2} \) are disjoint subsets of \( \mathbb{R} \) .\n\n(ii) \( \widehat{m}\left( E\right) = m\left( E\right) \) if \( E \) is a measurable set and \( m \) denotes the Lebesgue measure.\n\n(iii) \( \widehat{m}\left( {E + h}\right) = \widehat{m}\left( E\right) \) for every set \( E \) and real number \( h \) .
From (i) we see that \( \widehat{m} \) is finitely additive; however it cannot be countably additive as the proof of the existence of non-measurable sets shows. (See Section 3, Chapter 1 in Book III.)
Corollary 5.8 There is a non-negative function \( \widehat{m} \) defined on all subsets of \( \mathbb{R}/\mathbb{Z} \) so that:\n\n(i) \( \widehat{m}\left( {{E}_{1} \cup {E}_{2}}\right) = \widehat{m}\left( {E}_{1}\right) + \widehat{m}\left( {E}_{2}\right) \) for all disjoint subsets \( {E}_{1} \) and \( {E}_{2} \) .\n\n(ii) \( \widehat{m}\left( E\right) = m\left( E\right) \) if \( E \) is measurable.\n\n(iii) \( \widehat{m}\left( {E + h}\right) = \widehat{m}\left( E\right) \) for every \( h \) in \( \mathbb{R} \) .
We need only take \( \widehat{m}\left( E\right) = I\left( {\chi }_{E}\right) \), with \( I \) as in Theorem 5.7, where \( {\chi }_{E} \) denotes the characteristic function of \( E \) .
Theorem 7.3 Let \( X \) be a compact metric space and \( C\left( X\right) \) the Banach space of continuous real-valued functions on \( X \) . Then, given any bounded linear functional \( \ell \) on \( C\left( X\right) \), there exists a unique finite signed Borel measure \( \mu \) on \( X \) so that\n\n\[ \ell \left( f\right) = {\int }_{X}f\left( x\right) {d\mu }\left( x\right) \;\text{ for all }f \in C\left( X\right) . \]\n\nMoreover, \( \parallel \ell \parallel = \parallel \mu \parallel = \left| \mu \right| \left( X\right) \) . In other words \( C{\left( X\right) }^{ * } \) is isometric to \( M\left( X\right) \) .
Proof. By the proposition, there exist two positive linear functionals \( {\ell }^{ + } \) and \( {\ell }^{ - } \) so that \( \ell = {\ell }^{ + } - {\ell }^{ - } \) . Applying Theorem 7.1 to each of these positive linear functionals yields two finite Borel measures \( {\mu }_{1} \) and \( {\mu }_{2} \) . If we define \( \mu = {\mu }_{1} - {\mu }_{2} \), then \( \mu \) is a finite signed Borel measure and \( \ell \left( f\right) = \int {fd\mu } \) .\n\nNow we have\n\n\[ \left| {\ell \left( f\right) }\right| \leq \int \left| f\right| d\left| \mu \right| \leq \parallel f\parallel \left| \mu \right| \left( X\right) \]\n\nand thus \( \parallel \ell \parallel \leq \left| \mu \right| \left( X\right) \) . Since we also have \( \left| \mu \right| \left( X\right) \leq {\mu }_{1}\left( X\right) + {\mu }_{2}\left( X\right) = {\ell }^{ + }\left( 1\right) + {\ell }^{ - }\left( 1\right) = \parallel \ell \parallel \), we conclude that \( \parallel \ell \parallel = \left| \mu \right| \left( X\right) \) as desired.\n\nTo prove uniqueness, suppose \( \int {fd\mu } = \int {fd}{\mu }^{\prime } \) for some finite signed Borel measures \( \mu \) and \( {\mu }^{\prime } \), and all \( f \in C\left( X\right) \) . Then if \( \nu = \mu - {\mu }^{\prime } \), one has \( \int {fd\nu } = 0 \), and consequently, if \( {\nu }^{ + } \) and \( {\nu }^{ - } \) are the positive and negative variations of \( \nu \), one finds that the two positive linear functionals defined on \( C\left( X\right) \) by \( {\ell }^{ + }\left( f\right) = \int {fd}{\nu }^{ + } \) and \( {\ell }^{ - }\left( f\right) = \int {fd}{\nu }^{ - } \) are identical. By the uniqueness in Theorem 7.1, we conclude that \( {\nu }^{ + } = {\nu }^{ - } \), hence \( \nu = 0 \) and \( \mu = {\mu }^{\prime } \), as desired.
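On a finite space \( X = \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \), the theorem degenerates to an identity one can verify by hand (a degenerate illustration of ours): \( C\left( X\right) = {\mathbb{R}}^{n} \), a finite signed Borel measure is just a vector \( \left( {{\mu }_{1},\ldots ,{\mu }_{n}}\right) \), the pairing is \( \ell \left( f\right) = \sum {f}_{i}{\mu }_{i} \), and \( \parallel \ell \parallel = \left| \mu \right| \left( X\right) = \sum \left| {\mu }_{i}\right| \):

```python
# A three-point space X, with a signed measure assigning mass to each point.
mu = [2.0, -1.0, 0.5]

def ell(f):
    """The linear functional f |-> integral of f against mu."""
    return sum(fi * mi for fi, mi in zip(f, mu))

total_variation = sum(abs(m) for m in mu)      # |mu|(X) = 3.5

# The dual norm is attained at f_i = sign(mu_i), which has sup-norm 1.
f_star = [1.0 if m >= 0 else -1.0 for m in mu]
assert abs(ell(f_star) - total_variation) < 1e-12

# And |ell(f)| <= ||f||_sup * |mu|(X) on arbitrary test functions:
for f in ([1.0, 1.0, 1.0], [0.5, -2.0, 1.0]):
    assert abs(ell(f)) <= max(abs(x) for x in f) * total_variation + 1e-12
```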
Theorem 7.4 Suppose \( X \) is a metric space and \( \ell \) a positive linear functional on \( {C}_{b}\left( X\right) \) . For simplicity assume that \( \ell \) is normalized so that \( \ell \left( 1\right) = 1 \) . Assume also that for each \( \epsilon > 0 \), there is a compact set \( {K}_{\epsilon } \subset X \) so that\n\n(21)\n\n\[ \left| {\ell \left( f\right) }\right| \leq \mathop{\sup }\limits_{{x \in {K}_{\epsilon }}}\left| {f\left( x\right) }\right| + \epsilon \parallel f\parallel ,\;\text{ for all }f \in {C}_{b}\left( X\right) .\n\]\n\nThen there exists a unique finite (positive) Borel measure \( \mu \) so that\n\n\[ \ell \left( f\right) = {\int }_{X}f\left( x\right) {d\mu }\left( x\right) ,\;\text{ for all }f \in {C}_{b}\left( X\right) . \]
The proof of this theorem proceeds as that of Theorem 7.1, save for one key aspect. First we define\n\n\[ \rho \left( \mathcal{O}\right) = \sup \left\{ {\ell \left( f\right) ,\text{ where }f \in {C}_{b}\left( X\right) ,\operatorname{supp}\left( f\right) \subset \mathcal{O}\text{, and }0 \leq f \leq 1}\right\} .\n\]\n\nThe change that is required is in the proof of the countable sub-additivity of \( \rho \), in that the supports of the functions \( f \) (in the definition of \( \rho \left( \mathcal{O}\right) \)) are now not necessarily compact. In fact, suppose \( \mathcal{O} = \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{\mathcal{O}}_{k} \) is a countable union of open sets. Let \( C \) be the support of \( f \), and given a fixed \( \epsilon > 0 \), set \( K = C \cap {K}_{\epsilon } \), with \( {K}_{\epsilon } \) the compact set arising in (21). Then \( K \) is compact and \( \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{\mathcal{O}}_{k} \) covers \( K \) . Proceeding as before, we obtain a partition of unity \( {\left\{ {\eta }_{k}\right\} }_{k = 1}^{N} \), with \( {\eta }_{k} \) supported in \( {\mathcal{O}}_{k} \) and \( \mathop{\sum }\limits_{{k = 1}}^{N}{\eta }_{k}\left( x\right) = 1 \), for \( x \in K \) . Now \( f - \mathop{\sum }\limits_{{k = 1}}^{N}f{\eta }_{k} \) vanishes on \( {K}_{\epsilon } \) . Thus by (21)\n\n\[ \left| {\ell \left( f\right) - \mathop{\sum }\limits_{{k = 1}}^{N}\ell \left( {f{\eta }_{k}}\right) }\right| \leq \epsilon \]\n\nand hence\n\n\[ \ell \left( f\right) \leq \mathop{\sum }\limits_{{k = 1}}^{\infty }\rho \left( {\mathcal{O}}_{k}\right) + \epsilon .\n\]\n\nSince this holds for each \( \epsilon \), we obtain the required sub-additivity of \( \rho \) and thus of \( {\mu }_{ * } \) . The proof of the theorem can then be concluded as before.
Lemma 2.2 (Three-lines lemma) Suppose \( \Phi \left( z\right) \) is a holomorphic function in the strip \( S = \{ z \in \mathbb{C} : 0 < \operatorname{Re}\left( z\right) < 1\} \), that is also continuous and bounded on the closure of \( S \) . If\n\n\[ \n{M}_{0} = \mathop{\sup }\limits_{{y \in \mathbb{R}}}\left| {\Phi \left( {iy}\right) }\right| \;\text{ and }\;{M}_{1} = \mathop{\sup }\limits_{{y \in \mathbb{R}}}\left| {\Phi \left( {1 + {iy}}\right) }\right| \n\]\n\nthen\n\n\[ \n\mathop{\sup }\limits_{{y \in \mathbb{R}}}\left| {\Phi \left( {t + {iy}}\right) }\right| \leq {M}_{0}^{1 - t}{M}_{1}^{t},\;\text{ for all }0 \leq t \leq 1. \n\]
Proof. We begin by proving the lemma under the assumption that \( {M}_{0} = {M}_{1} = 1 \) and \( \mathop{\sup }\limits_{{0 \leq x \leq 1}}\left| {\Phi \left( {x + {iy}}\right) }\right| \rightarrow 0 \) as \( \left| y\right| \rightarrow \infty \) . In this case, let \( M = \sup \left| {\Phi \left( z\right) }\right| \) where the sup is taken over all \( z \) in the closure of the strip \( S \) . We may clearly assume that \( M > 0 \), and let \( {z}_{1},{z}_{2},\ldots \) be a sequence of points in the strip with \( \left| {\Phi \left( {z}_{n}\right) }\right| \rightarrow M \) as \( n \rightarrow \infty \) . By the decay condition imposed on \( \Phi \), the points \( {z}_{n} \) cannot go to infinity, hence there exists \( {z}_{0} \) in the closure of the strip, so that a subsequence of \( \left\{ {z}_{n}\right\} \) converges to \( {z}_{0} \) . By the maximum modulus principle, \( {z}_{0} \) cannot be in the interior of the strip (unless \( \Phi \) is constant, in which case the conclusion is trivial); hence \( {z}_{0} \) must be on its boundary, where \( \left| \Phi \right| \leq 1 \) . Thus \( M \leq 1 \) , and the result is proved in this special case.\n\nIf we only assume now that \( {M}_{0} = {M}_{1} = 1 \), we define\n\n\[ \n{\Phi }_{\epsilon }\left( z\right) = \Phi \left( z\right) {e}^{\epsilon \left( {{z}^{2} - 1}\right) },\;\text{ for each }\epsilon > 0. \n\]\n\nSince \( {e}^{\epsilon \left\lbrack {{\left( x + iy\right) }^{2} - 1}\right\rbrack } = {e}^{\epsilon \left( {{x}^{2} - 1 - {y}^{2} + {2ixy}}\right) } \), we find that \( \left| {{\Phi }_{\epsilon }\left( z\right) }\right| \leq 1 \) on the lines \( \operatorname{Re}\left( z\right) = 0 \) and \( \operatorname{Re}\left( z\right) = 1 \) . Moreover,\n\n\[ \n\mathop{\sup }\limits_{{0 \leq x \leq 1}}\left| {{\Phi }_{\epsilon }\left( {x + {iy}}\right) }\right| \rightarrow 0\;\text{ as }\left| y\right| \rightarrow \infty , \n\]\n\nsince \( \Phi \) is bounded.
Therefore, by the first case, we know that \( \left| {{\Phi }_{\epsilon }\left( z\right) }\right| \leq 1 \) in the closure of the strip. Letting \( \epsilon \rightarrow 0 \), we see that \( \left| \Phi \right| \leq 1 \) as desired.\n\nFinally, for arbitrary positive values of \( {M}_{0} \) and \( {M}_{1} \), we let \( \widetilde{\Phi }\left( z\right) = \) \( {M}_{0}^{z - 1}{M}_{1}^{-z}\Phi \left( z\right) \), and note that \( \widetilde{\Phi } \) satisfies the condition of the previous case, that is, \( \widetilde{\Phi } \) is bounded by 1 on the lines \( \operatorname{Re}\left( z\right) = 0 \) and \( \operatorname{Re}\left( z\right) = 1 \) .
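The bound of the lemma is sharp: for \( \Phi \left( z\right) = {c}^{z} \) with \( c > 0 \) (a choice of ours, made for illustration), one has \( \left| {\Phi \left( {t + {iy}}\right) }\right| = {c}^{t} = {M}_{0}^{1 - t}{M}_{1}^{t} \) exactly, with \( {M}_{0} = 1 \) and \( {M}_{1} = c \). A numerical check:

```python
import cmath

c = 3.0
M0, M1 = 1.0, c

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    bound = M0 ** (1 - t) * M1 ** t
    for y in (-5.0, 0.0, 2.0):
        # |c^(t+iy)| = |exp((t+iy) log c)| = c^t, independent of y
        val = abs(cmath.exp(complex(t, y) * cmath.log(c)))
        assert val <= bound + 1e-9         # the three-lines bound
        assert abs(val - bound) < 1e-9     # equality for this Phi
```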
Corollary 2.3 With \( T \) as before:\n\n(a) The Riesz diagram of \( T \) is a convex set.\n\n(b) \( \log {M}_{x, y} \) is a convex function on this set.
Conclusion (a) means that if \( \left( {{x}_{0},{y}_{0}}\right) = \left( {1/{p}_{0},1/{q}_{0}}\right) \) and \( \left( {{x}_{1},{y}_{1}}\right) = \) \( \left( {1/{p}_{1},1/{q}_{1}}\right) \) are points in the Riesz diagram of \( T \), then so is the line segment joining them. This is an immediate consequence of Theorem 2.1. Similarly, the convexity of the function \( \log {M}_{x, y} \) means its convexity on each such line segment, and this follows from the conclusion \( M \leq {M}_{0}^{1 - t}{M}_{1}^{t} \) also guaranteed by Theorem 2.1.