| Q | A | Result |
|---|---|---|
Lemma 10.8.1 Let \( X \) be the point graph of a generalized quadrangle of order \( \left( {s, t}\right) \). Then \( X \) is strongly regular with parameters

\[ \left( {\left( {s + 1}\right) \left( {{st} + 1}\right), s\left( {t + 1}\right), s - 1, t + 1}\right) \text{.} \]
|
Proof. Each point \( P \) of the generalized quadrangle lies on \( t + 1 \) lines of size \( s + 1 \), any two of which have exactly \( P \) in common. Hence \( X \) has valency \( s\left( {t + 1}\right) \). The graph induced by the points collinear with \( P \) consists of \( t + 1 \) vertex-disjoint cliques of size \( s \), whence \( a = s - 1 \). Let \( Q \) be a point not collinear with \( P \). Then \( Q \) is collinear with exactly one point on each of the lines through \( P \). This shows that \( c = t + 1 \).

Finally, we determine the number of vertices in the graph. Let \( \ell \) be a line of the quadrangle. Each point not on \( \ell \) is collinear with a unique point on \( \ell \); consequently, there are \( {st} \) points collinear with a given point of \( \ell \) and not on \( \ell \). This gives us exactly \( {st}\left( {s + 1}\right) \) points not on \( \ell \), and \( \left( {s + 1}\right) \left( {{st} + 1}\right) \) points in total.
|
Yes
|
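As a quick sanity check (an illustration, not part of the text), the parameters of Lemma 10.8.1 can be tested against the standard feasibility identity \( k(k - a - 1) = (n - k - 1)c \) that every strongly regular graph must satisfy:

```python
# Sanity check of Lemma 10.8.1: the parameters
# ((s+1)(st+1), s(t+1), s-1, t+1) satisfy the standard
# strongly-regular-graph identity k(k - a - 1) = (n - k - 1)c,
# obtained by counting edges between the neighbours and
# non-neighbours of a fixed vertex in two ways.
def gq_srg_parameters(s, t):
    n = (s + 1) * (s * t + 1)
    k = s * (t + 1)
    a = s - 1
    c = t + 1
    return n, k, a, c

for s in range(2, 10):
    for t in range(2, 10):
        n, k, a, c = gq_srg_parameters(s, t)
        assert k * (k - a - 1) == (n - k - 1) * c, (s, t)

print("feasibility identity holds for all tested (s, t)")
```

For instance, \( (s, t) = (2, 4) \) gives the parameters \( (27, 10, 1, 5) \) appearing in Lemma 10.9.4.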
Lemma 10.8.2 The eigenvalues of the point graph of a generalized quadrangle of order \( \left( {s, t}\right) \) are \( s\left( {t + 1}\right), s - 1 \), and \( - t - 1 \), with respective multiplicities

\[ 1,\;\frac{{st}\left( {s + 1}\right) \left( {t + 1}\right) }{s + t},\;\frac{{s}^{2}\left( {{st} + 1}\right) }{s + t}. \]
|
Proof. Let \( X \) be the point graph of a generalized quadrangle of order \( \left( {s, t}\right) \). From Section 10.2, the eigenvalues of \( X \) are its valency \( s\left( {t + 1}\right) \) and the two zeros of the polynomial

\[{x}^{2} - \left( {a - c}\right) x - \left( {k - c}\right) = {x}^{2} - \left( {s - t - 2}\right) x - \left( {s - 1}\right) \left( {t + 1}\right) = \left( {x - s + 1}\right) \left( {x + t + 1}\right) .\]

Thus the nontrivial eigenvalues are \( s - 1 \) and \( - t - 1 \). Their multiplicities now follow from (10.2).

The fact that these expressions for the multiplicities must be integers provides a nontrivial constraint on the possible values of \( s \) and \( t \). A further constraint comes from the Krein inequalities.
|
Yes
|
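A small numerical check of Lemma 10.8.2 (assuming numpy) for the smallest case \( (s, t) = (2, 1) \): the point graph there is \( L(K_{3,3}) \), which can be modelled as the \( 3 \times 3 \) rook's graph, with cells of a grid adjacent when they share a row or column. The lemma predicts eigenvalues \( s(t+1) = 4 \), \( s - 1 = 1 \), and \( -t - 1 = -2 \), with multiplicities \( 1, 4, 4 \).

```python
import numpy as np

# Point graph of the generalized quadrangle of order (2, 1), modelled
# as the 3x3 rook's graph: vertices are grid cells, adjacent when they
# share a row or a column.
cells = [(i, j) for i in range(3) for j in range(3)]
n = len(cells)
A = np.zeros((n, n), dtype=int)
for u, (i1, j1) in enumerate(cells):
    for v, (i2, j2) in enumerate(cells):
        if u != v and (i1 == i2 or j1 == j2):
            A[u, v] = 1

eigs = np.round(np.linalg.eigvalsh(A)).astype(int)
spectrum = {int(e): int((eigs == e).sum()) for e in set(eigs.tolist())}
print(spectrum)   # expect eigenvalues 4, 1, -2
```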
Lemma 10.8.3 If \( \mathcal{G} \) is a generalized quadrangle of order \( \left( {s, t}\right) \) with \( s > 1 \) and \( t > 1 \), then \( s \leq {t}^{2} \) and \( t \leq {s}^{2} \).
|
Proof. Let \( X \) be the point graph of \( \mathcal{G} \). Substituting \( k = s\left( {t + 1}\right) ,\theta = s - 1 \), and \( \tau = - t - 1 \) into the second Krein inequality

\[
{\theta }^{2}\tau - {2\theta }{\tau }^{2} - {\tau }^{2} - {k\tau } + k{\theta }^{2} + {2k\theta } \geq 0
\]

and factoring yields

\[
\left( {{s}^{2} - t}\right) \left( {t + 1}\right) \left( {s - 1}\right) \geq 0.
\]

Since \( s > 1 \), this implies that \( t \leq {s}^{2} \). Since we may apply the same argument to the point graph of the dual quadrangle, we also find that \( s \leq {t}^{2} \).
|
Yes
|
Lemma 10.8.4 If a generalized quadrangle of order \( \left( {2, t}\right) \) exists, then \( t \in \) \( \{ 1,2,4\} \) .
|
Proof. If \( s = 2 \), then \( - t - 1 \) is an eigenvalue of the point graph with multiplicity

\[ \frac{{8t} + 4}{t + 2} = 8 - \frac{12}{t + 2}. \]

Therefore, \( t + 2 \) divides 12, which yields that \( t \in \{ 1,2,4,{10}\} \). The case \( t = {10} \) is excluded by the Krein bound.
|
Yes
|
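The two constraints in Lemma 10.8.4 can be enumerated directly (a toy check, not part of the proof): integrality of the multiplicity forces \( t + 2 \mid 12 \), and the Krein bound of Lemma 10.8.3 forces \( t \leq s^2 = 4 \).

```python
# Lemma 10.8.4 by enumeration: for s = 2 the multiplicity
# (8t + 4)/(t + 2) = 8 - 12/(t + 2) is an integer iff t + 2 divides 12,
# and the Krein bound t <= s^2 then rules out t = 10.
candidates = [t for t in range(1, 100) if 12 % (t + 2) == 0]
print(candidates)                              # [1, 2, 4, 10]

feasible = [t for t in candidates if t <= 4]   # Krein: t <= s^2 = 4
print(feasible)                                # [1, 2, 4]
```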
Lemma 10.9.1 Let \( X \) be a strongly regular graph with parameters

\[ \left( {{6t} + 3,{2t} + 2,1, t + 1}\right) \text{.} \]

The spectrum of the second subconstituent of \( X \) is

\[ \left\{ {{\left( t + 1\right) }^{\left( 1\right) },{1}^{\left( x\right) },{\left( 1 - t\right) }^{\left( t + 1\right) },{\left( -t - 1\right) }^{\left( y\right) }}\right\} \]

where

\[ x = \frac{4\left( {{t}^{2} - 1}\right) }{t + 2},\;y = \frac{t\left( {4 - t}\right) }{t + 2}. \]
|
Proof. The first subconstituent of \( X \) has valency one, and hence consists of \( t + 1 \) vertex-disjoint edges. Its eigenvalues are 1 and -1, each with multiplicity \( t + 1 \), and so -1 is the unique local eigenvalue of the first subconstituent. Therefore, the nonlocal eigenvalues of the second subconstituent of \( X \) are \( t + 1 \) (its valency) and a subset of \( \{ 1, - 1 - t\} \). The only local eigenvalue of the second subconstituent is \( 1 - \left( {t + 1}\right) - \left( {-1}\right) = 1 - t \), with multiplicity \( t + 1 \). We also see that \( t + 1 \) is a simple eigenvalue, for if it had multiplicity greater than one, then it would be a local eigenvalue. Therefore, letting \( x \) denote the multiplicity of 1 and \( y \) the multiplicity of \( - 1 - t \), we get the spectrum as stated.

Then, since the second subconstituent has \( {4t} \) vertices and its eigenvalues sum to zero, we have

\[ 1 + \left( {t + 1}\right) + x + y = {4t} \]

\[ t + 1 + \left( {t + 1}\right) \left( {1 - t}\right) + x - y\left( {t + 1}\right) = 0. \]

Solving this pair of equations yields the stated expressions for the multiplicities.
|
Yes
|
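The multiplicities \( x \) and \( y \) of Lemma 10.9.1 must be nonnegative integers, and checking this directly (a small stdlib script, not from the text) reproduces exactly the values \( t \in \{1, 2, 4\} \) that survive Lemma 10.8.4:

```python
from fractions import Fraction

# Multiplicities from Lemma 10.9.1; both must be nonnegative integers.
def multiplicities(t):
    x = Fraction(4 * (t * t - 1), t + 2)
    y = Fraction(t * (4 - t), t + 2)
    return x, y

feasible = [t for t in range(1, 50)
            if all(m.denominator == 1 and m >= 0
                   for m in multiplicities(t))]
print(feasible)   # [1, 2, 4]
```

Note that \( y \geq 0 \) already forces \( t \leq 4 \), independently of the divisibility condition.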
Lemma 10.9.2 The graph \( L\left( {K}_{3,3}\right) \) is the unique strongly regular graph with parameters \( \left( {9,4,1,2}\right) \) .
|
Proof. Let \( X \) be a strongly regular graph with parameters \( \left( {9,4,1,2}\right) \) . Every second subconstituent \( {X}_{2} \) is a connected graph with valency two on four vertices, and so is \( {C}_{4} \) . Every edge of \( {C}_{4} \) lies in a unique one-factor, and so in a unique one-factor with a proper partition.
|
No
|
Lemma 10.9.3 The graph \( \overline{L\left( {K}_{6}\right) } \) is the unique strongly regular graph with parameters \( \left( {{15},6,1,3}\right) \) .
|
Proof. The second subconstituent \( {X}_{2} \) is a connected cubic graph on 8 vertices. By Lemma 10.9.1 we find that its spectrum is symmetric, and therefore \( {X}_{2} \) is bipartite. From this we can see that \( {X}_{2} \) cannot have diameter two, and therefore it has diameter at least three. By considering two vertices at distance three and their neighbours, it follows quickly that \( {X}_{2} \) is the cube.

Consider any edge \( e = {uv} \) of the cube. There is only one edge that does not contain \( u \) or \( v \) or any of their neighbours. This pair of edges can be completed uniquely to a one-factor, and so every edge lies in a unique one-factor with a proper partition.
|
Yes
|
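A numerical illustration (assuming numpy) of Lemma 10.9.1 at \( t = 2 \), where the second subconstituent is the 3-cube as identified in Lemma 10.9.3: the predicted spectrum is \( \{3^{(1)}, 1^{(3)}, (-1)^{(3)}, (-3)^{(1)}\} \), which is indeed symmetric, as the bipartiteness argument requires.

```python
import numpy as np
from itertools import product

# The 3-cube: vertices are binary triples, adjacent at Hamming
# distance one.
verts = list(product([0, 1], repeat=3))
A = np.array([[1 if sum(a != b for a, b in zip(u, v)) == 1 else 0
               for v in verts] for u in verts])

eigs = np.round(np.linalg.eigvalsh(A)).astype(int)
spectrum = {int(e): int((eigs == e).sum()) for e in set(eigs.tolist())}
print(spectrum)   # symmetric spectrum, as Lemma 10.9.1 predicts for t = 2
```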
Lemma 10.9.4 The complement of the Schläfli graph is the unique strongly regular graph with parameters \( \left( {{27},{10},1,5}\right) \) .
|
Proof. The second subconstituent \( {X}_{2} \) is a connected graph on 16 vertices with valency 5. Using Lemma 10.9.1 we find that \( {X}_{2} \) has exactly three eigenvalues, and so is strongly regular with parameters \( \left( {{16},5,0,2}\right) \). We showed in Section 10.6 that the Clebsch graph is the only strongly regular graph with these parameters, and so \( {X}_{2} \) is the Clebsch graph.

Let \( {uv} \) be an edge of the Clebsch graph. The non-neighbours of \( u \) form a copy \( P \) of the Petersen graph, and the neighbours of \( v \) form an independent set \( S \) of size four in \( P \). This leaves three edges, all in \( P \), that together with \( {uv} \) form a set of four vertex-disjoint edges. This set of four edges can be uniquely completed to a one-factor of the Clebsch graph by taking the four edges each containing a vertex of \( S \), but not lying in \( P \). Therefore, every edge lies in a unique one-factor with a proper partition.
|
Yes
|
Corollary 10.9.5 There is a unique generalized quadrangle of each order \( \left( {2,1}\right) ,\left( {2,2}\right) \), and \( \left( {2,4}\right) \) .
|
Proof. We have shown that the point graph of a generalized quadrangle of these orders is uniquely determined. Therefore, it will suffice to show that the generalized quadrangle can be recovered from its point graph. If \( X \) is a strongly regular graph with parameters \( \left( {{6t} + 3,{2t} + 2,1, t + 1}\right) \), then define an incidence structure whose points are the vertices of \( X \) and whose lines are the triangles of \( X \) . It is routine to confirm that the properties of the strongly regular graph imply that the incidence structure satisfies the axioms for a generalized quadrangle. Therefore, the previous three results imply that there is a unique generalized quadrangle of each order \( \left( {2,1}\right) \) , \( \left( {2,2}\right) \), and \( \left( {2,4}\right) \) .
|
Yes
|
Lemma 10.10.1 Let \( \mathcal{D} \) be a quasi-symmetric 2- \( \left( {v, k,\lambda }\right) \) design with intersection numbers \( {\ell }_{1} \) and \( {\ell }_{2} \) . Let \( X \) be the graph with the blocks of \( \mathcal{D} \) as its vertices, and with two blocks adjacent if and only if they have exactly \( {\ell }_{1} \) points in common. If \( X \) is connected, then it is strongly regular.
|
Proof. Suppose that \( \mathcal{D} \) has \( b \) blocks and that each point lies in \( r \) blocks. If \( N \) is the \( v \times b \) incidence matrix of \( \mathcal{D} \), then from the results in Section 5.10 we have

\[ N{N}^{T} = \left( {r - \lambda }\right) I + {\lambda J} \]

and

\[ {NJ} = {rJ},\;{N}^{T}J = {kJ}. \]

Let \( A \) be the adjacency matrix of \( X \). Since \( \mathcal{D} \) is quasi-symmetric, we have

\[ {N}^{T}N = {kI} + {\ell }_{1}A + {\ell }_{2}\left( {J - I - A}\right) = \left( {k - {\ell }_{2}}\right) I + \left( {{\ell }_{1} - {\ell }_{2}}\right) A + {\ell }_{2}J. \]

Since \( {N}^{T}N \) commutes with \( J \), it follows that \( A \) commutes with \( J \), and therefore \( X \) is a regular graph.

We now determine the eigenvalues of \( {N}^{T}N \). The vector \( \mathbf{1} \) is an eigenvector of \( {N}^{T}N \) with eigenvalue \( {rk} \), and so

\[ {rk}\mathbf{1} = \left( {k - {\ell }_{2} + b{\ell }_{2}}\right) \mathbf{1} + \left( {{\ell }_{1} - {\ell }_{2}}\right) A\mathbf{1}, \]

from which we see that \( \mathbf{1} \) is an eigenvector for \( A \), and hence the valency of \( X \) is

\[ \frac{{rk} - k + {\ell }_{2} - b{\ell }_{2}}{{\ell }_{1} - {\ell }_{2}}. \]

Because \( {N}^{T}N \) is symmetric, we can assume that the remaining eigenvectors are orthogonal to \( \mathbf{1} \). Suppose that \( x \) is such an eigenvector with eigenvalue \( \theta \). Then

\[ {\theta x} = \left( {k - {\ell }_{2}}\right) x + \left( {{\ell }_{1} - {\ell }_{2}}\right) {Ax}, \]

and so \( x \) is also an eigenvector for \( A \) with eigenvalue

\[ \frac{\theta - k + {\ell }_{2}}{{\ell }_{1} - {\ell }_{2}}. \]

By Lemma 8.2.4, the matrices \( N{N}^{T} \) and \( {N}^{T}N \) have the same nonzero eigenvalues with the same multiplicities. Since \( N{N}^{T} = \left( {r - \lambda }\right) I + {\lambda J} \), we see that it has eigenvalues \( {rk} \) with multiplicity one, and \( r - \lambda \) with multiplicity \( v - 1 \). Therefore, \( {N}^{T}N \) has eigenvalues \( {rk}, r - \lambda \), and 0 with respective multiplicities \( 1, v - 1 \), and \( b - v \).

Hence the remaining eigenvalues of \( A \) are

\[ \frac{r - \lambda - k + {\ell }_{2}}{{\ell }_{1} - {\ell }_{2}},\;\frac{{\ell }_{2} - k}{{\ell }_{1} - {\ell }_{2}}, \]

with respective multiplicities \( v - 1 \) and \( b - v \).

We have shown that \( X \) is a regular graph with at most three eigenvalues. If \( X \) is connected, then it has exactly three eigenvalues, and so it is strongly regular by Lemma 10.2.1.
|
Yes
|
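A concrete toy instance of Lemma 10.10.1 (my choice of example, not from the text): the 2-\( (6,2,1) \) design whose blocks are all 2-subsets of a 6-set is quasi-symmetric with intersection numbers \( \ell_1 = 1 \) and \( \ell_2 = 0 \), and its block graph is the triangular graph \( T(6) \). We can verify strong regularity by counting common neighbours directly.

```python
from itertools import combinations

# Blocks of the 2-(6,2,1) design: all pairs from a 6-set.
blocks = [frozenset(b) for b in combinations(range(6), 2)]
m = len(blocks)

# Block graph: blocks adjacent iff they meet in l1 = 1 point.
adj = {(i, j) for i in range(m) for j in range(m)
       if i != j and len(blocks[i] & blocks[j]) == 1}

def common(i, j):
    return sum(1 for k in range(m) if (i, k) in adj and (j, k) in adj)

k_vals = {sum(1 for j in range(m) if (i, j) in adj) for i in range(m)}
a_vals = {common(i, j) for (i, j) in adj}
c_vals = {common(i, j) for i in range(m) for j in range(m)
          if i != j and (i, j) not in adj}
print(m, k_vals, a_vals, c_vals)   # T(6) is srg(15, 8, 4, 4)
```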
Corollary 10.12.3 Let \( X \) be a graph with binary rank \( {2r} \) . Then \( \chi \left( X\right) \leq \) \( {2}^{r} + 1 \) .
|
Proof. Duplicating vertices or adding isolated vertices does not alter the chromatic number of a graph. Therefore, we can assume without loss of generality that \( X \) is a reduced graph. Thus it is an induced subgraph of \( \operatorname{Sp}\left( {2r}\right) \) and can be coloured with at most \( {2}^{r} + 1 \) colours.
|
Yes
|
Theorem 11.2.1 (The Absolute Bound) Let \( {X}_{1},\ldots ,{X}_{n} \) be the projections onto a set of \( n \) equiangular lines in \( {\mathbb{R}}^{d} \). Then these matrices form a linearly independent set in the space of symmetric matrices, and consequently \( n \leq \left( \begin{matrix} d + 1 \\ 2 \end{matrix}\right) \).
|
Proof. Let \( \alpha \) be the cosine of the angle between the lines. If \( Y = \mathop{\sum }\limits_{i}{c}_{i}{X}_{i} \), then

\[ \operatorname{tr}\left( {Y}^{2}\right) = \mathop{\sum }\limits_{{i, j}}{c}_{i}{c}_{j}\operatorname{tr}\left( {{X}_{i}{X}_{j}}\right) \]

\[ = \mathop{\sum }\limits_{i}{c}_{i}^{2} + \mathop{\sum }\limits_{{i, j : i \neq j}}{c}_{i}{c}_{j}{\alpha }^{2} \]

\[ = {\alpha }^{2}{\left( \mathop{\sum }\limits_{i}{c}_{i}\right) }^{2} + \left( {1 - {\alpha }^{2}}\right) \mathop{\sum }\limits_{i}{c}_{i}^{2}. \]

It follows that \( \operatorname{tr}\left( {Y}^{2}\right) = 0 \) if and only if \( {c}_{i} = 0 \) for all \( i \), so the \( {X}_{i} \) are linearly independent. The space of symmetric \( d \times d \) matrices has dimension \( \left( \begin{matrix} d + 1 \\ 2 \end{matrix}\right) \), so the result follows.
|
Yes
|
Lemma 11.3.1 Suppose that \( {X}_{1},\ldots ,{X}_{n} \) are the projections onto a set of equiangular lines in \( {\mathbb{R}}^{d} \) and that the cosine of the angle between the lines is \( \alpha \). If \( I = \mathop{\sum }\limits_{i}{c}_{i}{X}_{i} \), then \( {c}_{i} = d/n \) for all \( i \) and

\[ n = \frac{d - d{\alpha }^{2}}{1 - d{\alpha }^{2}}. \]
|
Proof. For any \( j \) we have

\[ {X}_{j} = \mathop{\sum }\limits_{i}{c}_{i}{X}_{i}{X}_{j}, \]

and so by taking the trace we get

\[ 1 = \operatorname{tr}\left( {X}_{j}\right) = \mathop{\sum }\limits_{i}{c}_{i}\operatorname{tr}\left( {{X}_{i}{X}_{j}}\right) = \left( {1 - {\alpha }^{2}}\right) {c}_{j} + {\alpha }^{2}\mathop{\sum }\limits_{i}{c}_{i}. \]

(11.1)

The first consequence of this is that all the \( {c}_{i} \)'s are equal. Since \( d = \operatorname{tr}I = \mathop{\sum }\limits_{i}{c}_{i} \), we see that \( {c}_{i} = d/n \) for all \( i \). Substituting this back into (11.1) gives the stated expression for \( n \).

Now, let \( {x}_{1},\ldots ,{x}_{n} \) be a set of unit vectors representing the equiangular lines, so \( {X}_{i} = {x}_{i}{x}_{i}^{T} \). Let \( U \) be the \( d \times n \) matrix with \( {x}_{1},\ldots ,{x}_{n} \) as its columns. Then

\[ U{U}^{T} = \mathop{\sum }\limits_{i}{x}_{i}{x}_{i}^{T} = \mathop{\sum }\limits_{i}{X}_{i} = \frac{n}{d}I \]

and

\[ {U}^{T}U = I + {\alpha S}. \]

By Lemma 8.2.4, \( U{U}^{T} \) and \( {U}^{T}U \) have the same nonzero eigenvalues with the same multiplicities. We deduce that \( I + {\alpha S} \) has eigenvalues 0 with multiplicity \( n - d \) and \( n/d \) with multiplicity \( d \). Therefore, the eigenvalues of \( S \) are \( - 1/\alpha \) with multiplicity \( n - d \) and \( \left( {n - d}\right) /{d\alpha } \) with multiplicity \( d \). Since the entries of \( S \) are integers, the eigenvalues of \( S \) are algebraic integers. Therefore, either they are integers or they are algebraically conjugate, and so have the same multiplicity. If \( n \neq {2d} \), the multiplicities are different, so \( 1/\alpha \) is an integer.
|
Yes
|
Lemma 11.4.1 Suppose that there are \( n \) equiangular lines in \( {\mathbb{R}}^{d} \) and that \( \alpha \) is the cosine of the angle between them. If \( {\alpha }^{-2} > d \), then

\[ n \leq \frac{d - d{\alpha }^{2}}{1 - d{\alpha }^{2}}. \]

If \( {X}_{1},\ldots ,{X}_{n} \) are the projections onto these lines, then equality holds if and only if \( \mathop{\sum }\limits_{i}{X}_{i} = \left( {n/d}\right) I \).
|
Proof. Put

\[ Y \mathrel{\text{:=}} I - \frac{d}{n}\mathop{\sum }\limits_{i}{X}_{i}. \]

Because \( Y \) is symmetric, we have \( \operatorname{tr}\left( {Y}^{2}\right) \geq 0 \), with equality if and only if \( Y = 0 \). Now,

\[ {Y}^{2} = I - \frac{2d}{n}\mathop{\sum }\limits_{i}{X}_{i} + \frac{{d}^{2}}{{n}^{2}}{\left( \mathop{\sum }\limits_{i}{X}_{i}\right) }^{2}, \]

so

\[ \operatorname{tr}\left( {Y}^{2}\right) = d - {2d} + \frac{{d}^{2}}{{n}^{2}}\left( {n + {\alpha }^{2}n\left( {n - 1}\right) }\right) \geq 0. \]

This reduces to

\[ d - d{\alpha }^{2} \geq n\left( {1 - d{\alpha }^{2}}\right), \]

which, provided that \( 1 - d{\alpha }^{2} \) is positive, yields the result. Equality holds if and only if \( \operatorname{tr}\left( {Y}^{2}\right) = 0 \), in which case \( Y = 0 \) and \( \mathop{\sum }\limits_{i}{X}_{i} = \left( {n/d}\right) I \).
|
Yes
|
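The equality case of Lemma 11.4.1 can be checked numerically (assuming numpy and the standard coordinates for the six diagonals of the icosahedron, which is my choice of example): with \( d = 3 \) and \( \alpha = 1/\sqrt{5} \) the bound is \( (3 - 3/5)/(1 - 3/5) = 6 \), attained by these six lines, and then \( \sum_i X_i = (n/d)I = 2I \).

```python
import numpy as np

# The six diagonals of the icosahedron: cyclic coordinate patterns of
# (0, 1, phi), one representative per antipodal pair of vertices.
phi = (1 + 5 ** 0.5) / 2
reps = [(0, 1, phi), (0, 1, -phi), (1, phi, 0),
        (1, -phi, 0), (phi, 0, 1), (-phi, 0, 1)]
xs = [np.array(v) / np.linalg.norm(v) for v in reps]

# All pairwise angle cosines should equal 1/sqrt(5).
cosines = {round(abs(float(x @ y)), 10) for x in xs for y in xs
           if x is not y}
print(cosines)

# Equality in the relative bound: the projections sum to (n/d) I = 2I.
P = sum(np.outer(x, x) for x in xs)
print(np.allclose(P, 2 * np.eye(3)))
```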
Lemma 11.5.1 If \( X \) is a graph and \( \sigma \) is a subset of \( V\left( X\right) \), then \( S\left( X\right) \) and \( S\left( {X}^{\sigma }\right) \) have the same eigenvalues.
|
Proof. Let \( D \) be the diagonal matrix with \( {D}_{uu} = - 1 \) if \( u \in \sigma \) and 1 otherwise. Then \( {D}^{2} = I \), so \( D \) is its own inverse. Then

\[ S\left( {X}^{\sigma }\right) = {DS}\left( X\right) D, \]

so \( S\left( X\right) \) and \( S\left( {X}^{\sigma }\right) \) are similar and have the same eigenvalues.
|
Yes
|
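Lemma 11.5.1 is easy to check directly on a small example (assuming numpy; the choice of \( C_5 \) and switching set is arbitrary): switching conjugates the Seidel matrix \( S(X) = J - I - 2A \) by a diagonal \( \pm 1 \) matrix, which preserves eigenvalues.

```python
import numpy as np

# Seidel matrix of the 5-cycle.
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
S = np.ones((n, n), dtype=int) - np.eye(n, dtype=int) - 2 * A

# Switch on an arbitrary subset sigma: S(X^sigma) = D S(X) D.
sigma = {0, 2}
D = np.diag([-1 if i in sigma else 1 for i in range(n)])
S_switched = D @ S @ D

same = np.allclose(np.linalg.eigvalsh(S), np.linalg.eigvalsh(S_switched))
print(same)
```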
Corollary 11.6.2 A nontrivial regular two-graph has an even number of vertices.
|
Proof. From the above proof, it follows that \( n = - \left( {{4\theta \tau } + 2\left( {\theta + \tau }\right) + 1}\right) \) . Because both \( {\theta \tau } \) and \( \theta + \tau \) are integers, this shows that \( n \) is odd; hence \( n + 1 \) is even.
|
Yes
|
Theorem 11.7.1 Let \( X \) be a \( k \) -regular graph on \( n \) vertices not switching equivalent to the complete or empty graph. Then \( S\left( X\right) \) has two eigenvalues if and only if \( X \) is strongly regular and \( k - n/2 \) is an eigenvalue of \( A\left( X\right) \) .
|
Proof. Any eigenvector of \( A\left( X\right) \) orthogonal to \( \mathbf{1} \) with eigenvalue \( \theta \) is an eigenvector of \( S\left( X\right) \) with eigenvalue \( - {2\theta } - 1 \), while \( \mathbf{1} \) itself is an eigenvector of \( S\left( X\right) \) with eigenvalue \( n - 1 - {2k} \). Therefore, if \( X \) is strongly regular with \( k - n/2 \) equal to \( \theta \) or \( \tau \), then \( S\left( X\right) \) has just two eigenvalues.

For the converse, suppose that \( X \) is a graph such that \( S\left( X\right) \) has two eigenvalues. First we consider the case where \( X \) is connected. Since \( X \) is not complete, \( A\left( X\right) \) has at least three distinct eigenvalues (Lemma 8.12.1). Since \( S\left( X\right) \) has only two eigenvalues, this implies that \( A\left( X\right) \) must have precisely three eigenvalues \( k,\theta \), and \( \tau \), and also that \( n - 1 - {2k} \) must equal either \( - {2\theta } - 1 \) or \( - {2\tau } - 1 \). Therefore, by Lemma 10.2.1, \( X \) is strongly regular, and \( k - n/2 \) is either \( \theta \) or \( \tau \).

Now, suppose that \( X \) is not connected. Then there is an eigenvector of \( A\left( X\right) \) with eigenvalue \( k \) orthogonal to \( \mathbf{1} \). Hence \( n - 1 - {2k} \) and \( - 1 - {2k} \) are the two eigenvalues of \( S\left( X\right) \). Therefore, every component of \( X \) has at most one eigenvalue \( \theta \) other than \( k \), and this eigenvalue must satisfy \( n - 1 - {2k} = - 1 - {2\theta } \). Since \( X \) is nonempty, every component of \( X \) has exactly one further eigenvalue \( \theta \), and so is complete. Thus \( \theta = - 1 \), \( k = \left( {n/2}\right) - 1 \), and \( X = 2{K}_{n/2} \), which is easily seen to be switching equivalent to the complete graph.
|
Yes
|
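The forward direction of Theorem 11.7.1 can be illustrated on the Petersen graph (my choice of example, assuming numpy): it is strongly regular with parameters \( (10, 3, 0, 1) \) and eigenvalues \( 3, 1, -2 \), and here \( k - n/2 = 3 - 5 = -2 = \tau \), so its Seidel matrix should have exactly two eigenvalues.

```python
import numpy as np
from itertools import combinations

# Kneser model of the Petersen graph: vertices are 2-subsets of a
# 5-set, adjacent when disjoint.
verts = [frozenset(p) for p in combinations(range(5), 2)]
n = len(verts)
A = np.array([[1 if not (u & v) else 0 for v in verts] for u in verts])

S = np.ones((n, n), dtype=int) - np.eye(n, dtype=int) - 2 * A
eigs = set(np.round(np.linalg.eigvalsh(S)).astype(int).tolist())
print(eigs)   # exactly two distinct eigenvalues
```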
Theorem 12.2.1 A maximal set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) in \( {\mathbb{R}}^{n} \) is star-closed.
|
Proof. Let \( \mathcal{L} \) be a maximal set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \), and suppose that \( \langle a\rangle \), \( \langle b\rangle \in \mathcal{L} \) are two lines at \( {60}^{ \circ } \). We can assume that \( a \) and \( b \) have length \( \sqrt{2} \) and choose \( b \) such that \( \langle a, b\rangle = - 1 \). Then \( a + b \) has length \( \sqrt{2} \), and \( \langle a + b\rangle \) forms a star with \( \langle a\rangle \) and \( \langle b\rangle \). We show that \( \langle a + b\rangle \) is either in \( \mathcal{L} \) or is at \( {60}^{ \circ } \) or \( {90}^{ \circ } \) to every line in \( \mathcal{L} \). Let \( x \) be a vector spanning a line of \( \mathcal{L} \). Then \( \langle x, a + b\rangle = \langle x, a\rangle + \langle x, b\rangle \), and so \( \langle x, a + b\rangle \in \{ - 2, - 1,0,1,2\} \). If \( \langle x, a + b\rangle = \pm 2 \), then \( x = \pm \left( {a + b}\right) \), and so \( \langle a + b\rangle \in \mathcal{L} \). Otherwise, \( \langle a + b\rangle \) is at \( {60}^{ \circ } \) or \( {90}^{ \circ } \) to every line of \( \mathcal{L} \), and so could be added to \( \mathcal{L} \) to form a larger set of lines. Since \( \mathcal{L} \) is maximal, this is impossible; hence \( \langle a + b\rangle \in \mathcal{L} \), and \( \mathcal{L} \) is star-closed.
|
Yes
|
Lemma 12.3.1 Let \( \mathcal{L} \) be a set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) in \( {\mathbb{R}}^{n} \). Then \( \mathcal{L} \) is star-closed if and only if for every vector \( h \) that spans a line in \( \mathcal{L} \), the reflection \( {\rho }_{h} \) fixes \( \mathcal{L} \).
|
Proof. Let \( h \) be a vector of length \( \sqrt{2} \) spanning a line in \( \mathcal{L} \). From our comments above, \( {\rho }_{h} \) fixes \( \langle h\rangle \) and all the lines orthogonal to \( \langle h\rangle \). So suppose that \( \langle a\rangle \) is a line of \( \mathcal{L} \) at \( {60}^{ \circ } \) to \( \langle h\rangle \). Without loss of generality we can assume that \( a \) has length \( \sqrt{2} \) and that \( \langle h, a\rangle = - 1 \). Now,

\[{\rho }_{h}\left( a\right) = a - 2\frac{\left( -1\right) }{2}h = a + h,\]

and \( \langle a + h\rangle \) forms a star with \( \langle a\rangle \) and \( \langle h\rangle \). This implies that \( {\rho }_{h} \) fixes \( \mathcal{L} \) if and only if \( \mathcal{L} \) is star-closed.
|
Yes
|
Lemma 12.4.1 For \( n \geq 2 \), the set of lines \( \mathcal{L} \) spanned by the vectors in \( {D}_{n} \) is indecomposable.
|
Proof. The lines \( \left\langle {{e}_{1} + {e}_{i}}\right\rangle \) for \( i \geq 2 \) have pairwise inner products equal to 1, and hence must be in the same part of any decomposition of \( \mathcal{L} \) . It is clear, however, that any other vector in \( {D}_{n} \) has nonzero inner product with at least one of these vectors, and so there are no lines orthogonal to all of this set.
|
Yes
|
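A direct stdlib check of the star-closure property for \( D_n \) (illustrating Lemma 12.4.1 and Theorem 12.2.1 in the smallest interesting case, \( n = 4 \)): the roots are \( \pm e_i \pm e_j \), and whenever two roots have inner product \( -1 \), their sum is again a root.

```python
from itertools import combinations, product

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The 24 roots of D_4: all vectors +-e_i +- e_j with i < j.
roots = set()
for i, j in combinations(range(4), 2):
    for si, sj in product([1, -1], repeat=2):
        v = [0, 0, 0, 0]
        v[i], v[j] = si, sj
        roots.add(tuple(v))
print(len(roots))   # 24

# Star-closure: <a, b> = -1 implies a + b is again a root.
closed = all(tuple(a + b for a, b in zip(u, v)) in roots
             for u in roots for v in roots if dot(u, v) == -1)
print(closed)
```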
Theorem 12.4.2 Let \( \mathcal{L} \) be a star-closed indecomposable set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \). Then the reflection group of \( \mathcal{L} \) acts transitively on ordered pairs of nonorthogonal lines.
|
Proof. First we observe that the reflection group acts transitively on the lines of \( \mathcal{L} \). Suppose that \( \langle a\rangle \) and \( \langle b\rangle \) are two lines that are not orthogonal, and that \( \langle a, b\rangle = - 1 \). Then \( c = - a - b \) spans the third line in the star with \( \langle a\rangle \) and \( \langle b\rangle \), and the reflection \( {\rho }_{c} \) swaps \( \langle a\rangle \) and \( \langle b\rangle \). Therefore, \( \langle a\rangle \) can be mapped on to any line not orthogonal to it. Let \( {\mathcal{L}}^{\prime } \) be the orbit of \( \langle a\rangle \) under the reflection group of \( \mathcal{L} \). Then every line in \( \mathcal{L} \smallsetminus {\mathcal{L}}^{\prime } \) is orthogonal to every line of \( {\mathcal{L}}^{\prime } \). Since \( \mathcal{L} \) is indecomposable, this shows that \( {\mathcal{L}}^{\prime } = \mathcal{L} \). Now, suppose that \( \left( {\langle a\rangle ,\langle b\rangle }\right) \) and \( \left( {\langle a\rangle ,\langle c\rangle }\right) \) are two ordered pairs of nonorthogonal lines. We will show that there is a reflection that fixes \( \langle a\rangle \) and exchanges \( \langle b\rangle \) and \( \langle c\rangle \). Assume that \( a, b \), and \( c \) have length \( \sqrt{2} \) and that \( \langle a, b\rangle = \langle a, c\rangle = - 1 \). Then the vector \( - a - b \) has length \( \sqrt{2} \) and spans a line in \( \mathcal{L} \). Now, \[ 1 = \langle c, - a\rangle = \langle c, b\rangle + \langle c, - a - b\rangle . \] If \( c = b \) or \( c = - a - b \), then \( \langle c\rangle \) and \( \langle b\rangle \) are exchanged by the identity reflection or \( {\rho }_{a} \), respectively. Otherwise, \( c \) has inner product 1 with precisely one of the vectors in \( \{ b, - a - b\} \), and is orthogonal to the other. Exchanging the roles of \( b \) and \( - a - b \) if necessary, we can assume that \( \langle c, b\rangle = 1 \). 
Then \( \langle b - c\rangle \in \mathcal{L} \), and the reflection \( {\rho }_{b - c} \) fixes \( \langle a\rangle \) and exchanges \( \langle b\rangle \) and \( \langle c\rangle \). \( ▱ \)
|
Yes
|
Lemma 12.4.3 If \( X \) is a graph with minimum eigenvalue at least -2, then the star-closed set of lines \( \mathcal{L}\left( X\right) \) is indecomposable if and only if \( X \) is connected.
|
Proof. First suppose that \( X \) is connected. Let \( {\mathcal{L}}^{\prime } \) be the lines spanned by the vectors whose Gram matrix is \( A\left( X\right) + {2I} \). Lines corresponding to adjacent vertices of \( X \) are not orthogonal, and hence must be in the same part of any decomposition of \( \mathcal{L}\left( X\right) \). Therefore, all the lines in \( {\mathcal{L}}^{\prime } \) are in the same part. Any line lying in a star with two other lines is not orthogonal to either of them, and therefore lies in the same part of any decomposition of \( \mathcal{L}\left( X\right) \). Hence the star-closure of \( {\mathcal{L}}^{\prime } \) is all in the same part of any decomposition, which shows that \( \mathcal{L}\left( X\right) \) is indecomposable.

If \( X \) is not connected, then \( {\mathcal{L}}^{\prime } \) has a decomposition into two parts. Any line orthogonal to two lines in a star is orthogonal to all three lines of the star, and so any line added to \( {\mathcal{L}}^{\prime } \) to complete a star can be assigned to one of the two parts of the decomposition, eventually yielding a decomposition of \( \mathcal{L} \).

Therefore, we see that any connected graph with minimum eigenvalue at least -2 is associated with a star-closed indecomposable set of lines. Our strategy will be to classify all such sets, and thereby classify all the graphs with minimum eigenvalue at least -2.
|
Yes
|
Lemma 12.5.1 Let \( \mathcal{L} \) be an indecomposable star-closed set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \), and let \( \langle a\rangle ,\langle b\rangle \), and \( \langle c\rangle \) form a star in \( \mathcal{L} \). Every other line of \( \mathcal{L} \) is orthogonal to either one or three lines in the star.
|
Proof. Without loss of generality we may assume that \( a, b \), and \( c \) all have length \( \sqrt{2} \) and that

\[ \langle a, b\rangle = \langle b, c\rangle = \langle c, a\rangle = - 1.\]

It follows then that \( c = - a - b \), and so for any other line \( \langle x\rangle \) of \( \mathcal{L} \) we have

\[ \langle x, a\rangle + \langle x, b\rangle + \langle x, c\rangle = 0.\]

Because each of the terms is in \( \{ 0, \pm 1\} \), we see that either all three terms are zero or the three terms are \( 1,0 \), and -1 in some order.

Now, fix a star \( \langle a\rangle ,\langle b\rangle \), and \( \langle c\rangle \), and as above choose \( a, b \), and \( c \) to be vectors of length \( \sqrt{2} \) with pairwise inner products -1. Let \( D \) be the set of lines of \( \mathcal{L} \) that are orthogonal to all three lines in the star. The remaining lines of \( \mathcal{L} \) are orthogonal to just one line of the star, so can be partitioned into three sets \( A, B \), and \( C \), consisting of those lines orthogonal to \( \langle a\rangle ,\langle b\rangle \), and \( \langle c\rangle \), respectively.
|
Yes
|
Lemma 12.5.2 The set \( \mathcal{L} \) is the star-closure of \( \langle a\rangle ,\langle b\rangle \), and \( C \) .
|
Proof. Let \( \mathcal{M} \) denote the set of lines \( \{ \langle a\rangle ,\langle b\rangle \} \cup C \). Clearly, \( \langle c\rangle \) lies in the star-closure of \( \mathcal{M} \), and so it suffices to show that every line in \( A, B \), and \( D \) lies in a star with two lines chosen from \( \mathcal{M} \). Suppose that \( \langle x\rangle \in A \), and without loss of generality, select \( x \) such that \( \langle b, x\rangle = - 1 \). Then \( \langle - b - x\rangle \in \mathcal{L} \), and a straightforward calculation shows that \( \langle - b - x\rangle \in C \). Thus \( \langle x\rangle \) is in the star containing \( \langle b\rangle \) and \( \langle - b - x\rangle \). An analogous argument deals with the case where \( \langle x\rangle \in B \), leaving only the lines in \( D \). Let \( \langle x\rangle \) be a line in \( D \), and first suppose that there is some line \( \langle z\rangle \) in \( C \) not orthogonal to \( \langle x\rangle \). Then we can assume that \( \langle x, z\rangle = - 1 \), and hence that \( \langle - x - z\rangle \in \mathcal{L} \). Once again, straightforward calculations show that \( \langle - x - z\rangle \in C \), and so \( \langle x\rangle \) lies in a star with two lines from \( C \). Therefore, the only lines remaining are those in \( D \) that are orthogonal to every line of \( C \). Let \( {D}^{\prime } \) denote this set of lines. Every line in \( {D}^{\prime } \) is orthogonal to every line in \( \mathcal{M} \), and hence to every line in the star-closure of \( \mathcal{M} \), which we have just seen contains \( \mathcal{L} \smallsetminus {D}^{\prime } \). Since \( \mathcal{L} \) is indecomposable, this implies that \( {D}^{\prime } \) is empty.
|
Yes
|
Lemma 12.6.1 If \( x \) and \( y \) are orthogonal vectors in \( {C}^{ * } \), then there is a unique vector in \( {C}^{ * } \) orthogonal to both of them.
|
Proof. Suppose that vectors \( x, y \in {C}^{ * } \) are orthogonal. Then by our comments above, we see that \( x + b \in {A}^{ * } \) and \( y - a \in {B}^{ * } \) and that \( \langle x + b, y - a\rangle = - 1 \). Therefore, \( \langle a - b - x - y\rangle \in \mathcal{L} \), and calculation shows that \( a - b - x - y \in {C}^{ * } \). Now,

\[ \langle a - b - x - y, x\rangle = \langle a - b - x - y, y\rangle = 0, \]

and so \( a - b - x - y \) is orthogonal to both \( x \) and \( y \). If \( z \) is any other vector in \( {C}^{ * } \) orthogonal to \( x \) and \( y \), then

\[ \langle z, a - b - x - y\rangle = \langle z, a\rangle - \langle z, b\rangle = 2, \]

and hence \( z = a - b - x - y \).
|
Yes
|
Theorem 12.6.2 Let \( \mathcal{Q} \) be the incidence structure whose points are the vectors of \( {C}^{ * } \), and whose lines are triples of mutually orthogonal vectors. Then either \( \mathcal{Q} \) has no lines, or \( \mathcal{Q} \) is a generalized quadrangle, possibly degenerate, with lines of size three.
|
Proof. A generalized quadrangle has the property that given any line \( \ell \) and a point \( P \) off that line, there is a unique point on \( \ell \) collinear with \( P \). We show that \( \mathcal{Q} \) satisfies this axiom.

Suppose that \( x, y \), and \( a - b - x - y \) are the three points of a line of \( \mathcal{Q} \), and let \( z \) be an arbitrary vector in \( {C}^{ * } \), not equal to any of these three. Then

\[ \langle z, x\rangle + \langle z, y\rangle + \langle z, a - b - x - y\rangle = \langle z, a - b\rangle = 2. \]

Since each of the three terms is either 0 or 1, it follows that there is a unique term equal to 0, and hence \( z \) is collinear with exactly one of the three points of the line.

Therefore, \( \mathcal{Q} \) is a generalized quadrangle with lines of size three.
|
Yes
|
Theorem 12.7.2 The root system \( {E}_{8} \) contains exactly 240 vectors. The lines spanned by these vectors form an indecomposable star-closed set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) in \( {\mathbb{R}}^{8} \) . The generalized quadrangle \( \mathcal{Q} \) associated with this set of lines is the unique generalized quadrangle of order \( \left( {2,4}\right) \) .
|
Proof. This is immediate, since \( {D}_{8} \) contains 112 vectors, and there are 128 further vectors.
|
No
|
Theorem 12.7.4 An indecomposable star-closed set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) is the set of lines spanned by the vectors in one of the root systems \( {E}_{6} \) , \( {E}_{7},{E}_{8},{A}_{n} \), or \( {D}_{n} \) (for some \( n \) ).
|
Proof. The Gram matrix of the vectors in \( {C}^{ * } \) determines the Gram matrix of the entire collection of lines in \( \mathcal{L} \), which in turn determines \( \mathcal{L} \) up to an orthogonal transformation. Since these five root systems give the only five possible Gram matrices for the vectors in \( {C}^{ * } \), there are no further indecomposable star-closed sets of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) .
|
Yes
|
Corollary 12.8.1 Let \( X \) be a connected graph with smallest eigenvalue at least -2, and let \( A \) be its adjacency matrix. Then either \( X \) is a generalized line graph, or \( A + {2I} \) is the Gram matrix of a set of vectors in \( {E}_{8} \) .
|
Proof. Let \( S \) be a set of vectors with Gram matrix \( {2I} + A \) . Then the star-closure of \( S \) is contained in the set of lines spanned by the vectors in \( {E}_{8} \) or \( {D}_{n} \) .
|
No
|
Theorem 12.8.2 Let \( X \) be a graph with least eigenvalue at least -2 . If \( X \) has more than 36 vertices or maximum valency greater than 28, it is a generalized line graph.
|
Proof. If \( X \) is not a generalized line graph, then \( A\left( X\right) + {2I} \) is the Gram matrix of a set of vectors in \( {E}_{8} \) . So let \( S \) be a set of vectors from \( {E}_{8} \) with nonnegative pairwise inner products. First we will show that \( \left| S\right| \leq {36} \) . For any vector \( x \in {\mathbb{R}}^{8} \), let \( {P}_{x} \) be the \( 8 \times 8 \) matrix \( x{x}^{T} \) . The matrices \( {P}_{x} \) span a subspace of the real vector space formed by the \( 8 \times 8 \) symmetric matrices, which has dimension \( \binom{9}{2} = {36} \) . To prove the first part of the theorem, it will suffice to show that the matrices \( {P}_{x} \) for \( x \in S \) are linearly independent.\n\nSuppose that there are real numbers \( {a}_{x} \) such that\n\n\[ \mathop{\sum }\limits_{{x \in S}}{a}_{x}{P}_{x} = 0 \]\n\nThen we have\n\n\[ 0 = \operatorname{tr}\left( {\left( \mathop{\sum }\limits_{x}{a}_{x}{P}_{x}\right) }^{2}\right) = \mathop{\sum }\limits_{{x, y}}{a}_{x}{a}_{y}\operatorname{tr}\left( {{P}_{x}{P}_{y}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{x, y}}{a}_{x}{a}_{y}\operatorname{tr}\left( {x{x}^{T}y{y}^{T}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{x, y}}{a}_{x}{a}_{y}{\left( {x}^{T}y\right) }^{2} \]\n\nThe last sum can be written in the form \( {a}^{T}{Da} \), where \( D \) is the matrix obtained by replacing each entry of the Gram matrix of \( S \) by its square. This Gram matrix is equal to \( {2I} + A\left( X\right) \), and since \( A\left( X\right) \) is a 01-matrix, it follows that \( D = {4I} + A\left( X\right) \) . But as \( {2I} + A\left( X\right) \) is positive semidefinite and \( D = \left( {{2I} + A\left( X\right) }\right) + {2I} \), the matrix \( D \) is positive definite, and so \( a = 0 \) . Therefore, the matrices \( {P}_{x} \) are linearly independent, and so \( \left| S\right| \leq {36} \).\n\nIt remains for us to prove the claim about the maximum valency. 
Suppose that \( a \in S \), and let \( \langle a\rangle ,\langle b\rangle \), and \( \langle c\rangle \) be a star containing \( \langle a\rangle \) . The vectors whose inner product with \( a \) is 1 are the vectors \( - b, - c \), and the vectors in \( {C}^{ * } \) and \( - {B}^{ * } \) . If \( x \in {C}^{ * } \), then \( x - a \in {B}^{ * } \), and so \( a - x \in - {B}^{ * } \) . Then\n\n\[ \langle x, a - x\rangle = 1 - 2 = - 1 \]\n\nand so \( S \) cannot contain both \( x \) and \( a - x \) . Similarly, \( S \) cannot contain both \( - b \) and \( - c \), because their inner product is -1 . Thus \( S \) can contain at most one vector from each of the pairs \( \{ x, a - x\} \) for \( x \in {C}^{ * } \), together with at most one vector from \( \{ - b, - c\} \) . Thus \( S \) contains at most 28 vectors with positive inner product with \( a \), and so any vertex of \( X \) has valency at most 28.
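The key identity \( \operatorname{tr}\left( {{P}_{x}{P}_{y}}\right) = {\left( {x}^{T}y\right) }^{2} \) driving the linear-independence argument can be checked directly; a minimal Python sketch (the vectors are arbitrary illustrations, not taken from \( {E}_{8} \)):

```python
def outer(x):
    # P_x = x x^T as a list-of-lists matrix
    return [[xi * xj for xj in x] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

x, y = [1, 2, 0], [3, 4, 5]
dot = sum(xi * yi for xi, yi in zip(x, y))   # x^T y = 11
lhs = trace(matmul(outer(x), outer(y)))      # tr(P_x P_y)
```

The trace of \( x{x}^{T}y{y}^{T} \) collapses to the squared inner product because \( {x}^{T}y \) is a scalar.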
|
Yes
|
Lemma 13.1.2 Let \( X \) be a regular graph with valency \( k \) . If the adjacency matrix \( A \) has eigenvalues \( {\theta }_{1},\ldots ,{\theta }_{n} \), then the Laplacian \( Q \) has eigenvalues \( k - {\theta }_{1},\ldots, k - {\theta }_{n} \) .
|
Proof. If \( X \) is \( k \) -regular, then \( Q = \Delta \left( X\right) - A = {kI} - A \) . Thus every eigenvector of \( A \) with eigenvalue \( \theta \) is an eigenvector of \( Q \) with eigenvalue \( k - \theta \) .
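For instance, on the 4-cycle (a 2-regular graph, chosen as an arbitrary small example) the vector \( \left( {1, - 1,1, - 1}\right) \) is an adjacency eigenvector with \( \theta = - 2 \), so it must be a Laplacian eigenvector with eigenvalue \( 2 - \left( { - 2}\right) = 4 \); a quick Python check:

```python
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # the 4-cycle C_4, 2-regular
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1
# Q = kI - A for a k-regular graph (here k = 2)
Q = [[(2 if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
x = [1, -1, 1, -1]                                 # adjacency eigenvector, theta = -2
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
```

Any other adjacency eigenvector of a regular graph transfers to the Laplacian in the same way.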
|
Yes
|
Lemma 13.1.3 If \( X \) is a graph on \( n \) vertices and \( 2 \leq i \leq n \), then \( {\lambda }_{i}\left( \bar{X}\right) = \) \( n - {\lambda }_{n - i + 2}\left( X\right) \) .
|
Proof. We start by observing that\n\n\[ Q\left( X\right) + Q\left( \bar{X}\right) = {nI} - J. \]\n\n(13.1)\n\nThe vector \( \mathbf{1} \) is an eigenvector of \( Q\left( X\right) \) and \( Q\left( \bar{X}\right) \) with eigenvalue 0 . Let \( x \) be another eigenvector of \( Q\left( X\right) \) with eigenvalue \( \lambda \) ; we may assume that \( x \) is orthogonal to 1 . Then \( {Jx} = 0 \), so\n\n\[ {nx} = \left( {{nI} - J}\right) x = Q\left( X\right) x + Q\left( \bar{X}\right) x = {\lambda x} + Q\left( \bar{X}\right) x. \]\n\nTherefore, \( Q\left( \bar{X}\right) x = \left( {n - \lambda }\right) x \), and the lemma follows.\n\nNote that \( {nI} - J = Q\left( {K}_{n}\right) \) ; thus (13.1) can be rewritten as\n\n\[ Q\left( X\right) + Q\left( \bar{X}\right) = Q\left( {K}_{n}\right) \]
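Identity (13.1) can be verified mechanically on any small graph; a sketch using the path \( {P}_{4} \) (an arbitrary choice):

```python
def laplacian(n, edges):
    # Laplacian Q = Delta - A as a list-of-lists integer matrix
    Q = [[0] * n for _ in range(n)]
    for u, v in edges:
        Q[u][u] += 1; Q[v][v] += 1
        Q[u][v] -= 1; Q[v][u] -= 1
    return Q

n = 4
path = [(0, 1), (1, 2), (2, 3)]                      # P_4
comp = [(0, 2), (0, 3), (1, 3)]                      # its complement
Qp, Qc = laplacian(n, path), laplacian(n, comp)
total = [[Qp[i][j] + Qc[i][j] for j in range(n)] for i in range(n)]
target = [[(n if i == j else 0) - 1 for j in range(n)] for i in range(n)]  # nI - J
```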
|
Yes
|
Lemma 13.1.5 Let \( X \) be a graph on \( n \) vertices with Laplacian \( Q \) . Then for any vector \( x \) ,\n\n\[ \n{x}^{T}{Qx} = \mathop{\sum }\limits_{{{uv} \in E\left( X\right) }}{\left( {x}_{u} - {x}_{v}\right) }^{2}.\n\]
|
Proof. This follows from the observations that\n\n\[ \n{x}^{T}{Qx} = {x}^{T}D{D}^{T}x = {\left( {D}^{T}x\right) }^{T}\left( {{D}^{T}x}\right)\n\]\n\nand that if \( {uv} \in E\left( X\right) \), then the entry of \( {D}^{T}x \) corresponding to \( {uv} \) is \( \pm \left( {{x}_{u} - {x}_{v}}\right) \) .
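The identity itself is easy to confirm numerically; a sketch in Python (the graph \( {P}_{3} \) and the vector are arbitrary illustrations):

```python
def laplacian(n, edges):
    # Laplacian Q = Delta - A as a list-of-lists integer matrix
    Q = [[0] * n for _ in range(n)]
    for u, v in edges:
        Q[u][u] += 1; Q[v][v] += 1
        Q[u][v] -= 1; Q[v][u] -= 1
    return Q

n, edges = 3, [(0, 1), (1, 2)]                    # the path P_3
Q = laplacian(n, edges)
x = [1, 4, 9]
quad = sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))   # x^T Q x
edge_sum = sum((x[u] - x[v]) ** 2 for u, v in edges)                    # sum over edges
```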
|
No
|
Theorem 13.2.1 Let \( X \) be a graph with Laplacian matrix \( Q \) . If \( u \) is an arbitrary vertex of \( X \), then \( \det Q\left\lbrack u\right\rbrack \) is equal to the number of spanning trees of \( X \) .
|
Proof. We prove the theorem by induction on the number of edges of \( X \) . Let \( \tau \left( X\right) \) denote the number of spanning trees of \( X \) . If \( e \) is an edge of \( X \), then every spanning tree either contains \( e \) or does not contain \( e \) , so we can count them according to this distinction. There is a one-to-one correspondence between spanning trees of \( X \) that contain \( e \) and spanning trees of \( X/e \), so there are \( \tau \left( {X/e}\right) \) such trees. Any spanning tree of \( X \) that does not contain \( e \) is a spanning tree of \( X \smallsetminus e \), and so there are \( \tau \left( {X \smallsetminus e}\right) \) of these. Therefore, \[ \tau \left( X\right) = \tau \left( {X/e}\right) + \tau \left( {X \smallsetminus e}\right) \] (13.2) In this situation, multiple edges are retained during contraction, but we may ignore loops, because they cannot occur in a spanning tree. Now, assume that \( e = {uv} \), and let \( E \) be the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) diagonal matrix, with rows and columns indexed by \( V\left( X\right) \smallsetminus u \), whose only nonzero entry is \( {E}_{vv} = 1 \) . Then \[ Q\left\lbrack u\right\rbrack = Q\left( {X \smallsetminus e}\right) \left\lbrack u\right\rbrack + E \] from which we deduce that \[ \det Q\left\lbrack u\right\rbrack = \det Q\left( {X \smallsetminus e}\right) \left\lbrack u\right\rbrack + \det Q\left( {X \smallsetminus e}\right) \left\lbrack {u, v}\right\rbrack . \] (13.3) Note that \( Q\left( {X \smallsetminus e}\right) \left\lbrack {u, v}\right\rbrack = Q\left\lbrack {u, v}\right\rbrack \) . Assume that in forming \( X/e \) we contract \( u \) onto \( v \), so that \( V\left( {X/e}\right) = \) \( V\left( X\right) \smallsetminus u \) . 
Then \( Q\left( {X/e}\right) \left\lbrack v\right\rbrack \) has rows and columns indexed by \( V\left( X\right) \smallsetminus \{ u, v\} \) with the \( {xy} \) -entry being equal to \( {Q}_{xy} \), and so we also have that \( Q\left( {X/e}\right) \left\lbrack v\right\rbrack = \) \( Q\left\lbrack {u, v}\right\rbrack \) . Thus we can rewrite (13.3) as \[ \det Q\left\lbrack u\right\rbrack = \det Q\left( {X \smallsetminus e}\right) \left\lbrack u\right\rbrack + \det Q\left( {X/e}\right) \left\lbrack v\right\rbrack . \] By induction, \( \det Q\left( {X \smallsetminus e}\right) \left\lbrack u\right\rbrack = \tau \left( {X \smallsetminus e}\right) \) and \( \det Q\left( {X/e}\right) \left\lbrack v\right\rbrack = \tau \left( {X/e}\right) \) ; hence (13.2) implies the theorem.
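Both counts in the theorem can be compared by brute force on a small example; a Python sketch (the graph, a 4-cycle with one chord, is an arbitrary choice):

```python
from itertools import combinations

def laplacian(n, edges):
    # Laplacian Q = Delta - A as a list-of-lists integer matrix
    Q = [[0] * n for _ in range(n)]
    for u, v in edges:
        Q[u][u] += 1; Q[v][v] += 1
        Q[u][v] -= 1; Q[v][u] -= 1
    return Q

def det(M):
    # exact determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def count_spanning_trees(n, edges):
    # enumerate (n-1)-edge subsets; the acyclic ones are exactly the spanning trees
    count = 0
    for sub in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        count += acyclic
    return count

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # C_4 plus the chord {0,2}
Q = laplacian(n, edges)
minor = [row[1:] for row in Q[1:]]                      # Q[u] with u = vertex 0
```

For this graph both quantities equal 8: removing either triangle from a 3-edge subset is the only way to fail to be a tree.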
|
Yes
|
Corollary 13.2.2 The number of spanning trees of \( {K}_{n} \) is \( {n}^{n - 2} \) .
|
Proof. This follows directly from the fact that \( Q\left\lbrack u\right\rbrack = n{I}_{n - 1} - J \) for any vertex \( u \) .
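Cayley's formula itself is easy to confirm by exhaustive enumeration for small \( n \); a brute-force sketch independent of the determinant argument:

```python
from itertools import combinations

def count_spanning_trees(n, edges):
    # enumerate (n-1)-edge subsets; acyclic ones are spanning trees (union-find test)
    count = 0
    for sub in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        count += acyclic
    return count

n = 5
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # the complete graph K_5
trees = count_spanning_trees(n, edges)
```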
|
No
|
Lemma 13.2.3 Let \( \tau \left( X\right) \) denote the number of spanning trees in the graph \( X \) and let \( Q \) be its Laplacian. Then \( \operatorname{adj}\left( Q\right) = \tau \left( X\right) J \) .
|
Proof. Suppose that \( X \) has \( n \) vertices. Assume first that \( X \) is not connected, so that \( \tau \left( X\right) = 0 \) . Then \( Q \) has rank at most \( n - 2 \), so any submatrix of \( Q \) of order \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) is singular and \( \operatorname{adj}\left( Q\right) = 0 \) .\n\nThus we may assume that \( X \) is connected. Then \( \operatorname{adj}\left( Q\right) \neq 0 \), but nonetheless \( Q\operatorname{adj}\left( Q\right) = 0 \) . Because \( X \) is connected, \( \ker Q \) is spanned by \( \mathbf{1} \), and therefore each column of \( \operatorname{adj}\left( Q\right) \) must be a constant vector. Since \( \operatorname{adj}\left( Q\right) \) is symmetric, it follows that it is a nonzero multiple of \( J \) ; now the result follows at once from Theorem 13.2.1.
|
Yes
|
Lemma 13.2.4 Let \( X \) be a graph on \( n \) vertices, and let \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) be the eigenvalues of the Laplacian of \( X \) . Then the number of spanning trees in \( X \) is \( \frac{1}{n}\mathop{\prod }\limits_{{i = 2}}^{n}{\lambda }_{i} \) .
|
Proof. The result clearly holds if \( X \) is not connected, so we may assume without loss that \( X \) is connected. Let \( \phi \left( t\right) \) denote the characteristic polynomial \( \det \left( {{tI} - Q}\right) \) of the Laplacian \( Q \) of \( X \) . The zeros of \( \phi \left( t\right) \) are the eigenvalues of \( Q \) . Since \( {\lambda }_{1} = 0 \), its constant term is zero and the coefficient of \( t \) is\n\n\[{\left( -1\right) }^{n - 1}\mathop{\prod }\limits_{{i = 2}}^{n}{\lambda }_{i}\]\n\nOn the other hand, by our remarks just above, the coefficient of the linear term in \( \phi \left( t\right) \) is\n\n\[{\left( -1\right) }^{n - 1}\mathop{\sum }\limits_{{u \in V\left( X\right) }}\det Q\left\lbrack u\right\rbrack\]\n\nThis yields the lemma immediately.
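As a check, for \( {C}_{4} \) the nonzero Laplacian eigenvalues are \( 2,2,4 \) (a standard fact, assumed here rather than computed), and the proof identifies their product with \( \mathop{\sum }\limits_{u}\det Q\left\lbrack u\right\rbrack \); both equal \( n\tau \left( {C}_{4}\right) = {16} \):

```python
def laplacian(n, edges):
    # Laplacian Q = Delta - A as a list-of-lists integer matrix
    Q = [[0] * n for _ in range(n)]
    for u, v in edges:
        Q[u][u] += 1; Q[v][v] += 1
        Q[u][v] -= 1; Q[v][u] -= 1
    return Q

def det(M):
    # exact determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]      # the cycle C_4
Q = laplacian(n, edges)
cofactors = [det([r[:u] + r[u + 1:] for r in (Q[:u] + Q[u + 1:])])
             for u in range(n)]                     # det Q[u] for each vertex u
product = 1
for lam in [2, 2, 4]:                               # nonzero eigenvalues of Q(C_4), assumed known
    product *= lam
```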
|
Yes
|
Lemma 13.3.1 Let \( \rho \) be a representation of the edge-weighted graph \( X \) , given by the \( \left| {V\left( X\right) }\right| \times m \) matrix \( R \) . If \( D \) is an oriented incidence matrix for \( X \), then\n\n\[\mathcal{E}\left( \rho \right) = \operatorname{tr}{R}^{T}{DW}{D}^{T}R\]
|
Proof. The rows of \( {D}^{T}R \) are indexed by the edges of \( X \), and if \( {uv} \in E\left( X\right) \) , then the \( {uv} \)-row of \( {D}^{T}R \) is \( \pm \left( {\rho \left( u\right) - \rho \left( v\right) }\right) \) . Consequently, the diagonal entries of \( {D}^{T}R{R}^{T}D \) have the form \( \parallel \rho \left( u\right) - \rho \left( v\right) {\parallel }^{2} \), where \( {uv} \) ranges over the edges of \( X \) . Hence\n\n\[\mathcal{E}\left( \rho \right) = \operatorname{tr}W{D}^{T}R{R}^{T}D = \operatorname{tr}{R}^{T}{DW}{D}^{T}R\]\n\nas required.
|
Yes
|
Theorem 13.4.1 Let \( X \) be a graph on \( n \) vertices with weighted Laplacian Q. Assume that the eigenvalues of \( Q \) are \( {\lambda }_{1} \leq \cdots \leq {\lambda }_{n} \) and that \( {\lambda }_{2} > 0 \) . The minimum energy of a balanced orthogonal representation of \( X \) in \( {\mathbb{R}}^{m} \) equals \( \mathop{\sum }\limits_{{i = 2}}^{{m + 1}}{\lambda }_{i} \) .
|
Proof. By Lemma 13.3.1 the energy of a representation is \( \operatorname{tr}{R}^{T}{QR} \) . From Corollary 9.5.2, the energy of an orthogonal representation in \( {\mathbb{R}}^{\ell } \) is bounded below by the sum of the \( \ell \) smallest eigenvalues of \( Q \) . We can realize this lower bound by taking the columns of \( R \) to be vectors \( {x}_{1},\ldots ,{x}_{\ell } \) such that \( Q{x}_{i} = {\lambda }_{i}{x}_{i} \) .\n\nSince \( {\lambda }_{2} > 0 \), we must have that \( {x}_{1} \) is a scalar multiple of \( \mathbf{1} \), and therefore by deleting \( {x}_{1} \) we obtain a balanced orthogonal representation in \( {\mathbb{R}}^{\ell - 1} \), with the same energy. Conversely, we can reverse this process to obtain an orthogonal representation in \( {\mathbb{R}}^{\ell } \) from a balanced orthogonal representation in \( {\mathbb{R}}^{\ell - 1} \) such that these two representations have the same energy. Therefore, the minimum energy of a balanced orthogonal representation of \( X \) in \( {\mathbb{R}}^{m} \) equals the minimum energy of an orthogonal representation in \( {\mathbb{R}}^{m + 1} \), and this minimum equals \( {\lambda }_{2} + \cdots + {\lambda }_{m + 1} \) .
|
Yes
|
Theorem 13.5.1 Suppose that \( S \) is a subset of the vertices of the graph \( X \) . Then \( {\lambda }_{2}\left( X\right) \leq {\lambda }_{2}\left( {X \smallsetminus S}\right) + \left| S\right| \) .
|
Proof. Let \( z \) be a unit vector of length \( n \) such that (when viewed as a function on \( V\left( X\right) \) ) its restriction to \( S \) is zero, and its restriction to \( V\left( X\right) \smallsetminus S \) is an eigenvector of \( Q\left( {X \smallsetminus S}\right) \) orthogonal to 1 and with eigenvalue \( \theta \) . Then by Corollary 13.4.2\n\n\[ \n{\lambda }_{2}\left( X\right) \leq \mathop{\sum }\limits_{{{uv} \in E\left( X\right) }}{\left( {z}_{u} - {z}_{v}\right) }^{2} \n\]\n\nHence by dividing the edges into those with none, one, or two endpoints in \( X \smallsetminus S \) we get\n\n\[ \n{\lambda }_{2}\left( X\right) \leq \mathop{\sum }\limits_{{u \in S}}\mathop{\sum }\limits_{{v \sim u}}{z}_{v}^{2} + \mathop{\sum }\limits_{{{uv} \in E\left( {X \smallsetminus S}\right) }}{\left( {z}_{u} - {z}_{v}\right) }^{2} \leq \left| S\right| + \theta . \n\]\n\nWe may take \( \theta = {\lambda }_{2}\left( {X \smallsetminus S}\right) \), and hence the result follows.
|
Yes
|
Corollary 13.5.2 For any graph \( X \) we have \( {\lambda }_{2}\left( X\right) \leq {\kappa }_{0}\left( X\right) \) .
|
It follows from our observation in Section 13.1 or from Exercise 4 that the characteristic polynomial of \( Q\left( {K}_{1, n}\right) \) is \( t{\left( t - 1\right) }^{n - 1}\left( {t - n - 1}\right) \) . This provides one family of examples where \( {\lambda }_{2} \) equals the vertex connectivity.\n\nProvided that \( X \) is not complete, the vertex connectivity of \( X \) is bounded above by the edge connectivity, which, in turn, is bounded above by the minimum valency \( \delta \left( X\right) \) of a vertex in \( X \) . We thus have the following useful inequalities for noncomplete graphs:\n\n\[{\lambda }_{2}\left( X\right) \leq {\kappa }_{0}\left( X\right) \leq {\kappa }_{1}\left( X\right) \leq \delta \left( X\right)\]
|
Yes
|
Lemma 13.6.1 Let \( X \) be a graph and let \( Y \) be obtained from \( X \) by adding an edge joining two distinct vertices of \( X \) . Then\n\n\[ \n{\lambda }_{2}\left( X\right) \leq {\lambda }_{2}\left( Y\right) \leq {\lambda }_{2}\left( X\right) + 2 \n\]
|
Proof. Suppose we get \( Y \) by joining vertices \( r \) and \( s \) of \( X \) . For any vector \( z \) we have\n\n\[ \n{z}^{T}Q\left( Y\right) z = \mathop{\sum }\limits_{{{uv} \in E\left( Y\right) }}{\left( {z}_{u} - {z}_{v}\right) }^{2} = {\left( {z}_{r} - {z}_{s}\right) }^{2} + \mathop{\sum }\limits_{{{uv} \in E\left( X\right) }}{\left( {z}_{u} - {z}_{v}\right) }^{2}. \n\]\n\nIf we choose \( z \) to be a unit eigenvector of \( Q\left( Y\right) \), orthogonal to 1, and with eigenvalue \( {\lambda }_{2}\left( Y\right) \), then by Corollary 13.4.2 we get\n\n\[ \n{\lambda }_{2}\left( Y\right) \geq {\lambda }_{2}\left( X\right) + {\left( {z}_{r} - {z}_{s}\right) }^{2} \n\]\n\n(13.4)\n\nOn the other hand, if we take \( z \) to be a unit eigenvector of \( Q\left( X\right) \) , orthogonal to 1, with eigenvalue \( {\lambda }_{2}\left( X\right) \), then by Corollary 13.4.2 we get\n\n\[ \n{\lambda }_{2}\left( Y\right) \leq {\lambda }_{2}\left( X\right) + {\left( {z}_{r} - {z}_{s}\right) }^{2} \n\]\n\n(13.5)\n\nIt follows from (13.4) that \( {\lambda }_{2}\left( X\right) \leq {\lambda }_{2}\left( Y\right) \) . We can complete the proof by appealing to (13.5). Since \( {z}_{r}^{2} + {z}_{s}^{2} \leq 1 \), it is straightforward to see that \( {\left( {z}_{r} - {z}_{s}\right) }^{2} \leq 2 \), and the result is proved.
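The quadratic-form identity at the start of the proof can be checked directly; a sketch where \( Y \) is \( {P}_{3} \) plus the edge \( \{ 0,2\} \) (an arbitrary small example, with an arbitrary vector \( z \)):

```python
def laplacian(n, edges):
    # Laplacian Q = Delta - A as a list-of-lists integer matrix
    Q = [[0] * n for _ in range(n)]
    for u, v in edges:
        Q[u][u] += 1; Q[v][v] += 1
        Q[u][v] -= 1; Q[v][u] -= 1
    return Q

def quad(Q, z):
    # the quadratic form z^T Q z
    n = len(Q)
    return sum(z[i] * Q[i][j] * z[j] for i in range(n) for j in range(n))

n = 3
X_edges = [(0, 1), (1, 2)]          # the path P_3
r, s = 0, 2
Y_edges = X_edges + [(r, s)]        # add one edge to get a triangle
z = [1, 2, 5]
lhs = quad(laplacian(n, Y_edges), z)
rhs = (z[r] - z[s]) ** 2 + quad(laplacian(n, X_edges), z)
```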
|
Yes
|
Theorem 13.6.2 Let \( X \) be a graph with \( n \) vertices and let \( Y \) be obtained from \( X \) by adding an edge joining two distinct vertices of \( X \) . Then \( {\lambda }_{i}\left( X\right) \leq \) \( {\lambda }_{i}\left( Y\right) \), for all \( i \), and \( {\lambda }_{i}\left( Y\right) \leq {\lambda }_{i + 1}\left( X\right) \) if \( i < n \) .
|
Proof. Suppose we add the edge \( {uv} \) to \( X \) to get \( Y \) . Let \( z \) be the vector of length \( n \) with \( u \) -entry and \( v \) -entry 1 and -1, respectively, and all other entries equal to 0 . Then \( Q\left( Y\right) = Q\left( X\right) + z{z}^{T} \), and if we use \( Q \) to denote \( Q\left( X\right) \), we have\n\n\[ \n{tI} - Q\left( Y\right) = {tI} - Q - z{z}^{T} = \left( {{tI} - Q}\right) \left( {I - {\left( tI - Q\right) }^{-1}z{z}^{T}}\right) .\n\]\n\nBy Lemma 8.2.4,\n\n\[ \n\det \left( {I - {\left( tI - Q\right) }^{-1}z{z}^{T}}\right) = 1 - {z}^{T}{\left( tI - Q\right) }^{-1}z\n\]\n\nand therefore\n\n\[ \n\frac{\det \left( {{tI} - Q\left( Y\right) }\right) }{\det \left( {{tI} - Q\left( X\right) }\right) } = 1 - {z}^{T}{\left( tI - Q\right) }^{-1}z.\n\]\n\nThe result now follows from Theorem 8.13.3, applied to the rational function \( \psi \left( t\right) = 1 - {z}^{T}{\left( tI - Q\right) }^{-1}z \), and the proof of Theorem 9.1.1.
|
Yes
|
Lemma 13.7.1 Let \( X \) be a graph on \( n \) vertices and let \( S \) be a subset of \( V\left( X\right) \) . Then \[ {\lambda }_{2}\left( X\right) \leq \frac{n\left| {\partial S}\right| }{\left| S\right| \left( {n - \left| S\right| }\right) } \]
|
Proof. Suppose \( \left| S\right| = a \) . Let \( z \) be the vector (viewed as a function on \( V\left( X\right) ) \) whose value is \( n - a \) on the vertices in \( S \) and \( - a \) on the vertices not in \( S \) . Then \( z \) is orthogonal to 1, so by Corollary 13.4.2 \[ {\lambda }_{2}\left( X\right) \leq \frac{\mathop{\sum }\limits_{{{uv} \in E\left( X\right) }}{\left( {z}_{u} - {z}_{v}\right) }^{2}}{\mathop{\sum }\limits_{u}{z}_{u}^{2}} = \frac{\left| {\partial S}\right| {n}^{2}}{a{\left( n - a\right) }^{2} + \left( {n - a}\right) {a}^{2}}. \] The lemma follows immediately from this.
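Replaying the proof's arithmetic with exact fractions, for \( {C}_{4} \) and \( S = \{ 0\} \) (an arbitrary small instance) the Rayleigh quotient of \( z \) works out to \( 8/3 \), which indeed bounds \( {\lambda }_{2}\left( {C}_{4}\right) = 2 \) (the value of \( {\lambda }_{2} \) is a known fact, assumed here):

```python
from fractions import Fraction

n, a = 4, 1                                   # C_4 with S = {0}, so |S| = a = 1
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
S = {0}
boundary = sum(1 for u, v in edges if (u in S) != (v in S))   # |dS| = 2
z = [n - a if u in S else -a for u in range(n)]               # the vector from the proof
quotient = Fraction(sum((z[u] - z[v]) ** 2 for u, v in edges),
                    sum(zu ** 2 for zu in z))
bound = Fraction(n * boundary, a * (n - a))                   # n|dS| / (|S|(n - |S|))
```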
|
Yes
|
Corollary 13.7.3 The bisection width of a graph \( X \) on \( {2m} \) vertices is at least \( m{\lambda }_{2}\left( X\right) /2 \) .
|
We apply this to the \( k \) -cube \( {Q}_{k} \) . In Exercise 13 it is established that \( {\lambda }_{2}\left( {Q}_{k}\right) = 2 \), from which it follows that the bisection width of the \( k \) -cube is at least \( {2}^{k - 1} \) . Since this value is easily realized, we have thus found the exact value.
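The claim for the 3-cube can be checked exhaustively; a brute-force sketch (the value \( {\lambda }_{2}\left( {Q}_{3}\right) = 2 \) is taken from Exercise 13 rather than computed):

```python
from itertools import combinations

k = 3
n = 2 ** k                                       # vertices of the k-cube are bit strings
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if bin(u ^ v).count('1') == 1]          # adjacent iff they differ in one bit
# bisection width: minimum cut over all balanced bipartitions
best = min(sum(1 for u, v in edges if (u in half) != (v in half))
           for half in map(set, combinations(range(n), n // 2)))
lam2 = 2                                         # lambda_2(Q_3), taken as known
lower = (n // 2) * lam2 // 2                     # m * lambda_2 / 2 with m = n/2
```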
|
No
|
Lemma 13.7.4 If \( X \) is a graph with \( n \) vertices, then \( \operatorname{bip}\left( X\right) \leq n{\lambda }_{\infty }\left( X\right) /4 \) .
|
Proof. By applying Lemma 13.7.1 to the complement of \( X \) we get\n\n\[ \left| {\partial S}\right| \leq \left| S\right| \left( {n - \left| S\right| }\right) {\lambda }_{\infty }\left( X\right) /n \leq n{\lambda }_{\infty }\left( X\right) /4 \]\n\nwhich is the desired inequality.
|
Yes
|
Lemma 13.8.1 Let \( S \) be a set of points in \( {\mathbb{R}}^{m} \). Then the vector \( x \) in \( {\mathbb{R}}^{m} \) minimizes \( \mathop{\sum }\limits_{{y \in S}}\parallel x - y{\parallel }^{2} \) if and only if
|
Proof. Let \( \widehat{y} \) be the centroid of the set \( S \), i.e., \[ \widehat{y} = \frac{1}{\left| S\right| }\mathop{\sum }\limits_{{y \in S}}y \] Then \[ \mathop{\sum }\limits_{{y \in S}}\parallel x - y{\parallel }^{2} = \mathop{\sum }\limits_{{y \in S}}\parallel \left( {x - \widehat{y}}\right) + \left( {\widehat{y} - y}\right) {\parallel }^{2} \] \[ = \left| S\right| \parallel x - \widehat{y}{\parallel }^{2} + \mathop{\sum }\limits_{{y \in S}}\parallel \widehat{y} - y{\parallel }^{2} + 2\mathop{\sum }\limits_{{y \in S}}\langle x - \widehat{y},\widehat{y} - y\rangle \] \[ = \left| S\right| \parallel x - \widehat{y}{\parallel }^{2} + \mathop{\sum }\limits_{{y \in S}}\parallel \widehat{y} - y{\parallel }^{2}. \] Therefore, this is a minimum if and only if \( x = \widehat{y} \).
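The decomposition in the proof, \( \mathop{\sum }\limits_{{y \in S}}\parallel x - y{\parallel }^{2} = \left| S\right| \parallel x - \widehat{y}{\parallel }^{2} + \mathop{\sum }\limits_{{y \in S}}\parallel \widehat{y} - y{\parallel }^{2} \), can be checked on a small point set; the points and the test point are arbitrary illustrations:

```python
from fractions import Fraction

S = [(0, 0), (2, 0), (1, 3)]
m = len(S)
# centroid, computed exactly
centroid = tuple(Fraction(sum(p[i] for p in S), m) for i in range(2))

def f(x):
    # total squared distance from x to the points of S
    return sum((x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2 for p in S)

x = (2, 2)
dist2 = (x[0] - centroid[0]) ** 2 + (x[1] - centroid[1]) ** 2
```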
|
Yes
|
Lemma 13.8.2 Let \( F \) be a subset of the vertices of \( X \), let \( \rho \) be a representation of \( X \), and let \( R \) be the matrix whose rows are the images of the vertices of \( X \). Let \( Q \) be the Laplacian of \( X \). Then \( \rho \) is barycentric relative to \( F \) if and only if the rows of \( {QR} \) corresponding to the vertices in \( X \smallsetminus F \) are all zero.
|
Proof. The vector \( x \) is the centroid of the vectors in \( S \) if and only if\n\n\[ \mathop{\sum }\limits_{{y \in S}}\left( {x - y}\right) = 0 \]\n\nIf \( u \) has valency \( d \), the \( u \) -row of \( {QR} \) is equal to\n\n\[ {d\rho }\left( u\right) - \mathop{\sum }\limits_{{v \sim u}}\rho \left( v\right) = \mathop{\sum }\limits_{{v \sim u}}\left( {\rho \left( u\right) - \rho \left( v\right) }\right) .\]\n\nThe lemma follows.
|
Yes
|
Lemma 13.8.3 Let \( X \) be a connected graph, let \( F \) be a subset of the vertices of \( X \), and let \( \sigma \) be a map from \( F \) into \( {\mathbb{R}}^{m} \). If \( X \smallsetminus F \) is connected, there is a unique \( m \) -dimensional representation \( \rho \) of \( X \) that extends \( \sigma \) and is barycentric relative to \( F \).
|
Proof. Let \( Q \) be the Laplacian of \( X \). Assume that we have\n\n\[ Q = \left( \begin{matrix} {Q}_{1} & {B}^{T} \\ B & {Q}_{2} \end{matrix}\right) \]\n\nwhere the rows and columns of \( {Q}_{1} \) are indexed by the vertices of \( F \). Let \( R \) be the matrix describing the representation \( \rho \). We may assume\n\n\[ R = \left( \begin{array}{l} {R}_{1} \\ {R}_{2} \end{array}\right) \]\n\nwhere \( {R}_{1} \) gives the values of \( \sigma \) on \( F \). Then \( \rho \) extends \( \sigma \) and is barycentric (relative to \( F \)) if and only if\n\n\[ \left( \begin{matrix} {Q}_{1} & {B}^{T} \\ B & {Q}_{2} \end{matrix}\right) \left( \begin{array}{l} {R}_{1} \\ {R}_{2} \end{array}\right) = \left( \begin{matrix} {Y}_{1} \\ 0 \end{matrix}\right) \]\n\nThen \( B{R}_{1} + {Q}_{2}{R}_{2} = 0 \), and so if \( {Q}_{2} \) is invertible, this yields that\n\n\[ {R}_{2} = - {Q}_{2}^{-1}B{R}_{1},\;{Y}_{1} = \left( {{Q}_{1} - {B}^{T}{Q}_{2}^{-1}B}\right) {R}_{1}. \]\n\nWe complete the proof by showing that since \( X \smallsetminus F \) is connected, \( {Q}_{2} \) is invertible. Let \( Y = X \smallsetminus F \). Then there is a nonnegative diagonal matrix \( {\Delta }_{2} \) such that\n\n\[ {Q}_{2} = Q\left( Y\right) + {\Delta }_{2} \]\n\nSince \( X \) is connected, \( {\Delta }_{2} \neq 0 \). We prove that \( {Q}_{2} \) is positive definite. We have\n\n\[ {x}^{T}{Q}_{2}x = {x}^{T}Q\left( Y\right) x + {x}^{T}{\Delta }_{2}x. \]\n\nBecause \( {x}^{T}Q\left( Y\right) x = \mathop{\sum }\limits_{{{ij} \in E\left( Y\right) }}{\left( {x}_{i} - {x}_{j}\right) }^{2} \), we see that \( {x}^{T}Q\left( Y\right) x \geq 0 \) and that \( {x}^{T}Q\left( Y\right) x = 0 \) if and only if \( x = c\mathbf{1} \) for some \( c \). But now \( {x}^{T}{\Delta }_{2}x = {c}^{2}{\mathbf{1}}^{T}{\Delta }_{2}\mathbf{1} \), and this is positive unless \( c = 0 \). Therefore, \( {x}^{T}{Q}_{2}x > 0 \) unless \( x = 0 \); in other words, \( {Q}_{2} \) is positive definite, and consequently it is invertible.
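A minimal instance of the formula \( {R}_{2} = - {Q}_{2}^{-1}B{R}_{1} \): the path \( {P}_{4} \) with \( F = \{ 0,3\} \) pinned at 0 and 1 on the line (an illustrative choice); the interior vertices should land at \( 1/3 \) and \( 2/3 \), each the average of its neighbours. Solved here with exact \( 2 \times 2 \) arithmetic:

```python
from fractions import Fraction

# Path 0 - 1 - 2 - 3 with F = {0, 3}; blocks of the Laplacian:
Q2 = [[2, -1], [-1, 2]]          # rows/columns for the interior vertices 1, 2
B = [[-1, 0], [0, -1]]           # B[i][j]: interior vertex (1, 2) vs fixed vertex (0, 3)
R1 = [Fraction(0), Fraction(1)]  # sigma(0) = 0, sigma(3) = 1

# R2 = -Q2^{-1} B R1, via the explicit 2x2 inverse
detQ2 = Q2[0][0] * Q2[1][1] - Q2[0][1] * Q2[1][0]
inv = [[Fraction(Q2[1][1], detQ2), Fraction(-Q2[0][1], detQ2)],
       [Fraction(-Q2[1][0], detQ2), Fraction(Q2[0][0], detQ2)]]
BR1 = [sum(B[i][j] * R1[j] for j in range(2)) for i in range(2)]
R2 = [-sum(inv[i][j] * BR1[j] for j in range(2)) for i in range(2)]
```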
|
Yes
|
Lemma 13.9.1 Let \( X \) be a graph with a generalized Laplacian \( Q \) . If \( X \) is connected, then \( {\lambda }_{1}\left( Q\right) \) is simple and the corresponding eigenvector can be taken to have all its entries positive.
|
Proof. Choose a constant \( c \) such that all diagonal entries of \( Q - {cI} \) are nonpositive. By the Perron-Frobenius theorem (Theorem 8.8.1), the largest eigenvalue of \( - Q + {cI} \) is simple and the associated eigenvector may be taken to have only positive entries.
|
Yes
|
Lemma 13.9.2 Let \( x \) be an eigenvector of \( Q \) with eigenvalue \( \lambda \) and let \( Y \) be a positive nodal domain of \( x \) . Then \( \left( {Q - {\lambda I}}\right) {x}_{Y} \leq 0 \) .
|
Proof. Let \( y \) denote the restriction of \( x \) to \( V\left( Y\right) \) and let \( z \) be the restriction of \( x \) to \( V\left( X\right) \smallsetminus {\operatorname{supp}}_{ + }\left( x\right) \) . Let \( {Q}_{Y} \) be the submatrix of \( Q \) with rows and columns indexed by \( V\left( Y\right) \), and let \( {B}_{Y} \) be the submatrix of \( Q \) with rows indexed by \( V\left( Y\right) \) and with columns indexed by \( V\left( X\right) \smallsetminus {\operatorname{supp}}_{ + }\left( x\right) \) . Since \( {Qx} = {\lambda x} \), we have\n\n\[ \n{Q}_{Y}y + {B}_{Y}z = {\lambda y}.\n\]\n\n(13.7)\n\nSince \( {B}_{Y} \) and \( z \) are nonpositive, \( {B}_{Y}z \) is nonnegative, and therefore\n\n\[ \n{Q}_{Y}y \leq {\lambda y}.\n\]\n\nIt is not necessary for \( x \) to be an eigenvector for the conclusion of this lemma to hold; it is sufficient that \( \left( {Q - {\lambda I}}\right) x \leq 0 \) . Given our discussion in Section 8.7, we might say that it suffices that \( x \) be \( \lambda \) -superharmonic.
|
Yes
|
Corollary 13.9.3 Let \( x \) be an eigenvector of \( Q \) with eigenvalue \( \lambda \), and let \( U \) be the subspace spanned by the vectors \( {x}_{Y} \), where \( Y \) ranges over the positive nodal domains of \( x \) . If \( u \in U \), then \( {u}^{T}\left( {Q - {\lambda I}}\right) u \leq 0 \) .
|
Proof. If \( u = \mathop{\sum }\limits_{Y}{a}_{Y}{x}_{Y} \), then using (13.6), we find that\n\n\[ \n{u}^{T}\left( {Q - {\lambda I}}\right) u = \mathop{\sum }\limits_{Y}{a}_{Y}^{2}{x}_{Y}^{T}\left( {Q - {\lambda I}}\right) {x}_{Y} \n\]\n\nand so the claim follows from the previous lemma.
|
Yes
|
Theorem 13.9.4 Let \( X \) be a connected graph, let \( Q \) be a generalized Laplacian of \( X \), and let \( x \) be an eigenvector for \( Q \) with eigenvalue \( {\lambda }_{2}\left( Q\right) \) . If \( x \) has minimal support, then \( {\operatorname{supp}}_{ + }\left( x\right) \) and \( {\operatorname{supp}}_{ - }\left( x\right) \) induce connected subgraphs of \( X \) .
|
Proof. Suppose that \( v \) is a \( {\lambda }_{2} \) -eigenvector with distinct positive nodal domains \( Y \) and \( Z \) . Because \( X \) is connected, \( {\lambda }_{1} \) is simple and the span of \( {v}_{Y} \) and \( {v}_{Z} \) contains a vector, \( u \) say, orthogonal to the \( {\lambda }_{1} \) -eigenspace.\n\nNow, \( u \) can be expressed as a linear combination of eigenvectors of \( Q \) with eigenvalues at least \( {\lambda }_{2} \) ; consequently, \( {u}^{T}\left( {Q - {\lambda }_{2}I}\right) u \geq 0 \) with equality if and only if \( u \) is a linear combination of eigenvectors with eigenvalue \( {\lambda }_{2} \) .\n\nOn the other hand, by Corollary 13.9.3, we have \( {u}^{T}\left( {Q - {\lambda }_{2}I}\right) u \leq 0 \), and so \( {u}^{T}\left( {Q - {\lambda }_{2}I}\right) u = 0 \) . Therefore, \( u \) is an eigenvector of \( Q \) with eigenvalue \( {\lambda }_{2} \) and support equal to \( V\left( Y\right) \cup V\left( Z\right) \) .\n\nAny \( {\lambda }_{2} \) -eigenvector has both positive and negative nodal domains, because it is orthogonal to the \( {\lambda }_{1} \) -eigenspace. Therefore, the preceding argument shows that an eigenvector with distinct nodal domains of the same sign does not have minimal support. Therefore, since \( x \) has minimal support, it must have precisely one positive and one negative nodal domain.
|
Yes
|
Lemma 13.9.5 Let \( Q \) be a generalized Laplacian of a graph \( X \) and let \( x \) be an eigenvector of \( Q \) . Then any vertex not in \( \operatorname{supp}\left( x\right) \) either has no neighbours in \( \operatorname{supp}\left( x\right) \), or has neighbours in both \( {\operatorname{supp}}_{ + }\left( x\right) \) and \( {\operatorname{supp}}_{ - }\left( x\right) \) .
|
Proof. Suppose that \( u \notin \operatorname{supp}\left( x\right) \), so \( {x}_{u} = 0 \) . Then\n\n\[ 0 = {\left( Qx\right) }_{u} = {Q}_{uu}{x}_{u} + \mathop{\sum }\limits_{{v \sim u}}{Q}_{uv}{x}_{v} = \mathop{\sum }\limits_{{v \sim u}}{Q}_{uv}{x}_{v}. \]\n\nSince \( {Q}_{uv} < 0 \) when \( v \) is adjacent to \( u \), either \( {x}_{v} = 0 \) for all vertices adjacent to \( u \), or the sum has both positive and negative terms. In the former case \( u \) is not adjacent to any vertex in \( \operatorname{supp}\left( x\right) \) ; in the latter it is adjacent to vertices in both \( {\operatorname{supp}}_{ + }\left( x\right) \) and \( {\operatorname{supp}}_{ - }\left( x\right) \) .
|
Yes
|
Lemma 13.10.1 Let \( Q \) be a generalized Laplacian for the graph \( X \) . If \( X \) is 3-connected and planar, then no eigenvector of \( Q \) with eigenvalue \( {\lambda }_{2}\left( Q\right) \) vanishes on three vertices in the same face of any embedding of \( X \) .
|
Proof. Let \( x \) be an eigenvector of \( Q \) with eigenvalue \( {\lambda }_{2} \), and suppose that \( u, v \), and \( w \) are three vertices not in \( \operatorname{supp}\left( x\right) \) lying in the same face. We may assume that \( x \) has minimal support, and hence \( {\operatorname{supp}}_{ + }\left( x\right) \) and \( {\operatorname{supp}}_{ - }\left( x\right) \) induce connected subgraphs of \( X \) . Let \( p \) be a vertex in \( {\operatorname{supp}}_{ + }\left( x\right) \) . Since \( X \) is 3-connected, Menger’s theorem implies that there are three paths in \( X \) joining \( p \) to \( u, v \), and \( w \) such that any two of these paths have only the vertex \( p \) in common. It follows that there are three vertex-disjoint paths \( {P}_{u},{P}_{v} \), and \( {P}_{w} \) joining \( u, v \), and \( w \), respectively, to some triple of vertices in \( N\left( {{\operatorname{supp}}_{ + }\left( x\right) }\right) \) . Each of these three vertices is also adjacent to a vertex in \( {\operatorname{supp}}_{ - }\left( x\right) \) . Since both the positive and negative support induce connected graphs, we may now contract all vertices in \( {\operatorname{supp}}_{ + }\left( x\right) \) to a single vertex, all vertices in \( {\operatorname{supp}}_{ - }\left( x\right) \) to another vertex, and each of the paths \( {P}_{u},{P}_{v} \), and \( {P}_{w} \) to \( u, v \), and \( w \), respectively. The result is a planar graph which contains a copy of \( {K}_{2,3} \) with its three vertices of valency two all lying on the same face. This is impossible.
|
Yes
|
Corollary 13.10.2 Let \( Q \) be a generalized Laplacian for the graph \( X \) . If \( X \) is 3-connected and planar, then \( {\lambda }_{2}\left( Q\right) \) has multiplicity at most three.
|
Proof. Suppose that \( {\lambda }_{2} \) has multiplicity at least four, and choose any three vertices lying in a common face of some planar embedding of \( X \) . Requiring an eigenvector to vanish on these three vertices imposes three linear conditions on an eigenspace of dimension at least four, so there is a nonzero eigenvector with eigenvalue \( {\lambda }_{2} \) that vanishes on all three. This contradicts Lemma 13.10.1, and thus we conclude that \( {\lambda }_{2} \) has multiplicity at most three.
|
No
|
Lemma 13.10.3 Let \( X \) be a 2-connected plane graph with a generalized Laplacian \( Q \), and let \( x \) be an eigenvector of \( Q \) with eigenvalue \( {\lambda }_{2}\left( Q\right) \) and with minimal support. If \( u \) and \( v \) are adjacent vertices of a face \( F \) such that \( {x}_{u} = {x}_{v} = 0 \), then \( F \) does not contain vertices from both the positive and negative support of \( x \) .
|
Proof. Since \( X \) is 2-connected, the face \( F \) is a cycle. Suppose that \( F \) contains vertices \( p \) and \( q \) such that \( {x}_{p} > 0 \) and \( {x}_{q} < 0 \) . Without loss of generality we can assume that they occur in the order \( u, v, q \), and \( p \) clockwise around the face \( F \), and that the portion of \( F \) from \( q \) to \( p \) contains only vertices not in \( \operatorname{supp}\left( x\right) \) . Let \( {v}^{\prime } \) be the first vertex not in \( \operatorname{supp}\left( x\right) \) encountered moving anticlockwise around \( F \) from \( q \), and let \( {u}^{\prime } \) be the first vertex not in \( \operatorname{supp}\left( x\right) \) encountered moving clockwise around \( F \) from \( p \) . Then \( {u}^{\prime },{v}^{\prime }, q \), and \( p \) are distinct vertices of \( F \) and occur in that order around \( F \) . Let \( P \) be a path from \( {v}^{\prime } \) to \( p \) all of whose vertices other than \( {v}^{\prime } \) are in \( {\operatorname{supp}}_{ + }\left( x\right) \), and let \( N \) be a path from \( {u}^{\prime } \) to \( q \) all of whose vertices other than \( {u}^{\prime } \) are in \( {\operatorname{supp}}_{ - }\left( x\right) \) . The existence of the paths \( P \) and \( N \) is a consequence of Corollary 13.9.4 and Lemma 13.9.5. Because \( F \) is a face, the paths \( P \) and \( N \) must both lie outside \( F \), and since their endpoints are interleaved around \( F \), they must cross. This is impossible, since \( P \) and \( N \) are vertex-disjoint, and so we have the necessary contradiction.
|
Yes
|
Corollary 13.10.4 Let \( X \) be a graph on \( n \) vertices with a generalized Laplacian \( Q \) . If \( X \) is 2-connected and outerplanar, then \( {\lambda }_{2}\left( Q\right) \) has multiplicity at most two.
|
Proof. Since \( X \) is outerplanar, it has a planar embedding with every vertex lying on one face \( F \) . If \( {\lambda }_{2} \) had multiplicity greater than two, then we could find an eigenvector \( x \) with eigenvalue \( {\lambda }_{2} \), of minimal support, that vanished on two adjacent vertices of \( F \) . Since \( x \) must be orthogonal to the eigenvector with eigenvalue \( {\lambda }_{1} \), which can be taken to be positive, both \( {\operatorname{supp}}_{ + }\left( x\right) \) and \( {\operatorname{supp}}_{ - }\left( x\right) \) must be nonempty. As every vertex of \( X \) lies on \( F \), the face \( F \) contains vertices from both the positive and negative support of \( x \), contradicting Lemma 13.10.3.
|
Yes
|
Lemma 13.11.1 Let \( X \) be a 3-connected planar graph with a generalized Laplacian \( Q \) such that \( {\lambda }_{2}\left( Q\right) \) has multiplicity three. Let \( \rho \) be a representation given by a matrix \( U \) whose columns form a basis for the \( {\lambda }_{2} \) -eigenspace of \( Q \) . If \( F \) is a face in some planar embedding of \( X \), then the images under \( \rho \) of any two vertices in \( F \) are linearly independent.
|
Proof. Assume by way of contradiction that \( u \) and \( v \) are two vertices in a face of \( X \) such that \( \rho \left( u\right) = {\alpha \rho }\left( v\right) \) for some real number \( \alpha \), and let \( w \) be a third vertex in the same face. Then we can find a linear combination of the columns of \( U \) that vanishes on the vertices \( u, v \), and \( w \), thus contradicting Lemma 13.10.1.
|
Yes
|
Lemma 13.11.2 Let \( X \) be a 2-connected planar graph. Suppose it has a planar embedding where the neighbours of the vertex \( u \) are, in cyclic order, \( {v}_{1},\ldots ,{v}_{k} \) . Let \( Q \) be a generalized Laplacian for \( X \) such that \( {\lambda }_{2}\left( Q\right) \) has multiplicity three. Then the planes spanned by the pairs \( \left\{ {\rho \left( u\right) ,\rho \left( {v}_{i}\right) }\right\} \) are arranged in the same cyclic order around the line spanned by \( \rho \left( u\right) \) as the vertices \( {v}_{i} \) are arranged around \( u \) .
|
Proof. Let \( x \) be an eigenvector with eigenvalue \( {\lambda }_{2} \) with minimal support such that \( x\left( u\right) = x\left( {v}_{1}\right) = 0 \) . (Here we are viewing \( x \) as a function on \( V\left( X\right) \) .) By Lemma 13.10.1, we see that neither \( x\left( {v}_{2}\right) \) nor \( x\left( {v}_{k}\right) \) can be zero, and replacing \( x \) by \( - x \) if needed, we may suppose that \( x\left( {v}_{2}\right) > 0 \) . Given this, we prove that \( x\left( {v}_{k}\right) < 0 \).\n\nSuppose that there are some values \( h, i \), and \( j \) such that \( 2 \leq h < i < j \leq \) \( k \) and \( x\left( {v}_{h}\right) > 0, x\left( {v}_{j}\right) > 0 \), and \( x\left( {v}_{i}\right) \leq 0 \) . Since \( {\operatorname{supp}}_{ + }\left( x\right) \) is connected, the vertices \( {v}_{h} \) and \( {v}_{j} \) are joined in \( X \) by a path with all vertices in \( {\operatorname{supp}}_{ + }\left( x\right) \) . Taken with \( u \), this path forms a cycle in \( X \) that separates \( {v}_{1} \) from \( {v}_{i} \) . Since \( X \) is 2-connected, there are two vertex-disjoint paths \( {P}_{1} \) and \( {P}_{i} \) joining \( {v}_{1} \) and \( {v}_{i} \) respectively to vertices in \( N\left( {{\operatorname{supp}}_{ + }\left( x\right) }\right) \) . The end-vertices of these paths other than \( {v}_{1} \) and \( {v}_{i} \) are adjacent to vertices in \( {\operatorname{supp}}_{ - }\left( x\right) \), and thus we have found two vertices in \( {\operatorname{supp}}_{ - }\left( x\right) \) that are separated by vertices in \( {\operatorname{supp}}_{ + }\left( x\right) \) . This contradicts the fact that \( {\operatorname{supp}}_{ - }\left( x\right) \) is connected.\n\nIt follows that there is exactly one index \( i \) such that \( x\left( {v}_{i}\right) > 0 \) and \( x\left( {v}_{i + 1}\right) \leq 0 \) . 
Since \( x\left( u\right) = 0 \) and \( x\left( {v}_{2}\right) > 0 \), it follows from Lemma 13.9.5 that \( u \) has a neighbour in \( {\operatorname{supp}}_{ - }\left( x\right) \), and therefore \( x\left( {v}_{k}\right) \) must be negative.\n\nFrom this we see that if we choose \( x \) such that \( x\left( u\right) = x\left( {v}_{i}\right) = 0 \) and \( x\left( {v}_{i + 1}\right) > 0 \), then \( x\left( {v}_{i - 1}\right) < 0 \) (where the subscripts are computed modulo \( k) \) . The lemma follows at once from this.
|
Yes
|
Lemma 14.1.2 If \( X \) is a graph, then the signed characteristic vector of each cut lies in the cut space of \( X \) . The nonzero elements of the cut space with minimal support are scalar multiples of the signed characteristic vectors of the bonds of \( X \) .
|
Proof. First let \( C \) be a cut in \( X \) and suppose that \( V\left( +\right) \) and \( V\left( -\right) \) are its shores. Let \( y \) be the characteristic vector in \( {\mathbb{R}}^{V} \) of \( V\left( +\right) \) and consider the vector \( {D}^{T}y \) . It takes the value 0 on any edge with both ends in the same shore of \( C \) and is equal to \( \pm 1 \) on the edges of \( C \) ; its value is positive on \( e \) if and only if the head of \( e \) lies in \( V\left( +\right) \) . So \( {D}^{T}y \) is the signed characteristic vector of \( C \) .\n\nNow, let \( x \) be a nonzero element of the cut space of \( X \) . Then \( x = {D}^{T}y \) for some nonzero vector \( y \in {\mathbb{R}}^{V} \) . The vector \( y \) determines a partition of \( V\left( X\right) \) with cells\n\n\[ S\left( \alpha \right) = \left\{ {u \in V\left( X\right) \mid {y}_{u} = \alpha }\right\} \]\n\nAn edge is in \( \operatorname{supp}\left( x\right) \) if and only if its endpoints lie in distinct cells of this partition, so the support of \( x \) is determined by this partition alone. If there are edges between more than one pair of cells, \( x \) does not have minimal support, because the partition created by merging two of the cells joined by an edge determines an element \( {x}^{\prime } \) with \( \operatorname{supp}\left( {x}^{\prime }\right) \subset \operatorname{supp}\left( x\right) \) .\n\nTherefore, if \( x \) has minimal support, the only edges between distinct cells all lie between two cells \( S\left( \alpha \right) \) and \( S\left( \beta \right) \) . This implies that \( x \) is a scalar multiple of the signed characteristic vector of the cut with shores \( S\left( \alpha \right) \) and \( V\left( X\right) \smallsetminus S\left( \alpha \right) \) . Finally, we observe that if \( x \) is the signed characteristic vector of a cut, then it has minimal support if and only if that cut is a bond.
|
Yes
|
Lemma 14.1.3 Let \( X \) be a connected graph and let \( T \) be a spanning tree of \( X \) . Then the signed characteristic vectors of the \( n - 1 \) cuts \( C\left( {T, e}\right) \), for \( e \in E\left( T\right) \), form a basis for the cut space of \( X \) .
|
Proof. An edge \( e \in E\left( T\right) \) is in the cut \( C\left( {T, e}\right) \) but not in any of the cuts \( C\left( {T, f}\right) \) for \( f \neq e \) . Therefore, the signed characteristic vectors of these cuts are linearly independent, and since there are \( n - 1 \) of them and the cut space has dimension \( n - 1 \), they form a basis.
|
Yes
|
Theorem 14.2.2 If \( X \) is a graph, then the signed characteristic vector of each cycle lies in the flow space of \( X \) . The nonzero elements of the flow space with minimal support are scalar multiples of the signed characteristic vectors of the cycles of \( X \) .
|
Proof. If \( C \) is a cycle with signed characteristic vector \( z \), then it is a straightforward exercise to verify that \( {Dz} = 0 \) .\n\nSuppose, then, that \( y \) lies in the flow space of \( X \) and that its support is minimal. Let \( Y \) denote the subgraph of \( X \) formed by the edges in \( \operatorname{supp}\left( y\right) \) . Since \( {Dy} = 0 \), any vertex that lies in an edge of \( Y \) must lie in at least two edges of \( Y \) . Hence \( Y \) has minimum valency at least two, and therefore it contains a cycle.\n\nSuppose that \( C \) is a cycle formed from edges in \( Y \), with signed characteristic vector \( z \) . Then, for any real number \( \alpha \), the vector\n\n\[ \n{y}^{\prime } = y + {\alpha z} \n\]\n\nis in the flow space of \( X \) and has support contained in \( E\left( Y\right) \) .\n\nBy choosing \( \alpha \) appropriately, we can guarantee that there is an edge of \( C \) not in \( \operatorname{supp}\left( {y}^{\prime }\right) \), so that \( \operatorname{supp}\left( {y}^{\prime }\right) \) is properly contained in \( \operatorname{supp}\left( y\right) \) . But since \( y \) has minimal support, this implies that \( {y}^{\prime } = 0 \) , and hence \( y \) is a scalar multiple of \( z \) .
|
No
|
Corollary 14.2.3 The flow space of \( X \) is spanned by the signed characteristic vectors of its cycles.
|
There are also a number of natural bases for the flow space of a graph. If \( F \) is a maximal spanning forest of \( X \), then any edge not in \( F \) is called a chord of \( F \) . If \( e \) is a chord of \( F \), then \( e \) together with the path in \( F \) from the head of \( e \) to the tail of \( e \) is a cycle in \( X \) . If \( X \) has \( n \) vertices, \( m \) edges, and \( c \) connected components, then this provides us with \( m - n + c \) cycles. Since each chord is in precisely one of these cycles, the signed characteristic vectors of these cycles are linearly independent in \( {\mathbb{R}}^{E} \), and hence they form a basis for the flow space of \( X \) .
|
Yes
|
Theorem 14.2.4 Let \( X \) be a graph with \( n \) vertices, \( m \) edges, and \( c \) connected components. Suppose that the rows of the \( \left( {n - c}\right) \times m \) matrix\n\n\[ M = \left( \begin{array}{ll} I & R \end{array}\right) \]\n\nform a basis for the cut space of \( X \) . Then the rows of the \( \left( {m - n + c}\right) \times m \) matrix\n\n\[ N = \left( \begin{array}{ll} - {R}^{T} & I \end{array}\right) \]\n\nform a basis for the flow space of \( X \) .
|
Proof. It is obvious that \( M{N}^{T} = 0 \) . Therefore, the rows of \( N \) lie in the flow space of \( X \), and since they are linearly independent and \( m - n + c \) in number, which is the dimension of the flow space, they form a basis.
|
Yes
|
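A minimal sketch of this duality, using the 4-cycle-plus-chord graph as an illustrative example (the particular \( R \) below comes from the spanning tree \( \{e_0, e_1, e_2\} \) of that graph and is an assumption of the example): build \( M = (I\;R) \) and \( N = (-R^{T}\;I) \), and check that the rows of \( N \) really are flows.

```python
import numpy as np

# Cut-space basis in the form M = (I  R) for the graph with edges
# e0=01, e1=12, e2=23, e3=03, e4=02 (tail -> head), obtained from
# the spanning tree {e0, e1, e2}.
R = np.array([[1, 1],
              [1, 1],
              [1, 0]])
M = np.hstack([np.eye(3), R])
N = np.hstack([-R.T, np.eye(2)])        # claimed flow-space basis

assert np.allclose(M @ N.T, 0)          # rows of N orthogonal to the cut space

# Sanity check against the incidence matrix: rows of N are genuine flows.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
D = np.zeros((4, 5))
for e, (t, h) in enumerate(edges):
    D[t, e], D[h, e] = -1, 1
assert np.allclose(D @ N.T, 0)
assert np.linalg.matrix_rank(N) == 5 - 4 + 1   # m - n + c
print("flow basis:")
print(N)
```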
Theorem 14.3.1 If \( X \) is a plane graph, then a set of edges is a cycle in \( X \) if and only if it is a bond in the dual graph \( {X}^{ * } \) .
|
Proof. We shall show that a set of edges \( D \subseteq E\left( X\right) \) contains a cycle of \( X \) if and only if it contains a bond of \( {X}^{ * } \) (here we are identifying the edges of \( X \) and \( {X}^{ * } \) ). If \( D \) contains a cycle \( C \), then this forms a closed curve in the plane, and every face of \( X \) is either inside or outside \( C \) . So \( C \) is a cut in \( {X}^{ * } \) whose shores are the faces inside \( C \) and the faces outside \( C \) . Hence \( D \) contains a bond of \( {X}^{ * } \) . Conversely, if \( D \) does not contain a cycle, then \( D \) does not enclose any region of the plane, and there is a path between any two faces of \( {X}^{ * } \) that uses only edges not in \( D \) . Therefore, \( D \) does not contain a bond of \( {X}^{ * } \) .
|
Yes
|
Lemma 14.3.3 If \( X \) is a connected plane graph and \( T \) a spanning tree of \( X \), then \( E\left( X\right) \smallsetminus E\left( T\right) \) is a spanning tree of \( {X}^{ * } \) .
|
Proof. The tree \( T \) contains no cycle of \( X \), and therefore \( T \) contains no bond of \( {X}^{ * } \) . Therefore, the graph with vertex set \( V\left( {X}^{ * }\right) \) and edge set \( E\left( X\right) \smallsetminus E\left( T\right) \) is connected. Euler’s formula shows that \( \left| {E\left( X\right) \smallsetminus E\left( T\right) }\right| = \) \( \left| {V\left( {X}^{ * }\right) }\right| - 1 \), and the result follows.
|
Yes
|
Lemma 14.4.1 Let \( Y \) be a graph with no cut-edges and let \( X \) be a graph obtained by adding an ear to \( Y \) . Then \( X \) has no cut-edges.
|
Proof. Assume that \( X \) is obtained from \( Y \) by adding a path \( P \), and suppose \( e \in E\left( X\right) \) . If \( e \in E\left( P\right) \), then \( X \smallsetminus e \) is connected, because the two segments of \( P \smallsetminus e \) remain attached to the connected graph \( Y \) . If \( e \in E\left( Y\right) \), then \( Y \smallsetminus e \) is connected (because \( Y \) has no cut-edges), and hence \( X \smallsetminus e \) is connected. This shows that \( X \) has no cut-edges.
|
Yes
|
Theorem 14.4.2 A connected graph \( X \) has an ear decomposition if and only if it has no cut-edges.
|
Proof. It remains only to prove that \( X \) has an ear decomposition if it has no cut-edges. In fact, we will prove something slightly stronger, which is that \( X \) has an ear decomposition starting with any cycle. Let \( {Y}_{0} \) be any cycle of \( X \) and form a sequence of graphs \( {Y}_{0},{Y}_{1},\ldots \) as follows. If \( {Y}_{i} \neq X \) , then there is an edge \( e = {uv} \in E\left( X\right) \smallsetminus E\left( {Y}_{i}\right) \), and because \( X \) is connected, we may assume that \( u \in V\left( {Y}_{i}\right) \) . Since \( e \) is not a cut-edge, it lies in a cycle of \( X \), and the portion of this cycle from \( u \) along \( e \) until it first returns to \( V\left( {Y}_{i}\right) \) is an ear; this may be only one edge if \( v \in V\left( {Y}_{i}\right) \) . Form \( {Y}_{i + 1} \) from \( {Y}_{i} \) by the addition of this ear. Because \( X \) is finite, this process must terminate, yielding an ear decomposition of \( X \) .
|
Yes
|
Lemma 14.5.1 The set of all integer linear combinations of a set of linearly independent vectors in \( V \) is a lattice.
|
Proof. Suppose that \( M \) is a matrix with linearly independent columns. It will be enough to show that there is a positive constant \( \epsilon \) such that if \( y \) is a nonzero integer vector, then \( {y}^{T}{M}^{T}{My} \geq \epsilon \) . If \( y \) is a nonzero integer vector, then \( {y}^{T}y \geq 1 \) . Since \( {M}^{T}M \) is positive definite, its least eigenvalue \( \lambda \) is positive, and\n\n\[\n{y}^{T}{M}^{T}{My} \geq \lambda {y}^{T}y \geq \lambda ,\n\]\n\nso we may take \( \epsilon \) to be the least eigenvalue of \( {M}^{T}M \) .
|
Yes
|
Lemma 14.6.1 If the columns of the matrix \( M \) form an integral basis for the lattice \( \mathcal{L} \), then the columns of \( M{\left( {M}^{T}M\right) }^{-1} \) are an integral basis for its dual, \( {\mathcal{L}}^{ * } \) .
|
Proof. Let \( {a}_{1},\ldots ,{a}_{r} \) denote the columns of \( M \) and \( {b}_{1},\ldots ,{b}_{r} \) the columns of \( M{\left( {M}^{T}M\right) }^{-1} \) . Clearly, the vectors \( {b}_{1},\ldots ,{b}_{r} \) lie in the column space of \( M \), and because \( {M}^{T}M{\left( {M}^{T}M\right) }^{-1} = I \) we have\n\n\[ \left\langle {{a}_{i},{b}_{j}}\right\rangle = \left\{ \begin{array}{ll} 1, & i = j \\ 0, & \text{ otherwise. } \end{array}\right. \]\n\nTherefore, the vectors \( {b}_{1},\ldots ,{b}_{r} \) lie in \( {\mathcal{L}}^{ * } \).\n\nNow, consider any vector \( x \in {\mathcal{L}}^{ * } \), and define\n\n\[ \bar{x} \mathrel{\text{:=}} \mathop{\sum }\limits_{i}\left\langle {x,{a}_{i}}\right\rangle {b}_{i} \]\n\nThen \( \bar{x} \) is an integer linear combination of the vectors \( {b}_{i} \) . Since \( \left\langle {x - \bar{x},{a}_{i}}\right\rangle = 0 \), we have that \( {\left( x - \bar{x}\right) }^{T}M = 0 \) . Therefore, \( x - \bar{x} \) belongs to both the column space of \( M \) and its orthogonal complement, and so \( x - \bar{x} = 0 \) . Therefore, \( x \) is an integer linear combination of the vectors \( {b}_{1},\ldots ,{b}_{r} \) .
|
Yes
|
Theorem 14.6.2 If \( M \) is a matrix with linearly independent columns, then projection onto the column space of \( M \) is given by the matrix\n\n\[ P = M{\left( {M}^{T}M\right) }^{-1}{M}^{T} \]
|
This matrix has the properties that \( P = {P}^{T} \) and \( {P}^{2} = P \) ; both follow from the symmetry of \( {\left( {M}^{T}M\right) }^{-1} \) and the identity \( {M}^{T}M{\left( {M}^{T}M\right) }^{-1} = I \) . Moreover, \( {PM} = M \), so \( P \) fixes the column space of \( M \), while \( {Px} = 0 \) whenever \( {M}^{T}x = 0 \) ; hence \( P \) is the orthogonal projection onto the column space of \( M \) .
|
No
|
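The defining properties of this projection are easy to verify numerically; the matrix below is an arbitrary illustrative choice with independent columns.

```python
import numpy as np

# Any matrix with linearly independent columns will do for the check.
M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0],
              [1.0, 3.0]])

P = M @ np.linalg.inv(M.T @ M) @ M.T

assert np.allclose(P, P.T)        # symmetric
assert np.allclose(P @ P, P)      # idempotent
assert np.allclose(P @ M, M)      # fixes the column space of M

# For any v, the residual v - Pv is orthogonal to the column space.
v = np.array([1.0, -1.0, 0.0, 2.0])
assert np.allclose(M.T @ (v - P @ v), 0)
print("projection verified")
```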
Theorem 14.6.3 Suppose that \( M \) is an \( n \times r \) matrix whose columns form an integral basis for the lattice \( \mathcal{L} \) . Let \( P \) be the matrix representing orthogonal projection from \( {\mathbb{R}}^{n} \) onto the column space of \( M \) . If the greatest common divisor of the \( r \times r \) minors of \( M \) is 1, then \( {\mathcal{L}}^{ * } \) is generated by the columns of \( P \) .
|
Proof. From Theorem 14.6.2 we have that \( P = M{\left( {M}^{T}M\right) }^{-1}{M}^{T} \), and from Lemma 14.6.1 we know that the columns of \( M{\left( {M}^{T}M\right) }^{-1} \) form an integral basis for \( {\mathcal{L}}^{ * } \) . Therefore, it is sufficient to show that if \( y \in {\mathbb{Z}}^{r} \), then \( y = {M}^{T}x \) for some integer vector \( x \) . Equivalently, we must show that the lattice generated by the rows of \( M \) is \( {\mathbb{Z}}^{r} \) .\n\nLet \( \mathcal{M} \) denote the lattice generated by the rows of \( M \) . There is an \( r \times r \) matrix \( N \) whose rows form an integral basis for \( \mathcal{M} \) . The rows of \( N \) are an integral basis, so there is some integer matrix \( X \) such that \( {XN} = M \) . Because \( N \) is invertible, we have \( X = M{N}^{-1} \), so we conclude that \( M{N}^{-1} \) has integer entries. If \( {M}^{\prime } \) is any \( r \times r \) submatrix of \( M \), then \( {M}^{\prime }{N}^{-1} \) is also an integer matrix; hence \( \det N \) must divide \( \det {M}^{\prime } \) . So our hypothesis implies that \( \det N = \pm 1 \), and therefore \( \mathcal{M} = {\mathbb{Z}}^{r} \) .
|
Yes
|
Lemma 14.7.1 The flow lattice of a graph \( X \) is even if and only if \( X \) is bipartite. The cut lattice of \( X \) is even if and only if \( X \) is even.
|
Proof. If \( x \) and \( y \) are even vectors, then\n\n\[ \langle x + y, x + y\rangle = \langle x, x\rangle + 2\langle x, y\rangle + \langle y, y\rangle \]\n\nand since \( 2\langle x, y\rangle \) is even, \( x + y \) is also even. If \( X \) is bipartite, then all cycles in it have even length, so the signed characteristic vectors of cycles are even vectors; since these span the flow lattice, it is even. If \( X \) is an even graph, then each column of \( {D}^{T} \) has \( d\left( v\right) \) nonzero entries for some vertex \( v \), an even number, so the cut lattice is also spanned by a set of even vectors.\n\nThe converse is trivial in both cases.
|
Yes
|
Theorem 14.7.2 The determinant of the cut lattice of a connected graph \( X \) is equal to the number of spanning trees of \( X \) .
|
Proof. Let \( D \) be the oriented incidence matrix of \( X \), let \( u \) be a vertex of \( X \), and let \( {D}_{u} \) be the matrix obtained by deleting the row corresponding to \( u \) from \( D \) . Then the columns of \( {D}_{u}^{T} \) form an integral basis for the lattice, and so its determinant is \( \det \left( {{D}_{u}{D}_{u}^{T}}\right) \) . By Theorem 13.2.1, this equals the number of spanning trees of \( X \) .
|
Yes
|
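This instance of the matrix-tree theorem can be checked directly; the sketch below uses the 4-cycle \( C_4 \) (an illustrative choice, which has exactly four spanning trees) and counts trees by brute force as edge subsets of full rank.

```python
import numpy as np
from itertools import combinations

# 4-cycle C4, edges oriented tail -> head
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, m = 4, 4
D = np.zeros((n, m))
for e, (t, h) in enumerate(edges):
    D[t, e], D[h, e] = -1, 1

Du = D[1:, :]                        # delete the row of vertex 0
det = round(np.linalg.det(Du @ Du.T))

# Count spanning trees directly: edge subsets of size n-1 with full rank.
trees = sum(1 for S in combinations(range(m), n - 1)
            if np.linalg.matrix_rank(D[:, list(S)]) == n - 1)

assert det == trees == 4
print(det)  # -> 4
```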
Theorem 14.7.3 The determinant of the flow lattice of a connected graph \( X \) is equal to the number of spanning trees of \( X \) .
|
Proof. Suppose the rows of the matrix\n\n\[ M = \left( \begin{array}{ll} I & R \end{array}\right) \]\n\nform a basis for the cut space of \( X \) (the existence of such a basis is guaranteed by the spanning-tree construction of Section 14.1). Then the rows of the matrix\n\n\[ N = \left( \begin{array}{ll} - {R}^{T} & I \end{array}\right) \]\n\nform a basis for the flow space of \( X \) .\n\nThe columns of \( {M}^{T} \) and \( {N}^{T} \) form integral bases for the cut lattice and the flow lattice, respectively. Therefore, the determinant of the cut lattice is \( \det M{M}^{T} \), and the determinant of the flow lattice is \( \det N{N}^{T} \).\n\nBut\n\n\[ M{M}^{T} = I + R{R}^{T},\;N{N}^{T} = I + {R}^{T}R, \]\n\nand so by Lemma 8.2.4,\n\n\[ \det M{M}^{T} = \det N{N}^{T} \]\n\nand the result follows from Theorem 14.7.2.
|
Yes
|
Theorem 14.8.1 If \( X \) is a connected graph, then the matrix\n\n\[ P = \frac{1}{\tau \left( X\right) }\mathop{\sum }\limits_{T}{N}_{T} \]\n\nrepresents orthogonal projection onto the cut space of \( X \) .
|
Proof. To prove the result we will show that \( P \) is symmetric, that \( {Px} = x \) for any vector in the cut space of \( X \), and that \( {Px} = 0 \) for any vector in the flow space of \( X \) .\n\nNow,\n\n\[ {\left( {N}_{T}^{2}\right) }_{eg} = \mathop{\sum }\limits_{{f \in E\left( X\right) }}{\left( {N}_{T}\right) }_{ef}{\left( {N}_{T}\right) }_{fg} \]\n\nand \( {\left( {N}_{T}\right) }_{ef} \) can be nonzero only when \( f \in E\left( T\right) \), but for \( f \in E\left( T\right) \) we have \( {\left( {N}_{T}\right) }_{fg} = 0 \) unless \( g = f \) . Hence the sum reduces to \( {\left( {N}_{T}\right) }_{eg}{\left( {N}_{T}\right) }_{gg} = {\left( {N}_{T}\right) }_{eg} \), and so \( {N}_{T}^{2} = {N}_{T} \) . Because the column space of \( {N}_{T} \) is the cut space of \( X \), we deduce from this that \( {N}_{T}x = x \) for each vector in the cut space of \( X \), and thus that \( {Px} = x \) .\n\nNext we prove that the sum of the matrices \( {N}_{T} \) over all spanning trees of \( X \) is symmetric. Suppose \( e \) and \( f \) are edges of \( X \) . Then \( {\left( {N}_{T}\right) }_{ef} \) is zero unless \( f \in E\left( T\right) \) and \( e \in C\left( {T, f}\right) \) . Let \( {\mathcal{T}}_{f} \) denote the set of trees \( T \) such that \( f \in E\left( T\right) \) and \( e \in C\left( {T, f}\right) \) ; let \( {\mathcal{T}}_{e} \) denote the set of trees \( T \) such that \( e \in E\left( T\right) \) and \( f \in C\left( {T, e}\right) \) . Let \( {\mathcal{T}}_{f}\left( +\right) \) and \( {\mathcal{T}}_{f}\left( -\right) \) respectively denote the set of trees in \( {\mathcal{T}}_{f} \) such that the head of \( e \) lies in the positive or negative shore of \( C\left( {T, f}\right) \), and define \( {\mathcal{T}}_{e}\left( +\right) \) and \( {\mathcal{T}}_{e}\left( -\right) \) similarly. Note next that if \( T \in {\mathcal{T}}_{f} \), then \( \left( {T \smallsetminus f}\right) \cup e \) is a tree in \( {\mathcal{T}}_{e} \) .
This establishes a bijection from \( {\mathcal{T}}_{e} \) to \( {\mathcal{T}}_{f} \), and this bijection maps \( {\mathcal{T}}_{e}\left( +\right) \) to \( {\mathcal{T}}_{f}\left( +\right) \) .\n\nSince \( {\left( {N}_{T}\right) }_{ef} \) equals 1 or -1 according as the head of \( e \) is in the positive or negative shore of \( C\left( {T, f}\right) \), it follows that\n\n\[ \mathop{\sum }\limits_{T}{\left( {N}_{T}\right) }_{ef} = \mathop{\sum }\limits_{T}{\left( {N}_{T}\right) }_{fe} \]\n\nand therefore that \( P \) is symmetric.\n\nFinally, if \( x \) lies in the flow space, then \( {x}^{T}{N}_{T} = 0 \), for any tree \( T \), and so \( {x}^{T}P = 0 \), and taking the transpose we conclude that \( {Px} = 0 \) .
|
Yes
|
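The averaging formula of this theorem can be verified on the triangle \( K_3 \), which has three spanning trees. The sketch below is illustrative: it assumes the sign convention that the positive shore of \( C(T, f) \) is the component of \( T \smallsetminus f \) containing the head of \( f \), so that \( (N_T)_{ff} = 1 \), and compares the averaged sum with the orthogonal projection onto the cut space.

```python
import numpy as np
from itertools import combinations

# Triangle K3: edges e0 = 0->1, e1 = 0->2, e2 = 1->2.
edges = [(0, 1), (0, 2), (1, 2)]
n, m = 3, 3
D = np.zeros((n, m))
for e, (t, h) in enumerate(edges):
    D[t, e], D[h, e] = -1, 1

def component(rest, start):
    """Vertices reachable from `start` using the tree edges in `rest`."""
    comp = {start}
    grew = True
    while grew:
        grew = False
        for f in rest:
            t, h = edges[f]
            if (t in comp) != (h in comp):
                comp |= {t, h}
                grew = True
    return comp

trees = [S for S in combinations(range(m), n - 1)
         if np.linalg.matrix_rank(D[:, list(S)]) == n - 1]

P = np.zeros((m, m))
for T in trees:
    NT = np.zeros((m, m))
    for f in T:
        shore = component([g for g in T if g != f], edges[f][1])
        y = np.array([1.0 if u in shore else 0.0 for u in range(n)])
        NT[:, f] = D.T @ y        # signed characteristic vector of C(T, f)
    P += NT
P /= len(trees)

# Compare with the orthogonal projection onto the cut space (row space of D).
B = D.T[:, :n - 1]                # two independent columns spanning the cut space
Pref = B @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(P, Pref)
print(np.round(P, 3))
```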
Lemma 14.9.1 In an infinite chip-firing game, every vertex is fired infinitely often.
|
Proof. Since there are only a finite number of ways to place \( N \) chips on \( X \), some configuration, say \( s \), must reappear infinitely often. Let \( \sigma \) be the (nonempty) sequence of vertices fired between two consecutive occurrences of \( s \) . If there were a vertex \( v \) not fired in \( \sigma \), then no neighbour of \( v \) could be fired in \( \sigma \) either, for otherwise the number of chips on \( v \) would increase. Since \( X \) is connected, it would follow that no vertex at all is fired in \( \sigma \), which is impossible. Hence every vertex occurs in \( \sigma \), and since \( s \) recurs infinitely often, every vertex is fired infinitely often.
|
No
|
Theorem 14.9.2 Let \( X \) be a graph with \( n \) vertices and \( m \) edges and consider the chip-firing games on \( X \) with \( N \) chips. Then\n\n(a) If \( N > {2m} - n \), the game is infinite.\n\n(b) If \( m \leq N \leq {2m} - n \), the game may be finite or infinite.\n\n(c) If \( N < m \), the game is finite.
|
Proof. Let \( d\left( v\right) \) be the valency of the vertex \( v \) . If each vertex has at most \( d\left( v\right) - 1 \) chips on it, then\n\n\[ N \leq \mathop{\sum }\limits_{v}\left( {d\left( v\right) - 1}\right) = {2m} - n.\]\n\nSo, if \( N > {2m} - n \), there is always a vertex with at least as many chips on it as its valency, and the game is infinite. We also see that for \( N \leq {2m} - n \) there are initial configurations from which no vertex can be fired.\n\nNext we show that if \( N \geq m \), there is an initial configuration that leads to an infinite game. It will be enough to prove that there are infinite games when \( N = m \) . Suppose we are given an acyclic orientation of \( X \), and let \( {d}^{ + }\left( v\right) \) denote the out-valency of the vertex \( v \) with respect to this orientation. Every acyclic orientation determines a configuration with \( N = m \) obtained by placing \( {d}^{ + }\left( v\right) \) chips on each vertex \( v \) . In an acyclic orientation there is a vertex \( u \) such that \( {d}^{ + }\left( u\right) = d\left( u\right) \) ; this vertex can be fired. After \( u \) has been fired it has no chips, and every neighbour of \( u \) has one additional chip. But this is simply the configuration determined by another acyclic orientation, the one obtained by reversing the orientation of every edge incident with \( u \) . Therefore, the game can be continued indefinitely.\n\nFinally, we prove that the game is finite if \( N < m \) . Regard the chips as distinguishable, and let each edge capture the first chip that is sent along it; thereafter, whenever its vertex is fired, a captured chip either stays where it is or moves to the other end of its own edge. Since \( N < m \), there is an edge that never captures a chip. Neither endpoint of this edge is ever fired, since firing an endpoint would send a chip along it; by Lemma 14.9.1, the game is therefore finite.
|
Yes
|
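The three regimes of this theorem can be explored with a direct simulation; the sketch below uses \( K_3 \) (an illustrative choice, with \( n = 3 \), \( m = 3 \), diameter 1) and fires a randomly chosen ready vertex at each step.

```python
import random

# Chip-firing on K3 (n = 3, m = 3): with N < m every game is finite,
# and Theorem 14.10.2 bounds a terminating game by 2*n*e*D moves.
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def play(chips, max_moves=1000):
    """Fire any ready vertex until none can fire; return the move count,
    or None if the game has not terminated within max_moves."""
    chips = dict(chips)
    moves = 0
    while moves < max_moves:
        ready = [v for v in chips if chips[v] >= len(neighbours[v])]
        if not ready:
            return moves
        v = random.choice(ready)
        chips[v] -= len(neighbours[v])
        for u in neighbours[v]:
            chips[u] += 1
        moves += 1
    return None

random.seed(0)
# N = 2 < m: the game must terminate, within 2*n*e*D = 2*3*3*1 = 18 moves.
moves = play({0: 2, 1: 0, 2: 0})
assert moves is not None and moves <= 18
# N = 4 > 2m - n = 3: some vertex can always fire, so the game is infinite.
assert play({0: 2, 1: 1, 2: 1}) is None
print("terminating game length:", moves)
```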
Theorem 14.9.3 Let \( X \) be a connected graph and let \( \sigma \) and \( \tau \) be two firing sequences starting from the same state \( s \) with respective scores \( x \) and \( y \) . Then \( \tau \) followed by \( \sigma \smallsetminus \tau \) is a firing sequence starting from \( s \) having score \( x \vee y \) .
|
Proof. We leave the proof of this result as a useful exercise.
|
No
|
Corollary 14.9.4 Let \( X \) be a connected graph, and \( s \) a given initial state. Then either every chip-firing game starting from \( s \) is infinite, or all such games terminate in the same state.
|
Proof. Let \( \tau \) be the firing sequence of a terminating game starting from \( s \), and let \( \sigma \) be the firing sequence of another game starting from \( s \) . Then by Theorem 14.9.3, \( \sigma \smallsetminus \tau \) is necessarily empty, and hence \( \sigma \) is finite.\n\nNow, suppose that \( \sigma \) is the firing sequence of another terminating game starting from \( s \) . Then both \( \sigma \smallsetminus \tau \) and \( \tau \smallsetminus \sigma \) are empty, and hence \( \sigma \) and \( \tau \) have the same score. Since the state of a chip-firing game depends only on the initial state and the score of the firing sequence, all terminating games must end in the same state.
|
Yes
|
Lemma 14.10.1 Suppose \( u \) and \( v \) are adjacent vertices in the graph \( X \) . At any stage of a chip-firing game on \( X \) with \( N \) chips, the difference between the number of times that \( u \) has been fired and the number of times that \( v \) has been fired is at most \( N \) .
|
Proof. Suppose that \( u \) has been fired \( a \) times and \( v \) has been fired \( b \) times, and assume without loss of generality that \( a < b \) . Let \( H \) be the subgraph of \( X \) induced by the vertices that have been fired at most \( a \) times. Consider the number of chips currently on the subgraph \( H \) . Along every edge between \( H \) and \( V\left( X\right) \smallsetminus H \) there has been a net movement of chips from \( V\left( X\right) \smallsetminus H \) to \( H \), and in particular, the edge \( {uv} \) has contributed \( b - a \) chips to this total. Since \( H \) cannot have more than \( N \) chips on it, we have \( b - a \leq N \) .
|
Yes
|
Theorem 14.10.2 If \( X \) is a connected graph with \( n \) vertices, e edges, and diameter \( D \), then a terminating chip-firing game on \( X \) ends within \( 2\mathrm{{ne}}D \) moves.
|
Proof. If every vertex is fired during a game, then the game is infinite, and so in a terminating game there is at least one vertex \( v \) that is never fired. By Lemma 14.10.1, a vertex at distance \( d \) from \( v \) has fired at most \( {dN} \) times, and so the total number of moves is at most \( {nDN} \) . By Theorem 14.9.2, \( N < {2e} \), and so the game terminates within \( {2neD} \) moves.
|
Yes
|
Lemma 14.10.3 Let \( M \) be a positive semidefinite matrix, with largest eigenvalue \( \rho \) . Then, for all vectors \( y \) and \( z \) ,\n\n\[ \left| {{y}^{T}{Mz}}\right| \leq \rho \parallel y\parallel \parallel z\parallel \]
|
Proof. Since \( M \) is positive semidefinite, for any real number \( t \) we have\n\n\[ {\left( y + tz\right) }^{T}M\left( {y + {tz}}\right) \geq 0. \]\n\nThe left side here is a quadratic polynomial in \( t \), and the inequality implies that its discriminant is less than or equal to zero. This yields the following extension of the Cauchy-Schwarz inequality:\n\n\[ {\left( {y}^{T}Mz\right) }^{2} \leq {y}^{T}{My}{z}^{T}{Mz} \]\nSince \( {\rho I} - M \) is also positive semidefinite, for any vector \( x \) ,\n\n\[ 0 \leq {x}^{T}\left( {{\rho I} - M}\right) x \]\n\nand therefore\n\n\[ {x}^{T}{Mx} \leq \rho {x}^{T}x \]\n\nThe lemma now follows easily.
|
Yes
|
Theorem 14.10.4 Let \( X \) be a connected graph with \( n \) vertices and let \( Q \) be the Laplacian of \( X \) . If \( {Qx} = y \) and \( {x}_{n} = 0 \), then\n\n\[ \left| {{\mathbf{1}}^{T}x}\right| \leq \frac{n}{{\lambda }_{2}}\parallel y\parallel \]
|
Proof. Since \( Q \) is a symmetric matrix, the results of Section 8.12 show that \( Q \) has spectral decomposition\n\n\[ Q = \mathop{\sum }\limits_{{\theta \in \operatorname{ev}\left( Q\right) }}\theta {E}_{\theta } \]\n\nSince \( X \) is connected, \( \ker Q \) is spanned by \( \mathbf{1} \), and therefore\n\n\[ {E}_{0} = \frac{1}{n}J \]\n\nDefine the matrix \( {Q}^{ \dagger } \) by\n\n\[ {Q}^{ \dagger } \mathrel{\text{:=}} \mathop{\sum }\limits_{{\theta \neq 0}}{\theta }^{-1}{E}_{\theta } \]\n\nThe eigenvalues of \( {Q}^{ \dagger } \) are 0, together with the reciprocals of the nonzero eigenvalues of \( Q \) . Therefore, it is positive semidefinite, and its largest eigenvalue is \( {\lambda }_{2}^{-1} \) . Since the idempotents \( {E}_{\theta } \) are pairwise orthogonal, we have\n\n\[ {Q}^{ \dagger }Q = \mathop{\sum }\limits_{{\theta \neq 0}}{E}_{\theta } = I - {E}_{0} = I - \frac{1}{n}J. \]\n\nTherefore, if \( {Qx} = y \), then\n\n\[ \left( {I - \frac{1}{n}J}\right) x = {Q}^{ \dagger }y \]\n\nMultiplying both sides of this equality on the left by \( {e}_{n}^{T} \), and recalling that \( {e}_{n}^{T}x = 0 \), we get\n\n\[ - \frac{1}{n}{\mathbf{1}}^{T}x = {e}_{n}^{T}{Q}^{ \dagger }y \]\n\nBy Lemma 14.10.3,\n\n\[ \left| {{e}_{n}^{T}{Q}^{ \dagger }y}\right| \leq {\lambda }_{2}^{-1}\begin{Vmatrix}{e}_{n}\end{Vmatrix}\parallel y\parallel \]\n\nfrom which the theorem follows, since \( \begin{Vmatrix}{e}_{n}\end{Vmatrix} = 1 \) .
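As a sanity check (ours, using NumPy; for a symmetric \( Q \) the Moore-Penrose pseudo-inverse coincides with the matrix \( {Q}^{ \dagger } \) defined above), one can verify the identity \( {Q}^{ \dagger }Q = I - \frac{1}{n}J \) for the Laplacian of the path \( {P}_{4} \):

```python
import numpy as np

n = 4
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
Q = np.diag(A.sum(axis=1)) - A     # Laplacian of the path P_4

Qp = np.linalg.pinv(Q)             # equals Q^dagger since Q is symmetric
J = np.ones((n, n))
# Q^dagger Q = I - J/n, as derived from the spectral decomposition
check = np.allclose(Qp @ Q, np.eye(n) - J / n)
print(check)
```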
|
Yes
|
Theorem 14.11.1 A state is recurrent if and only if it is diffuse.
|
Proof. Suppose that \( s \) is a recurrent state, and that \( \sigma \) is a firing sequence leading from \( s \) back to itself. Let \( Y \subseteq X \) be an induced subgraph of \( X \) , and let \( v \) be the vertex of \( Y \) that first finishes firing. Then every neighbour of \( v \) in \( Y \) is fired at least once in the remainder of \( \sigma \), and so by the time \( s \) recurs, \( v \) has at least as many chips as its valency in \( Y \) .\n\nFor the converse, suppose that the state \( s \) is diffuse. Then we will show that some permutation of the vertices of \( X \) is a firing sequence from \( s \) . Since \( X \) is an induced subgraph of itself, some vertex is ready to fire. Now, consider the situation after some set \( W \) of vertices has been fired exactly once each. Let \( U \) be the subgraph induced by the unfired vertices. In the initial state \( s \), some vertex \( u \in U \) has at least as many chips on it as its valency in \( U \) . After the vertices of \( W \) have been fired, \( u \) has gained one chip from each of its neighbours in \( W \) . Therefore, \( u \) now has at least as many chips as its valency in \( X \), and hence is ready to fire. By induction on \( \left| W\right| \) , some permutation of the vertices is a firing sequence from \( s \) .
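The constructive half of the proof is an algorithm: repeatedly fire any unfired vertex that is ready. A sketch (ours; the graph \( {K}_{3} \), the diffuse state, and the function name are our own choices):

```python
def permutation_firing(adj, chips):
    """From a diffuse state, fire each vertex once; return the order."""
    chips = dict(chips)
    order = []
    unfired = set(adj)
    while unfired:
        ready = [v for v in unfired if chips[v] >= len(adj[v])]
        if not ready:
            return None            # the state was not diffuse
        v = ready[0]
        chips[v] -= len(adj[v])
        for w in adj[v]:
            chips[w] += 1
        order.append(v)
        unfired.remove(v)
    return order

# Triangle K_3 with the state coming from the acyclic orientation
# 0->1, 0->2, 1->2: out-degrees (2, 1, 0), a diffuse state with m = 3 chips.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
order = permutation_firing(adj, {0: 2, 1: 1, 2: 0})
print(order)
```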
|
Yes
|
Theorem 14.11.2 Let \( X \) be a connected graph with \( m \) edges. Then there is a one-to-one correspondence between diffuse states with \( m \) chips and acyclic orientations of \( X \) .
|
Proof. Let \( s \) be a state given, as in the proof of Theorem 14.9.2, by an acyclic orientation of \( X \) . If \( Y \) is an induced subgraph of \( X \), then the restriction of the acyclic orientation of \( X \) to \( Y \) is an acyclic orientation of \( Y \) . Hence there is some vertex whose out-valency in \( Y \) is equal to its valency in \( Y \), and so this vertex has at least as many chips as its valency in \( Y \) . Therefore, \( s \) is diffuse.\n\nConversely, let \( s \) be a diffuse state with \( m \) chips, and let the permutation \( \sigma \) of \( V\left( X\right) \) be a firing sequence leading from \( s \) to itself. Define an acyclic orientation of \( X \) by orienting the edge \( {ij} \) from \( i \) to \( j \) if \( i \) precedes \( j \) in \( \sigma \) . For any vertex \( v \), let \( U \) and \( W \) denote the vertices that occur before and after \( v \) in \( \sigma \), respectively, and let \( {n}_{U} \) and \( {n}_{W} \) denote the number of neighbours of \( v \) in \( U \) and \( W \), respectively. When \( v \) fires it has at least \( d\left( v\right) = {n}_{U} + {n}_{W} \) chips on it, of which \( {n}_{U} \) were accumulated while the vertices of \( U \) were fired. Therefore, in the initial state, \( v \) has at least \( {n}_{W} = {d}^{ + }\left( v\right) \) chips on it. This is true for every vertex, and since \( N = m = \mathop{\sum }\limits_{u}{d}^{ + }\left( u\right) \), every vertex \( v \) has exactly \( {d}^{ + }\left( v\right) \) chips on it.
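On a small graph the bijection can be verified exhaustively. The following sketch (ours; the graph \( {K}_{3} \) and all helper names are our own choices) checks that the diffuse states with \( m = 3 \) chips are exactly the out-degree vectors of the \( {2}^{3} - 2 = 6 \) acyclic orientations:

```python
from itertools import product

V = [0, 1, 2]
E = [(0, 1), (0, 2), (1, 2)]

def is_diffuse(chips):
    """Every nonempty induced subgraph S has a vertex with >= deg_S chips."""
    for s in (s for s in product([0, 1], repeat=3) if any(s)):
        S = [v for v in V if s[v]]
        if not any(chips[v] >= sum(1 for (i, j) in E
                                   if v in (i, j) and i in S and j in S)
                   for v in S):
            return False
    return True

def orient(o):
    """o[k] = 0: direct E[k] = (i, j) as i -> j; o[k] = 1: as j -> i."""
    return [((i, j) if d == 0 else (j, i)) for (i, j), d in zip(E, o)]

def is_acyclic(arcs):
    """Kahn's algorithm: repeatedly delete a source vertex."""
    verts, remaining = set(V), list(arcs)
    while verts:
        sources = [v for v in verts if all(h != v for (_, h) in remaining)]
        if not sources:
            return False
        verts.remove(sources[0])
        remaining = [a for a in remaining if sources[0] not in a]
    return True

diffuse = {c for c in product(range(4), repeat=3)
           if sum(c) == 3 and is_diffuse(c)}
outdeg = {tuple(sum(1 for (t, _) in orient(o) if t == v) for v in V)
          for o in product([0, 1], repeat=3) if is_acyclic(orient(o))}
print(diffuse == outdeg, len(diffuse))
```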
|
Yes
|
Lemma 14.12.2 Let \( X \) be a connected graph with \( m \) edges, and let \( t \) be a \( q \) -critical state. Then there is a \( q \) -critical state \( s \) with \( m \) chips such that \( {s}_{v} \leq {t}_{v} \) for every vertex \( v \) .
|
Proof. Since the state \( t \) is \( q \) -critical, it is recurrent, and so there is a permutation \( \sigma \) of \( V\left( X\right) \) that is a legal firing sequence from \( t \) . Suppose that during this firing sequence, \( v \) is the first vertex with more than \( d\left( v\right) \) chips on it when fired. Then the state obtained from \( t \) by reducing the number of chips on \( v \) by the amount of this excess is also \( q \) -critical. Repeating this reduction yields a \( q \) -critical state \( s \) with \( {s}_{v} \leq {t}_{v} \) for every vertex \( v \) ; in this state every vertex has precisely \( d\left( v\right) \) chips on it when fired, and so there are \( m \) chips in total.
|
Yes
|
Lemma 14.13.1 In the dollar game, after \( q \) has been fired, no other vertex can be fired twice before \( q \) is fired again.
|
Proof. Suppose that no vertex has yet been fired twice after \( q \) and consider the number of chips on any vertex \( u \) that has been fired exactly once since \( q \) . Immediately before \( q \) was last fired, \( u \) had at most \( d\left( u\right) - 1 \) chips on it. Since then, \( u \) has gained at most \( d\left( u\right) \) chips, because no vertex has been fired twice, and has lost \( d\left( u\right) \) chips when it was fired. Therefore, \( u \) is not ready to fire.
|
Yes
|
Lemma 14.13.2 If \( s \) and \( t \) are \( q \) -critical states such that \( s - t = {Qx} \) for some integer vector \( x \), then \( s = t \) .
|
Proof. We shall show that \( x \) is necessarily a constant vector, so \( {Qx} = 0 \), and hence \( s = t \) . Assume for a contradiction that \( x \) is not constant. Then, exchanging \( s \) and \( t \) if necessary, we may assume that \( {x}_{q} \) is not a maximum coordinate of \( x \) . Let the permutation \( \tau \) be a legal firing sequence starting and ending at \( t \), and let \( v \neq q \) be the first vertex in \( \tau \) such that \( {x}_{v} \) is one of the maximum coordinates of \( x \) . Let \( W \) be the neighbours of \( v \) that occur before \( v \) in \( \tau \) . Every vertex preceding \( v \) in \( \tau \) has a non-maximal coordinate, so \( {x}_{v} - {x}_{w} \geq 1 \) for each \( w \in W \), while \( {x}_{v} - {x}_{w} \geq 0 \) for every neighbour \( w \) of \( v \) ; moreover, when \( v \) is fired in \( \tau \) it has \( {t}_{v} + \left| W\right| \) chips on it, whence \( {t}_{v} + \left| W\right| \geq d\left( v\right) \) . Then\n\n\[ \n{s}_{v} = {t}_{v} + {x}_{v}d\left( v\right) - \mathop{\sum }\limits_{{w \sim v}}{x}_{w} \n\]\n\n\[ \n= {t}_{v} + \mathop{\sum }\limits_{{w \sim v}}\left( {{x}_{v} - {x}_{w}}\right) \n\]\n\n\[ \n\geq {t}_{v} + \mathop{\sum }\limits_{{w \in W}}1 \n\]\n\n\[ \n\geq d\left( v\right) \text{.} \n\]\n\nThis contradicts the fact that \( s \) is a \( q \) -critical configuration, because \( v \) is ready to be fired.
|
Yes
|
Theorem 14.13.3 Let \( X \) be a connected graph on \( n \) vertices. Each coset of \( \mathcal{L}\left( Q\right) \) in \( {\mathbb{Z}}^{n} \cap {\mathbf{1}}^{ \bot } \) contains a unique q-critical state for the dollar game.
|
Proof. Given a coset of \( \mathcal{L}\left( Q\right) \), choose an element \( s \) in the coset that represents a valid initial state for the dollar game. By the discussion above, every game with initial state \( s \) eventually falls into a loop containing a unique \( q \) -critical state. Therefore, each coset of \( \mathcal{L}\left( Q\right) \) contains a \( q \) -critical state, and by Lemma 14.13.2 no coset contains more than one \( q \) -critical state. \( ▱ \)
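The number of cosets of \( \mathcal{L}\left( Q\right) \) in \( {\mathbb{Z}}^{n} \cap {\mathbf{1}}^{ \bot } \) is \( \det \widetilde{Q} \), where \( \widetilde{Q} \) is the Laplacian with the row and column of \( q \) deleted; by the Matrix-Tree theorem this is the number of spanning trees of \( X \), and so the theorem implies that \( X \) has exactly that many \( q \) -critical states. A quick numerical check (ours) on \( {K}_{4} \), which has \( {4}^{2} = {16} \) spanning trees:

```python
import numpy as np

n = 4
Q = n * np.eye(n) - np.ones((n, n))   # Laplacian of K_4: diag 3, off-diag -1
Qr = Q[:-1, :-1]                      # delete the row and column of q
trees = round(np.linalg.det(Qr))      # = number of q-critical states
print(trees)
```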
|
Yes
|
Lemma 14.14.1 Let \( a \) and \( b \) be elements of the lattice \( \mathcal{L} \) with \( \langle a, b\rangle \geq 0 \) . Then \( H\left( a\right) \cap H\left( b\right) \subseteq H\left( {a + b}\right) \) .
|
Proof. Suppose \( x \in H\left( a\right) \cap H\left( b\right) \) . Then\n\n\[ \langle x, a + b\rangle = \langle x, a\rangle + \langle x, b\rangle \leq \frac{1}{2}\langle a, a\rangle + \frac{1}{2}\langle b, b\rangle .\n\]\n\nSince \( \langle a, b\rangle \geq 0 \), we have that\n\n\[ \langle a + b, a + b\rangle \geq \langle a, a\rangle + \langle b, b\rangle \]\n\nIt follows that \( x \in H\left( {a + b}\right) \) .
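A numerical spot check (ours) of the lemma, taking \( H\left( a\right) = \left\{ {x : \langle x, a\rangle \leq \frac{1}{2}\langle a, a\rangle }\right\} \) as in the proof; the dimension and sample count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    a, b = rng.standard_normal(3), rng.standard_normal(3)
    if a @ b < 0:
        continue                       # lemma assumes <a, b> >= 0
    x = rng.standard_normal(3)
    if x @ a <= a @ a / 2 and x @ b <= b @ b / 2:   # x in H(a) ∩ H(b)
        ok = ok and x @ (a + b) <= (a + b) @ (a + b) / 2 + 1e-12
print(ok)
```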
|
Yes
|
Lemma 14.14.2 An element a of \( \mathcal{L} \) is indecomposable if and only if a and -a are the two elements of minimum norm in the coset \( a + 2\mathcal{L} \) .
|
Proof. Suppose \( a \in \mathcal{L} \) . If \( x \in \mathcal{L} \), then \( a = \left( {a - x}\right) + x \), whence we see that \( a \) is indecomposable if and only if\n\n\[ \langle a - x, x\rangle < 0 \]\n\nfor all \( x \in \mathcal{L} \smallsetminus \{ 0, a\} \) . Since\n\n\[ \langle a - {2x}, a - {2x}\rangle = \langle a, a\rangle - 4\left( {\langle a, x\rangle -\langle x, x\rangle }\right) \]\n\nthis condition holds if and only if for any \( x \) in \( \mathcal{L} \smallsetminus \{ 0, a\} \),\n\n\[ \langle a - {2x}, a - {2x}\rangle > \langle a, a\rangle \]\n\nthat is, if and only if every element of the coset \( a + 2\mathcal{L} \) other than \( a \) and \( -a \) (the latter arising from \( x = a \) ) has norm greater than \( \langle a, a\rangle \) .
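The equivalence can be checked by brute force in a concrete lattice. The sketch below is ours; the lattice \( {\mathbb{Z}}^{2} \), the sample vectors, and the finite search box are all our own choices, and the box is large enough for these small examples.

```python
from itertools import product

def min_norm_points(a, box=6):
    """Minimum-norm points of the coset a + 2Z^2 within a finite box."""
    pts = [(a[0] + 2 * x, a[1] + 2 * y)
           for x, y in product(range(-box, box + 1), repeat=2)]
    m = min(p[0]**2 + p[1]**2 for p in pts)
    return {p for p in pts if p[0]**2 + p[1]**2 == m}

def indecomposable(a, box=6):
    """Direct check: no b, c != 0 with a = b + c and <b, c> >= 0."""
    for bx, by in product(range(-box, box + 1), repeat=2):
        cx, cy = a[0] - bx, a[1] - by
        if (bx, by) not in ((0, 0), a) and bx * cx + by * cy >= 0:
            return False
    return True

for a in [(1, 0), (0, 1), (1, 1), (2, 1)]:
    lemma = min_norm_points(a) == {a, (-a[0], -a[1])}
    assert lemma == indecomposable(a)
print("lemma verified on samples")
```

For instance \( \left( {1,1}\right) = \left( {1,0}\right) + \left( {0,1}\right) \) with \( \langle \left( {1,0}\right) ,\left( {0,1}\right) \rangle = 0 \) is decomposable, and indeed its coset has four minimum-norm points \( \left( { \pm 1, \pm 1}\right) \).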
|
Yes
|
Theorem 14.14.3 Let \( \mathcal{V} \) be the Voronoi cell of the origin in the lattice \( \mathcal{L} \) . Then \( \mathcal{V} \) is the intersection of the closed half-spaces \( H\left( a\right) \), where a ranges over the indecomposable elements of \( \mathcal{L} \) . For each such a, the intersection \( \mathcal{V} \cap H\left( a\right) \) is a facet.
|
Proof. We must show that \( \mathcal{V} \cap H\left( a\right) \) has dimension one less than the dimension of the polytope. So let \( a \) be a fixed indecomposable element of \( \mathcal{L} \) and let \( u \) be any vector orthogonal to \( a \) . If \( b \) is a second indecomposable element of \( \mathcal{L} \), then\n\n\[ \left\langle {b,\frac{1}{2}\left( {a + {\epsilon u}}\right) }\right\rangle = \frac{1}{2}\left( {\langle b, a\rangle + \epsilon \langle b, u\rangle }\right) \]\n\nIf \( b \neq \pm a \), then \( \langle a, b\rangle < \langle b, b\rangle \) ; hence for all sufficiently small values of \( \epsilon \) we have\n\n\[ \left\langle {b,\frac{1}{2}\left( {a + {\epsilon u}}\right) }\right\rangle \leq \frac{1}{2}\langle b, b\rangle \]\n\nThis shows that for all vectors \( u \) orthogonal to \( a \) and all sufficiently small values of \( \epsilon \), the vector \( \frac{1}{2}\left( {a + {\epsilon u}}\right) \) lies in the face \( H\left( a\right) \cap \mathcal{V} \) . Therefore, this face is a facet.
|
Yes
|
Theorem 14.14.4 The indecomposable vectors in the flow lattice of a connected graph \( X \) are the signed characteristic vectors of the cycles.
|
Proof. Let \( \mathcal{L} \) be a lattice contained in \( {\mathbb{Z}}^{n} \), and let \( x \) be an element of \( \mathcal{L} \) of minimal support that has all its entries in \( \{ - 1,0,1\} \) . For any element \( y \in \mathcal{L} \) we have \( {\left( x + 2y\right) }_{i} \neq 0 \) whenever \( {x}_{i} \neq 0 \), and so \( \langle x + {2y}, x + {2y}\rangle \geq \langle x, x\rangle \) . Equality holds only if \( \operatorname{supp}\left( y\right) = \operatorname{supp}\left( x\right) \), and then \( y \) is a multiple of \( x \) . Therefore, by Lemma 14.14.2, \( x \) is indecomposable. Since the signed characteristic vectors of the cycles of \( X \) have minimal support, they are indecomposable elements of the flow lattice.\n\nConversely, suppose that \( x \) is a nonzero vector in the flow lattice \( \mathcal{F} \) of \( X \) . Since \( x \) is a flow, its support contains a cycle \( C \), and \( C \) supports a flow \( c \) such that \( {c}_{e}{x}_{e} \geq 1 \) for all \( e \in E\left( C\right) \) . (Prove this.) Then \( \langle x, c\rangle \geq \langle c, c\rangle \), so \( \langle x - c, c\rangle \geq 0 \), and hence either \( x = c + \left( {x - c}\right) \) is a decomposition of \( x \), or \( x = \pm c \) .
|
No
|
Lemma 14.15.1 Let \( X \) be a graph with \( n \) vertices and \( c \) components, with incidence matrix \( B \) . Then the 2-rank of \( B \) is \( n - c \) .
|
Proof. The argument given in Theorem 8.3.1 remains valid over \( {GF}\left( 2\right) \) . (The argument in Theorem 8.2.1 implicitly uses the fact that \( - 1 \neq 1 \), and hence fails over \( {GF}\left( 2\right) \) .)
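A direct computation (our sketch; the graph, a triangle plus a disjoint edge with \( n = 5 \) and \( c = 2 \), is our own choice) confirms the formula:

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer used as a bit mask."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot          # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

# Incidence matrix rows indexed by edges: bit i set iff vertex i is an end.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
rank = gf2_rank([(1 << i) | (1 << j) for i, j in edges])
print(rank)   # n - c = 5 - 2 = 3
```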
|
Yes
|
Lemma 14.15.2 A graph \( X \) is pedestrian if and only if each subgraph of \( X \) is the symmetric difference of an even subgraph and an edge cutset.
|
Proof. A subgraph of \( X \) is the symmetric difference of an even subgraph and an edge cutset if and only if its characteristic vector lies in \( C + F \) .
|
No
|