| Q | A | Result |
|---|---|---|
Corollary 6.13.4 Let \( X \) be a vertex-transitive graph on \( n \) vertices with chromatic number three. If \( n \) is not a multiple of three, then \( X \) is triangle-free.
|
Proof. Since \( X \) is 3-colourable, it has a homomorphism into \( {K}_{3} \) . If \( X \) contained a triangle, then the core of \( X \) would be a triangle; since the order of the core of a vertex-transitive graph divides the order of the graph, \( n \) would be a multiple of three, contradicting the hypothesis. Therefore, \( X \) has no triangles.
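As a small illustration, the 5-cycle satisfies the hypotheses (vertex transitive, chromatic number three, \( n = 5 \) not a multiple of three), so the corollary predicts it is triangle-free. The brute-force helpers below are an illustrative sketch, not part of the text.

```python
from itertools import product

# 5-cycle: vertex-transitive, chromatic number 3, n = 5 not a multiple of 3
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

def chromatic_number(n, edges):
    # smallest k admitting a proper colouring, by exhaustive search
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

def has_triangle(n, edges):
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return any((a, b) in adj and (b, c) in adj and (a, c) in adj
               for a in range(n) for b in range(n) for c in range(n))

chi = chromatic_number(n, edges)            # 3
triangle_free = not has_triangle(n, edges)  # True, as the corollary predicts
```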
|
Yes
|
Theorem 6.13.5 If \( X \) is a connected 2-arc transitive nonbipartite graph, then \( X \) is a core.
|
Proof. Since \( X \) is not bipartite, it contains an odd cycle; since \( X \) is 2-arc transitive, each 2-arc lies in a shortest odd cycle.
|
No
|
Theorem 6.14.1 If \( X \) is a connected arc-transitive nonbipartite cubic graph, then \( X \) is a core.
|
Proof. Let \( C \) be a shortest odd cycle in \( X \), and let \( x \) be a vertex in \( C \) with three neighbours \( {x}_{1},{x}_{2} \), and \( {x}_{3} \), where \( {x}_{1} \) and \( {x}_{2} \) are in \( C \) . If \( G \) is the automorphism group of \( X \), then the vertex stabilizer \( {G}_{x} \) contains an element \( g \) of order three, which can be taken without loss of generality to contain the cycle \( \left( {{x}_{1}{x}_{2}{x}_{3}}\right) \) . The 2-arc \( \beta = \left( {{x}_{1}, x,{x}_{2}}\right) \) is in a shortest odd cycle, and therefore so are \( {\beta }^{g} = \left( {{x}_{2}, x,{x}_{3}}\right) \) and \( {\beta }^{{g}^{2}} = \left( {{x}_{3}, x,{x}_{1}}\right) \) . Hence any 2-arc with \( x \) as middle vertex lies in a shortest odd cycle, and because \( X \) is vertex transitive, the same is true for every 2-arc. Thus by Lemma 6.9.1, \( X \) is a core.
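The key claim, that every 2-arc lies in a shortest odd cycle, can be checked mechanically on the Petersen graph (the Kneser graph \( K_{5:2} \), a connected arc-transitive nonbipartite cubic graph whose shortest odd cycle has length 5). This brute-force sketch is illustrative only.

```python
from itertools import combinations

# Petersen graph as K_{5:2}: vertices are 2-subsets of {0..4},
# adjacent iff disjoint; cubic, nonbipartite, arc-transitive
V = [frozenset(s) for s in combinations(range(5), 2)]
adj = {v: {u for u in V if not (u & v)} for v in V}

def lies_on_5cycle(a, x, b):
    # the shortest odd cycle has length 5; look for a 5-cycle a-x-b-c-d-a
    return any(d in adj[a]
               for c in adj[b] - {x, a}
               for d in adj[c] - {x, a, b})

two_arcs = [(a, x, b) for x in V for a in adj[x] for b in adj[x] if a != b]
all_on_shortest_odd = all(lies_on_5cycle(a, x, b) for a, x, b in two_arcs)
```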
|
Yes
|
Theorem 6.14.3 If \( X \) is a connected vertex-transitive cubic graph, then \( {X}^{ \bullet } \) is \( {K}_{2} \), an odd cycle, or \( X \) itself.
|
Proof. The proof of this is left as Exercise 42.
|
No
|
Lemma 7.2.2 If \( X \) is vertex transitive, then

\[{\omega }^{ * }\left( X\right) = \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) }\]

and \( \alpha {\left( X\right) }^{-1}\mathbf{1} \) is a fractional clique with this weight.
|
Proof. Suppose \( g \) is a nonzero fractional clique of \( X \) . Then \( g \) is a function on \( V\left( X\right) \) . If \( \gamma \in \operatorname{Aut}\left( X\right) \), define the function \( {g}^{\gamma } \) by

\[{g}^{\gamma }\left( x\right) = g\left( {x}^{\gamma }\right)\]

Then \( {g}^{\gamma } \) is again a fractional clique, with the same weight as \( g \) . It follows that

\[\widehat{g} \mathrel{\text{:=}} \frac{1}{\left| \operatorname{Aut}\left( X\right) \right| }\mathop{\sum }\limits_{{\gamma \in \operatorname{Aut}\left( X\right) }}{g}^{\gamma }\]

is also a fractional clique with the same weight as \( g \) . If \( X \) is vertex transitive, then it is easy to verify that \( \widehat{g} \) is constant on the vertices of \( X \) . Now, \( c\mathbf{1} \) is a fractional clique if and only if \( c \leq \alpha {\left( X\right) }^{-1} \), and so the result follows.
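The lemma's formula can be illustrated on the 7-cycle, a vertex-transitive graph with \( \alpha = 3 \), so \( \omega^* = 7/3 \). The brute-force independence-number helper is an illustrative sketch.

```python
from fractions import Fraction
from itertools import combinations

# 7-cycle: vertex transitive, so omega*(X) = |V(X)| / alpha(X) = 7/3
n = 7
adj = {(i, (i + 1) % n) for i in range(n)} | {((i + 1) % n, i) for i in range(n)}

def independence_number(n, adj):
    # largest set of pairwise nonadjacent vertices, by exhaustive search
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all((u, v) not in adj for u, v in combinations(S, 2)):
                return k

alpha = independence_number(n, adj)   # 3
omega_star = Fraction(n, alpha)       # 7/3
```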
|
Yes
|
Lemma 7.3.1 If a graph \( X \) has a fractional colouring \( f \) of weight \( w \), then it has a fractional colouring \( {f}^{\prime } \) with weight no greater than \( w \) such that \( B{f}^{\prime } = \mathbf{1} \) .
|
Proof. If \( {Bf} \neq \mathbf{1} \), then we will show that we can perturb \( f \) into a function \( {f}^{\prime } \) of weight no greater than that of \( f \) such that \( B{f}^{\prime } \) has fewer entries not equal to one. The result then follows immediately by induction.

Suppose that some entry of \( {Bf} \) is greater than 1; say \( {\left( Bf\right) }_{j} = b > 1 \) . Let \( {S}_{1},\ldots ,{S}_{t} \) be the independent sets in the support of \( f \) that contain \( {x}_{j} \) . Choose values \( {a}_{1},\ldots ,{a}_{t} \) such that

\[ 
{a}_{i} \leq f\left( {S}_{i}\right) \;\text{ and }\;\mathop{\sum }\limits_{{i = 1}}^{{t}}{a}_{i} = b - 1.
\]

Then define \( {f}^{\prime } \) by

\[ 
{f}^{\prime }\left( S\right) = \left\{ \begin{array}{ll} f\left( S\right) - {a}_{i}, & \text{ if }S = {S}_{i}; \\ f\left( S\right) + {a}_{i}, & \text{ if }S = {S}_{i} \smallsetminus {x}_{j}\text{ and }S \neq \varnothing ; \\ f\left( S\right) , & \text{ otherwise. } \end{array}\right.
\]

Then \( {f}^{\prime } \) is a fractional colouring with weight no greater than \( w \) such that \( {\left( B{f}^{\prime }\right) }_{j} = 1 \) and \( {\left( B{f}^{\prime }\right) }_{i} = {\left( Bf\right) }_{i} \) for all \( i \neq j \) .
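The perturbation can be sketched on the path \( 0 - 1 - 2 \) with a hypothetical starting colouring \( f \) whose row sum at vertex 0 is \( 3/2 \); the greedy choice of the \( {a}_{i} \) below is one admissible choice, not the text's.

```python
from fractions import Fraction

# hypothetical fractional colouring of the path 0-1-2 with (Bf)_0 = 3/2
f = {frozenset({0, 2}): Fraction(1), frozenset({1}): Fraction(1),
     frozenset({0}): Fraction(1, 2)}

def row(f, x):
    # (Bf)_x: total weight on independent sets containing x
    return sum(w for S, w in f.items() if x in S)

def fix_vertex(f, x):
    # Lemma 7.3.1's perturbation at vertex x, with a greedy choice of a_i
    f = dict(f)
    excess = row(f, x) - 1
    for S in [S for S in list(f) if x in S]:
        a = min(excess, f[S])
        if a == 0:
            continue
        f[S] -= a                      # shift weight a off S ...
        T = S - {x}
        if T:
            f[T] = f.get(T, Fraction(0)) + a   # ... onto S minus x
        excess -= a
    return {S: w for S, w in f.items() if w}

f2 = fix_vertex(f, 0)
weights_ok = sum(f2.values()) <= sum(f.values())
rows_ok = all(row(f2, x) == 1 for x in range(3))
```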
|
Yes
|
Theorem 7.4.1 If there is a homomorphism from \( X \) to \( Y \) and \( f \) is a fractional colouring of \( Y \), then the lift \( \widehat{f} \) of \( f \) is a fractional colouring of \( X \) with weight equal to the weight of \( f \) . The support of \( \widehat{f} \) consists of the preimages of the independent sets in the support of \( f \) .
|
Proof. If \( u \in V\left( X\right) \), then

\[ \mathop{\sum }\limits_{{T \in \mathcal{I}\left( {X, u}\right) }}\widehat{f}\left( T\right) = \mathop{\sum }\limits_{{S : u \in {\varphi }^{-1}\left( S\right) }}f\left( S\right) \]

\[ = \mathop{\sum }\limits_{{S \in \mathcal{I}\left( {Y,\varphi \left( u\right) }\right) }}f\left( S\right) \]

Since \( f \) is a fractional colouring of \( Y \), the last sum is at least 1, and it follows that \( \widehat{f} \) is a fractional colouring.
|
Yes
|
Corollary 7.4.2 If there is a homomorphism from \( X \) to \( Y \), then \( {\chi }^{ * }\left( X\right) \leq \) \( {\chi }^{ * }\left( Y\right) \) .
|
If there is an independent set in the support of \( f \) that does not intersect \( \varphi \left( X\right) \), then its preimage is the empty set. In this situation \( \widehat{f}\left( \varnothing \right) \neq 0 \) , and there is a fractional colouring that agrees with \( \widehat{f} \) on all nonempty independent sets and vanishes on \( \varnothing \) . Hence we have the following:
|
No
|
Lemma 7.4.4 If \( X \) is vertex transitive, then \( {\chi }^{ * }\left( X\right) \leq \left| {V\left( X\right) }\right| /\alpha \left( X\right) \) .
|
Proof. We saw in Section 7.1 that \( {\chi }^{ * }\left( {K}_{v : r}\right) \leq v/r \) . If \( X \) is vertex transitive, then by Theorem 3.9.1 and the remarks following its proof, it is a retract of a Cayley graph \( Y \) where \( \left| {V\left( Y\right) }\right| /\alpha \left( Y\right) = \left| {V\left( X\right) }\right| /\alpha \left( X\right) \) . By Corollary 7.4.2 we see that \( {\chi }^{ * }\left( X\right) = {\chi }^{ * }\left( Y\right) \) . If \( n = \left| {V\left( Y\right) }\right| \) and \( \alpha = \alpha \left( Y\right) \), then we will show that there is a homomorphism from \( Y \) into \( {K}_{n : \alpha } \) .

Thus suppose that \( Y \) is a Cayley graph \( X\left( {G, C}\right) \) for some group \( G \) of order \( n \) . As in Section 3.1, we take the vertex set of \( Y \) to be \( G \) . Let \( S \) be an independent set of size \( \alpha \left( Y\right) \) in \( Y \), and define a map

\[ \varphi : g \mapsto \left( {S}^{-1}\right) g \]

where \( {S}^{-1} = \left\{ {{s}^{-1} : s \in S}\right\} \) . Now, suppose that \( g \sim h \) and consider \( \varphi \left( g\right) \cap \varphi \left( h\right) \) . If \( y \in \varphi \left( g\right) \cap \varphi \left( h\right) \), then \( y = {a}^{-1}g = {b}^{-1}h \) where \( a, b \in S \) . But then \( b{a}^{-1} = h{g}^{-1} \in C \), and so \( a \sim b \), contradicting the fact that \( S \) is an independent set. Thus \( \varphi \left( g\right) \) is disjoint from \( \varphi \left( h\right) \), and \( \varphi \) is a homomorphism from \( Y \) to \( {K}_{n : \alpha } \) .
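The map \( \varphi : g \mapsto {S}^{-1}g \) can be illustrated on the Cayley graph \( C_5 = X\left( \mathbb{Z}_5, \{1, 4\} \right) \) with the maximum independent set \( S = \{0, 2\} \); this small check (my choice of group and set, for illustration) confirms adjacent vertices receive disjoint 2-subsets, i.e. adjacent vertices of \( K_{5:2} \).

```python
# Cayley graph C_5 = X(Z_5, {1, 4}); the group is additive, so s^{-1} = -s
n, C = 5, {1, 4}
S = {0, 2}   # independent: 2 - 0 = 2 is not in the connection set C

# phi(g) = S^{-1} g = {-s + g mod 5 : s in S}, a 2-subset of Z_5
phi = {g: frozenset((-s + g) % n for s in S) for g in range(n)}

# adjacent group elements must receive disjoint sets (adjacency in K_{5:2})
edges = [(g, (g + 1) % n) for g in range(n)]
is_homomorphism = all(not (phi[g] & phi[h]) for g, h in edges)
```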
|
Yes
|
Theorem 7.4.5 For any graph \( X \) we have

\[ 
{\chi }^{ * }\left( X\right) = \min \left\{ {v/r : X \rightarrow {K}_{v : r}}\right\} 
\]
|
Proof. We have already seen that \( {\chi }^{ * }\left( {K}_{v : r}\right) \leq v/r \), and so by Corollary 7.4.2, it follows that if \( X \) has a homomorphism into \( {K}_{v : r} \), then it has a fractional colouring with weight at most \( v/r \) .

Conversely, suppose that \( X \) is a graph with fractional chromatic number \( {\chi }^{ * }\left( X\right) \) . By Theorem 7.3.2, \( {\chi }^{ * }\left( X\right) \) is a rational number, and \( X \) has a regular fractional colouring \( f \) of this weight. Then there is a least integer \( r \) such that the function \( g = {rf} \) is integer valued. The weight of \( g \) is an integer \( v \), and since \( f \) is regular, the sum of the values of \( g \) on the independent sets containing \( x \) is \( r \) .

Now, let \( A \) be the \( \left| {V\left( X\right) }\right| \times v \) matrix with rows indexed by \( V\left( X\right) \), such that if \( S \) is an independent set in \( X \), then \( A \) has \( g\left( S\right) \) columns equal to the characteristic vector of \( S \) . Form a copy of the Kneser graph \( {K}_{v : r} \) by taking \( \Omega \) to be the set of columns of \( A \) . Each vertex \( x \) of \( X \) determines a set of \( r \) columns of \( A \), namely those that have a 1 in the row corresponding to \( x \), and since no independent set contains an edge, the sets of columns corresponding to adjacent vertices of \( X \) are disjoint. Hence the map from vertices of \( X \) to sets of columns is a homomorphism from \( X \) into \( {K}_{v : r} \) .
|
Yes
|
Lemma 7.5.1 For any graph \( X \) we have \( {\omega }^{ * }\left( X\right) \leq {\chi }^{ * }\left( X\right) \) .
|
Proof. Suppose that \( f \) is a fractional colouring and \( g \) a fractional clique of \( X \) . Then

\[ 
{\mathbf{1}}^{T}f - {g}^{T}\mathbf{1} = {\mathbf{1}}^{T}f - {g}^{T}{Bf} + {g}^{T}{Bf} - {g}^{T}\mathbf{1} 
\]

\[ 
= \left( {{\mathbf{1}}^{T} - {g}^{T}B}\right) f + {g}^{T}\left( {{Bf} - \mathbf{1}}\right) . 
\]

Since \( g \) is a fractional clique, \( {\mathbf{1}}^{T} - {g}^{T}B \geq 0 \) . Since \( f \) is a fractional colouring, \( f \geq 0 \), and consequently \( \left( {{\mathbf{1}}^{T} - {g}^{T}B}\right) f \geq 0 \) . Similarly, \( g \) and \( {Bf} - \mathbf{1} \) are nonnegative, and so \( {g}^{T}\left( {{Bf} - \mathbf{1}}\right) \geq 0 \) . Hence \( {\mathbf{1}}^{T}f - {g}^{T}\mathbf{1} \) is the sum of two nonnegative numbers, and therefore \( {\mathbf{1}}^{T}f \geq {g}^{T}\mathbf{1} \) for any fractional colouring \( f \) and fractional clique \( g \) .

The above argument is essentially the proof of the weak duality theorem from linear programming. We point out that the strong duality theorem from linear programming implies that \( {\chi }^{ * }\left( X\right) = {\omega }^{ * }\left( X\right) \) for any graph \( X \) . For vertex-transitive graphs we can prove this now.
|
Yes
|
Corollary 7.5.2 If \( X \) is a vertex-transitive graph, then

\[{\omega }^{ * }\left( X\right) = {\chi }^{ * }\left( X\right) = \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) }.\]
|
Proof. Lemma 7.2.1, Lemma 7.5.1, and Lemma 7.4.4 yield that

\[ \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) } \leq {\omega }^{ * }\left( X\right) \leq {\chi }^{ * }\left( X\right) \leq \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) } \]

and so the result follows.
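On the Petersen graph \( K_{5:2} \) the corollary gives \( \omega^* = \chi^* = 10/4 = 5/2 \); a brute-force computation of \( \alpha \) (an illustrative sketch) confirms the value.

```python
from fractions import Fraction
from itertools import combinations

# Petersen graph K_{5:2}: vertices are the 2-subsets of {0..4}
V = [frozenset(s) for s in combinations(range(5), 2)]

def alpha_kneser(V):
    # independent in a Kneser graph = pairwise intersecting family
    for k in range(len(V), 0, -1):
        for S in combinations(V, k):
            if all(a & b for a, b in combinations(S, 2)):
                return k

alpha = alpha_kneser(V)                   # 4, the sets through a fixed point
frac_chromatic = Fraction(len(V), alpha)  # 10/4 = 5/2
```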
|
Yes
|
Corollary 7.5.3 For any graph \( X \) we have

\[{\chi }^{ * }\left( X\right) \geq \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) }\]
|
Proof. Use Lemma 7.2.1.
|
No
|
Lemma 7.5.4 Let \( X \) and \( Y \) be vertex-transitive graphs with the same fractional chromatic number, and suppose \( \varphi \) is a homomorphism from \( X \) to \( Y \) . If \( S \) is a maximum independent set in \( Y \), then \( {\varphi }^{-1}\left( S\right) \) is a maximum independent set in \( X \) .
|
Proof. Since \( X \) and \( Y \) are vertex transitive,

\[ \frac{\left| V\left( X\right) \right| }{\alpha \left( X\right) } = {\chi }^{ * }\left( X\right) = {\chi }^{ * }\left( Y\right) = \frac{\left| V\left( Y\right) \right| }{\alpha \left( Y\right) }. \]

Let \( f \) be a fractional colouring of weight \( {\chi }^{ * }\left( X\right) \) and let \( g = \alpha {\left( X\right) }^{-1}\mathbf{1} \) . By Lemma 7.2.2 we have that \( g \) is a fractional clique of maximum weight. From the proof of Lemma 7.5.1,

\[ \left( {{\mathbf{1}}^{T} - {g}^{T}B}\right) f = 0. \]

Since the sum of the values of \( g \) on any independent set of size less than \( \alpha \left( X\right) \) is less than 1, this implies that \( f\left( S\right) = 0 \) if \( S \) is an independent set with size less than \( \alpha \left( X\right) \) . On the other hand, Theorem 7.4.1 yields that \( X \) has a fractional colouring of weight \( {\chi }^{ * }\left( X\right) \) with \( {\varphi }^{-1}\left( S\right) \) in its support. Therefore, \( \left| {{\varphi }^{-1}\left( S\right) }\right| = \alpha \left( X\right) \) .
|
Yes
|
Lemma 7.6.1 Let \( X \) be a minimally imperfect graph. Then any independent set is disjoint from at least one big clique.
|
Proof. Let \( S \) be an independent set in the minimally imperfect graph \( X \) . Then \( X \smallsetminus S \) is perfect, and therefore \( \chi \left( {X \smallsetminus S}\right) = \omega \left( {X \smallsetminus S}\right) \) . If \( S \) meets each big clique in at least one vertex, it follows that \( \omega \left( {X \smallsetminus S}\right) \leq \omega \left( X\right) - 1 \) . Consequently,

\[ \chi \left( X\right) \leq 1 + \chi \left( {X \smallsetminus S}\right) \leq \omega \left( X\right) \]

which is impossible.
|
Yes
|
Lemma 7.6.2 Each vertex of \( X \) lies in exactly \( \alpha \) members of \( \mathcal{S} \), and any big clique of \( X \) is disjoint from exactly one member of \( \mathcal{S} \) .
|
Proof. We leave the first claim as an exercise.

Let \( K \) be a big clique of \( X \), let \( v \) be an arbitrary vertex of \( X \), and suppose that \( X \smallsetminus v \) is coloured with \( \omega \) colours. Then \( K \) has at most one vertex in each colour class, and so either \( v \notin K \) and \( K \) meets each colour class in one vertex, or \( v \in K \) and \( K \) is disjoint from exactly one colour class.

We see now that if \( K \) is disjoint from \( {S}_{0} \), then it must meet each of \( {S}_{1},\ldots ,{S}_{N - 1} \) in one vertex. If \( K \) is not disjoint from \( {S}_{0} \), then it meets it in a single vertex, \( u \) say. If \( v \in {S}_{0} \smallsetminus u \), then \( K \) meets each of the independent sets we chose in \( X \smallsetminus v \) . However, \( K \) misses exactly one of the independent sets from \( X \smallsetminus u \) .

Let \( A \) be the \( N \times n \) matrix whose rows are the characteristic vectors of the independent sets in \( \mathcal{S} \) . By Lemma 7.6.1 we may form a collection \( \mathcal{C} \) of big cliques \( {C}_{i} \) such that \( {C}_{i} \cap {S}_{i} = \varnothing \) for each \( i \) . Let \( B \) be the \( N \times n \) matrix whose rows are the characteristic vectors of these big cliques. Lemma 7.6.2 implies that \( {S}_{i} \) is the only member of \( \mathcal{S} \) disjoint from \( {C}_{i} \) . Accordingly, the following result is immediate.

Lemma 7.6.3 \( A{B}^{T} = J - I \) .
|
No
|
Theorem 7.6.4 The complement of a perfect graph is perfect.
|
Proof. For any graph \( X \) we have the trivial bound \( \left| {V\left( X\right) }\right| \leq \chi \left( X\right) \alpha \left( X\right) \), and so for perfect graphs we have \( \left| {V\left( X\right) }\right| \leq \alpha \left( X\right) \omega \left( X\right) \) .

Since \( J - I \) is invertible, the previous lemma implies that the rows of \( A \) are linearly independent, and thus \( N \leq n \) . On the other hand, \( \left| {V\left( {X \smallsetminus v}\right) }\right| \leq {\alpha \omega } \), and therefore \( n \leq N \) . This proves that \( N = n \), and so

\[ n = \left| {V\left( \bar{X}\right) }\right| = 1 + \alpha \left( X\right) \omega \left( X\right) = 1 + \omega \left( \bar{X}\right) \alpha \left( \bar{X}\right) . \]

Therefore, \( \bar{X} \) cannot be perfect.

If \( X \) is imperfect, then it contains a minimally imperfect induced subgraph \( Z \) . The complement \( \bar{Z} \) is then an induced subgraph of \( \bar{X} \) that is not perfect, and so \( \bar{X} \) is imperfect. Therefore, the complement of a perfect graph is perfect.
|
Yes
|
Lemma 7.7.1 For \( v \geq {2r} \), an independent set in \( C\left( {v, r}\right) \) has size at most \( r \) . Moreover, an independent set of size \( r \) consists of the vertices that contain a given element of \( \{ 1,\ldots, v\} \) .
|
Proof. Suppose that \( S \) is an independent set in \( C\left( {v, r}\right) \) . Since \( C\left( {v, r}\right) \) is vertex transitive, we may assume that \( S \) contains the \( r \) -set \( \beta = \{ 1,\ldots, r\} \) . Let \( {S}_{1} \) and \( {S}_{r} \) be the \( r \) -sets in \( S \) that contain the points 1 and \( r \), respectively. Let \( j \) be the least integer that lies in all the \( r \) -sets in \( {S}_{r} \) . The least element of each set in \( {S}_{r} \) is thus at most \( j \), and since distinct sets in \( {S}_{r} \) have distinct least elements, it follows that \( \left| {S}_{r}\right| \leq j \) . On the other hand, each element of \( {S}_{1} \) has a point in common with each element of \( {S}_{r} \) . Hence each element of \( {S}_{1} \) contains \( j \), and consequently, \( \left| {S}_{1}\right| \leq r - j + 1 \) . Since \( v \geq {2r} \), this implies that \( {S}_{1} \cap {S}_{r} = \{ \beta \} \), and so we have

\[ \left| S\right| = \left| {S}_{1}\right| + \left| {S}_{r}\right| - 1 \leq \left( {r - j + 1}\right) + j - 1 = r. \]

If equality holds, then \( S \) consists of the vertices in \( C\left( {v, r}\right) \) that contain \( j \) .
|
Yes
|
Corollary 7.7.3 For \( v \geq {2r} \), the fractional chromatic number of the Kneser graph \( {K}_{v : r} \) is \( v/r \) .
|
Proof. Since \( C\left( {v, r}\right) \) is a subgraph of \( {K}_{v : r} \), it follows that

\[ \frac{v}{r} = {\chi }^{ * }\left( {C\left( {v, r}\right) }\right) \leq {\chi }^{ * }\left( {K}_{v : r}\right) \]

and we have already seen that \( {\chi }^{ * }\left( {K}_{v : r}\right) \leq v/r \) .
|
Yes
|
Corollary 7.7.4 If \( v > {2r} \), then the shortest odd cycle in \( {K}_{v : r} \) has length at least \( v/\left( {v - {2r}}\right) \) .
|
Proof. If the odd cycle \( {C}_{{2m} + 1} \) is a subgraph of \( {K}_{v : r} \), then

\[ 2 + \frac{1}{m} = {\chi }^{ * }\left( {C}_{{2m} + 1}\right) \leq \frac{v}{r} \]

which implies that \( m \geq r/\left( {v - {2r}}\right) \), and hence that \( {2m} + 1 \geq v/\left( {v - {2r}}\right) \) .
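For \( v = 5 \), \( r = 2 \) the bound gives \( 5/\left( 5 - 4 \right) = 5 \), and the Petersen graph attains it. The odd-girth computation below (a shortest closed odd walk is an odd cycle, so the smallest odd power of the adjacency matrix with nonzero trace gives the odd girth) is an illustrative sketch.

```python
from itertools import combinations

# Petersen graph K_{5:2}: adjacency matrix over the 2-subsets of {0..4}
V = [frozenset(s) for s in combinations(range(5), 2)]
N = len(V)
A = [[int(not (V[i] & V[j])) for j in range(N)] for i in range(N)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def odd_girth(A):
    # smallest odd k with a closed walk of length k; such a walk
    # contains an odd cycle of at most that length
    M = A
    for k in range(1, N + 1):
        if k % 2 == 1 and any(M[i][i] for i in range(N)):
            return k
        M = matmul(M, A)

g = odd_girth(A)   # 5, matching the lower bound v/(v - 2r) = 5
```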
|
Yes
|
Corollary 7.8.2 The automorphism group of \( {K}_{v : r} \) is isomorphic to the symmetric group \( \operatorname{Sym}\left( v\right) \) .
|
Proof. Let \( X \) denote \( {K}_{v : r} \) and let \( X\left( i\right) \) denote the maximum independent set consisting of all the \( r \) -sets containing the point \( i \) from the underlying set \( \Omega \) . Any automorphism of \( X \) must permute the maximum independent sets of \( X \), and by the Erdős-Ko-Rado theorem, all the maximum independent sets are of the form \( X\left( i\right) \) for some \( i \in \Omega \) . Thus any automorphism of \( X \) permutes the \( X\left( i\right) \), and thus determines a permutation in \( \operatorname{Sym}\left( v\right) \) . It is straightforward to check that no nonidentity permutation can fix all the \( X\left( i\right) \), and therefore \( \operatorname{Aut}\left( X\right) \cong \operatorname{Sym}\left( v\right) \) .
|
Yes
|
Theorem 7.9.1 If \( v > {2r} \), then \( {K}_{v : r} \) is a core.
|
Proof. Let \( X \) denote \( {K}_{v : r} \), and let \( X\left( i\right) \) denote the maximum independent set consisting of all the \( r \) -sets containing the point \( i \) from the underlying set \( \Omega \) . Let \( \varphi \) be a homomorphism from \( X \) to \( X \) . We will show that it is onto. If \( \beta = \{ 1,\ldots, r\} \), then \( \beta \) is the unique element of the intersection

\[ X\left( 1\right) \cap X\left( 2\right) \cap \cdots \cap X\left( r\right) . \]

By Lemma 7.5.4, the preimage \( {\varphi }^{-1}\left( {X\left( i\right) }\right) \) is an independent set of maximum size. By the Erdős-Ko-Rado theorem, this preimage is equal to \( X\left( {i}^{\prime }\right) \), for some element \( {i}^{\prime } \) of \( \Omega \) . We have

\[ {\varphi }^{-1}\{ \beta \} = {\varphi }^{-1}\left( {X\left( 1\right) }\right) \cap {\varphi }^{-1}\left( {X\left( 2\right) }\right) \cap \cdots \cap {\varphi }^{-1}\left( {X\left( r\right) }\right) \]

from which we see that \( {\varphi }^{-1}\{ \beta \} \) is the intersection of at most \( r \) distinct sets of the form \( X\left( {i}^{\prime }\right) \) . This implies that \( {\varphi }^{-1}\{ \beta \} \neq \varnothing \), and hence \( \varphi \) is onto.
|
Yes
|
Theorem 7.9.2 If \( v \geq {2r} \) and \( r \geq 2 \), there is a homomorphism from \( {K}_{v : r} \) to \( {K}_{v - 2 : r - 1} \) .
|
Proof. If \( v = {2r} \), then \( {K}_{v : r} = \binom{{2r} - 1}{r - 1}{K}_{2} \), which admits a homomorphism into any graph with an edge. So we assume \( v > {2r} \), and that the underlying set \( \Omega \) is equal to \( \{ 1,\ldots, v\} \) . We can easily find a homomorphism \( \varphi \) from \( {K}_{v - 1 : r} \) to \( {K}_{v - 2 : r - 1} \) : map each \( r \) -set to the \( \left( {r - 1}\right) \) -subset we get by deleting its largest element. We identify \( {K}_{v - 1 : r} \) with the subgraph of \( {K}_{v : r} \) induced by the vertices that do not contain \( v \), and try to extend \( \varphi \) into a homomorphism from \( {K}_{v : r} \) into \( {K}_{v - 2 : r - 1} \) .

We note first that the vertices in \( {K}_{v : r} \) that are not in our chosen \( {K}_{v - 1 : r} \) all contain \( v \), and thus they form an independent set in \( {K}_{v : r} \) . Denote this set of vertices by \( S \) and let \( {S}_{i} \) denote the subset of \( S \) formed by the \( r \) -sets that contain \( v, v - 1,\ldots, v - i + 1 \), but not \( v - i \) . The sets \( {S}_{1},\ldots ,{S}_{r} \) form a partition of \( S \) . If \( \alpha \in {S}_{1} \), define \( \varphi \left( \alpha \right) \) to be \( \alpha \smallsetminus v \) . If \( i > 1 \) and \( \alpha \in {S}_{i} \), then \( v - i \notin \alpha \) . In this case let \( \varphi \left( \alpha \right) \) be obtained from \( \alpha \) by deleting \( v \) and replacing \( v - 1 \) by \( v - i \) . It is now routine to check that \( \varphi \) is a homomorphism from \( {K}_{v : r} \) into \( {K}_{v - 2 : r - 1} \) .
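The "routine check" can be done mechanically for the smallest nontrivial case \( v = 6 \), \( r = 2 \), giving a homomorphism from \( K_{6:2} \) into \( K_{4:1} = K_4 \); this implementation of the construction is an illustrative sketch.

```python
from itertools import combinations

# Theorem 7.9.2's map for v = 6, r = 2: K_{6:2} -> K_{4:1}
v, r = 6, 2

def phi(a):
    a = set(a)
    if v not in a:
        return frozenset(a - {max(a)})           # delete the largest element
    rest = a - {v}
    i = 1
    while v - i in a:                            # a is in S_i: contains
        i += 1                                   # v, v-1, ..., v-i+1 but not v-i
    if i == 1:
        return frozenset(rest)                   # alpha in S_1: just delete v
    return frozenset((rest - {v - 1}) | {v - i}) # delete v, replace v-1 by v-i

V = [frozenset(s) for s in combinations(range(1, v + 1), r)]
# images must be (r-1)-subsets of {1,...,v-2} ...
ok_target = all(len(phi(a)) == r - 1 and max(phi(a)) <= v - 2 for a in V)
# ... and disjoint vertices must map to disjoint vertices
ok_edges = all(not (phi(a) & phi(b))
               for a, b in combinations(V, 2) if not (a & b))
```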
|
Yes
|
Lemma 7.9.3 Suppose that \( v > {2r} \) and \( v/r = w/s \) . There is a homomorphism from \( {K}_{v : r} \) to \( {K}_{w : s} \) if and only if \( r \) divides \( s \) .
|
Proof. Suppose \( r \) divides \( s \) ; we may assume \( s = {mr} \) and \( w = {mv} \) . Let \( W \) be a fixed set of size \( w \) and let \( \pi \) be a partition of it into \( v \) cells of size \( m \) . Then the \( s \) -subsets of \( W \) that are the union of \( r \) cells of \( \pi \) induce a subgraph of \( {K}_{w : s} \) isomorphic to \( {K}_{v : r} \) .

For the converse, suppose that \( \varphi \) is a homomorphism from \( X = {K}_{v : r} \) to \( Y = {K}_{w : s} \) . Assume that the vertices of \( X \) are the \( r \) -subsets of the \( v \) -set \( V \), and for \( i \) in \( V \), let \( X\left( i\right) \) be the maximum independent set formed by the \( r \) -subsets that contain \( i \) . Similarly, assume that the vertices of \( Y \) are the \( s \) -subsets of \( W \), and for \( j \) in \( W \), let \( Y\left( j\right) \) be the maximum independent set formed by the \( s \) -subsets that contain \( j \) . By Lemma 7.5.4 and the Erdős-Ko-Rado theorem, the preimage \( {\varphi }^{-1}\left( {Y\left( j\right) }\right) \) is equal to \( X\left( i\right) \), for some \( i \) .

Let \( {\nu }_{i} \) denote the number of elements \( j \) of \( W \) such that \( {\varphi }^{-1}\left( {Y\left( j\right) }\right) = X\left( i\right) \) . Let \( \alpha \) be an arbitrary vertex of \( X \) . Then \( {\varphi }^{-1}\left( {Y\left( j\right) }\right) = X\left( i\right) \) for some \( i \in \alpha \) if and only if \( {\varphi }^{-1}\left( {Y\left( j\right) }\right) \) contains \( \alpha \), if and only if \( j \in \varphi \left( \alpha \right) \) . Therefore,

\[ \mathop{\sum }\limits_{{i \in \alpha }}{\nu }_{i} = s. \]

(7.1)

Moreover, \( {\nu }_{i} \) is independent of \( i \), as we now show. Suppose \( \alpha \) and \( \beta \) are vertices of \( X \) such that \( \left| {\alpha \cap \beta }\right| = r - 1 \) . Then, if \( \alpha \smallsetminus \beta = \{ k\} \) and \( \beta \smallsetminus \alpha = \{ \ell \} \), we have

\[ 0 = s - s = \mathop{\sum }\limits_{{i \in \alpha }}{\nu }_{i} - \mathop{\sum }\limits_{{i \in \beta }}{\nu }_{i} = {\nu }_{k} - {\nu }_{\ell } \]

Thus \( {\nu }_{k} = {\nu }_{\ell } \), and therefore \( {\nu }_{i} \) is constant. By (7.1) it follows that \( r \) divides \( s \), as required.
|
Yes
|
Theorem 7.10.1 (Hilton-Milner) If \( v \geq {2r} \), the maximum size of an independent set in \( {K}_{v : r} \) with no centre is

\[{\mathrm{h}}_{v, r} = 1 + \binom{v - 1}{r - 1} - \binom{v - r - 1}{r - 1}.\]
|
Proof. Suppose that \( f \) is a homomorphism from \( X = {K}_{v : r} \) to \( Y = {K}_{w : \ell } \) . Consider the preimages \( {f}^{-1}\left( {Y\left( i\right) }\right) \) of all the maximum independent sets of \( Y \), and suppose that two of them, say \( {f}^{-1}\left( {Y\left( i\right) }\right) \) and \( {f}^{-1}\left( {Y\left( j\right) }\right) \), have the same centre \( c \) . Then \( f \) maps any \( r \) -set that does not contain \( c \) to an \( \ell \) -set that does not contain \( i \) or \( j \), and so its restriction to the \( r \) -sets not containing \( c \) is a homomorphism from \( {K}_{v - 1 : r} \) to \( {K}_{w - 2 : \ell } \) .

Counting the pairs \( \left( {\alpha, Y\left( i\right) }\right) \) where \( \alpha \in V\left( {K}_{v : r}\right) \) and \( Y\left( i\right) \) contains the vertex \( f\left( \alpha \right) \), we find that

\[ \mathop{\sum }\limits_{i}\left| {{f}^{-1}\left( {Y\left( i\right) }\right) }\right| = \ell \binom{v}{r}. \]

If no two of the preimages \( {f}^{-1}\left( {Y\left( i\right) }\right) \) have the same centre, then at most \( v \) of them have centres and the remaining \( w - v \) do not have centres. In this case, it follows that

\[ \mathop{\sum }\limits_{i}\left| {{f}^{-1}\left( {Y\left( i\right) }\right) }\right| \leq v\binom{v - 1}{r - 1} + \left( {w - v}\right) {\mathrm{h}}_{v, r} \]

and thus the result holds.
|
Yes
|
Lemma 7.10.2 Suppose there is a homomorphism from \( {K}_{v : r} \) to \( {K}_{w : \ell } \) . If

\[ \ell \binom{v}{r} > v\binom{v - 1}{r - 1} + \left( {w - v}\right) {\mathrm{h}}_{v, r}, \]

then there is a homomorphism from \( {K}_{v - 1 : r} \) to \( {K}_{w - 2 : \ell } \) .
|
Proof. Suppose that \( f \) is a homomorphism from \( X = {K}_{v : r} \) to \( Y = {K}_{w : \ell } \) . Consider the preimages \( {f}^{-1}\left( {Y\left( i\right) }\right) \) of all the maximum independent sets of \( Y \), and suppose that two of them, say \( {f}^{-1}\left( {Y\left( i\right) }\right) \) and \( {f}^{-1}\left( {Y\left( j\right) }\right) \), have the same centre \( c \) . Then \( f \) maps any \( r \) -set that does not contain \( c \) to an \( \ell \) -set that does not contain \( i \) or \( j \), and so its restriction to the \( r \) -sets not containing \( c \) is a homomorphism from \( {K}_{v - 1 : r} \) to \( {K}_{w - 2 : \ell } \) .

Counting the pairs \( \left( {\alpha, Y\left( i\right) }\right) \) where \( \alpha \in V\left( {K}_{v : r}\right) \) and \( Y\left( i\right) \) contains the vertex \( f\left( \alpha \right) \), we find that

\[ \mathop{\sum }\limits_{i}\left| {{f}^{-1}\left( {Y\left( i\right) }\right) }\right| = \ell \binom{v}{r}. \]

If no two of the preimages \( {f}^{-1}\left( {Y\left( i\right) }\right) \) have the same centre, then at most \( v \) of them have centres and the remaining \( w - v \) do not have centres. In this case, it follows that

\[ \mathop{\sum }\limits_{i}\left| {{f}^{-1}\left( {Y\left( i\right) }\right) }\right| \leq v\binom{v - 1}{r - 1} + \left( {w - v}\right) {\mathrm{h}}_{v, r} \]

and thus the result holds.
|
Yes
|
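The double count in the proof rests on a simple fact that can be checked mechanically for small parameters: every \( \ell \) -set lies in exactly \( \ell \) of the maximum independent sets \( Y\left( i\right) \), one for each of its elements. A minimal sketch in Python (the parameter choices are ours):

```python
from itertools import combinations
from math import comb

w, l = 7, 3  # parameters of the Kneser graph K_{w:l}; any small values work
vertices = list(combinations(range(w), l))                   # the l-subsets of {0,...,w-1}
Y = {i: [a for a in vertices if i in a] for i in range(w)}   # Y(i): all l-sets containing i

# Each l-set lies in exactly l of the sets Y(i), so counting the pairs
# (a, Y(i)) with a in Y(i) in two ways gives sum_i |Y(i)| = l * C(w, l).
assert all(sum(a in Y[i] for i in range(w)) == l for a in vertices)
assert sum(len(Y[i]) for i in range(w)) == l * comb(w, l)
```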
Lemma 7.11.2 Let \( \mathcal{C} \) be a collection of closed convex subsets of the unit sphere in \( {\mathbb{R}}^{n} \) . Let \( X \) be the graph with the elements of \( \mathcal{C} \) as its vertices, with two elements adjacent if they are disjoint. If for each unit vector a the open half-space \( H\left( a\right) \) contains an element of \( \mathcal{C} \), then \( X \) cannot be properly coloured with \( n \) colours.
|
Proof. Suppose \( X \) has been coloured with the \( n \) colours \( \{ 1,\ldots, n\} \) . For \( i \in \{ 1,\ldots, n\} \), let \( {C}_{i} \) be the set of vectors \( a \) on the unit sphere such that \( H\left( a\right) \) contains a vertex of colour \( i \) . If \( S \in V\left( X\right) \), then the set of vectors \( a \) such that \( {a}^{T}x > 0 \) for all \( x \) in \( S \) is open, and \( {C}_{i} \) is the union of these sets for all vertices of \( X \) with colour \( i \) . Hence \( {C}_{i} \) is open.\n\nBy our constraint on \( \mathcal{C} \), we see that \( { \cup }_{i = 1}^{n}{C}_{i} \) is the entire unit sphere. Hence Borsuk’s theorem implies that for some \( i \), the set \( {C}_{i} \) contains an antipodal pair of points, \( a \) and \( - a \) say. Then both \( H\left( a\right) \) and \( H\left( {-a}\right) \) contain a vertex of colour \( i \) ; since these vertices must be adjacent, our colouring cannot be proper.
|
Yes
|
Theorem 7.11.4 \( \chi \left( {K}_{v : r}\right) = v - {2r} + 2 \) .
|
Proof. We have already seen that \( v - {2r} + 2 \) is an upper bound on \( \chi \left( {K}_{v : r}\right) \) ; we must show that it is also a lower bound.\n\nAssume that \( \Omega = \left\{ {{x}_{1},\ldots ,{x}_{v}}\right\} \) is a set of \( v \) points in \( {\mathbb{R}}^{v - {2r} + 1} \) such that each open half-space \( H\left( a\right) \) contains at least \( r \) points of \( \Omega \) . (Such a configuration exists, by a theorem of Gale.) Call a subset \( S \) of \( \Omega \) conical if it is contained in some open half-space. For each conical \( r \) -subset \( \alpha \) of \( \Omega \), let \( S\left( \alpha \right) \) be the intersection of the unit sphere with the cone generated by \( \alpha \) . (In other words, let \( S\left( \alpha \right) \) be the set of all unit vectors that are nonnegative linear combinations of the elements of \( \alpha \) .) Then let \( X \) be the graph with the sets \( S\left( \alpha \right) \) as vertices, and with two such vertices adjacent if they are disjoint. If \( S\left( \alpha \right) \) is disjoint from \( S\left( \beta \right) \), then clearly \( \alpha \) is disjoint from \( \beta \), and so the map\n\n\[ \varphi : S\left( \alpha \right) \rightarrow \alpha \]\n\nis an injective homomorphism from \( X \) to \( {K}_{v : r} \) . Thus by Lemma 7.11.2, the chromatic number of \( {K}_{v : r} \) is at least \( v - {2r} + 2 \) .\n\nSince the fractional chromatic number of \( {K}_{v : r} \) is only \( v/r \), this shows that the difference between the chromatic number and the fractional chromatic number can be arbitrarily large.
|
Yes
|
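The upper bound \( \chi \left( {K}_{v : r}\right) \leq v - {2r} + 2 \) comes from an explicit colouring: give an \( r \) -set \( \alpha \) the colour \( \min \alpha \) when \( \min \alpha \leq v - {2r} + 1 \), and the single extra colour \( v - {2r} + 2 \) otherwise. A brute-force check that this colouring is proper, for one choice of parameters (our code):

```python
from itertools import combinations

v, r = 7, 3  # so v - 2r + 2 = 3 colours

def colour(a):  # the standard (v - 2r + 2)-colouring; ground set is {1,...,v}
    return min(a) if min(a) <= v - 2 * r + 1 else v - 2 * r + 2

verts = [frozenset(c) for c in combinations(range(1, v + 1), r)]
for a in verts:
    for b in verts:
        if a.isdisjoint(b):                # a ~ b in the Kneser graph K_{v:r}
            assert colour(a) != colour(b)  # adjacent vertices get different colours

# Exactly v - 2r + 2 colours are used: sets coloured v - 2r + 2 all lie inside
# {v - 2r + 2, ..., v}, a set of only 2r - 1 elements, so they pairwise intersect.
assert len({colour(a) for a in verts}) == v - 2 * r + 2
```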
Theorem 7.13.1 (Welzl) Let \( X \) be a graph such that \( \chi \left( X\right) \geq 3 \), and let \( Z \) be a graph such that \( X \rightarrow Z \) but \( Z \nrightarrow X \) . Then there is a graph \( Y \) such that \( X \rightarrow Y \) and \( Y \rightarrow Z \), but \( Z \nrightarrow Y \) and \( Y \nrightarrow X \) .
|
Proof. Since \( X \) is not empty or bipartite, any homomorphism from \( X \) to \( Z \) must map \( X \) into a nonbipartite component of \( Z \) . If we have homomorphisms \( X \rightarrow Y \) and \( Y \rightarrow Z \), it follows that the image of \( Y \) must be contained in a nonbipartite component of \( Z \) . Since \( Y \) cannot be empty, there is a homomorphism from any bipartite component of \( Z \) into \( Y \) . Hence it will suffice if we prove the theorem under the assumption that no component of \( Z \) is bipartite.\n\nLet \( m \) be the maximum value of the odd girths of the components of \( Z \) and let \( n \) be the chromatic number of the map-graph \( {X}^{Z} \) . Let \( L \) be a graph with no odd cycles of length less than or equal to \( m \) and with chromatic number greater than \( n \) . (For example, we may take \( L \) to be a suitable Kneser graph.) Let \( Y \) be the disjoint union \( X \cup \left( {Z \times L}\right) \).\n\nClearly, \( X \rightarrow Y \) . If there is a homomorphism from \( Y \) to \( X \), then there must be one from \( Z \times L \) to \( X \), and therefore, by Corollary 6.4.3, a homomorphism from \( L \) to \( {X}^{Z} \) . Given the chromatic number of \( L \), this is impossible.\n\nSince there are homomorphisms from \( X \) to \( Z \) and from \( Z \times L \) to \( Z \), there is a homomorphism from \( Y \) to \( Z \) . Given the value of the odd girth of \( L \) , there cannot be a homomorphism that maps a component of \( Z \) into \( L \) . Therefore, there is no homomorphism from \( Z \) to \( L \), and so there cannot be one from \( Z \) into \( Z \times L \).
|
Yes
|
Lemma 7.14.1 For any graph \( X \), we have \( {\alpha }_{r}\left( X\right) = \alpha \left( {X \square {K}_{r}}\right) \) .
|
Proof. Suppose that \( S \) is an independent set in \( X \square {K}_{r} \). If \( v \in V\left( {K}_{r}\right) \), then the set \( {S}_{v} \), defined by\n\n\[ \n{S}_{v} = \{ u \in V\left( X\right) : \left( {u, v}\right) \in S\} \n\]\n\nis an independent set in \( X \). Any two distinct vertices of \( X \square {K}_{r} \) with the same first coordinate are adjacent, which implies that if \( v \) and \( w \) are distinct vertices of \( {K}_{r} \), then \( {S}_{v} \cap {S}_{w} = \varnothing \). Thus an independent set in \( X \square {K}_{r} \) corresponds to a set of \( r \) pairwise-disjoint independent sets in \( X \). The subgraph induced by such a set is an \( r \) -colourable subgraph of \( X \). For the converse, suppose that \( {X}^{\prime } \) is an \( r \) -colourable induced subgraph of \( X \). Consider the set of vertices\n\n\[ \nS = \left\{ {\left( {x, i}\right) : x \in V\left( {X}^{\prime }\right) }\right. \text{and}\left. {x\text{has colour}i}\right\} \n\]\n\nin \( X \square {K}_{r} \). All vertices in \( S \) have distinct first coordinates so can be adjacent only if they share the same second coordinate. However, if both \( \left( {x, i}\right) \) and \( \left( {y, i}\right) \) are in \( S \), then \( x \) and \( y \) have the same colour in the \( r \) -colouring of \( {X}^{\prime } \), so are not adjacent in \( X \). Therefore, \( S \) is an independent set in \( X \square {K}_{r} \). \( \square \)
|
Yes
|
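A concrete instance: \( {\alpha }_{2}\left( {C}_{5}\right) = 4 \) (deleting one vertex of \( {C}_{5} \) leaves a bipartite path), and the Cartesian product \( {C}_{5} \square {K}_{2} \) is the pentagonal prism, whose independence number is also 4. A brute-force sketch with our own graph encodings:

```python
from itertools import combinations, product

n, r = 5, 2
X_edges = {frozenset((i, (i + 1) % n)) for i in range(n)}   # the cycle C_5

def alpha(vertices, adjacent):
    """Independence number, by brute force over all subsets."""
    for k in range(len(vertices), 0, -1):
        for S in combinations(vertices, k):
            if all(not adjacent(u, w) for u, w in combinations(S, 2)):
                return k
    return 0

# Cartesian product C_5 box K_r: neighbours differ in exactly one coordinate,
# and are equal-or-adjacent there.
PV = list(product(range(n), range(r)))
def padj(p, q):
    return (p[0] == q[0] and p[1] != q[1]) or \
           (p[1] == q[1] and frozenset((p[0], q[0])) in X_edges)

# alpha_r(X): the largest union of r pairwise disjoint independent sets of X,
# i.e. the most vertices in an induced r-colourable subgraph.
ind = [S for k in range(n + 1) for S in combinations(range(n), k)
       if all(frozenset((u, w)) not in X_edges for u, w in combinations(S, 2))]
alpha_r = max(len(set().union(*parts))
              for parts in combinations(ind, r)
              if sum(map(len, parts)) == len(set().union(*parts)))

assert alpha(PV, padj) == alpha_r == 4
```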
Lemma 7.14.2 If \( Y \) is vertex transitive and there is a homomorphism from \( X \) to \( Y \), then\n\n\[ \frac{\left| V\left( X\right) \right| }{{\alpha }_{r}\left( X\right) } \leq \frac{\left| V\left( Y\right) \right| }{{\alpha }_{r}\left( Y\right) } \]
|
Proof. If there is a homomorphism from \( X \) to \( Y \), then there is a homomorphism from \( X \square {K}_{r} \) to \( Y \square {K}_{r} \) . Therefore, \( {\chi }^{ * }\left( {X \square {K}_{r}}\right) \leq {\chi }^{ * }\left( {Y \square {K}_{r}}\right) \). \n\nUsing Corollary 7.5.3 and Corollary 7.5.2 in turn, we see that\n\n\[ \frac{\left| V\left( X \square {K}_{r}\right) \right| }{\alpha \left( {X \square {K}_{r}}\right) } \leq {\chi }^{ * }\left( {X \square {K}_{r}}\right) \leq {\chi }^{ * }\left( {Y \square {K}_{r}}\right) = \frac{\left| V\left( Y \square {K}_{r}\right) \right| }{\alpha \left( {Y \square {K}_{r}}\right) } \]\n\nand then by the previous lemma\n\n\[ \frac{\left| V\left( X \square {K}_{r}\right) \right| }{{\alpha }_{r}\left( X\right) } \leq \frac{\left| V\left( Y \square {K}_{r}\right) \right| }{{\alpha }_{r}\left( Y\right) } \]\n\nwhich immediately yields the result.
|
Yes
|
Lemma 7.15.1 For graphs \( X \) and \( Y \) , \n\n\[ \n\left| {\operatorname{Hom}\left( {X * Y,{K}_{n}}\right) }\right| = \left| {\operatorname{Hom}\left( {Y,{\mathcal{C}}_{n}\left( X\right) }\right) }\right| \n\]
|
Proof. Exercise.
|
No
|
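Since the proof is left as an exercise, the identity can at least be verified by brute force on a small case. The sketch below assumes the usual definitions: the strong product \( X * Y \) joins distinct pairs whose coordinates are equal or adjacent in each factor, and the colouring graph \( {\mathcal{C}}_{n}\left( X\right) \) has the proper \( n \) -colourings of \( X \) as vertices, with \( f \sim g \) when \( f\left( u\right) \neq g\left( v\right) \) for every pair \( u, v \) that are equal or adjacent in \( X \) :

```python
from itertools import product

# One small test case (our choice): X = K_2, Y = P_3 (path on 3 vertices), n = 4.
VX, EX = [0, 1], [(0, 1)]
VY, EY = [0, 1, 2], [(0, 1), (1, 2)]
n = 4

def near(a, b, E):  # equal or adjacent in the factor with edge set E
    return a == b or (a, b) in E or (b, a) in E

# Strong product X * Y.
SV = list(product(VX, VY))
SE = [(i, j) for i in range(len(SV)) for j in range(i + 1, len(SV))
      if near(SV[i][0], SV[j][0], EX) and near(SV[i][1], SV[j][1], EY)]

# |Hom(X * Y, K_n)|: proper n-colourings of the strong product.
lhs = sum(all(f[i] != f[j] for i, j in SE)
          for f in product(range(n), repeat=len(SV)))

# |Hom(Y, C_n(X))|: maps from Y into the colouring graph of X.
cols = [f for f in product(range(n), repeat=len(VX))
        if all(f[u] != f[v] for u, v in EX)]        # proper n-colourings of X
cadj = lambda f, g: all(f[u] != g[v] for u in VX for v in VX if near(u, v, EX))
rhs = sum(all(cadj(h[u], h[v]) for u, v in EY)
          for h in product(cols, repeat=len(VY)))

assert lhs == rhs and lhs > 0
```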
Theorem 7.15.2 \( {\mathcal{C}}_{n}\left( {K}_{r}\right) = {K}_{n : r}\left\lbrack \overline{{K}_{r!}}\right\rbrack \) .
|
Proof. Each vertex of \( {\mathcal{C}}_{n}\left( {K}_{r}\right) \) is an \( n \) -colouring of \( {K}_{r} \), and so its image (as a function) is a set of \( r \) distinct colours. Partitioning the vertices of \( {\mathcal{C}}_{n}\left( {K}_{r}\right) \) according to their images gives \( \left( \begin{array}{l} n \\ r \end{array}\right) \) cells each containing \( r \) ! pairwise nonadjacent vertices. Any two cells of this partition induce a complete bipartite graph if the corresponding \( r \) -sets are disjoint, and otherwise induce an empty graph. It is straightforward to see that this is precisely the description of the graph \( {K}_{n : r}\left\lbrack \overline{{K}_{r!}}\right\rbrack \) .
|
Yes
|
Lemma 7.15.5 The number of v-colourings of the graph \( {C}_{n}\left\lbrack {K}_{r}\right\rbrack \) is equal to\n\n\[ \left| {\operatorname{Hom}\left( {{C}_{n},{K}_{v : r}\left\lbrack \overline{{K}_{r!}}\right\rbrack }\right) }\right| \]
|
Proof. The lexicographic product \( {C}_{n}\left\lbrack {K}_{r}\right\rbrack \) is equal to the strong product \( {K}_{r} * {C}_{n} \), and therefore we have\n\n\[ \left| {\operatorname{Hom}\left( {{C}_{n}\left\lbrack {K}_{r}\right\rbrack ,{K}_{v}}\right) }\right| = \left| {\operatorname{Hom}\left( {{K}_{r} * {C}_{n},{K}_{v}}\right) }\right| \]\n\n\[ = \left| {\operatorname{Hom}\left( {{C}_{n},{\mathcal{C}}_{v}\left( {K}_{r}\right) }\right) }\right| \]\n\n\[ = \left| {\operatorname{Hom}\left( {{C}_{n},{K}_{v : r}\left\lbrack \overline{{K}_{r!}}\right\rbrack }\right) }\right| . \]
|
Yes
|
Lemma 8.1.1 Let \( X \) and \( Y \) be directed graphs on the same vertex set. Then they are isomorphic if and only if there is a permutation matrix \( P \) such that \( {P}^{T}A\left( X\right) P = A\left( Y\right) \) .
|
Since permutation matrices are orthogonal, \( {P}^{T} = {P}^{-1} \), and so if \( X \) and \( Y \) are isomorphic directed graphs, then \( A\left( X\right) \) and \( A\left( Y\right) \) are similar matrices. The characteristic polynomial of a matrix \( A \) is the polynomial\n\n\[ \phi \left( {A, x}\right) = \det \left( {{xI} - A}\right) \]\n\nand we let \( \phi \left( {X, x}\right) \) denote the characteristic polynomial of \( A\left( X\right) \) . The spectrum of a matrix is the list of its eigenvalues together with their multiplicities. The spectrum of a graph \( X \) is the spectrum of \( A\left( X\right) \) (and similarly we refer to the eigenvalues and eigenvectors of \( A\left( X\right) \) as the eigenvalues and eigenvectors of \( X \) ). Lemma 8.1.1 shows that \( \phi \left( {X, x}\right) = \phi \left( {Y, x}\right) \) if \( X \) and \( Y \) are isomorphic, and so the spectrum is an invariant of the isomorphism class of a graph.
|
Yes
|
Lemma 8.1.2 Let \( X \) be a directed graph with adjacency matrix \( A \) . The number of walks from \( u \) to \( v \) in \( X \) with length \( r \) is \( {\left( {A}^{r}\right) }_{uv} \) .
|
Proof. This is easily proved by induction on \( r \), as you are invited to do. \( ▱ \)
|
No
|
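The induction is easy to confirm computationally: multiply out powers of \( A \) and compare with a direct enumeration of walks. A sketch for the path \( {P}_{3} \), viewed (as the lemma allows) as a directed graph with an arc in each direction for every edge:

```python
from itertools import product

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]          # adjacency matrix of P_3; each edge gives two arcs
n = len(A)

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walks(u, v, r):
    """Number of walks of length r from u to v, by direct enumeration."""
    if r == 0:
        return int(u == v)
    return sum(walks(w, v, r - 1) for w in range(n) if A[u][w])

P = [[int(i == j) for j in range(n)] for i in range(n)]   # A^0 = I
for r in range(7):
    assert all(P[u][v] == walks(u, v, r) for u, v in product(range(n), repeat=2))
    P = matmul(P, A)   # now P = A^(r+1)
```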
Corollary 8.1.3 Let \( X \) be a graph with e edges and \( t \) triangles. If \( A \) is the adjacency matrix of \( X \), then\n\n(a) \( \operatorname{tr}A = 0 \) ,\n\n(b) \( \operatorname{tr}{A}^{2} = {2e} \) ,\n\n(c) \( \operatorname{tr}{A}^{3} = {6t} \) .
|
Since the trace of a square matrix is also equal to the sum of its eigenvalues, and the eigenvalues of \( {A}^{r} \) are the \( r \) th powers of the eigenvalues of \( A \), we see that \( \operatorname{tr}{A}^{r} \) is determined by the spectrum of \( A \) . Therefore, the spectrum of a graph \( X \) determines at least the number of vertices, edges, and triangles in \( X \) . The graphs \( {K}_{1,4} \) and \( {K}_{1} \cup {C}_{4} \) are cospectral and do not have the same number of 4-cycles, so it is difficult to extend these observations.
|
No
|
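Both the trace identities and the cospectral example can be checked with exact integer arithmetic. Since the power sums \( \operatorname{tr}{A}^{r} \) for \( r = 1,\ldots, n \) determine the characteristic polynomial of an \( n \times n \) matrix (Newton's identities), matching traces confirm that the two graphs are cospectral (our code):

```python
def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace_powers(A, kmax):
    """[tr A, tr A^2, ..., tr A^kmax], computed exactly over the integers."""
    P, out = A, []
    for _ in range(kmax):
        out.append(sum(P[i][i] for i in range(len(A))))
        P = matmul(P, A)
    return out

def adj(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

star = adj(5, [(0, 1), (0, 2), (0, 3), (0, 4)])   # K_{1,4}
c4k1 = adj(5, [(0, 1), (1, 2), (2, 3), (3, 0)])   # C_4 plus an isolated vertex

# tr A = 0, tr A^2 = 2e, tr A^3 = 6t: both graphs have e = 4 edges, t = 0 triangles.
for A in (star, c4k1):
    assert trace_powers(A, 3) == [0, 8, 0]

# Matching power sums for r = 1..5 show the graphs are cospectral, even though
# K_{1,4} has no 4-cycle and C_4 has one: tr A^4 also counts shorter closed walks.
assert trace_powers(star, 5) == trace_powers(c4k1, 5)
```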
Lemma 8.2.4 If \( C \) and \( D \) are matrices such that \( {CD} \) and \( {DC} \) are both defined, then \( \det \left( {I - {CD}}\right) = \det \left( {I - {DC}}\right) \) .
|
Proof. If\n\n\[ \nX = \left( \begin{matrix} I & C \\ D & I \end{matrix}\right) ,\;Y = \left( \begin{matrix} I & 0 \\ - D & I \end{matrix}\right) , \n\] \n\nthen \n\n\[ \n{XY} = \left( \begin{matrix} I - {CD} & C \\ 0 & I \end{matrix}\right) ,\;{YX} = \left( \begin{matrix} I & C \\ 0 & I - {DC} \end{matrix}\right) , \n\] \n\nand since \( \det {XY} = \det {YX} \), it follows that \( \det \left( {I - {CD}}\right) = \det \left( {I - {DC}}\right) .▱ \n
|
Yes
|
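The lemma is easy to stress-test with exact integer arithmetic; note that \( {CD} \) and \( {DC} \) have different sizes when \( C \) is not square, so the two identity matrices below differ in size. A sketch over random rectangular integer matrices (sizes are our choice):

```python
from itertools import permutations
from random import randint, seed

def det(M):
    """Exact integer determinant via the Leibniz permutation expansion."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N))) for j in range(len(N[0]))]
            for i in range(len(M))]

def i_minus(M):
    return [[int(i == j) - M[i][j] for j in range(len(M))] for i in range(len(M))]

seed(1)
for _ in range(25):
    C = [[randint(-3, 3) for _ in range(3)] for _ in range(2)]   # C is 2 x 3
    D = [[randint(-3, 3) for _ in range(2)] for _ in range(3)]   # D is 3 x 2
    assert det(i_minus(matmul(C, D))) == det(i_minus(matmul(D, C)))
```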
Lemma 8.2.5 Let \( X \) be a regular graph of valency \( k \) with \( n \) vertices and e edges and let \( L \) be the line graph of \( X \) . Then\n\n\[ \phi \left( {L, x}\right) = {\left( x + 2\right) }^{e - n}\phi \left( {X, x - k + 2}\right) . \]
|
Proof. Substituting \( C = {x}^{-1}{B}^{T} \) and \( D = B \) into the previous lemma we get\n\n\[ \det \left( {{I}_{e} - {x}^{-1}{B}^{T}B}\right) = \det \left( {{I}_{n} - {x}^{-1}B{B}^{T}}\right) \]\n\nwhence\n\n\[ \det \left( {x{I}_{e} - {B}^{T}B}\right) = {x}^{e - n}\det \left( {x{I}_{n} - B{B}^{T}}\right) . \]\n\nNoting that \( \Delta \left( X\right) = {kI} \) and using Lemma 8.2.2 and Lemma 8.2.3, we get\n\n\[ \det \left( {\left( {x - 2}\right) {I}_{e} - A\left( L\right) }\right) = {x}^{e - n}\det \left( {\left( {x - k}\right) {I}_{n} - A\left( X\right) }\right) \]\n\nand so\n\n\[ \phi \left( {L, x - 2}\right) = {x}^{e - n}\phi \left( {X, x - k}\right) \]\n\nwhence our claim follows.
|
Yes
|
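The identity can be confirmed exactly for \( X = {K}_{4} \), where \( k = 3, n = 4, e = 6 \) and \( L\left( {K}_{4}\right) \) is the octahedron. Both sides are degree-6 polynomials in \( x \), so agreement at more than six integer points proves equality; the determinant code below is our own:

```python
from itertools import combinations, permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += sign * prod
    return total

def phi(A, x):  # characteristic polynomial det(xI - A), evaluated at an integer
    n = len(A)
    return det([[(x if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)])

# X = K_4: n = 4 vertices, e = 6 edges, valency k = 3.
n, e, k = 4, 6, 3
edges = list(combinations(range(n), 2))
A = [[int(i != j) for j in range(n)] for i in range(n)]

# Line graph L(K_4): one vertex per edge, adjacent when the edges share an endpoint.
L = [[int(f != g and bool(set(f) & set(g))) for g in edges] for f in edges]

# Check phi(L, x) = (x + 2)^(e - n) phi(X, x - k + 2) at nine integer points;
# two degree-6 polynomials agreeing there must be identical.
for x in range(-4, 5):
    assert phi(L, x) == (x + 2) ** (e - n) * phi(A, x - k + 2)
```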
Lemma 8.3.3 Let \( X \) and \( Y \) be dual plane graphs, and let \( \sigma \) be an orientation of \( X \) . If \( D \) and \( E \) are the incidence matrices of \( {X}^{\sigma } \) and \( {Y}^{\sigma } \), then \( D{E}^{T} = 0 \) .
|
Proof. If \( u \) is a vertex of \( X \) and \( F \) is a face, there are exactly two edges on \( u \) and in \( F \) . Denote them by \( g \) and \( h \) and assume, for convenience, that \( g \) precedes \( h \) as we go clockwise around \( F \) . Then the \( {uF} \) -entry of \( D{E}^{T} \) is equal to\n\n\[ \n{D}_{ug}{E}_{gF}^{T} + {D}_{uh}{E}_{hF}^{T} \n\]\n\nIf the orientation of the edge \( g \) is reversed, then the value of the product \( {D}_{ug}{E}_{gF}^{T} \) does not change. Hence the value of the sum is independent of the orientation \( \sigma \), and so we may assume that \( g \) has head \( u \) and that \( h \) has tail \( u \) . This implies that the edges in \( Y \) corresponding to \( g \) and \( h \) both have head \( F \), and a simple computation now yields that the sum is zero.
|
Yes
|
Lemma 8.4.1 Let \( A \) be a real symmetric matrix. If \( u \) and \( v \) are eigenvectors of \( A \) with different eigenvalues, then \( u \) and \( v \) are orthogonal.
|
Proof. Suppose that \( {Au} = {\lambda u} \) and \( {Av} = {\tau v} \) . As \( A \) is symmetric, \( {u}^{T}{Av} = \) \( {\left( {v}^{T}Au\right) }^{T} \) . However, the left-hand side of this equation is \( \tau {u}^{T}v \) and the right-hand side is \( \lambda {u}^{T}v \), and so if \( \tau \neq \lambda \), it must be the case that \( {u}^{T}v = 0 \) .
|
Yes
|
Lemma 8.4.2 The eigenvalues of a real symmetric matrix \( A \) are real numbers.
|
Proof. Let \( u \) be an eigenvector of \( A \) with eigenvalue \( \lambda \) . Then by taking the complex conjugate of the equation \( {Au} = {\lambda u} \) we get \( A\bar{u} = \bar{\lambda }\bar{u} \), and so \( \bar{u} \) is also an eigenvector of \( A \) . Now, by definition an eigenvector is not zero, so \( {u}^{T}\bar{u} > 0 \) . By the previous lemma, \( u \) and \( \bar{u} \) cannot have different eigenvalues, so \( \lambda = \bar{\lambda } \), and the claim is proved.
|
No
|
Lemma 8.4.3 Let \( A \) be a real symmetric \( n \times n \) matrix. If \( U \) is an \( A \) - invariant subspace of \( {\mathbb{R}}^{n} \), then \( {U}^{ \bot } \) is also \( A \) -invariant.
|
Proof. For any two vectors \( u \) and \( v \), we have\n\n\[ \n{v}^{T}\left( {Au}\right) = {\left( Av\right) }^{T}u \n\]\n\nIf \( u \in U \), then \( {Au} \in U \) ; hence if \( v \in {U}^{ \bot } \), then \( {v}^{T}{Au} = 0 \) . Consequently, \( {\left( Av\right) }^{T}u = 0 \) whenever \( u \in U \) and \( v \in {U}^{ \bot } \) . This implies that \( {Av} \in {U}^{ \bot } \) whenever \( v \in {U}^{ \bot } \), and therefore \( {U}^{ \bot } \) is \( A \) -invariant.
|
Yes
|
Lemma 8.4.4 Let \( A \) be an \( n \times n \) real symmetric matrix. If \( U \) is a nonzero \( A \) -invariant subspace of \( {\mathbb{R}}^{n} \), then \( U \) contains a real eigenvector of \( A \) .
|
Proof. Let \( R \) be a matrix whose columns form an orthonormal basis for \( U \) . Then, because \( U \) is \( A \) -invariant, \( {AR} = {RB} \) for some square matrix \( B \) . Since \( {R}^{T}R = I \), we have\n\n\[ \n{R}^{T}{AR} = {R}^{T}{RB} = B, \n\]\n\nwhich implies that \( B \) is symmetric, as well as real. Since every symmetric matrix has at least one eigenvalue, we may choose a real eigenvector \( u \) of \( B \) with eigenvalue \( \lambda \) . Then \( {ARu} = {RBu} = {\lambda Ru} \), and since \( u \neq 0 \) and the columns of \( R \) are linearly independent, \( {Ru} \neq 0 \) . Therefore, \( {Ru} \) is an eigenvector of \( A \) contained in \( U \) .
|
Yes
|
Theorem 8.4.5 Let \( A \) be a real symmetric \( n \times n \) matrix. Then \( {\mathbb{R}}^{n} \) has an orthonormal basis consisting of eigenvectors of \( A \) .
|
Proof. Let \( \left\{ {{u}_{1},\ldots ,{u}_{m}}\right\} \) be an orthonormal (and hence linearly independent) set of \( m < n \) eigenvectors of \( A \), and let \( M \) be the subspace that they span. Since \( A \) has at least one eigenvector, \( m \geq 1 \) . The subspace \( M \) is \( A \) -invariant, and hence \( {M}^{ \bot } \) is \( A \) -invariant, and so \( {M}^{ \bot } \) contains a (normalized) eigenvector \( {u}_{m + 1} \) . Then \( \left\{ {{u}_{1},\ldots ,{u}_{m},{u}_{m + 1}}\right\} \) is an orthonormal set of \( m + 1 \) eigenvectors of \( A \) . Therefore, a simple induction argument shows that a set consisting of one normalized eigenvector can be extended to an orthonormal basis consisting of eigenvectors of \( A \) .
|
Yes
|
Corollary 8.4.6 If \( A \) is an \( n \times n \) real symmetric matrix, then there are matrices \( L \) and \( D \) such that \( {L}^{T}L = L{L}^{T} = I \) and \( {LA}{L}^{T} = D \), where \( D \) is the diagonal matrix of eigenvalues of \( A \) .
|
Proof. Let \( L \) be the matrix whose rows are an orthonormal basis of eigenvectors of \( A \) . We leave it as an exercise to show that \( L \) has the stated properties.
|
No
|
Lemma 8.5.1 Let \( X \) be a \( k \) -regular graph on \( n \) vertices with eigenvalues \( k,{\theta }_{2},\ldots ,{\theta }_{n} \) . Then \( X \) and its complement \( \bar{X} \) have the same eigenvectors, and the eigenvalues of \( \bar{X} \) are \( n - k - 1, - 1 - {\theta }_{2},\ldots , - 1 - {\theta }_{n} \) .
|
Proof. The adjacency matrix of the complement \( \bar{X} \) is given by\n\n\[ A\left( \bar{X}\right) = J - I - A\left( X\right) \]\n\nwhere \( J \) is the all-ones matrix. Let \( \left\{ {\mathbf{1},{u}_{2},\ldots ,{u}_{n}}\right\} \) be an orthonormal basis of eigenvectors of \( A\left( X\right) \) . Then 1 is an eigenvector of \( \bar{X} \) with eigenvalue \( n - k - 1 \) . For \( 2 \leq i \leq n \), the eigenvector \( {u}_{i} \) is orthogonal to 1, and so\n\n\[ A\left( \bar{X}\right) {u}_{i} = \left( {J - I - A\left( X\right) }\right) {u}_{i} = \left( {-1 - {\theta }_{i}}\right) {u}_{i}. \]\n\nTherefore, \( {u}_{i} \) is an eigenvector of \( A\left( \bar{X}\right) \) with eigenvalue \( - 1 - {\theta }_{i} \) .
|
Yes
|
Lemma 8.6.1 If \( A \) is a positive semidefinite matrix, then there is a matrix \( B \) such that \( A = {B}^{T}B \) .
|
Proof. Since \( A \) is symmetric, there is a matrix \( L \) such that\n\n\[ A = {L}^{T}{\Lambda L} \]\n\nwhere \( \Lambda \) is the diagonal matrix with \( i \) th entry equal to the \( i \) th eigenvalue of \( A \) . Since \( A \) is positive semidefinite, the entries of \( \Lambda \) are nonnegative, and so there is a diagonal matrix \( D \) such that \( {D}^{2} = \Lambda \) . If \( B = {L}^{T}{DL} \), then \( B = {B}^{T} \) and \( A = {B}^{2} = {B}^{T}B \), as required.
|
Yes
|
Lemma 8.6.2 If \( L \) is a line graph, then \( {\theta }_{\min }\left( L\right) \geq - 2 \) .
|
Proof. If \( L \) is the line graph of \( X \) and \( B \) is the incidence matrix of \( X \), we have\n\n\[ A\left( L\right) + {2I} = {B}^{T}B. \]\n\nSince \( {B}^{T}B \) is positive semidefinite, its eigenvalues are nonnegative and all eigenvalues of \( {B}^{T}B - {2I} \) are at least -2.
|
Yes
|
Lemma 8.6.3 Let \( Y \) be an induced subgraph of \( X \) . Then\n\n\[{\theta }_{\min }\left( X\right) \leq {\theta }_{\min }\left( Y\right) \leq {\theta }_{\max }\left( Y\right) \leq {\theta }_{\max }\left( X\right)\]
|
Proof. Let \( A \) be the adjacency matrix of \( X \) and abbreviate \( {\theta }_{\max }\left( X\right) \) to \( \theta \) . The matrix \( {\theta I} - A \) has only nonnegative eigenvalues, and is therefore positive semidefinite. Let \( f \) be any vector that is zero on the vertices of \( X \) not in \( Y \), and let \( {f}_{Y} \) be its restriction to \( V\left( Y\right) \) . Then\n\n\[0 \leq {f}^{T}\left( {{\theta I} - A}\right) f = {f}_{Y}^{T}\left( {{\theta I} - A\left( Y\right) }\right) {f}_{Y}\]\n\nfrom which we deduce that \( {\theta I} - A\left( Y\right) \) is positive semidefinite. Hence \( {\theta }_{\max }\left( Y\right) \leq \theta \) . A similar argument applied to \( A - {\theta }_{\min }\left( X\right) I \) yields the second claim of the lemma.
|
Yes
|
Lemma 8.7.2 Let \( A \) be an \( n \times n \) nonnegative irreducible matrix and let \( \rho \) be the greatest real number such that \( A \) has a \( \rho \) -subharmonic vector. If \( B \) is an \( n \times n \) matrix such that \( \left| B\right| \leq A \) and \( {Bx} = {\theta x} \), then \( \left| \theta \right| \leq \rho \) . If \( \left| \theta \right| = \rho \) , then \( \left| B\right| = A \) and \( \left| x\right| \) is an eigenvector of \( A \) with eigenvalue \( \rho \) .
|
Proof. If \( {Bx} = {\theta x} \), then\n\n\[ \left| \theta \right| \left| x\right| = \left| {\theta x}\right| = \left| {Bx}\right| \leq \left| B\right| \left| x\right| \leq A\left| x\right| .\n\]\n\nHence \( \left| x\right| \) is \( \left| \theta \right| \) -subharmonic for \( A \), and so \( \left| \theta \right| \leq \rho \) . If \( \left| \theta \right| = \rho \), then \( A\left| x\right| = \) \( \left| B\right| \left| x\right| = \rho \left| x\right| \), and by the previous lemma, \( \left| x\right| \) is positive. Since \( A - \left| B\right| \geq 0 \) and \( \left( {A - \left| B\right| }\right) \left| x\right| = 0 \), it follows that \( A = \left| B\right| \) .
|
Yes
|
Lemma 8.7.3 Let \( A \) be a nonnegative irreducible \( n \times n \) matrix with spectral radius \( \rho \) . Then \( \rho \) is a simple eigenvalue of \( A \), and if \( x \) is an eigenvector with eigenvalue \( \rho \), then all entries of \( x \) are nonzero and have the same sign.
|
Proof. The \( \rho \) -eigenspace of \( A \) is 1-dimensional, for otherwise we could find a \( \rho \) -subharmonic vector with some entry equal to zero, contradicting Lemma 8.7.1. If \( x \) is an eigenvector with eigenvalue \( \rho \), then by the previous lemma, \( \left| x\right| \) is a positive eigenvector with the same eigenvalue. Thus \( \left| x\right| \) is a multiple of \( x \), which implies that all the entries of \( x \) have the same sign.\n\nSince the geometric multiplicity of \( \rho \) is 1, we see that \( K = \ker \left( {A - {\rho I}}\right) \) has dimension 1 and the column space \( C \) of \( A - {\rho I} \) has dimension \( n - 1 \) . If \( C \) contains \( x \), then we can find a vector \( y \) such that \( x = \left( {A - {\rho I}}\right) y \) . For any \( k \), we have \( \left( {A - {\rho I}}\right) \left( {y + {kx}}\right) = x \), and so by taking \( k \) sufficiently large, we may assume that \( y \) is positive. But then \( y \) is \( \rho \) -subharmonic and hence is a multiple of \( x \), which is impossible. Therefore, we conclude that \( K \cap C = 0 \) , and that \( {\mathbb{R}}^{n} \) is the direct sum of \( K \) and \( C \) . Since \( K \) and \( C \) are \( A \) -invariant, this implies that the characteristic polynomial \( \varphi \left( {A, t}\right) \) of \( A \) is the product of \( t - \rho \) and the characteristic polynomial of \( A \) restricted to \( C \) . As \( x \) is not in \( C \), all eigenvectors of \( A \) contained in \( C \) have eigenvalue different from \( \rho \), and so we conclude that \( \rho \) is a simple root of \( \varphi \left( {A, t}\right) \), and hence has algebraic multiplicity one.
|
Yes
|
Theorem 8.8.1 Suppose \( A \) is a real nonnegative \( n \times n \) matrix whose underlying directed graph \( X \) is strongly connected. Then:\n\n(a) \( \rho \left( A\right) \) is a simple eigenvalue of \( A \) . If \( x \) is an eigenvector for \( \rho \), then no entries of \( x \) are zero, and all have the same sign.\n\n(b) Suppose \( {A}_{1} \) is a real nonnegative \( n \times n \) matrix such that \( A - {A}_{1} \) is nonnegative. Then \( \rho \left( {A}_{1}\right) \leq \rho \left( A\right) \), with equality if and only if \( {A}_{1} = A \) .
|
The first two parts of this theorem follow from the results of the previous section. We discuss part (c), but do not give a complete proof of it, since we will not need its full strength.
|
No
|
Theorem 8.9.1 Let \( A \) be a symmetric matrix of rank \( r \) . Then there is a permutation matrix \( P \) and a principal \( r \times r \) submatrix \( M \) of \( A \) such that\n\n\[ \n{P}^{T}{AP} = \left( \begin{matrix} I \\ R \end{matrix}\right) M\left( \begin{array}{ll} I & {R}^{T} \end{array}\right) .\n\]
|
Proof. Since \( A \) has rank \( r \), there is a linearly independent set of \( r \) rows of \( A \) . By symmetry, the corresponding set of columns is also linearly independent. The entries of \( A \) in these rows and columns determine an \( r \times r \) principal submatrix \( M \) . Therefore, there is a permutation matrix \( P \) such that\n\n\[ \n{P}^{T}{AP} = \left( \begin{matrix} M & {N}^{T} \\ N & H \end{matrix}\right)\n\]\n\nSince the first \( r \) rows of this matrix generate the row space of \( {P}^{T}{AP} \), we have that \( N = {RM} \) for some matrix \( R \), and hence \( H = R{N}^{T} = {RM}{R}^{T} \) . Therefore,\n\n\[ \n{P}^{T}{AP} = \left( \begin{matrix} M & M{R}^{T} \\ {RM} & {RM}{R}^{T} \end{matrix}\right) = \left( \begin{matrix} I \\ R \end{matrix}\right) M\left( \begin{array}{ll} I & {R}^{T} \end{array}\right)\n\]\n\nas claimed.
|
Yes
|
Corollary 8.9.4 Let \( A \) be a real symmetric \( n \times n \) matrix of rank \( r \) . Then there is an \( n \times r \) matrix \( C \) of rank \( r \) such that\n\n\[ A = {CN}{C}^{T} \]\n\nwhere \( N \) is a block-diagonal \( r \times r \) matrix with \( r - {2s} \) diagonal entries equal to \( \pm 1 \), and \( s \) blocks of the form\n\n\[ \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \]
|
Proof. We note that\n\n\[ {\beta }^{-1}\left( {y{z}^{T} + z{y}^{T}}\right) = \left( \begin{array}{ll} {\beta }^{-1}y & z \end{array}\right) \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) {\left( \begin{array}{ll} {\beta }^{-1}y & z \end{array}\right) }^{T}. \]\n\nTherefore, if we take \( C \) to be the \( n \times r \) matrix with columns \( \sqrt{\left| {\alpha }_{i}^{-1}\right| }{x}_{i} \) , \( {\beta }_{j}^{-1}{y}_{j} \), and \( {z}_{j} \), then\n\n\[ A = {CN}{C}^{T} \]\n\nwhere \( N \) is a block-diagonal matrix, with each diagonal block one of the matrices\n\n\[ \left( \begin{matrix} 0 \end{matrix}\right) ,\;\left( {\pm 1}\right) ,\;\left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) . \]\n\nThe column space of \( A \) is contained in the space spanned by vectors \( {x}_{i} \) , \( {y}_{j} \) and \( {z}_{j} \) in (8.1); because these two spaces have the same dimension, we conclude that these vectors are a basis for the column space of \( A \) . Therefore, \( \operatorname{rk}\left( C\right) = r \).\n\nThe previous result is an application of Lemma 8.9.3 to real symmetric matrices. In the next section we apply it to symmetric matrices over \( {GF}\left( 2\right) \) .
|
Yes
|
Theorem 8.10.1 Let \( A \) be a symmetric \( n \times n \) matrix over \( {GF}\left( 2\right) \) with zero diagonal and binary rank \( m \) . Then \( m \) is even and there is an \( m \times n \) matrix \( C \) of rank \( m \) such that\n\n\[ A = {CN}{C}^{T} \]\n\nwhere \( N \) is a block diagonal matrix with \( m/2 \) blocks of the form\n\n\[ \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \]
|
Proof. Over \( {GF}\left( 2\right) \), the diagonal entries of the matrix \( y{z}^{T} + z{y}^{T} \) are zero. Since all diagonal entries of \( A \) are zero, it follows that the algorithm implicit in the proof of Lemma 8.9.3 will express \( A \) as a sum of symmetric matrices with rank two and zero diagonals. Therefore, Lemma 8.9.3 implies that \( \operatorname{rk}\left( A\right) \) is even. The proof of Corollary 8.9.4 now yields the rest of the theorem.
|
Yes
|
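The evenness of the binary rank is easy to confirm for any simple graph by Gaussian elimination over \( {GF}\left( 2\right) \) on its adjacency matrix, which is symmetric with zero diagonal. A sketch with rows stored as integer bitmasks (our encoding):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    rank = 0
    rows = list(rows)
    while rows:
        row = rows.pop()
        if row == 0:
            continue
        rank += 1
        lead = row.bit_length() - 1
        # clear the pivot bit from every remaining row
        rows = [r ^ row if (r >> lead) & 1 else r for r in rows]
    return rank

def adj_rows(n, edges):
    rows = [0] * n                 # zero diagonal: simple graphs have no loops
    for u, v in edges:
        rows[u] |= 1 << v
        rows[v] |= 1 << u
    return rows

graphs = [
    (5, [(i, (i + 1) % 5) for i in range(5)]),                   # C_5
    (4, [(u, v) for u in range(4) for v in range(u + 1, 4)]),    # K_4
    (5, [(0, 1), (0, 2), (0, 3), (0, 4)]),                       # K_{1,4}
    (6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]),               # P_6
]
for n, E in graphs:
    assert gf2_rank(adj_rows(n, E)) % 2 == 0   # the binary rank is always even
```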
Theorem 8.11.1 A reduced graph has binary rank at most \( {2r} \) if and only if it is an induced subgraph of \( \operatorname{Sp}\left( {2r}\right) \) .
|
Proof. Any reduced graph \( X \) of binary rank \( {2r} \) has a vectorial representation as a spanning set of nonzero vectors in \( {GF}{\left( 2\right) }^{2r} \) . Therefore, the vertex set of \( X \) is a subset of the vertices of \( \operatorname{Sp}\left( {2r}\right) \), where two vertices are adjacent in \( X \) if and only if they are adjacent in \( \operatorname{Sp}\left( {2r}\right) \) . Therefore, \( X \) is an induced subgraph of \( \operatorname{Sp}\left( {2r}\right) \) . The converse is clear.
|
Yes
|
Theorem 8.11.2 Every graph on \( {2r} - 1 \) vertices occurs as an induced subgraph of \( \operatorname{Sp}\left( {2r}\right) \) .
|
Proof. We prove this by induction on \( r \) . It is true when \( r = 1 \) because a single vertex is an induced subgraph of a triangle. So suppose that \( r > 1 \) , and let \( X \) be an arbitrary graph on \( {2r} - 1 \) vertices. If \( X \) is empty, then it is straightforward to see that it is an induced subgraph of \( \operatorname{Sp}\left( {2r}\right) \) . Otherwise, \( X \) has at least one edge \( {uv} \) . Let \( Y \) be the rank-two reduction of \( X \) at the edge \( {uv} \) . Then \( Y \) is a graph on \( {2r} - 3 \) vertices, and hence by the inductive hypothesis can be represented as a set \( \Omega \) of nonzero vectors in \( {GF}{\left( 2\right) }^{{2r} - 2} \) . If \( z \) is a vector in \( \Omega \) representing the vertex \( y \in V\left( Y\right) \), then define a vector \( {z}^{\prime } \in {GF}{\left( 2\right) }^{2r} \) as follows:\n\n\[ \n{z}_{i}^{\prime } = \left\{ \begin{array}{ll} {z}_{i}, & \text{ for }1 \leq i \leq {2r} - 2 \\ 1, & \text{ if }i = {2r} - 1\text{ and }y \sim u\text{ in }X,\text{ or }i = {2r}\text{ and }y \sim v\text{ in }X \\ 0, & \text{ otherwise. } \end{array}\right. \n\]\n\nThen the set of vectors\n\n\[ \n{\Omega }^{\prime } = \left\{ {{z}^{\prime } : z \in \Omega }\right\} \cup \left\{ {{e}_{{2r} - 1},{e}_{2r}}\right\} \n\]\n\nis a set of \( {2r} - 1 \) vectors in \( {GF}{\left( 2\right) }^{2r} \) . Checking that the graph defined by \( {\Omega }^{\prime } \) is equal to \( X \) requires examining several cases, but is otherwise routine, so we leave it as Exercise 28.
|
No
|
Lemma 8.12.1 If \( X \) is a graph with diameter \( d \), then \( A\left( X\right) \) has at least \( d + 1 \) distinct eigenvalues.
|
Proof. We sketch the proof. Observe that the \( {uv} \) -entry of \( {\left( A + I\right) }^{r} \) is nonzero if and only if \( u \) and \( v \) are joined by a path of length at most \( r \) . Consequently, the matrices \( {\left( A + I\right) }^{r} \) for \( r = 0,\ldots, d \) form a linearly independent set in the space of all polynomials in \( A \) . Therefore, \( d + 1 \) is no greater than the dimension of this space, which equals the number of primitive idempotents of \( A \), that is, the number of distinct eigenvalues of \( A \) .
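For a concrete instance (our own check, not part of the text): the Petersen graph has diameter 2 and exactly three distinct eigenvalues, 3, 1, and -2, so the bound \( d + 1 \leq 3 \) is attained with equality.

```python
import itertools
import numpy as np

# Petersen graph built as the Kneser graph K(5,2): vertices are the
# 2-subsets of {0,...,4}, adjacent exactly when disjoint.
verts = list(itertools.combinations(range(5), 2))
n = len(verts)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in verts]
              for u in verts])

# The uv-entry of (A + I)^r is nonzero iff dist(u, v) <= r, so the diameter
# is the least r for which (A + I)^r has no zero entry.
M = A + np.eye(n)
P, d = np.eye(n), 0
while np.any(np.isclose(P, 0)):
    P, d = P @ M, d + 1

distinct = len(set(np.round(np.linalg.eigvalsh(A), 6)))
assert distinct >= d + 1      # Lemma 8.12.1
```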
|
No
|
Lemma 8.13.1 Let \( A \) be a real symmetric \( n \times n \) matrix and let \( B \) denote the matrix obtained by deleting the ith row and column of \( A \) . Then\n\n\[ \frac{\phi \left( {B, x}\right) }{\phi \left( {A, x}\right) } = {e}_{i}^{T}{\left( xI - A\right) }^{-1}{e}_{i} \]\n\nwhere \( {e}_{i} \) is the ith standard basis vector.
|
Proof. From the standard determinantal formula for the inverse of a matrix we have\n\n\[ {\left( {\left( xI - A\right) }^{-1}\right) }_{ii} = \frac{\det \left( {{xI} - B}\right) }{\det \left( {{xI} - A}\right) } \]\n\nso noting that\n\n\[ {\left( {\left( xI - A\right) }^{-1}\right) }_{ii} = {e}_{i}^{T}{\left( xI - A\right) }^{-1}{e}_{i} \]\n\nsuffices to complete the proof.
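A quick numerical confirmation of the lemma (an illustration of ours, not the text's): compare the determinant ratio with the diagonal entry of the resolvent at a few points chosen well outside the spectrum of a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = A + A.T                        # a sample real symmetric matrix

for i in range(5):
    # B = A with row and column i deleted
    B = np.delete(np.delete(A, i, axis=0), i, axis=1)
    for x in (20.0, -20.0, 15.0):  # points safely outside the spectrum
        lhs = np.linalg.det(x * np.eye(4) - B) / np.linalg.det(x * np.eye(5) - A)
        rhs = np.linalg.inv(x * np.eye(5) - A)[i, i]
        assert np.isclose(lhs, rhs)
```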
|
Yes
|
Corollary 8.13.2 For any graph \( X \) we have\n\n\[ \n{\phi }^{\prime }\left( {X, x}\right) = \mathop{\sum }\limits_{{u \in V\left( X\right) }}\phi \left( {X \smallsetminus u, x}\right) \n\]
|
Proof. By (8.3),\n\n\[ \n\operatorname{tr}{\left( xI - A\right) }^{-1} = \mathop{\sum }\limits_{\theta }\frac{\operatorname{tr}{E}_{\theta }}{x - \theta }. \n\]\n\nBy the lemma, the left side here is equal to\n\n\[ \n\mathop{\sum }\limits_{{u \in V\left( X\right) }}\frac{\phi \left( {X \smallsetminus u, x}\right) }{\phi \left( {X, x}\right) } \n\]\n\nIf \( {m}_{\theta } \) denotes the multiplicity of \( \theta \) as a zero of \( \phi \left( {X, x}\right) \), then a little bit of calculus yields the partial fraction expansion\n\n\[ \n\frac{{\phi }^{\prime }\left( {X, x}\right) }{\phi \left( {X, x}\right) } = \mathop{\sum }\limits_{\theta }\frac{{m}_{\theta }}{x - \theta } \n\]\n\nSince \( {E}_{\theta } \) is a symmetric matrix and \( {E}_{\theta }^{2} = {E}_{\theta } \), its eigenvalues are all 0 or 1, and \( \operatorname{tr}{E}_{\theta } \) is equal to its rank. But the rank of \( {E}_{\theta } \) is the dimension of the eigenspace associated with \( \theta \), and therefore \( \operatorname{tr}{E}_{\theta } = {m}_{\theta } \). This completes the proof.
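The identity is easy to test numerically. The sketch below (ours) checks it for the 5-cycle, whose characteristic polynomial is \( x^5 - 5x^3 + 5x - 2 \); deleting any vertex leaves the path \( P_4 \).

```python
import numpy as np

# Adjacency matrix of the cycle C5.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

phi = np.poly(A)          # characteristic polynomial coefficients of C5
lhs = np.polyder(phi)     # phi'(X, x)
rhs = sum(np.poly(np.delete(np.delete(A, u, 0), u, 1)) for u in range(n))
assert np.allclose(lhs, rhs)       # Corollary 8.13.2 for X = C5
```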
|
Yes
|
Theorem 8.13.3 Let \( A \) be a real symmetric \( n \times n \) matrix, let \( b \) be a vector of length \( n \), and define \( \psi \left( x\right) \) to be the rational function \( {b}^{T}{\left( xI - A\right) }^{-1}b \) . Then all zeros and poles of \( \psi \) are simple, and \( {\psi }^{\prime } \) is negative everywhere it is defined. If \( \theta \) and \( \tau \) are consecutive poles of \( \psi \), the closed interval \( \left\lbrack {\theta ,\tau }\right\rbrack \) contains exactly one zero of \( \psi \) .
|
Proof. By (8.3),\n\n\[ \n{b}^{T}{\left( xI - A\right) }^{-1}b = \mathop{\sum }\limits_{{\theta \in \operatorname{ev}\left( A\right) }}\frac{{b}^{T}{E}_{\theta }b}{x - \theta }. \n\]\n\n(8.4)\n\nThis implies that the poles of \( \psi \) are simple. We differentiate both sides of (8.4) to obtain\n\n\[ \n{\psi }^{\prime }\left( x\right) = - \mathop{\sum }\limits_{\theta }\frac{{b}^{T}{E}_{\theta }b}{{\left( x - \theta \right) }^{2}} \n\]\n\nand then observe, using (8.3), that the right side here is \( - {b}^{T}{\left( xI - A\right) }^{-2}b \) . Thus\n\n\[ \n{\psi }^{\prime }\left( x\right) = - {b}^{T}{\left( xI - A\right) }^{-2}b. \n\]\n\nSince \( {b}^{T}{\left( xI - A\right) }^{-2}b \) is the squared length of \( {\left( xI - A\right) }^{-1}b \), it follows that \( {\psi }^{\prime }\left( x\right) < 0 \) whenever \( x \) is not a pole of \( \psi \) . This implies that each zero of \( \psi \) must be simple.\n\nSuppose that \( \theta \) and \( \tau \) are consecutive poles of \( \psi \) . Since these poles are simple, it follows that \( \psi \) is a strictly decreasing function on the interval \( \left\lbrack {\theta ,\tau }\right\rbrack \) and that it is positive for values of \( x \) in this interval sufficiently close to \( \theta \), and negative when \( x \) is close enough to \( \tau \) . Accordingly, this interval contains exactly one zero of \( \psi \) .
|
Yes
|
Theorem 9.1.1 Let \( A \) be a real symmetric \( n \times n \) matrix and let \( B \) be a principal submatrix of \( A \) with order \( m \times m \) . Then, for \( i = 1,\ldots, m \) ,\n\n\[ {\theta }_{n - m + i}\left( A\right) \leq {\theta }_{i}\left( B\right) \leq {\theta }_{i}\left( A\right) . \]
|
Proof. We prove the result by induction on \( n \) . If \( m = n \), there is nothing to prove. Assume \( m = n - 1 \) . Then, by Lemma 8.13.1, for some \( i \) we have\n\n\[ \frac{\phi \left( {B, x}\right) }{\phi \left( {A, x}\right) } = {e}_{i}^{T}{\left( xI - A\right) }^{-1}{e}_{i} \]\n\nDenote this rational function by \( \psi \) . By Theorem 8.13.3, \( \psi \left( x\right) \) has only simple poles and zeros, and each consecutive pair of poles is separated by a single zero. The poles of \( \psi \) are zeros of \( \phi \left( {A, x}\right) \), and the zeros of \( \psi \) are zeros of \( \phi \left( {B, x}\right) \) .\n\nFor a real symmetric matrix \( M \) and a real number \( \lambda \), let \( n\left( {\lambda, M}\right) \) denote the number of indices \( i \) such that \( {\theta }_{i}\left( M\right) \geq \lambda \) . We consider the behaviour of \( n\left( {\lambda, A}\right) - n\left( {\lambda, B}\right) \) as \( \lambda \) decreases. If \( \lambda \) is greater than the largest pole of \( \psi \), then the difference \( n\left( {\lambda, A}\right) - n\left( {\lambda, B}\right) \) is initially zero. Since each pole is simple, the value of this difference increases by one each time \( \lambda \) passes through a pole of \( \psi \), and since each zero is simple, its value decreases by one as it passes through a zero. As there is exactly one zero between each pair of poles, this difference alternates between 0 and 1 . Therefore, it follows that \( {\theta }_{i + 1}\left( A\right) \leq {\theta }_{i}\left( B\right) \leq {\theta }_{i}\left( A\right) \) for all \( i \) .\n\nNow, suppose that \( m < n - 1 \) . Then \( B \) is a principal submatrix of a principal submatrix \( C \) of \( A \) with order \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) . By induction we have\n\n\[ {\theta }_{n - 1 - m + i}\left( C\right) \leq {\theta }_{i}\left( B\right) \leq {\theta }_{i}\left( C\right) \]\n\nBy what we have already shown,\n\n\[ {\theta }_{i + 1}\left( A\right) \leq {\theta }_{i}\left( C\right) \leq {\theta }_{i}\left( A\right) \]\n\nand it follows that the eigenvalues of \( B \) interlace the eigenvalues of \( A \) .
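Interlacing is easy to observe numerically. In this sketch (ours, not the text's), several principal submatrices of a random symmetric matrix are checked against the inequalities of the theorem.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
A = rng.standard_normal((n, n))
A = A + A.T                                  # random real symmetric matrix
a = np.sort(np.linalg.eigvalsh(A))[::-1]     # theta_1(A) >= ... >= theta_n(A)

for m in (3, 5, 7):
    idx = rng.choice(n, size=m, replace=False)
    B = A[np.ix_(idx, idx)]                  # a random m x m principal submatrix
    b = np.sort(np.linalg.eigvalsh(B))[::-1]
    for i in range(m):
        # theta_{n-m+i}(A) <= theta_i(B) <= theta_i(A), allowing for rounding
        assert a[n - m + i] - 1e-9 <= b[i] <= a[i] + 1e-9
```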
|
Yes
|
Lemma 9.2.1 There are no Hamilton cycles in the Petersen graph P.
|
Proof. First note that there is a Hamilton cycle in \( P \) if and only if there is an induced \( {C}_{10} \) in \( L\left( P\right) \) . Now, \( L\left( P\right) \) has eigenvalues \( 4,2, - 1 \), and -2 with respective multiplicities \( 1,5,4 \), and 5 (see Exercise 8.9). In particular, \( {\theta }_{7}\left( {L\left( P\right) }\right) = - 1 \) . The eigenvalues of \( {C}_{10} \) are \[ 2,\frac{1 + \sqrt{5}}{2},\frac{-1 + \sqrt{5}}{2},\frac{1 - \sqrt{5}}{2},\frac{-1 - \sqrt{5}}{2}, - 2, \] where 2 and -2 are simple eigenvalues and the others all have multiplicity two. Therefore, \( {\theta }_{7}\left( {C}_{10}\right) \approx - {0.618034} \) . Hence \( {\theta }_{7}\left( {C}_{10}\right) > {\theta }_{7}\left( {L\left( P\right) }\right) \), and so \( {C}_{10} \) is not an induced subgraph of \( L\left( P\right) \) .
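The eigenvalue comparison can be reproduced directly. The construction below is ours: the Petersen graph is built as the Kneser graph \( K(5,2) \), and its line graph from the edge set.

```python
import itertools
import numpy as np

# Petersen graph P as the Kneser graph K(5,2).
V = list(itertools.combinations(range(5), 2))
E = [(u, v) for u, v in itertools.combinations(V, 2) if not set(u) & set(v)]
assert len(V) == 10 and len(E) == 15

# Line graph L(P): vertices are edges of P, adjacent when sharing an end.
m = len(E)
L = np.zeros((m, m))
for i, j in itertools.combinations(range(m), 2):
    if set(E[i]) & set(E[j]):
        L[i, j] = L[j, i] = 1

theta_L = np.sort(np.linalg.eigvalsh(L))[::-1]    # descending order
theta_C = np.sort([2 * np.cos(2 * np.pi * k / 10) for k in range(10)])[::-1]

# theta_7(C10) > theta_7(L(P)), so interlacing rules out an induced C10.
assert np.isclose(theta_L[6], -1.0)
assert np.isclose(theta_C[6], (1 - np.sqrt(5)) / 2)
assert theta_C[6] > theta_L[6]
```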
|
Yes
|
Lemma 9.2.2 The edges of \( {K}_{10} \) cannot be partitioned into three copies of the Petersen graph.
|
Proof. Let \( P \) and \( Q \) be two copies of Petersen’s graph on the same vertex set and with no edges in common. Let \( R \) be the subgraph of \( {K}_{10} \) formed by the edges not in \( P \) or \( Q \) . We show that \( R \) is bipartite.\n\nLet \( {U}_{P} \) be the eigenspace of \( A\left( P\right) \) with eigenvalue 1, and let \( {U}_{Q} \) be the corresponding eigenspace for \( A\left( Q\right) \) . Then \( {U}_{P} \) and \( {U}_{Q} \) are 5-dimensional subspaces of \( {\mathbb{R}}^{10} \) . Since both subspaces lie in \( {\mathbf{1}}^{ \bot } \), they must have a nonzero vector \( u \) in common. Then\n\n\[ A\left( R\right) u = \left( {J - I - A\left( P\right) - A\left( Q\right) }\right) u = \left( {J - I}\right) u - {2u} = - {3u}, \]\n\nand so -3 is an eigenvalue of \( A\left( R\right) \) . Since \( R \) is cubic, it follows from Theorem 8.8.2 that it must be bipartite.
|
Yes
|
Lemma 9.3.1 Let \( \pi \) be an equitable partition of the graph \( X \), with characteristic matrix \( P \), and let \( B = A\left( {X/\pi }\right) \) . Then \( {AP} = {PB} \) and \( B = \) \( {\left( {P}^{T}P\right) }^{-1}{P}^{T}{AP} \) .
|
Proof. We will show that for all vertices \( u \) and cells \( {C}_{j} \) we have\n\n\[ \n{\left( AP\right) }_{uj} = {\left( PB\right) }_{uj} \n\] \n\nThe \( {uj} \) -entry of \( {AP} \) is the number of neighbours of \( u \) that lie in \( {C}_{j} \) . If \( u \in {C}_{i} \), then this number is \( {b}_{ij} \) . Now, the \( {uj} \) -entry of \( {PB} \) is also \( {b}_{ij} \) , because the only nonzero entry in the \( u \) -row of \( P \) is a 1 in the \( i \) -column. Therefore, \( {AP} = {PB} \), and so \n\n\[ \n{P}^{T}{AP} = {P}^{T}{PB} \n\] \nsince \( {P}^{T}P \) is invertible, the second claim follows.
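As an illustration (ours, not part of the text): for the Petersen graph, the distance partition from a vertex is equitable, and the identities of the lemma can be checked in floating-point arithmetic.

```python
import itertools
import numpy as np

# Petersen graph as the Kneser graph K(5,2).
V = list(itertools.combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])

# Distance partition from vertex 0: {0}, its neighbours, everything else.
cells = [[0],
         [j for j in range(n) if A[0, j]],
         [j for j in range(n) if j != 0 and not A[0, j]]]
P = np.zeros((n, len(cells)))
for c, cell in enumerate(cells):
    for v in cell:
        P[v, c] = 1.0

B = np.linalg.inv(P.T @ P) @ P.T @ A @ P   # the quotient matrix A(X/pi)
assert np.allclose(A @ P, P @ B)           # AP = PB, so the partition is equitable
assert np.allclose(B, [[0, 3, 0], [1, 0, 2], [0, 1, 2]])
```

The quotient's eigenvalues 3, 1, and -2 are all eigenvalues of the Petersen graph, in keeping with Theorem 9.3.3.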
|
Yes
|
Lemma 9.3.2 Let \( X \) be a graph with adjacency matrix \( A \) and let \( \pi \) be a partition of \( V\left( X\right) \) with characteristic matrix \( P \) . Then \( \pi \) is equitable if and only if the column space of \( P \) is \( A \) -invariant.
|
Proof. The column space of \( P \) is \( A \) -invariant if and only if there is a matrix \( B \) such that \( {AP} = {PB} \) . If \( \pi \) is equitable, then by the previous lemma we may take \( B = A\left( {X/\pi }\right) \) . Conversely, if there is such a matrix \( B \) , then every vertex in cell \( {C}_{i} \) is adjacent to \( {b}_{ij} \) vertices in cell \( {C}_{j} \), and hence \( \pi \) is equitable.
|
Yes
|
Theorem 9.3.3 If \( \pi \) is an equitable partition of a graph \( X \), then the characteristic polynomial of \( A\left( {X/\pi }\right) \) divides the characteristic polynomial of \( A\left( X\right) \) .
|
Proof. Let \( P \) be the characteristic matrix of \( \pi \) and let \( B = A\left( {X/\pi }\right) \) . If \( X \) has \( n \) vertices, then let \( Q \) be an \( n \times \left( {n - \left| \pi \right| }\right) \) matrix whose columns, together with those of \( P \), form a basis for \( {\mathbb{R}}^{n} \) . Then there are matrices \( C \) and \( D \) such that\n\n\[ \n{AQ} = {PC} + {QD} \n\]\n\nfrom which it follows that\n\n\[ \nA\left( \begin{array}{ll} P & Q \end{array}\right) = \left( \begin{array}{ll} P & Q \end{array}\right) \left( \begin{matrix} B & C \\ 0 & D \end{matrix}\right) . \n\]\n\nSince \( \left( \begin{array}{ll} P & Q \end{array}\right) \) is invertible, it follows that \( \det \left( {{xI} - B}\right) \) divides \( \det \left( {{xI} - A}\right) \) as asserted.
|
Yes
|
Lemma 9.3.4 If \( X \) is a regular graph with a perfect 1-code, then -1 is an eigenvalue of \( A\left( X\right) \) .
|
Proof. Let \( S \) be a perfect 1-code and consider the partition \( \pi \) of \( V\left( X\right) \) into \( S \) and its complement. If \( X \) is \( k \) -regular, then the definition of a perfect 1-code implies that \( \pi \) is equitable with quotient matrix\n\n\[ \left( \begin{matrix} 0 & k \\ 1 & k - 1 \end{matrix}\right) \]\n\nwhich has characteristic polynomial\n\n\[ x\left( {x - \left( {k - 1}\right) }\right) - k = \left( {x - k}\right) \left( {x + 1}\right) . \]\n\nTherefore, -1 is an eigenvalue of the quotient matrix, and hence an eigenvalue of \( A\left( X\right) \) .
|
Yes
|
Theorem 9.4.1 Let \( X \) be a vertex-transitive graph and \( \pi \) the orbit partition of some subgroup \( G \) of \( \operatorname{Aut}\left( X\right) \) . If \( \pi \) has a singleton cell \( \{ u\} \), then every eigenvalue of \( X \) is an eigenvalue of \( X/\pi \) .
|
Proof. If \( f \) is a function on \( V\left( X\right) \), and \( g \in \operatorname{Aut}\left( X\right) \), then let \( {f}^{g} \) denote the function given by\n\n\[ \n{f}^{g}\left( x\right) = f\left( {x}^{g}\right) .\n\]\n\nIt is routine to show that if \( f \) is an eigenvector of \( X \) with eigenvalue \( \theta \), then so is \( {f}^{g} \).\n\nIf \( f \) is an eigenvector of \( X \) with eigenvalue \( \theta \), let \( \widehat{f} \) denote the average of \( {f}^{g} \) over the elements \( g \in G \) . Then \( \widehat{f} \) is constant on the cells of \( \pi \), and provided that it is nonzero, it, too, is an eigenvector of \( X \) with eigenvalue \( \theta \).\n\nNow, consider any eigenvector \( h \) of \( X \) with eigenvalue \( \theta \) . Since \( h \neq 0 \) , there is some vertex \( v \) such that \( h\left( v\right) \neq 0 \) . Let \( g \in \operatorname{Aut}\left( X\right) \) be an element such that \( {u}^{g} = v \) and let \( f = {h}^{g} \) . Then \( f\left( u\right) \neq 0 \), and so\n\n\[ \n\widehat{f}\left( u\right) = f\left( u\right) \neq 0\n\]\n\nThus \( \widehat{f} \) is nonzero and constant on the cells of \( \pi \). Therefore, following the discussion in Section 9.3, this implies that \( \widehat{f} \) is the lift of some eigenvector of \( X/\pi \) with the same eigenvalue. Therefore, every eigenvalue of \( X \) is an eigenvalue of \( X/\pi \) .
|
Yes
|
Lemma 9.4.2 We have\n\n\[ \mathop{\sum }\limits_{{i = 0}}^{h}{\left( -1\right) }^{h - i}\left( \begin{matrix} h \\ i \end{matrix}\right) \left( \begin{matrix} a - i \\ k \end{matrix}\right) = {\left( -1\right) }^{h}\left( \begin{array}{l} a - h \\ k - h \end{array}\right) . \]
|
Proof. Denote the sum in the statement of the lemma by \( f\left( {a, h, k}\right) \) . Since\n\n\[ \left( \begin{matrix} a - i \\ k \end{matrix}\right) = \left( \begin{matrix} a - i - 1 \\ k \end{matrix}\right) + \left( \begin{matrix} a - i - 1 \\ k - 1 \end{matrix}\right) \]\n\nwe have\n\n\[ f\left( {a, h, k}\right) = f\left( {a - 1, h, k}\right) + f\left( {a - 1, h, k - 1}\right) . \]\n\nWe have \( f\left( {k, h, k}\right) = {\left( -1\right) }^{h} \), while \( f\left( {a, h,0}\right) = 0 \) if \( h > 0 \) and \( f\left( {a,0,0}\right) = 1 \) . Thus it follows by induction that\n\n\[ f\left( {a, h, k}\right) = {\left( -1\right) }^{h}\left( \begin{array}{l} a - h \\ k - h \end{array}\right) \]\n\nas claimed.
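The identity can also be verified by brute force. The sketch below (ours) checks it for all \( a \geq k \geq h \geq 0 \) up to \( a = 12 \), relying on Python's convention that \( \binom{n}{k} = 0 \) when \( k > n \geq 0 \).

```python
from math import comb

# Exhaustively verify Lemma 9.4.2 over a small range of parameters.
for a in range(13):
    for k in range(a + 1):
        for h in range(k + 1):
            lhs = sum((-1) ** (h - i) * comb(h, i) * comb(a - i, k)
                      for i in range(h + 1))
            assert lhs == (-1) ** h * comb(a - h, k - h)
```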
|
Yes
|
Theorem 9.5.1 Let \( A \) be a real symmetric \( n \times n \) matrix and let \( R \) be an \( n \times m \) matrix such that \( {R}^{T}R = {I}_{m} \) . Set \( B \) equal to \( {R}^{T}{AR} \) and let \( {v}_{1},\ldots ,{v}_{m} \) be an orthogonal set of eigenvectors for \( B \) such that \( B{v}_{i} = {\theta }_{i}\left( B\right) {v}_{i} \) . Then:\n\n(a) The eigenvalues of \( B \) interlace the eigenvalues of \( A \) .\n\n(b) If \( {\theta }_{i}\left( B\right) = {\theta }_{i}\left( A\right) \), then there is an eigenvector \( y \) of \( B \) with eigenvalue \( {\theta }_{i}\left( B\right) \) such that \( {Ry} \) is an eigenvector of \( A \) with eigenvalue \( {\theta }_{i}\left( A\right) \) .\n\n(c) If \( {\theta }_{i}\left( B\right) = {\theta }_{i}\left( A\right) \) for \( i = 1,\ldots ,\ell \), then \( R{v}_{i} \) is an eigenvector for \( A \) with eigenvalue \( {\theta }_{i}\left( A\right) \) for \( i = 1,\ldots ,\ell \) .\n\n(d) If the interlacing is tight, then \( {AR} = {RB} \) .
|
Proof. Let \( {u}_{1},\ldots ,{u}_{n} \) be an orthogonal set of eigenvectors for \( A \) such that \( A{u}_{i} = {\theta }_{i}\left( A\right) {u}_{i} \) . Let \( {U}_{j} \) be the span of \( {u}_{1},\ldots ,{u}_{j} \) and let \( {V}_{j} \) be the span of \( {v}_{1},\ldots ,{v}_{j} \) . For any \( i \), the space \( {V}_{i} \) has dimension \( i \), and the space \( {R}^{T}{U}_{i - 1} \) has dimension at most \( i - 1 \) . Therefore, there is a nonzero vector \( y \) in the intersection of \( {V}_{i} \) and \( {\left( {R}^{T}{U}_{i - 1}\right) }^{ \bot } \) . Then \( {y}^{T}{R}^{T}{u}_{j} = 0 \) for \( j = 1,\ldots, i - 1 \) , and therefore \( {Ry} \in {U}_{i - 1}^{ \bot } \) . By Rayleigh’s inequalities this yields\n\n\[ \n{\theta }_{i}\left( A\right) \geq \frac{{\left( Ry\right) }^{T}{ARy}}{{\left( Ry\right) }^{T}{Ry}} = \frac{{y}^{T}{By}}{{y}^{T}y} \geq {\theta }_{i}\left( B\right) .\n\]\n\n(9.3)\n\nWe can now apply the same argument to the symmetric matrices \( - A \) and \( - B \) and conclude that \( {\theta }_{i}\left( {-B}\right) \leq {\theta }_{i}\left( {-A}\right) \), and hence that \( {\theta }_{n - m + i}\left( A\right) \leq {\theta }_{i}\left( B\right) \) . Therefore, the eigenvalues of \( B \) interlace those of \( A \), and we have proved (a).\n\nIf equality holds in (9.3), then \( y \) must be an eigenvector for \( B \) and \( {Ry} \) an eigenvector for \( A \), both with eigenvalue \( {\theta }_{i}\left( A\right) = {\theta }_{i}\left( B\right) \) . This proves (b).\n\nWe prove (c) by induction on \( \ell \) . If \( \ell = 1 \), we may take \( y \) in (9.3) to be \( {v}_{1} \), and deduce that \( {AR}{v}_{1} = {\theta }_{1}\left( A\right) R{v}_{1} \) . So we may assume that \( {AR}{v}_{i} = {\theta }_{i}\left( A\right) R{v}_{i} \) for all \( i < \ell \), and hence we may assume that \( {u}_{i} = R{v}_{i} \) for all \( i < \ell \) . But then \( {v}_{\ell } \) lies in the intersection of \( {V}_{\ell } \) and \( {\left( {R}^{T}{U}_{\ell - 1}\right) }^{ \bot } \), and thus we may choose \( y \) to be \( {v}_{\ell } \), which proves (c).\n\nIf the interlacing is tight, then there is some index \( j \) such that \( {\theta }_{i}\left( B\right) = {\theta }_{i}\left( A\right) \) for \( i \leq j \) and \( {\theta }_{i}\left( {-B}\right) = {\theta }_{i}\left( {-A}\right) \) for \( i \leq m - j \) . Applying (c), we see that for all \( i \),\n\n\[ \n{RB}{v}_{i} = {\theta }_{i}\left( B\right) R{v}_{i} = {AR}{v}_{i} \n\]\n\nand since \( {v}_{1},\ldots ,{v}_{m} \) is a basis for \( {\mathbb{R}}^{m} \), this implies that \( {RB} = {AR} \) .
|
Yes
|
Lemma 9.6.1 If \( P \) is the characteristic matrix of a partition \( \pi \) of the vertices of the graph \( X \), then the eigenvalues of \( {\left( {P}^{T}P\right) }^{-1}{P}^{T}{AP} \) interlace the eigenvalues of \( A \) . If the interlacing is tight, then \( \pi \) is equitable.
|
Proof. The problem with \( P \) is that its columns form an orthogonal set, not an orthonormal set, but fortunately this can easily be fixed. Recall that \( {P}^{T}P \) is a diagonal matrix with positive diagonal entries, and so there is a diagonal matrix \( D \) such that \( {D}^{2} = {P}^{T}P \) . If \( R = P{D}^{-1} \), then\n\n\[ \n{R}^{T}{AR} = {D}^{-1}{P}^{T}{AP}{D}^{-1} = D\left( {{D}^{-2}{P}^{T}{AP}}\right) {D}^{-1}, \n\]\n\nand so \( {R}^{T}{AR} \) is similar to \( {\left( {P}^{T}P\right) }^{-1}{P}^{T}{AP} \) . Furthermore,\n\n\[ \n{R}^{T}R = {D}^{-1}\left( {{P}^{T}P}\right) {D}^{-1} = {D}^{-1}\left( {D}^{2}\right) {D}^{-1} = I, \n\]\n\nand therefore by Theorem 9.5.1, the eigenvalues of \( {R}^{T}{AR} \) interlace the eigenvalues of \( A \) . If the interlacing is tight, then the column space of \( R \) is \( A \) -invariant, and since \( R \) and \( P \) have the same column space, it follows that \( \pi \) is equitable.
|
Yes
|
Lemma 9.6.2 Let \( X \) be a \( k \) -regular graph on \( n \) vertices with least eigenvalue \( \tau \) . Then\n\n\[ \alpha \left( X\right) \leq \frac{n\left( {-\tau }\right) }{k - \tau } \]
|
Proof. The inequality follows on unpacking (9.4). If \( S \) is an independent set with size meeting this bound, then the partition with \( S \) and \( V\left( X\right) \smallsetminus S \) as its cells is equitable, and so each vertex not in \( S \) has exactly \( k\left| S\right| /\left( {n - \left| S\right| }\right) = \) \( - \tau \) neighbours in \( S \) .
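For the Petersen graph the ratio bound is tight: \( n = 10 \), \( k = 3 \), and \( \tau = -2 \) give \( \alpha \leq 4 \), and an independent set of size 4 exists. A brute-force check (our own sketch, not from the text):

```python
import itertools
import numpy as np

# Petersen graph as the Kneser graph K(5,2).
V = list(itertools.combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])

k = 3
tau = np.sort(np.linalg.eigvalsh(A))[0]      # least eigenvalue, here -2
bound = n * (-tau) / (k - tau)               # ratio bound: 10*2/5 = 4

def independent(S):
    return all(A[u, v] == 0 for u, v in itertools.combinations(S, 2))

# Exhaust all 2^10 vertex subsets to find the independence number.
alpha = max(len(S) for r in range(n + 1)
            for S in itertools.combinations(range(n), r) if independent(S))
assert alpha <= bound + 1e-9
```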
|
Yes
|
Lemma 9.6.3 Let \( X \) be a graph on \( n \) vertices and let \( A \) be a symmetric \( n \times n \) matrix such that \( {A}_{uv} = 0 \) if the vertices \( u \) and \( v \) are not adjacent. Then\n\n\[ \alpha \left( X\right) \leq \min \left\{ {n - {n}^{ + }\left( A\right), n - {n}^{ - }\left( A\right) }\right\} . \]
|
Proof. Let \( S \) be the subgraph of \( X \) induced by an independent set of size \( s \), and let \( B \) be the principal submatrix of \( A \) with rows and columns indexed by the vertices in \( S \) . (So \( B \) is the zero matrix.) By interlacing,\n\n\[ {\theta }_{n - s + i}\left( A\right) \leq {\theta }_{i}\left( B\right) \leq {\theta }_{i}\left( A\right) \] \n\nBut of course, \( {\theta }_{i}\left( B\right) = 0 \) for all \( i \) ; hence we infer that\n\n\[ 0 \leq {\theta }_{s}\left( A\right) \] \n\nand that \( {n}^{ - }\left( A\right) \leq n - s \) . We can apply the same argument with \( - A \) in place of \( A \) to deduce that \( {n}^{ + }\left( A\right) \leq n - s \) .
|
Yes
|
Lemma 9.8.1 A fullerene has exactly twelve 5-cycles.
|
Proof. Suppose \( F \) is a fullerene with \( n \) vertices, \( e \) edges, and \( f \) faces. Then \( n, e \), and \( f \) are constrained by Euler’s relation, \( n - e + f = 2 \) . Since \( F \) is cubic, \( {3n} = {2e} \) . Let \( {f}_{r} \) denote the number of faces of \( F \) with size \( r \) . Then\n\n\[ {f}_{5} + {f}_{6} = f = 2 + e - n = 2 + \frac{3}{2}n - n = 2 + \frac{1}{2}n. \]\n\nSince each edge lies in exactly two faces,\n\n\[ 5{f}_{5} + 6{f}_{6} = {2e} = {3n}. \]\n\nSolving these equations implies that \( {f}_{5} = {12} \) .
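The two linear equations can be solved exactly for any vertex count; a small exact-arithmetic check (ours, not the text's) confirms that the pentagon count is always 12 while the hexagon count grows with \( n \).

```python
from fractions import Fraction

def face_counts(n):
    """Solve f5 + f6 = 2 + n/2 and 5*f5 + 6*f6 = 3n by Cramer's rule."""
    c = 2 + Fraction(n, 2)
    g = Fraction(3 * n)
    det = 1 * 6 - 1 * 5                  # determinant of [[1, 1], [5, 6]]
    f5 = (6 * c - g) / det
    f6 = (g - 5 * c) / det
    return f5, f6

# f5 = 12 independently of n; f6 = n/2 - 10.
for n in range(20, 121, 2):
    f5, f6 = face_counts(n)
    assert f5 == 12
    assert f6 == Fraction(n, 2) - 10
```

The case \( n = 20 \) gives \( f_6 = 0 \), the dodecahedron; \( n = 60 \) gives the 20 hexagons of the buckminsterfullerene.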
|
Yes
|
Lemma 9.9.1 If \( X \) is a cubic planar graph with leapfrog graph \( F\left( X\right) \), then \( F\left( X\right) \) has at most half of its eigenvalues positive and at most half of its eigenvalues negative.
|
Proof. Let \( \pi \) be the partition whose cells are the edges of the canonical perfect matching \( M \) of \( F\left( X\right) \). Since \( X \) is cubic, two distinct cells of \( \pi \) are joined by at most one edge. The graph defined on the cells of \( \pi \) where two cells are adjacent if they are joined by an edge is the line graph \( L\left( X\right) \) of \( X \).\n\nLet \( P \) be the characteristic matrix of \( \pi \), let \( A \) be the adjacency matrix of \( F\left( X\right) \), and let \( L \) be the adjacency matrix of \( L\left( X\right) \). Then a straightforward calculation shows that\n\n\[ \n{P}^{T}{AP} = {2I} + L \n\]\n\nBecause the smallest eigenvalue of \( L \) is -2, it follows that \( {P}^{T}{AP} \) is positive semidefinite. If \( R = P/\sqrt{2} \), then \( {R}^{T}{AR} \) is also positive semidefinite, and its eigenvalues interlace the eigenvalues of \( A \). Therefore, if \( F\left( X\right) \) has \( {2m} \) vertices, we have\n\n\[ \n0 \leq {\theta }_{m}\left( {{R}^{T}{AR}}\right) \leq {\theta }_{m}\left( A\right) \n\]\n\nNext we prove a similar bound on \( {\theta }_{m + 1}\left( A\right) \). We use an arbitrary orientation \( \sigma \) of \( X \) to produce an orientation of the edges of the canonical perfect matching \( M \). Suppose \( e \in E\left( X\right) \) and \( {F}_{1} \) and \( {F}_{2} \) are the two faces of \( X \) that contain \( e \). Then \( \left( {e,{F}_{1}}\right) \) and \( \left( {e,{F}_{2}}\right) \) are the end-vertices of an edge of \( M \). We orient it so that it points from \( \left( {e,{F}_{1}}\right) \) to \( \left( {e,{F}_{2}}\right) \) if \( {F}_{2} \) is the face on the right as we move along \( e \) in the direction determined by \( \sigma \). Denote this oriented graph by \( {M}^{\sigma } \).\n\nLet \( Q \) be the incidence matrix of \( {M}^{\sigma } \) and let \( D \) be the incidence matrix of \( {X}^{\sigma } \). Then\n\n\[ \n{Q}^{T}{AQ} = - {D}^{T}D \n\]\n\nwhich implies that \( {Q}^{T}{AQ} \) is negative semidefinite. If \( R = Q/\sqrt{2} \), then \( {R}^{T}{AR} \) is also negative semidefinite, and the eigenvalues of \( {R}^{T}{AR} \) interlace those of \( A \). Therefore,\n\n\[ \n{\theta }_{m + 1}\left( A\right) \leq {\theta }_{1}\left( {{R}^{T}{AR}}\right) \leq 0 \n\]\n\nand the result is proved.
|
Yes
|
Theorem 9.9.2 If \( X \) is a cubic planar graph, then its leapfrog graph \( F\left( X\right) \) has exactly half of its eigenvalues negative. If, in addition, \( X \) has a face of length not divisible by three, then its leapfrog graph \( F\left( X\right) \) also has exactly half of its eigenvalues positive.
|
Proof. By the lemma, the first conclusion follows if \( {\theta }_{m + 1}\left( A\right) \neq 0 \), and the second follows if \( {\theta }_{m}\left( A\right) \neq 0 \) . Suppose to the contrary that \( {\theta }_{m + 1}\left( A\right) = 0 \) . Then by Theorem 9.5.1, there is an eigenvector \( f \) for \( A \) with eigenvalue 0 that sums to zero on each cell of \( \pi \) . Let \( F = {v}_{0},\ldots ,{v}_{r} \) be a face of \( F\left( X\right) \) that is not a special hexagon. Thus each vertex \( {v}_{i} \) is adjacent to \( {v}_{i - 1},{v}_{i + 1} \), and the other vertex \( {w}_{i} \) in the same cell of \( \pi \) . Since \( f \) sums to zero on the cells of \( \pi \), we have \( f\left( {w}_{i}\right) = - f\left( {v}_{i}\right) \) . Since \( f \) has eigenvalue 0, the sum of the values of \( f \) on the neighbours of \( {v}_{i + 1} \) is 0, and similarly for \( {v}_{i + 2} \) . Therefore (performing all subscript arithmetic modulo \( r + 1 \) ), we get \[ f\left( {v}_{i}\right) - f\left( {v}_{i + 1}\right) + f\left( {v}_{i + 2}\right) = 0, \] \[ f\left( {v}_{i + 1}\right) - f\left( {v}_{i + 2}\right) + f\left( {v}_{i + 3}\right) = 0 \] and hence \[ f\left( {v}_{i + 3}\right) = - f\left( {v}_{i}\right) \] If the length of \( F \) is not divisible by six, then \( f \) is constant, and therefore zero, on the vertices of \( F \) . Any cubic planar graph has a face of length less than six, and therefore \( F\left( X\right) \) has a face that is not a special hexagon on which \( f \) is zero. Every edge of \( M \) lies in two special hexagons, and if \( f \) is determined on one special hexagon, the values it takes on any \
|
Yes
|
Lemma 10.1.1 Let \( X \) be an \( \left( {n, k, a, c}\right) \) strongly regular graph. Then the following are equivalent:\n\n(a) \( X \) is not connected,\n\n(b) \( c = 0 \) ,\n\n(c) \( a = k - 1 \) ,\n\n(d) \( X \) is isomorphic to \( m{K}_{k + 1} \) for some \( m > 1 \) .
|
Proof. Suppose that \( X \) is not connected and let \( {X}_{1} \) be a component of \( X \) . A vertex in \( {X}_{1} \) has no common neighbours with a vertex not in \( {X}_{1} \), and so \( c = 0 \) . If \( c = 0 \), then any two neighbours of a vertex \( u \in V\left( X\right) \) must be adjacent, and so \( a = k - 1 \) . Finally, if \( a = k - 1 \), then the component containing any vertex must be a complete graph \( {K}_{k + 1} \), and hence \( X \) is a disjoint union of complete graphs.
|
Yes
|
Lemma 10.2.1 A connected regular graph with exactly three distinct eigenvalues is strongly regular.
|
Proof. Suppose that \( X \) is connected and regular with eigenvalues \( k,\theta \) , and \( \tau \), where \( k \) is the valency. If \( A = A\left( X\right) \), then the matrix polynomial\n\n\[ M \mathrel{\text{:=}} \frac{1}{\left( {k - \theta }\right) \left( {k - \tau }\right) }\left( {A - {\theta I}}\right) \left( {A - {\tau I}}\right) \]\n\nhas all its eigenvalues equal to 0 or 1 . Any eigenvector of \( A \) with eigenvalue \( \theta \) or \( \tau \) lies in the kernel of \( M \), whence we see that the rank of \( M \) is equal to the multiplicity of \( k \) as an eigenvalue. Since \( X \) is connected, this multiplicity is one, and as \( M\mathbf{1} = \mathbf{1} \), it follows that \( M = \frac{1}{n}J \) .\n\nWe have shown that \( J \) is a quadratic polynomial in \( A \), and thus \( {A}^{2} \) is a linear combination of \( I, J \), and \( A \) . Accordingly, \( X \) is strongly regular.
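For the Petersen graph (eigenvalues 3, 1, and -2) the matrix \( M \) of the proof can be computed directly, and it does come out as \( \frac{1}{n}J \). A numerical check (ours, not part of the text):

```python
import itertools
import numpy as np

# Petersen graph as the Kneser graph K(5,2); its eigenvalues are 3, 1, -2.
V = list(itertools.combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])

k, theta, tau = 3.0, 1.0, -2.0
M = (A - theta * np.eye(n)) @ (A - tau * np.eye(n)) / ((k - theta) * (k - tau))

# M = J/n, so A^2 is a linear combination of I, J, and A: strong regularity.
assert np.allclose(M, np.ones((n, n)) / n)
```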
|
Yes
|
Lemma 10.3.1 Let \( X \) be strongly regular with parameters \( \left( {n, k, a, c}\right) \) and distinct eigenvalues \( k,\theta \), and \( \tau \) . Then \[ {m}_{\theta }{m}_{\tau } = \frac{{nk}\bar{k}}{{\left( \theta - \tau \right) }^{2}}. \]
|
Proof. The proof of this lemma is left as an exercise.
|
No
|
Lemma 10.3.2 Let \( X \) be strongly regular with parameters \( \left( {n, k, a, c}\right) \) and eigenvalues \( k,\theta \), and \( \tau \) . If \( {m}_{\theta } = {m}_{\tau } \), then \( k = \left( {n - 1}\right) /2, a = \left( {n - 5}\right) /4 \) , and \( c = \left( {n - 1}\right) /4 \) .
|
Proof. If \( {m}_{\theta } = {m}_{\tau } \), then they both equal \( \left( {n - 1}\right) /2 \), which we denote by \( m \) . Then \( m \) is coprime to \( n \), and therefore it follows from the previous lemma that \( {m}^{2} \) divides \( k\bar{k} \) . Since \( k + \bar{k} = n - 1 \), it must be the case that \( k\bar{k} \leq {\left( n - 1\right) }^{2}/4 = {m}^{2} \), with equality if and only if \( k = \bar{k} \) . Therefore, we must have equality, and so \( k = \bar{k} = m \) . Since \( m\left( {\theta + \tau }\right) = - k \), we see that \( \theta + \tau = a - c = - 1 \), and so \( a = c - 1 \) . Finally, because \( k\left( {k - a - 1}\right) = \bar{k}c \) we see that \( c = k - a - 1 \), and hence \( c = k/2 \) . Putting this all together shows that \( X \) has the stated parameters.
|
Yes
|
Lemma 10.3.4 Let \( X \) be a strongly regular graph with \( p \) vertices, where \( p \) is prime. Then \( X \) is a conference graph.
|
Proof. By Lemma 10.3.1 we have\n\n\[{\left( \theta - \tau \right) }^{2} = \frac{{pk}\bar{k}}{{m}_{\theta }{m}_{\tau }}\]\n\n(10.3)\n\nIf \( X \) is not a conference graph, then \( {\left( \theta - \tau \right) }^{2} \) is a perfect square. But since \( k,\bar{k},{m}_{\theta } \), and \( {m}_{\tau } \) are all nonzero values smaller than \( p \), the right-hand side of (10.3) is not divisible by \( {p}^{2} \), which is a contradiction.
|
Yes
|
Lemma 10.3.5 Let \( X \) be a primitive strongly regular graph with an eigenvalue \( \theta \) of multiplicity \( n/2 \) . If \( k < n/2 \), then the parameters of \( X \) are\n\n\[ \left( {{\left( 2\theta + 1\right) }^{2} + 1,\theta \left( {{2\theta } + 1}\right) ,{\theta }^{2} - 1,{\theta }^{2}}\right) . \]
|
Proof. Since \( {m}_{\theta } = n - 1 - {m}_{\tau } \), we see that \( {m}_{\theta } \neq {m}_{\tau } \), and hence that \( \theta \) and \( \tau \) are integers.

First we will show by contradiction that \( \theta \) must be the eigenvalue of multiplicity \( n/2 \) . Suppose instead that \( {m}_{\tau } = n/2 \) . From above we know that \( {m}_{\tau } = \left( {{n\theta } + k - \theta }\right) /\left( {\theta - \tau }\right) \), and because \( {m}_{\tau } \) divides \( n \), it must also divide \( k - \theta \) . But since \( X \) is primitive, \( 0 < \theta < k \), and so \( 0 < k - \theta < n/2 \), and hence \( {m}_{\tau } \) cannot divide \( k - \theta \) . Therefore, we conclude that \( {m}_{\theta } = n/2 \) .

Now, \( {m}_{\theta } = \left( {{n\tau } + k - \tau }\right) /\left( {\tau - \theta }\right) \), and since \( {m}_{\theta } \) divides \( n \), it must also divide \( k - \tau \) . But since \( - k \leq \tau \), we see that \( k - \tau < n \), and hence it must be the case that \( {m}_{\theta } = k - \tau \) .

Set \( m = {m}_{\theta } = n/2 \) . Then \( {m}_{\tau } = m - 1 \), and since \( \operatorname{tr}A = 0 \) and \( \operatorname{tr}{A}^{2} = {nk} \), we have
\[ k + {m\theta } + \left( {m - 1}\right) \tau = 0,\;{k}^{2} + m{\theta }^{2} + \left( {m - 1}\right) {\tau }^{2} = {2mk}. \]
By expanding the terms \( \left( {m - 1}\right) \tau \) and \( \left( {m - 1}\right) {\tau }^{2} \) in the above expressions and then substituting \( k - m \) for the second occurrence of \( \tau \) in each case we get
\[ 1 + \theta + \tau = 0,\;{\theta }^{2} + {\tau }^{2} = m, \]
respectively. Combining these we get that \( m = {\theta }^{2} + {\left( \theta + 1\right) }^{2} \), and hence that \( k = \theta \left( {{2\theta } + 1}\right) \) .
Finally, we know that \( c - k = {\theta \tau } = - \left( {{\theta }^{2} + \theta }\right) \), and so \( c = {\theta }^{2} \) ; we also know that \( a - c = \theta + \tau = - 1 \), so \( a = {\theta }^{2} - 1 \) . Hence the result is proved.
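The parameter family produced by Lemma 10.3.5 can be verified directly. The sketch below (an illustrative computation, not the proof) confirms that \( \left( {{\left( 2\theta + 1\right) }^{2} + 1,\theta \left( {{2\theta } + 1}\right) ,{\theta }^{2} - 1,{\theta }^{2}}\right) \) satisfies \( k\left( {k - a - 1}\right) = \bar{k}c \) for small \( \theta \), and that \( \theta \) and \( \tau = - \left( {\theta + 1}\right) \) are the roots of \( {x}^{2} - \left( {a - c}\right) x - \left( {k - c}\right) \) ; for \( \theta = 1 \) the family gives the Petersen graph parameters \( \left( {{10},3,0,1}\right) \) .

```python
# Check the parameter family of Lemma 10.3.5 for small t = theta.
def half_case_parameters(t):
    """Parameters ((2t+1)^2 + 1, t(2t+1), t^2 - 1, t^2)."""
    n = (2 * t + 1) ** 2 + 1
    return n, t * (2 * t + 1), t * t - 1, t * t

for t in range(1, 6):
    n, k, a, c = half_case_parameters(t)
    assert k * (k - a - 1) == (n - k - 1) * c        # basic srg identity
    for x in (t, -(t + 1)):                          # theta and tau
        assert x * x - (a - c) * x - (k - c) == 0    # roots of x^2-(a-c)x-(k-c)
    print(t, (n, k, a, c))   # t = 1 gives (10, 3, 0, 1), the Petersen graph
```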
|
Yes
|
Corollary 10.3.6 Let \( X \) be a primitive strongly regular graph with \( {2p} \) vertices, where \( p \) is prime. Then the parameters of \( X \) or its complement are
\[ \left( {{\left( 2\theta + 1\right) }^{2} + 1,\theta \left( {{2\theta } + 1}\right) ,{\theta }^{2} - 1,{\theta }^{2}}\right) . \]
|
Proof. By taking the complement if necessary, we may assume that \( k \leq \left( {n - 1}\right) /2 \) . The graph \( X \) cannot be a conference graph (because for a conference graph \( n = 2{m}_{\tau } + 1 \) is odd), and hence \( \theta \) and \( \tau \) are integers. Since \( {\left( \theta - \tau \right) }^{2}{m}_{\theta }{m}_{\tau } = {2pk}\bar{k} \), we see that either \( p \) divides \( {\left( \theta - \tau \right) }^{2} \) or \( p \) divides \( {m}_{\theta }{m}_{\tau } \) . If \( p \) divides \( {\left( \theta - \tau \right) }^{2} \), then so must \( {p}^{2} \), and hence \( p \) must divide \( k\bar{k} \) . Since \( k \leq \left( {{2p} - 1}\right) /2 \), this implies that \( p \) must divide \( \bar{k} \), and hence \( \bar{k} = p \) and \( k = p - 1 \) . It is left as Exercise 1 to show that there are no primitive strongly regular graphs with \( k \) and \( \bar{k} \) coprime. On the other hand, if \( p \) divides \( {m}_{\theta }{m}_{\tau } \), then either \( {m}_{\theta } = p \) or \( {m}_{\tau } = p \), and the result follows from Lemma 10.3.5.
|
No
|
Lemma 10.4.3 Let \( L \) be a Latin square of order \( n \) and let \( X \) be the graph of the corresponding \( {OA}\left( {3, n}\right) \) . Then the maximum size \( \alpha \left( X\right) \) of an independent set of \( X \) is at most \( n \), and the chromatic number \( \chi \left( X\right) \) of \( X \) is at least \( n \) . If \( L \) is the multiplication table of a group, then \( \chi \left( X\right) = n \) if and only if \( \alpha \left( X\right) = n \) .
|
Proof. If we identify the \( {n}^{2} \) vertices of \( X \) with the \( {n}^{2} \) cells of the Latin square \( L \), then it is clear that an independent set of \( X \) can contain at most one cell from each row of \( L \) . Therefore, \( \alpha \left( X\right) \leq n \), which immediately implies that \( \chi \left( X\right) \geq n \) .

Assume now that \( L \) is the multiplication table of a group \( G \) and denote the \( {ij} \) -entry of \( L \) by \( i \circ j \) . If \( \chi \left( X\right) = n \), then each of the \( n \) colour classes is an independent set of size \( n \), and so \( \alpha \left( X\right) = n \) . Conversely, suppose that \( \alpha \left( X\right) = n \) . An independent set of size \( n \) contains precisely one cell from each row of \( L \), and hence we can describe such a set by giving for each row the column number of that cell. Therefore, an independent set of size \( n \) is determined by giving a permutation \( \pi \) of \( N \mathrel{\text{:=}} \{ 1,2,\ldots, n\} \) such that the map \( i \mapsto i \circ {i}^{\pi } \) is also a permutation of \( N \) . But if \( k \in N \), then the permutation \( {\pi }_{k} \) that maps \( i \) to \( k \circ {i}^{\pi } \) will also provide an independent set either equal to or disjoint from the one determined by \( \pi \) . Thus we obtain \( n \) disjoint independent sets of size \( n \), and so \( X \) has chromatic number \( n \) .
|
Yes
|
Lemma 10.4.4 Let \( L \) be a Latin square arising from the multiplication table of the cyclic group \( G \) of order \( {2n} \) and let \( X \) be the graph of the corresponding \( {OA}\left( {3,{2n}}\right) \) . Then \( X \) has no independent sets of size \( {2n} \) .
|
Proof. Suppose on the contrary that \( X \) does have an independent set of \( {2n} \) vertices, described by the permutation \( \pi \) . There is a unique element \( \tau \) of order two in \( G \), and so all the remaining nonidentity elements can be paired with their inverses. It follows that the product of all the entries in \( G \) is equal to \( \tau \) . Hence, since \( G \) is abelian,
\[ \tau = \mathop{\prod }\limits_{{i \in G}}i \circ {i}^{\pi } = \mathop{\prod }\limits_{{i \in G}}i \circ \mathop{\prod }\limits_{{i \in G}}{i}^{\pi } = {\tau }^{2} = 1. \]
This contradiction shows that such a permutation \( \pi \) cannot exist.
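An independent set of size \( {2n} \) as in the proof amounts to a transversal of the Cayley table: a permutation \( \pi \) such that \( i \mapsto i \circ {i}^{\pi } \) is again a permutation. The brute-force sketch below (small cases only; the function name is ours) searches the Cayley table of the cyclic group \( {\mathbb{Z}}_{m} \), written additively, and illustrates that a transversal exists exactly when \( m \) is odd, in line with Lemma 10.4.4 for even \( m \).

```python
# Exhaustive search for a transversal of the Cayley table of Z_m:
# a permutation p with all values i + p(i) (mod m) distinct.
from itertools import permutations

def has_transversal(m):
    elts = range(m)
    return any(len({(i + p[i]) % m for i in elts}) == m
               for p in permutations(elts))

for m in range(2, 7):
    print(m, has_transversal(m))   # True for odd m, False for even m
```

For odd \( m \) the identity permutation already works, since \( i \mapsto {2i} \) is a bijection of \( {\mathbb{Z}}_{m} \) when \( \gcd \left( {2, m}\right) = 1 \).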
|
Yes
|
Theorem 10.4.5 An \( {OA}\left( {k, n}\right) \) is extendible if and only if its graph has chromatic number \( n \) .
|
Proof. Let \( X \) be the graph of an \( {OA}\left( {k, n}\right) \) . Suppose first that \( \chi \left( X\right) = n \) . Then the \( {n}^{2} \) vertices of \( X \) fall into \( n \) colour classes \( V\left( X\right) = {V}_{1} \cup \cdots \cup {V}_{n} \) . Define the \( \left( {k + 1}\right) \) st row of the orthogonal array by setting the entry to \( i \) if the column corresponds to a vertex in \( {V}_{i} \) . Conversely, if the orthogonal array is extendible, then the sets of columns on which the \( \left( {k + 1}\right) \) st row is constant form \( n \) independent sets of size \( n \) in \( X \) .
|
Yes
|
Lemma 10.6.1 Let \( X \) be strongly regular with eigenvalues \( k > \theta > \tau \) . Suppose that \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \) such that \( {\mathbf{1}}^{T}x = 0 \) . If \( {Bx} = 0 \), then \( {\sigma }_{1} \in \{ \theta ,\tau \} \), and if \( {Bx} \neq 0 \), then \( \tau < {\sigma }_{1} < \theta \) .
|
Proof. Since \( {\mathbf{1}}^{T}x = 0 \), we have
\[ \left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = - {B}^{T}{Bx}, \]
and since \( X \) is strongly regular with eigenvalues \( k,\theta \), and \( \tau \), we have
\[ \left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = \left( {{A}_{1} - {\theta I}}\right) \left( {{A}_{1} - {\tau I}}\right) x. \]
Therefore, if \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \),
\[ \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) x = - {B}^{T}{Bx}. \]
If \( {Bx} = 0 \), then \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) = 0 \) and \( {\sigma }_{1} \in \{ \theta ,\tau \} \) . If \( {Bx} \neq 0 \), then \( {B}^{T}{Bx} \neq 0 \), and so \( x \) is an eigenvector for the positive semidefinite matrix \( {B}^{T}B \) with eigenvalue \( - \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) \) . It follows that \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) < 0 \), whence \( \tau < {\sigma }_{1} < \theta \) .
|
Yes
|
Theorem 10.6.3 Let \( X \) be an \( \left( {n, k, a, c}\right) \) strongly regular graph. Then \( \sigma \) is a local eigenvalue of one subconstituent of \( X \) if and only if \( a - c - \sigma \) is a local eigenvalue of the other, with equal multiplicities.
|
Proof. Suppose that \( {\sigma }_{1} \) is a local eigenvalue of \( {A}_{1} \) with eigenvector \( x \) . Then, since \( {\mathbf{1}}^{T}x = 0 \),
\[ B{A}_{1} + {A}_{2}B = \left( {a - c}\right) B + {cJ} \]
implies that
\[ {A}_{2}{Bx} = \left( {a - c}\right) {Bx} - {\sigma }_{1}{Bx} = \left( {a - c - {\sigma }_{1}}\right) {Bx}. \]
Therefore, since \( {Bx} \neq 0 \), it is an eigenvector of \( {A}_{2} \) with eigenvalue \( a - c - {\sigma }_{1} \) . Since \( {\mathbf{1}}^{T}B = \left( {k - 1 - a}\right) {\mathbf{1}}^{T} \), we also have \( {\mathbf{1}}^{T}{Bx} = 0 \), and so \( a - c - {\sigma }_{1} \) is a local eigenvalue for \( {A}_{2} \) .

A similar argument shows that if \( {\sigma }_{2} \) is a local eigenvalue of \( {A}_{2} \) with eigenvector \( y \), then \( a - c - {\sigma }_{2} \) is a local eigenvalue of \( {A}_{1} \) with eigenvector \( {B}^{T}y \) .

Finally, note that the mapping \( B \) from the \( {\sigma }_{1} \) -eigenspace of \( {A}_{1} \) into the \( \left( {a - c - {\sigma }_{1}}\right) \) -eigenspace of \( {A}_{2} \) is injective, and the mapping \( {B}^{T} \) from the \( \left( {a - c - {\sigma }_{1}}\right) \) -eigenspace of \( {A}_{2} \) into the \( {\sigma }_{1} \) -eigenspace of \( {A}_{1} \) is also injective. Therefore, the dimensions of these two subspaces are equal.
|
Yes
|
Theorem 10.6.4 The Clebsch graph is the unique strongly regular graph with parameters \( \left( {{16},5,0,2}\right) \) .
|
Proof. Suppose that \( X \) is a \( \left( {{16},5,0,2}\right) \) strongly regular graph, which therefore has eigenvalues \( 5,1 \), and \( -3 \) . Let \( {X}_{2} \) denote the second subconstituent of \( X \) . This is a cubic graph on 10 vertices, and so has an eigenvalue 3 with eigenvector \( \mathbf{1} \) . All its other eigenvectors are orthogonal to \( \mathbf{1} \) . Since 0 is the only eigenvalue of the first subconstituent, the only other eigenvalues that \( {X}_{2} \) can have are \( 1, - 3 \), and the local eigenvalue \( -2 \) . Since \( -1 \) is not in this set, \( {X}_{2} \) cannot have \( {K}_{4} \) as a component, and so \( {X}_{2} \) is connected. This implies that its diameter is at least two; therefore, \( {X}_{2} \) has at least three eigenvalues. Hence the spectrum of \( {X}_{2} \) is not symmetric about zero, and so \( {X}_{2} \) is not bipartite. Consequently, \( - 3 \) is not an eigenvalue of \( {X}_{2} \) . Therefore, \( {X}_{2} \) is a connected cubic graph with exactly the three eigenvalues 3, 1, and \( -2 \) . By Lemma 10.2.1 it is strongly regular, and hence isomorphic to the Petersen graph. The neighbours in \( {X}_{2} \) of any fixed vertex of the first subconstituent form an independent set of size four in \( {X}_{2} \) . Because the Petersen graph has exactly five independent sets of size four, each vertex of the first subconstituent is adjacent to precisely one of these independent sets. Therefore, we conclude that \( X \) is uniquely determined by its parameters.
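The eigenvalues and multiplicities quoted in the proof follow from the standard formulas for a strongly regular graph: \( \theta \) and \( \tau \) are the roots of \( {x}^{2} - \left( {a - c}\right) x - \left( {k - c}\right) \), and \( {m}_{\theta },{m}_{\tau } = \frac{1}{2}\left( {\left( {n - 1}\right) \mp \frac{{2k} + \left( {n - 1}\right) \left( {a - c}\right) }{\theta - \tau }}\right) \) . The sketch below (our helper, restricted to parameter sets with integral eigenvalues) computes the spectrum of a \( \left( {{16},5,0,2}\right) \) graph.

```python
# Spectrum of an (n, k, a, c) strongly regular graph, assuming the
# eigenvalues theta and tau are integers (not the conference-graph case).
from math import isqrt

def srg_spectrum(n, k, a, c):
    disc = (a - c) ** 2 + 4 * (k - c)          # (theta - tau)^2
    s = isqrt(disc)
    assert s * s == disc, "integral eigenvalues assumed"
    theta, tau = ((a - c) + s) // 2, ((a - c) - s) // 2
    m_theta = ((n - 1) - (2 * k + (n - 1) * (a - c)) // s) // 2
    m_tau = (n - 1) - m_theta
    return (k, (theta, m_theta), (tau, m_tau))

print(srg_spectrum(16, 5, 0, 2))   # (5, (1, 10), (-3, 5))
```

So a \( \left( {{16},5,0,2}\right) \) graph has spectrum \( \left\{ {5,{1}^{\left( {10}\right) },{\left( -3\right) }^{\left( 5\right) }}\right\} \), as used above.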
|
Yes
|
Lemma 10.7.2 If \( k \geq {m}_{\theta } \), then \( \tau \) is an eigenvalue of the first subconstituent of \( X \) with multiplicity at least \( k - {m}_{\theta } \) .
|
Proof. Let \( U \) denote the space of functions on \( V\left( X\right) \) that sum to zero on each subconstituent of \( X \) relative to \( u \) . This space has dimension \( n - 3 \) . Let \( T \) be the space spanned by the eigenvectors of \( X \) with eigenvalue \( \tau \) that sum to zero on \( V\left( {X}_{1}\right) \) ; this has dimension \( n - m - 2 \) and is contained in \( U \) . Let \( N \) denote the space of functions on \( V\left( X\right) \) that sum to zero and have their support contained in \( V\left( {X}_{1}\right) \) ; this has dimension \( k - 1 \) and is also contained in \( U \) . If \( k > m \), then \( \dim N + \dim T > \dim U \), and therefore \( \dim N \cap T \geq k - m \) . Each function in \( N \cap T \) is an eigenvector of \( X \) with eigenvalue \( \tau \), and its restriction to \( V\left( {X}_{1}\right) \) is an eigenvector of \( {X}_{1} \) with the same eigenvalue.
|
Yes
|
Lemma 10.7.3 If \( k \geq {m}_{\theta } \), then
\[ \left( {{m}_{\theta } - 1}\right) \left( {{ka} - {a}^{2} - \left( {k - {m}_{\theta }}\right) {\tau }^{2}}\right) - {\left( a + \left( k - {m}_{\theta }\right) \tau \right) }^{2} \geq 0. \]
|
Proof. We know that \( a \) is an eigenvalue of \( {A}_{1} \) with multiplicity at least one, and that \( \tau \) is an eigenvalue with multiplicity at least \( k - m \) . This leaves \( m - 1 \) eigenvalues as yet unaccounted for; we denote them by \( {\sigma }_{1},\ldots ,{\sigma }_{m - 1} \) . Then
\[ 0 = \operatorname{tr}\left( {A}_{1}\right) = a + \left( {k - m}\right) \tau + \mathop{\sum }\limits_{i}{\sigma }_{i} \]
and
\[ {ka} = \operatorname{tr}\left( {A}_{1}^{2}\right) = {a}^{2} + \left( {k - m}\right) {\tau }^{2} + \mathop{\sum }\limits_{i}{\sigma }_{i}^{2}. \]
By the Cauchy-Schwarz inequality,
\[ \left( {m - 1}\right) \mathop{\sum }\limits_{i}{\sigma }_{i}^{2} \geq {\left( \mathop{\sum }\limits_{i}{\sigma }_{i}\right) }^{2}, \]
with equality if and only if the \( m - 1 \) eigenvalues \( {\sigma }_{i} \) are all equal. Using the two equations above, we obtain the inequality in the statement of the lemma.
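The bound of Lemma 10.7.3 is easy to evaluate on concrete graphs. The sketch below (the examples and helper name are ours) checks it for two strongly regular graphs with \( k \geq {m}_{\theta } \) : the complement of the Petersen graph, an \( \left( {{10},6,3,4}\right) \) graph with \( \theta = 1,{m}_{\theta } = 4,\tau = - 2 \), and the complement of the Clebsch graph, a \( \left( {{16},{10},6,6}\right) \) graph with \( \theta = 2,{m}_{\theta } = 5,\tau = - 2 \) ; the latter attains equality.

```python
# The left-hand side of Lemma 10.7.3:
# (m-1)(ka - a^2 - (k-m)tau^2) - (a + (k-m)tau)^2, with m = m_theta.
def subconstituent_bound(k, a, tau, m_theta):
    m = m_theta
    return ((m - 1) * (k * a - a * a - (k - m) * tau * tau)
            - (a + (k - m) * tau) ** 2)

print(subconstituent_bound(6, 3, -2, 4))    # Petersen complement: 2 >= 0
print(subconstituent_bound(10, 6, -2, 5))   # Clebsch complement: 0 (equality)
```

By the equality condition in the Cauchy-Schwarz step, the value 0 for the Clebsch complement means the remaining \( m - 1 \) local eigenvalues of its first subconstituent are all equal.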
|
Yes
|
Lemma 10.7.4 If \( k < {m}_{\theta } \), then
\[ \left( {{m}_{\theta } - 1}\right) \left( {{ka} - {a}^{2} - \left( {k - {m}_{\theta }}\right) {\tau }^{2}}\right) - {\left( a + \left( k - {m}_{\theta }\right) \tau \right) }^{2} > 0. \]
|
Proof. Define the polynomial \( p\left( x\right) \) by
\[ p\left( x\right) \mathrel{\text{:=}} \left( {m - 1}\right) \left( {{ka} - {a}^{2} - \left( {k - m}\right) {x}^{2}}\right) - {\left( a + \left( k - m\right) x\right) }^{2}. \]
Then
\[ p\left( x\right) = \left( {m - 1}\right) {ka} - m{a}^{2} + {2a}\left( {m - k}\right) x + \left( {k - 1}\right) \left( {m - k}\right) {x}^{2}, \]
and after some computation, we find that its discriminant is
\[ - {4a}\left( {m - k}\right) \left( {m - 1}\right) k\left( {k - 1 - a}\right) . \]
Since \( k < m \) and \( 1 < m \), we see that this is negative unless \( a = 0 \) . If \( a = 0 \), then
\[ p\left( x\right) = \left( {k - 1}\right) \left( {m - k}\right) {x}^{2}, \]
and consequently \( p\left( \tau \right) \neq 0 \), unless \( \tau = 0 \) .

If \( a = 0 \) and \( \tau = 0 \), then \( X \) is the complete bipartite graph \( {K}_{k, k} \) with eigenvalues \( k,0 \), and \( - k \) . However, if \( \tau = 0 \), then \( \theta = - k \) and \( m = 1 \), which contradicts the condition that \( k < m \) .
|
Yes
|
Corollary 10.7.6 There is no strongly regular graph with parameter set \( \left( {{28},9,0,4}\right) \) .
|
Proof. The parameter set \( \left( {{28},9,0,4}\right) \) is feasible, and a strongly regular graph with these parameters would have spectrum
\[ \left\{ {9,{1}^{\left( {21}\right) },{\left( -5\right) }^{\left( 6\right) }}\right\} . \]
But if \( k = 9,\theta = 1 \), and \( \tau = - 5 \), then
\[ {\theta }^{2}\tau - {2\theta }{\tau }^{2} - {\tau }^{2} - {k\tau } + k{\theta }^{2} + {2k\theta } = - 8, \]
and hence there is no such graph.
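The arithmetic in the proof can be checked directly. The sketch below (the function name is ours; the expression is taken verbatim from the proof) evaluates it at \( k = 9,\theta = 1,\tau = - 5 \) and confirms the value \( -8 \), so the nonnegativity condition it encodes fails for \( \left( {{28},9,0,4}\right) \) .

```python
# Evaluate the expression from the proof of Corollary 10.7.6 at the
# eigenvalues of a putative (28, 9, 0, 4) strongly regular graph.
def corollary_expression(k, theta, tau):
    return (theta ** 2 * tau - 2 * theta * tau ** 2 - tau ** 2
            - k * tau + k * theta ** 2 + 2 * k * theta)

print(corollary_expression(9, 1, -5))   # -5 - 50 - 25 + 45 + 9 + 18 = -8
```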
|
Yes
|