Legend has it that the wonderful colored Easter eggs with ornaments and spots are brought by the cute Easter bunny! Usually it is very careful and tries to stay out of people's sight: it brings the festive eggs at dawn and then runs away. But it has decided to meet you and even make friends! Come play with the bunny, look through its collection of Easter eggs, and help it choose an outfit for the holiday!
Barovier&Toso Exagon
Suspended cascades and modern fireflies, in a thread of transparent or coloured reflections. Transparent lotus leaves, embellished with tiny gold flakes or in pale colors, float in the air, lit by LEDs which highlight their forms. Three or five energy-saving halogen lamps are embedded in the chrome-plated plate. The LED holder has the same finish (available as an option). Available colors: crystal, bluastro, liquid citron, crystal/gold.
The history of Barovier: a small company, a family, a great story since 1295. Everything took place on an island, an exotic, wondrous place, like every island: Murano.
\begin{document}
\maketitle
\begin{abstract}
The point of this note is to prove that the secrecy function attains its maximum at $y=1$ on all known extremal even unimodular lattices. This is a special case of a conjecture by Belfiore and Sol\'e. Further, we will give a very simple method to verify or disprove the conjecture on any given unimodular lattice.
\end{abstract}
\section{Introduction}
Belfiore and Oggier defined in \cite{belfioggiswire} the secrecy gain
\[
\max_{y\in \mathbb{R}, 0<y}\frac{\Theta_{\mathbb{Z}^n}(yi)}{\Theta_{\Lambda}(yi)},
\]
where
\[
\Theta_{\Lambda}(z)=\sum_{x\in \Lambda}e^{\pi i ||x||^2z},
\]
as a new lattice invariant to measure how much confusion the eavesdropper will experience while the lattice $\Lambda$ is used in Gaussian wiretap coding. The function $\Xi_{\Lambda}(y)=\frac{\Theta_{\mathbb{Z}^n}(yi)}{\Theta_{\Lambda}(yi)}$ is called the secrecy function. Belfiore and Sol\'e then conjectured in \cite{belfisole} that the secrecy function attains its maximum at $y=1$, which would then be the value of the secrecy gain. The secrecy gain was further studied by Oggier, Sol\'e and Belfiore in \cite{belfisoleoggis}. The main point of this note is to prove the following theorem:
\begin{theorem}\label{main}
The secrecy function attains its maximum at $y=1$ on all known extremal even unimodular lattices.
\end{theorem}
The method used here applies to any given even unimodular lattice (and also to some unimodular lattices that are not even). This will be discussed in its own section.
\section{Preliminaries and Lemmas}
For an excellent source on theta functions, see e.g. Chapter 10 of Stein and Shakarchi's book \cite{steinshakarchi}. Define the theta function
\[
\Theta(z\mid\tau)=\sum_{n=-\infty}^{\infty}e^{\pi i n^2\tau}e^{2\pi i n z}.
\]
Now
\[
\Theta(z\mid \tau)=\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n-1}e^{2\pi i z})(1+q^{2n-1}e^{-2\pi i z}),
\]
where $q=e^{\pi i \tau}$.
The functions $\vartheta_2$, $\vartheta_3$ and $\vartheta_4$ can be written with the help of $\Theta(z\mid \tau)$ in the following way:
\begin{alignat*}{1}
\vartheta_2(\tau) & =e^{\pi i \tau/4}\Theta\left(\frac{\tau}{2}\mid\tau\right)\\
\vartheta_3(\tau) &=\Theta(0\mid \tau)\\
\vartheta_4(\tau) & =\Theta\left(\frac{1}{2}\mid \tau\right).
\end{alignat*}
Using the product representation for the $\Theta(z\mid \tau)$ function this reads
\begin{alignat*}{1}
\vartheta_2(\tau) & =e^{\pi i \tau/4}\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n-1}e^{2\pi i\tau/2})(1+q^{2n-1}e^{-2\pi i \tau/2})\\ & =e^{\pi i \tau/4}\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n})(1+q^{2n-2})\\
\vartheta_3(\tau) &=\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n-1})(1+q^{2n-1})=\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n-1})^2\\
\vartheta_4(\tau) & =\Theta\left(\frac{1}{2}\mid \tau\right)=\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n-1}e^{2\pi i /2})(1+q^{2n-1}e^{-2\pi i /2})\\ & =\prod_{n=1}^{\infty}(1-q^{2n})(1-q^{2n-1})^2.
\end{alignat*}
A lattice is called unimodular if its determinant is $\pm 1$ and the norms are integral, i.e., $||x||^2\in \mathbb{Z}$ for all vectors $x$ in the lattice. Further, it is called even if $||x||^2$ is even for all $x$; otherwise it is called odd. A lattice can be even unimodular only if its dimension is divisible by $8$; odd unimodular lattices have no such restriction.
Let us now have a brief look at the even unimodular lattices. Write the dimension as $n=24m+8k$, where $k\in\{0,1,2\}$. Then the theta series of the lattice can be written as a polynomial in the Eisenstein series $E_4$ and the discriminant function $\Delta$:
\[
\Theta=E_4^{3m+k}+\sum_{j=1}^m b_j E_4^{3(m-j)+k}\Delta^j.
\]
Since $E_4=\frac{1}{2}\left(\vartheta_2^8+\vartheta_3^8+\vartheta_4^8\right)$ and $\Delta=\frac{1}{256}\vartheta_2^8\vartheta_3^8\vartheta_4^8$, the theta function of an even unimodular lattice can easily be written as a polynomial in these basic theta functions. Furthermore, the secrecy function can be written as a simple rational function of $\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}$:
\begin{multline}\label{polynomiksi}
\frac{\Theta_{\mathbb{Z}^n}}{\Theta_{\Lambda}}=\frac{\vartheta_3^{n}}{E_4^{3m+k}+\sum_{j=1}^m b_jE_4^{3(m-j)+k}\Delta^j}\\=\left(\frac{\frac{1}{2^{3m+k}}\left(\vartheta_2^8+\vartheta_3^8+\vartheta_4^8\right)^{3m+k}+\sum_{j=1}^m\frac{b_j}{256^j\cdot 2^{3(m-j)+k}}\left(\vartheta_2^8+\vartheta_3^8+\vartheta_4^8\right)^{3(m-j)+k}\vartheta_2^{8j}\vartheta_3^{8j}\vartheta_4^{8j}}{\vartheta_3^{n}}\right)^{-1}\\=\left(\left(1-\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}\right)^{3m+k}+\sum_{j=1}^m\frac{b_j}{256^j}\left(1-\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}\right)^{3(m-j)+k}\cdot\left(\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}\right)^{2j}\right)^{-1}.
\end{multline}
Hence, finding the maximum of the secrecy function is equivalent to finding the minimum of the denominator of the previous expression in the range of $\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}$.
\begin{definition}
An even unimodular lattice is called extremal if the norm of the shortest vector on the lattice is $2m+2$.
\end{definition}
It is worth noticing that the definition of extremal has changed. Earlier (see e.g. \cite{conwayodlyzkosloane}), extremal meant that the shortest vector has norm $\left\lfloor\frac{n}{8}\right\rfloor+1$. With the earlier definition the highest-dimensional self-dual extremal lattice is in dimension $24$ (see \cite{conwayodlyzkosloane}), while with the current definition there is a self-dual extremal lattice in dimension $80$ (for a construction, see \cite{bachocnebe}).
Let us now turn to general unimodular lattices, in particular odd ones. Write $n=8\mu+\nu$, where $n$ is the dimension of the lattice. As before, the theta function of any unimodular lattice (whether even or odd) can be written as a polynomial (see e.g. (3) in \cite{conwayodlyzkosloane}):
\[
\Theta_{\Lambda}=\sum_{r=0}^{\mu}a_r\vartheta_3^{n-8r}\Delta_8^r,
\]
where $\Delta_8=\frac{1}{16}\vartheta_2^4\vartheta_4^4$. Hence
\begin{equation}\label{polynomiksi2}
\frac{\Theta_{\mathbb{Z}^n}}{\Theta_{\Lambda}}=\left(\sum_{r=0}^{\mu}\frac{a_r}{16^r}\frac{\vartheta_2^{4r}\vartheta_4^{4r}}{\vartheta_3^{8r}}\right)^{-1}.
\end{equation}
Again, to determine the maximum of the function, it suffices to consider the denominator polynomial in the range of $\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}$.
The following lemma is easy, and follows from the basic properties of the theta functions. The proof is here for completeness.
\begin{lemma}
Let $y>0$. The function
\[
f(y)=\frac{\vartheta_4^4(yi)\vartheta_2^4(yi)}{\vartheta_3^8(yi)}
\]
has the symmetry $f(y)=f\left(\frac{1}{y}\right)$.
\end{lemma}
\begin{proof}
The formulas (21) in \cite{conwaysloane} give the following:
\begin{alignat*}{1}
\vartheta_2\left(\frac{i}{y}\right)= & \vartheta_2\left(\frac{-1}{yi}\right)=\sqrt{y}\vartheta_4(yi)\\
\vartheta_3\left(\frac{i}{y}\right)= & \vartheta_3\left(\frac{-1}{yi}\right)=\sqrt{y}\vartheta_3(yi)\\
\vartheta_4\left(\frac{i}{y}\right)= & \vartheta_4\left(\frac{-1}{yi}\right)=\sqrt{y}\vartheta_2(yi).
\end{alignat*}
Now
\[
f(y)=\frac{\vartheta_2^4(yi)\vartheta_4^4(yi)}{\vartheta_3^8(yi)}=\frac{y^{-2}\vartheta_4^4\left(\frac{i}{y}\right)y^{-2}\vartheta_2^4\left(\frac{i}{y}\right)}{y^{-4}\vartheta_3^8\left(\frac{i}{y}\right)}=f\left(\frac{1}{y}\right),
\]
which was to be proved.
\end{proof}
We may now formulate a lemma that is crucial in the proof of the main theorem:
\begin{lemma}\label{rajoite}
Let $y>0$. The function
\[
\frac{\vartheta_4^4(yi)\vartheta_2^4(yi)}{\vartheta_3^8(yi)}
\]
attains its maximum when $y=1$. This maximum is $\frac{1}{4}$.
\end{lemma}
\begin{proof}
To shorten the notation, write $g=e^{-\pi y}$. Notice that when $y$ increases, $g$ decreases and vice versa. Using the product representations for the functions $\vartheta_2(yi)$, $\vartheta_3(yi)$ and $\vartheta_4(yi)$, we obtain
\begin{multline*}
\frac{\vartheta_2(yi)\vartheta_4(yi)}{\vartheta_3(yi)^2}\\=\frac{\left(g^{1/4}\prod_{n=1}^{\infty}(1-g^{2n})(1+g^{2n})(1+g^{2n-2})\right)\left(\prod_{n=1}^{\infty}(1-g^{2n})(1-g^{2n-1})^2\right)}{\left(\prod_{n=1}^{\infty}(1-g^{2n})(1+g^{2n-1})^2\right)^2}\\
=g^{1/4}\left(\prod_{n=1}^{\infty}(1+g^{2n})(1+g^{2n-2})\right)\left(\prod_{n=1}^{\infty}(1-g^{2n-1})^2\right)\left(\prod_{n=1}^{\infty}(1+g^{2n-1})^{-4}\right).
\end{multline*}
Now
\[
\prod_{n=1}^{\infty}(1+g^{2n-2})=2\prod_{n=1}^{\infty}(1+g^{2n})
\]
and
\[
\prod_{n=1}^{\infty}(1+g^{2n-1})^{-4}=\prod_{n=1}^{\infty}(1+(-g)^n)^4=\left(\prod_{n=1}^{\infty}(1+g^{2n})\right)^4\left(\prod_{n=1}^{\infty}(1-g^{2n-1})\right)^4.
\]
Combining all these pieces together, we obtain
\begin{multline*}
\frac{\vartheta_2(yi)\vartheta_4(yi)}{\vartheta_3(yi)^2}\\=2g^{1/4}\left(\prod_{n=1}^{\infty}(1+g^{2n})^2\right)\left(\prod_{n=1}^{\infty}(1-g^{2n-1})^2\right)\left(\prod_{n=1}^{\infty}(1+g^{2n})\right)^4\left(\prod_{n=1}^{\infty}(1-g^{2n-1})\right)^4\\=2g^{1/4}\left(\prod_{n=1}^{\infty}(1+g^{2n})^6\right)\left(\prod_{n=1}^{\infty}(1-g^{2n-1})^6\right)=2\left(g^{1/24}\prod_{n=1}^{\infty}(1+(-g)^n)\right)^6.
\end{multline*}
Since the factor $2$ is just a constant, it suffices to consider the function $g^{1/24}\prod_{n=1}^{\infty}(1+(-g)^n)$. To find the maximum, let us first differentiate the function:
\[
\frac{\partial}{\partial g}\left(g^{1/24}\prod_{n=1}^{\infty}(1+(-g)^n)\right)=\left(g^{1/24}\prod_{n=1}^{\infty}(1+(-g)^n)\right)\left(\frac{1}{24g}+\sum_{n=1}^{\infty}\frac{n(-1)^ng^{n-1}}{1+(-g)^n}\right).
\]
Since $g^{1/24}\prod_{n=1}^{\infty}(1+(-g)^n)$ is always positive, it suffices to analyze the factor $\frac{1}{24g}+\sum_{n=1}^{\infty}\frac{n(-1)^ng^{n-1}}{1+(-g)^n}$ to find the maxima. We wish to prove that the derivative has only one zero: if so, this zero has to be located at $y=1$, because the original function has the symmetry $f(y)=f\left(\frac{1}{y}\right)$, so a zero at $y$ yields a zero at $\frac{1}{y}$, and these two points are distinct unless $y=1$. To show that the derivative has only one zero, let us consider the second derivative, or more precisely the derivative of the factor $\frac{1}{24g}+\sum_{n=1}^{\infty}\frac{n(-1)^ng^{n-1}}{1+(-g)^n}$. Now
\[
\frac{\partial}{\partial g}\left(\frac{1}{24g}+\sum_{n=1}^{\infty}\frac{n(-1)^ng^{n-1}}{1+(-g)^n}\right)=-\frac{1}{24g^2}+\sum_{n=1}^{\infty}\left(\frac{n(n-1)(-1)^ng^{n-2}}{1+(-g)^n}-\frac{n^2g^{2(n-1)}}{(1+(-g)^n)^2}\right).
\]
Now we wish to show that this is negative when $g\in(0,1)$. Let us first look at the term $-\frac{1}{24g^2}$ and the terms of the sum corresponding to the values $n=1$ and $n=2$. Their sum is
\[
-\frac{1}{24g^2}-\frac{1}{(1-g)^2}+\frac{2-2g^2}{(1+g^2)^2}=\frac{-73g^6+98g^5-51g^4-92g^3+21g^2+2g-1}{24g^2(1-g)^2(1+g^2)^2}.
\]
The denominator is positive when $g\in (0,1)$, and the numerator has two real roots, which are both negative (approximately $g_1\approx-0.719566$ and $g_2\approx-0.196021$). For positive values of $g$, the numerator is always negative; in particular, it is negative when $g\in (0,1)$.
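As a numerical sanity check (an illustration, not part of the argument), the claims about this numerator are easy to verify, e.g. with NumPy:

```python
import numpy as np

# Numerator of the combined terms: -73 g^6 + 98 g^5 - 51 g^4 - 92 g^3 + 21 g^2 + 2 g - 1
p = [-73, 98, -51, -92, 21, 2, -1]

roots = np.roots(p)
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-8)
print(real_roots)  # two real roots, approximately -0.7196 and -0.1960

# The numerator is negative for all g in (0, 1)
g = np.linspace(0.001, 0.999, 999)
assert np.all(np.polyval(p, g) < 0)
```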
Let us now consider the terms with $n>2$ and show that their sum is negative. Since the original function has the symmetry $y\rightarrow \frac{1}{y}$, and we are only considering real values of the theta series, we may limit ourselves to the interval $y\in [1,\infty)$, which means that $g\in (0,e^{-\pi}]$. Let us show that the sum of two consecutive terms, where the first corresponds to an odd value of $n$ and the second to an even value, is negative. This sum is
\[
-\frac{n(n-1)g^{n-2}}{1-g^n}-\frac{n^2g^{2(n-1)}}{(1-g^n)^2}+\frac{n(n+1)g^{n-1}}{1+g^{n+1}}-\frac{(n+1)^2g^{2n}}{(1+g^{n+1})^2}.
\]
Let us estimate this from above, extracting the common factor $g^{n-2}n$:
\begin{multline*}
<g^{n-2}n\left(-\frac{n-1}{1-g^n}-\frac{ng^n}{(1-g^n)^2}+\frac{(n+1)g}{1+g^{n+1}}-\frac{(n+1)g^{n+2}}{(1+g^{n+1})^2}\right)\\=g^{n-2}n\left(-\frac{n-1+g^n}{(1-g^n)^2}+\frac{(n+1)g}{(1+g^{n+1})^2}\right)<g^{n-2}n\left(\frac{-(n-1)-g^n+(n+1)g}{(1+g^{n+1})^2}\right)<0,
\end{multline*}
when
\[
(n-1)+g^n>(n+1)g.
\]
Since $(n-1)+g^n>n-1$ and $(n+1)g\leq (n+1)e^{-\pi}<\frac{n+1}{10}<n-1$ when $n\geq 2$, this proves that the second derivative is negative, and hence that the first derivative has only one zero. This zero is at $y=1$, and since the second derivative is negative, this point is in fact the maximum of the function. The maximum value is
\[
\frac{\vartheta_2^4(i)\vartheta_4^4(i)}{\vartheta_3^8(i)}=\frac{1}{4}.
\]
\end{proof}
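The lemma is also easy to check numerically (an illustration, not part of the proof): truncating the series $\vartheta_2(yi)=2\sum_{n\geq 0}q^{(n+1/2)^2}$, $\vartheta_3(yi)=1+2\sum_{n\geq 1}q^{n^2}$, $\vartheta_4(yi)=1+2\sum_{n\geq 1}(-1)^nq^{n^2}$ with $q=e^{-\pi y}$ gives machine-precision values:

```python
import math

# The terms decay like q^(n^2), so a few dozen terms reach machine precision.
def theta2(y, N=30):
    q = math.exp(-math.pi * y)
    return 2 * sum(q ** ((n + 0.5) ** 2) for n in range(N))

def theta3(y, N=30):
    q = math.exp(-math.pi * y)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N))

def theta4(y, N=30):
    q = math.exp(-math.pi * y)
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, N))

def f(y):
    return theta2(y) ** 4 * theta4(y) ** 4 / theta3(y) ** 8

print(f(1.0))  # approximately 0.25, the maximum value from the lemma
assert all(f(y) <= f(1.0) for y in [0.3, 0.5, 0.8, 1.25, 2.0, 3.0])
assert abs(f(2.0) - f(0.5)) < 1e-10   # the symmetry f(y) = f(1/y)
```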
\section{Proof of the main theorem}
Let us first treat the lattice $E_8$ as a warm-up case.
We wish to show that
\begin{theorem}
\begin{equation}\label{tavoite}
\Xi_{E_8}(y)\leq \Xi_{E_8}(1).
\end{equation}
\end{theorem}
\begin{proof}
Notice that
\begin{multline*}
\Xi_{E_8}(yi)=\left(\frac{1}{2}\left(\frac{\vartheta_2(yi)^8+\vartheta_3(yi)^8+\vartheta_4(yi)^8}{\vartheta_3(yi)^8}\right)\right)^{-1}\\=\left(\frac{1}{2}+\frac{\left(\vartheta_2^4(yi)+\vartheta_4^4(yi)\right)^2-2\vartheta_2^4(yi)\vartheta_4^4(yi)}{2\vartheta_3(yi)^8}\right)^{-1}=\left(1-\frac{\vartheta_2^4(yi)\vartheta_4^4(yi)}{\vartheta_3^8(yi)}\right)^{-1},
\end{multline*}
where the last equality uses the Jacobi identity $\vartheta_2^4+\vartheta_4^4=\vartheta_3^4$. Therefore, to show that (\ref{tavoite}) holds, it suffices to show that
\[
\frac{\vartheta_2(yi)^4\vartheta_4(yi)^4}{\vartheta_3(yi)^8}\leq \frac{\vartheta_2(i)^4\vartheta_4(i)^4}{\vartheta_3(i)^8},
\]
which is equivalent to showing that
\[
\frac{\vartheta_2(yi)\vartheta_4(yi)}{\vartheta_3(yi)^2}\leq \frac{\vartheta_2(i)\vartheta_4(i)}{\vartheta_3(i)^2},
\]
which is exactly the content of Lemma \ref{rajoite}.
\end{proof}
Let us now concentrate on the other cases. Again, write $z=\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}$. The following table gives the secrecy functions of all known extremal even unimodular lattices (note that these are known only in dimensions $8$--$80$):\\[0.1cm]
\begin{tabular}{c|c}
dimension & $\Xi$ \\ \hline
$8$ & $\left(1-z\right)^{-1}$\\
$16$ & $\left((1-z)^{2}\right)^{-1}$ \\
$24$ & $\left((1-z)^3-\frac{45}{16}z^2\right)^{-1}$ \\
$32$ & $\left((1-z)^4-\frac{15}{4}(1-z)z^2\right)^{-1}$ \\
$40$ & $\left((1-z)^5-\frac{75}{16}(1-z)^2z^2\right)^{-1}$ \\
$48$ & $\left((1-z)^6-\frac{45}{8}(1-z)^3z^2+\frac{3915}{2048}z^4\right)^{-1}$\\
$56$ & $\left((1-z)^7-\frac{105}{16}(1-z)^4z^2+\frac{21735}{4096}(1-z)z^4\right)^{-1}$ \\
$64$ & $\left((1-z)^8-\frac{15}{2}(1-z)^5z^2+\frac{4905}{512}(1-z)^2z^4\right)^{-1}$ \\
$72$ & $\left((1-z)^9-\frac{135}{16}(1-z)^6z^2+\frac{60345}{4096}(1-z)^3z^4-\frac{53325}{32768}z^6\right)^{-1}$\\
$80$ & $\left((1-z)^{10}-\frac{75}{8}(1-z)^7z^2+\frac{42525}{2048}(1-z)^4z^4-\frac{202125}{32768}(1-z)z^6\right)^{-1}$
\end{tabular}\\[0.3cm]
It suffices to show that the first derivative of each denominator is negative on $\left[0,\frac{1}{4}\right]$: then the denominator is decreasing, so the secrecy function is increasing in $z$ and attains its maximum at $z=\frac{1}{4}$, i.e., at $y=1$.
In dimension $16$ the derivative is
\[
-2(1-z)<0.
\]
In dimension $24$ the derivative is
\[
-3(1-z)^2-\frac{45}{8}z<0.
\]
In dimension $32$ the derivative is
\[
-4(1-z)^3-\frac{15}{2}z(1-z)+\frac{15}{4}z^2=-4(1-z)^3-\frac{15}{4}z(2-3z)<0,
\]
since $0\leq z\leq\frac{1}{4}$ implies $2-3z>0$.
In dimension $40$ the derivative is
\[
-5(1-z)^4-\frac{75}{8}z(1-z)^2+\frac{75}{8}(1-z)z^2=-5(1-z)^4-\frac{75}{8}z(1-z)(1-2z)<0,
\]
since $1-2z>0$ when $0\leq z\leq\frac{1}{4}$.
In dimension $48$ the derivative is
\[
-6(1-z)^5-\frac{45}{4}(1-z)^3z+\frac{135}{8}(1-z)^2z^2+\frac{3915}{512}z^3<-6(1-z)^5+\frac{3915}{512}z^3<0,
\]
where the first inequality holds because $-\frac{45}{4}(1-z)^3z+\frac{135}{8}(1-z)^2z^2=-\frac{45}{8}(1-z)^2z(2-5z)\leq 0$ for $z\leq\frac{1}{4}$, and the last expression is easily checked to be negative on $\left[0,\frac{1}{4}\right]$.
In dimension $56$ the derivative is
\begin{multline*}
-7(1-z)^6-\frac{105}{8}z(1-z)^4+\frac{105}{4}(1-z)^3z^2+\frac{21735}{1024}z^3(1-z)-\frac{21735}{4096}z^4\\<-7(1-z)^6+\frac{21735}{1024}z^3(1-z)<0,
\end{multline*}
where the first inequality uses $-\frac{105}{8}z(1-z)^4+\frac{105}{4}(1-z)^3z^2=-\frac{105}{8}z(1-z)^3(1-3z)\leq 0$ for $z\leq\frac{1}{4}$, and the last expression is negative on $\left[0,\frac{1}{4}\right]$.
In dimension $64$ the derivative is
\[
-8(1-z)^7+\frac{75}{2}(1-z)^4z^2-15(1-z)^5z-\frac{4905}{256}z^4(1-z)+\frac{4905}{128}z^3(1-z)^2<0.
\]
In dimension $72$ the derivative is
\[
-9(1-z)^8+\frac{405}{8}(1-z)^5z^2-\frac{135}{8}(1-z)^6z+\frac{60345}{4096}\cdot 4(1-z)^3z^3-\frac{60345}{4096}\cdot 3 (1-z)^2z^4-6\cdot \frac{53325}{32768}z^5,
\]
which has real zeros $z_0\approx 0.3002$ and $z_1\approx 0.5222$, and which is negative when $z<z_0$. Since $z\leq\frac{1}{4}<z_0$, this proves the case.
It remains to consider the dimension 80, where the derivative is
\[
-10(1-z)^9-\frac{75}{4}(1-z)^7z+\frac{525}{8}(1-z)^6z^2+\frac{42525}{512}z^3(1-z)^4-\frac{42525}{512}(1-z)^3z^4+\frac{202125}{32768}z^6-\frac{1212750}{32768}(1-z)z^5,
\]
which has real roots $z_0\approx 0.2889$, $z_1\approx 0.4491$ and $z_2\approx 0.8620$, and which is negative when $z<z_0$. Since $z\leq\frac{1}{4}<z_0$, this proves the theorem.
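As a cross-check on the case analysis above (an illustration, not part of the proof), one can evaluate the ten tabulated denominators numerically and confirm that each is strictly decreasing on $[0,\frac{1}{4}]$:

```python
import numpy as np

z = np.linspace(0.0, 0.25, 1001)

# Denominators of the secrecy functions from the table, indexed by dimension.
denominators = {
    8:  (1 - z),
    16: (1 - z) ** 2,
    24: (1 - z) ** 3 - 45 / 16 * z ** 2,
    32: (1 - z) ** 4 - 15 / 4 * (1 - z) * z ** 2,
    40: (1 - z) ** 5 - 75 / 16 * (1 - z) ** 2 * z ** 2,
    48: (1 - z) ** 6 - 45 / 8 * (1 - z) ** 3 * z ** 2 + 3915 / 2048 * z ** 4,
    56: (1 - z) ** 7 - 105 / 16 * (1 - z) ** 4 * z ** 2 + 21735 / 4096 * (1 - z) * z ** 4,
    64: (1 - z) ** 8 - 15 / 2 * (1 - z) ** 5 * z ** 2 + 4905 / 512 * (1 - z) ** 2 * z ** 4,
    72: (1 - z) ** 9 - 135 / 16 * (1 - z) ** 6 * z ** 2 + 60345 / 4096 * (1 - z) ** 3 * z ** 4
        - 53325 / 32768 * z ** 6,
    80: (1 - z) ** 10 - 75 / 8 * (1 - z) ** 7 * z ** 2 + 42525 / 2048 * (1 - z) ** 4 * z ** 4
        - 202125 / 32768 * (1 - z) * z ** 6,
}

for dim, values in denominators.items():
    assert np.all(np.diff(values) < 0), dim   # strictly decreasing on [0, 1/4]
```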
\section{Method for any given unimodular lattice}
Let $\Lambda$ be a unimodular lattice. Then its secrecy function can be written as $P(z)^{-1}$ for a polynomial $P(z)$, where $z=\frac{\vartheta_2^4\vartheta_4^4}{\vartheta_3^8}$, as shown in (\ref{polynomiksi}) and (\ref{polynomiksi2}). Now, according to Lemma \ref{rajoite}, $0\leq z\leq \frac{1}{4}$ (the lower bound does not follow from the lemma but from the fact that $z$ is a square of a real number). Therefore, it suffices to consider the polynomial $P(z)$ on the interval $\left[0,\frac{1}{4}\right]$. The conjecture is true if and only if the polynomial attains its smallest value on this interval at $\frac{1}{4}$. Investigating whether a given polynomial attains its minimum on a short interval at a given point is a very straightforward operation.
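As an illustration, such a check can be automated with a simple grid search; this is a sketch under the assumption that the user supplies the denominator polynomial $P(z)$ of the lattice in question (the two polynomials below are merely examples):

```python
import numpy as np

def conjecture_holds(P, num=100001):
    """Check whether the denominator polynomial P attains its minimum
    on [0, 1/4] at the right endpoint z = 1/4 (grid-based heuristic)."""
    z = np.linspace(0.0, 0.25, num)
    values = P(z)
    return int(np.argmin(values)) == num - 1

# Example: for the E8 lattice the denominator is P(z) = 1 - z.
assert conjecture_holds(lambda z: 1 - z)

# A hypothetical polynomial with an interior minimum would fail the test.
assert not conjecture_holds(lambda z: (z - 0.1) ** 2 + 1)
```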
Today I'm going to look at one of the things on the internet that went right. Fighunter.com, created by Pseudolonewolf, is a website with mostly Flash games (a form of online game built with Macromedia Flash). The website mostly consists of RPGs, so it would be a must for any RPG nerd. Unknown to most of the public for a long time, Fighunter.com (named after one of the first games on the website) didn't become popularly known among the internet-savvy until the release of its game SMECOF, a side-scrolling shooter with RPG elements.
With every one of his games being unique, with in-depth story lines, self-composed music, and many in-game jokes revolving around several subjects, Fighunter stands out as one of the most brilliant websites out there and definitely deserves more popularity and support.
Wilma “Sis” Jane Freeman Lilly, 63, of Olive Hill, KY, entered into rest, Saturday morning, October 19, 2019 at the University of Louisville Hospital. She was born January 5, 1956, in East Point, KY, a daughter to the late Jesse James Freeman and Geraldine Patrick Freeman.
Wilma was a lifetime member of the Church of Christ at East Point, KY. She co-owned a grocery store and managed Hoss Cats in Grayson. In her spare time she loved attending auctions, going to yard sales and playing Rook.
In addition to her parents, she was preceded in death by one brother; Jimmy Junior Freeman.
She is survived by her husband of 44 years; Charles Wilmer Lilly, two sons; Charles David Lilly and John Travis Lilly, six sisters; Linda Sue Mathis (Lonnie) of Winchester, Judy Ann Mann (Larry) of Virginia, Donna Maddix of Olive Hill, Janice Carroll Hammond (Harlan) of Paintsville, Alma Lee Hammond (Max) of Olive Hill and Lynn Turner Conley (Todd) of Paintsville, three brothers; John Wayne Freeman (Connie) of Kansas, James Robert Freeman (Donna) of Paintsville and Terry S. Freeman (Joyce) of West Liberty, and two grandchildren whom she raised as her own; Christian Blake Lilly and Mayzie Ann Lilly. In addition to these, she leaves behind several special friends; Hobart and Sonia Dean, Willis and Bonnie Bond, Lonnie and Karen Tackett, and the Rayburn Family.
Funeral Services will be conducted at 11:00 AM, Wednesday, October 23, 2019 at the Duvall and Moore Funeral Home & Cremation Service with Rev. Larry Mann officiating. Burial will follow in the Kentucky Veterans Cemetery Northeast in Grayson, KY.
Visitation will be 6:00–9:00PM, Tuesday, October 22, 2019 at the Duvall and Moore Funeral Home & Cremation Service, 149 Whitt Street, Olive Hill, KY and 9:00AM until time of service on Wednesday, October 23, 2019.
Family and friends will be serving as pallbearers.
In lieu of flowers, the family requests that donations be made in Wilma’s memory to Duvall and Moore Funeral Home & Cremation Service to offset final expenses.
Online Condolences may be sent to
To send flowers to the family or plant a tree in memory of Wilma Jane "Sis" (Freeman) Lilly, please visit our floral store.
\begin{document}
\maketitle
\begin{abstract}
An algorithm for computing the Karcher mean of $n$ positive definite matrices is proposed, based on the majorization-minimization (MM) principle. The proposed MM algorithm is parameter-free, does not need to choose step sizes, and has a theoretical guarantee of linear convergence.
\end{abstract}
\section{Introduction}
It is well known that the geometric mean of a set of positive real numbers $(a_1,a_2,\cdots, a_n)$ is defined by $(a_1a_2\cdots a_n)^{\frac{1}{n}}$. However, this definition cannot be naturally generalized to a set of positive definite matrices $\bA_1,\bA_2,\cdots,\bA_n\in\reals^{p\times p}$, since $(\bA_1\bA_2\cdots \bA_n)^{\frac{1}{n}}$ is usually not symmetric, and it is not invariant to permutation; that is, $(\bA_1\bA_2\bA_3\cdots \bA_n)^{\frac{1}{n}} = (\bA_2\bA_1\bA_3\cdots \bA_n)^{\frac{1}{n}}$ generally does not hold.
The Karcher mean~\cite[(6.24)]{bhatia2007positive}~\cite[Section 4]{JeurVV2012} is commonly used as the geometric mean of positive definite matrices, and it is defined by the optimization problem
\begin{equation}\label{eq:main}
\hat{\bX}=\argmin_{\bX}F(\bX),\,\,\,\text{where}\,\,\,F(\bX)=\sum_{i=1}^n \|\log (\bA_i^{-\frac{1}{2}}\bX\bA_i^{-\frac{1}{2}})\|_F^2,
\end{equation}
and $\|\bX\|_F$ denotes the Frobenius norm of $\bX$.
The solution is uniquely defined and satisfies a list of ``desirable properties'' for a matrix geometric mean in~\cite[Section 1]{Ando2004}. We refer the reader to~\cite[Section 4]{JeurVV2012} for a more detailed discussion of its existence, uniqueness, and other properties.
The optimization problem \eqref{eq:main} has been extensively investigated in the literature. For example, the gradient descent method has been applied in \cite{Ferreira2006,RenAbs2011}. A linearization of the gradient descent method in the spirit of
the Richardson iteration is proposed in \cite{Bini2013}, and it is proved to converge locally.
Another natural algorithm is Newton's method, which is considered in
\cite{Groisser2004,Ferreira2013} under the name ``centroid computation''. A stochastic algorithm and a gradient descent method are proposed for the Riemannian $p$-mean in \cite{Nielsen2013}; when $p=2$ the Riemannian $p$-mean is equivalent to the Karcher mean. A very comprehensive survey~\cite{JeurVV2012} presents several algorithms and their variants, including first-order methods such as the steepest descent method and the conjugate descent method, and second-order methods such as the trust region method and the BFGS method.
A common issue of these algorithms is the choice of step sizes in the update formula. While the line search strategy has a convergence guarantee, it is computationally expensive, as observed in \cite{Bini2013}. The strategy of using constant step sizes lacks a theoretical guarantee of convergence to the Karcher mean, unless the initialization is sufficiently close to the solution (see~\cite[Theorem 2.10]{Afsari2013} for the gradient descent method and \cite[Theorem 5.2]{Groisser2004} for Newton's method). The method of gradually decreasing step sizes in~\cite[Algorithm 3]{Fletcher2007} requires an initial step size, and it is unclear how one should choose this parameter so that the algorithm converges to the Karcher mean. While it performs well empirically, the criterion for choosing step sizes proposed in \cite{Bini2013} only has a theoretical guarantee of local convergence.
The main contribution of this paper is to present and analyze a majorization-minimization (MM) algorithm for solving~\eqref{eq:main}. Compared to previous methods, the MM algorithm is different and based on the majorization-minimization principle. It is parameter-free, and it converges linearly to the Karcher mean without choosing step sizes.
The rest of the paper is organized as follows. Section~\ref{sec:background} describes the properties of the Karcher mean and the framework of MM algorithms. Section~\ref{sec:algorithm} proposes the MM algorithm for the Karcher mean and analyzes its property of convergence. Section~\ref{sec:simulations} compares the proposed MM algorithm with some previous algorithms under various settings.
\section{Background}\label{sec:background}
\subsection{MM algorithms}
Majorization-minimization (MM) is a principle of designing algorithms. While the name ``MM'' is proposed in recent works by Hunter and Lange~\cite{Hunter2000_2,MM_tutorial2004}, the idea has a long history. For example, the MM principle has been used in the analysis of Weiszfeld's algorithm~\cite{Weiszfeld1937} for finding the Euclidean median~\cite[Section 3.1]{Kuhn73}, and in the analysis of iterative reweighted least square (IRLS) algorithms for sparse recovery and matrix completion~\cite{Daubechies_iterativelyreweighted,Fornasier_low-rankmatrix}.
Here we present the framework of MM algorithms. Suppose we want to find $\argmin_{x\in\mathcal{A}}f(x)$. An MM algorithm is an iterative procedure given by
\begin{equation}\label{eq:minimization}
x_{k+1}=T(x_k),\,\,\,\, T(x_k)=\arg\min_{x\in\mathcal{A}} g(x,x_k),
\end{equation}
where the majorization surrogate function $g(x,x')$ satisfies
\begin{equation}\label{eq:majorization}\text{$g(x,x')\geq f(x)$ and $g(x',x')=f(x')$.} \end{equation}
We give a general statement on the properties of MM algorithms in Theorem~\ref{thm:MM_convergence}.
\begin{theorem}\label{thm:MM_convergence}
If $f$ and $g$ are differentiable functions, $f$ is bounded from below, and $T$ is continuous, then any accumulation point of the sequence $\{x_k\}$ that lies in the interior of $\mathcal{A}$ is a stationary point of $f(x)$.
\end{theorem}
\begin{proof}
First of all, $f(x_k)$ is a nonincreasing sequence:
\begin{eqnarray}\label{eq:MM_descendence}
f(T(x_{k}))=f(x_{k+1})\leq g(x_{k+1},x_k)\leq g(x_k,x_k)=f(x_k).
\end{eqnarray}
Since $f$ is bounded from below, $f(x_k)$ converges. Therefore, $\lim_{k\rightarrow\infty}f(T(x_{k}))-f(x_{k})=0$. Applying the continuity of $f$ and $T$, for any converging subsequence of $\{x_k\}$, $\{x_{m_k}\}\rightarrow\hat{x}$, we have
$f(T(\hat{x}))=f(\hat{x})$, and the equality in (\ref{eq:MM_descendence}) holds if $x_k$ and $x_{k+1}$ are replaced by $\hat{x}$ and $T(\hat{x})$. Therefore, the second inequality in (\ref{eq:MM_descendence}) achieves equality, which means that $\hat{x}$ is a minimizer of $g(x, \hat{x})$. Since $f'(\hat{x})=g'(x,\hat{x})\big|_{x=\hat{x}}$ ($g(x,\hat{x})-f(x)$ is minimized at $x=\hat{x}$), $f'(\hat{x})=g'(x,\hat{x})\big|_{x=\hat{x}}=0$.\end{proof}
The most important component of designing an MM algorithm is to find an appropriate surrogate function $g(x,x')$. A common choice of $g(x,x')$ is a quadratic function, i.e., $c_1(x')x^2+c_2(x')x+c_3(x')$ (see~\cite[Section 3.1]{Kuhn73} and \cite{Daubechies_iterativelyreweighted,Fornasier_low-rankmatrix}), since it gives a simple update formula in \eqref{eq:minimization}. However, in this paper we will use a surrogate function of the form $\langle\bC_1(\bX'),\bX\rangle+\langle\bC_2(\bX'),\bX^{-1}\rangle+c$, where $
\langle\bA,\bB\rangle= \sum_{i,j=1}^{p}\bA_{ij}\bB_{ij}=\tr(\bA\bB^T)$.
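As a concrete instance of the framework, recall Weiszfeld's algorithm mentioned above: with the quadratic surrogate $\|x-a_i\|\leq \frac{\|x-a_i\|^2}{2\|x'-a_i\|}+\frac{\|x'-a_i\|}{2}$, the update \eqref{eq:minimization} becomes a weighted average. A minimal sketch (the sample points are arbitrary illustrations, and we assume the iterates never hit a data point):

```python
import numpy as np

def weiszfeld(points, iters=1000):
    """Euclidean median via MM: each step exactly minimizes the surrogate
    g(x, x') = sum_i ( ||x - a_i||^2 / (2 d_i) + d_i / 2 ),  d_i = ||x' - a_i||."""
    x = points.mean(axis=0)  # starting point (assumed to avoid the data points)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / d
        x = (points * w[:, None]).sum(axis=0) / w.sum()  # surrogate minimizer
    return x

# Vertices of a convex quadrilateral: the median is the diagonal intersection.
points = np.array([[0.0, 0.0], [10.0, 0.0], [11.0, 8.0], [2.0, 9.0]])
x = weiszfeld(points)

# First-order optimality: the unit vectors toward the data points sum to zero.
grad = ((x - points) / np.linalg.norm(points - x, axis=1)[:, None]).sum(axis=0)
assert np.linalg.norm(grad) < 1e-6
```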
\subsection{Matrix derivatives}\label{sec:diff}
Since the analysis in this paper involves matrix derivatives (see~\cite[Section V.3]{Bhatia1997}), we review its definition and give several examples in this section.
For a function $f: \reals^{p\times p}\rightarrow \reals$, the directional derivative $D f(\bX)(\bH)$ is defined by
\[
D f(\bX)(\bH)=\lim_{t\rightarrow 0} \frac{f(\bX+t\bH)-f(\bX)}{t}.
\]
We say $D f(\bX) =\bY$ if $D f(\bX)(\bH)=\langle \bY, \bH\rangle$.
Next we will give some examples. Since we work with symmetric matrices throughout the paper, we assume that the matrices $\bX$ and $\bA$ are symmetric in the following examples.
A simple example is $f(\bX)=\tr(\bA\bX)$, for which we have $D f(\bX)=\bA$. For the rest of the paper our analysis also involves differentiation of $\langle \bX^{-1}, \bA\rangle$, $\|\log \bX\|_F^2$ and $\tr(\bX\bA\bX\bA)$, and the calculation of their derivatives is as follows. For $f(\bX)=\langle \bX^{-1}, \bA\rangle$, we have
\begin{equation}D f(\bX) = -\bX^{-1}\bA\bX^{-1}.\label{eq:matrix_deri1}\end{equation} The proof of \eqref{eq:matrix_deri1} is as follows:
\begin{align*}
&\Big\langle (\bX+t\bH)^{-1}, \bA\Big\rangle - \Big\langle \bX^{-1}, \bA\Big\rangle
=\Big\langle \big(\bX^{\frac{1}{2}}(\bI+t\bX^{-\frac{1}{2}}\bH\bX^{-\frac{1}{2}})
\bX^{\frac{1}{2}}\big)^{-1} - \bX^{-1}, \bA\Big\rangle\\
=&\Big\langle \bX^{-\frac{1}{2}}\big(\bI+t\bX^{-\frac{1}{2}}\bH\bX^{-\frac{1}{2}}\big)^{-1} \bX^{-\frac{1}{2}}- \bX^{-1}, \bA\Big\rangle\\
\approx & \Big\langle \bX^{-\frac{1}{2}}\big(\bI-t\bX^{-\frac{1}{2}}\bH\bX^{-\frac{1}{2}}\big) \bX^{-\frac{1}{2}}- \bX^{-1}, \bA\Big\rangle
\\=&\Big\langle -t\bX^{-1}\bH\bX^{-1}, \bA\Big\rangle
=\Big\langle -t\bX^{-1}\bA\bX^{-1}, \bH\Big\rangle,
\end{align*}
and in the proof we apply the first-order approximation $(\bI+t\bH)^{-1}\approx \bI-t\bH$.
For $f(\bX)=\|\log \bX\|_F^2$, applying the result in \cite[pg. 218]{bhatia2007positive} we have $D f(\bX) = 2 \bX^{-1}\log\bX.$
For $f(\bX)=\|\bX\bA\|_F^2$, we have $D f(\bX) = 2 \bA\bX\bA$
since
\begin{align*}
&\tr\Big((\bX+ t\bH)\bA(\bX+ t\bH)\bA\Big)-\tr\Big(\bX\bA\bX\bA\Big)
\\ \approx& 2t\,\tr\Big(\bX\bA\bH\bA\Big)
=\langle2t\,\bA\bX\bA,\bH\rangle.
\end{align*}
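These derivative formulas can be verified numerically with central finite differences; the following is an illustrative check with random symmetric test matrices (not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(M):
    return (M + M.T) / 2

p = 4
X = sym(rng.standard_normal((p, p))) + p * np.eye(p)  # symmetric positive definite
A = sym(rng.standard_normal((p, p)))
H = sym(rng.standard_normal((p, p)))                  # symmetric direction

def fd_error(f, D, t=1e-5):
    # central difference of f at X in direction H, compared against <D, H>
    fd = (f(X + t * H) - f(X - t * H)) / (2 * t)
    return abs(fd - np.trace(D @ H))

# D tr(AX) = A
assert fd_error(lambda M: np.trace(A @ M), A) < 1e-6
# D <X^{-1}, A> = -X^{-1} A X^{-1}
Xi = np.linalg.inv(X)
assert fd_error(lambda M: np.trace(np.linalg.inv(M) @ A), -Xi @ A @ Xi) < 1e-6
# D tr(XAXA) = 2 A X A
assert fd_error(lambda M: np.trace(M @ A @ M @ A), 2 * A @ X @ A) < 1e-6
```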
\section{MM algorithm for computing the Karcher mean}\label{sec:algorithm}
We first present the majorization-minimization (MM) algorithm for \eqref{eq:main}:
\begin{equation}\label{eq:algorithm}
\bX_{k+1}=T(\bX_k)=f_2(\bX_k)^{\frac{1}{2}}\big(f_2(\bX_k)^{\frac{1}{2}}f_1(\bX_k)f_2(\bX_k)^{\frac{1}{2}}\big)^{-\frac{1}{2}}f_2(\bX_k)^{\frac{1}{2}},
\end{equation}
where
\[
f_1(\bX)=\sum_{i=1}^n\bA_i^{-\frac{1}{2}}g_1(\bA_i^{-\frac{1}{2}}\bX\bA_i^{-\frac{1}{2}})\bA_i^{-\frac{1}{2}}, \,\,\,g_1(x)=\left(\sqrt{(\log x)^2+1}+\log x\right)x^{-1},
\]
and
\[
f_2(\bX)=\sum_{i=1}^n\bA_i^{\frac{1}{2}}g_2(\bA_i^{-\frac{1}{2}}\bX\bA_i^{-\frac{1}{2}})\bA_i^{\frac{1}{2}},\,\,\,\text{ $g_2(x)=\left(\sqrt{(\log x)^2+1}-\log x\right)x$}.
\]
For a scalar function $g$, the matrix function $g(\bX)$ is defined as follows: assume that the eigenvalue decomposition is given by $\bX=\bU\diag(\lambda_1,\lambda_2,\cdots,\lambda_p)\bU^T$; then
\[g(\bX)=\bU\diag\big(g(\lambda_1),g(\lambda_2),\cdots, g(\lambda_p)\big)\bU^T.\]
From this definition and the property that $g_1(x)\,g_2(x)=1$, we have $g_1(\bX)\,g_2(\bX)=\bI$.
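For illustration, the iteration \eqref{eq:algorithm} can be implemented directly in a few lines (a sketch, not the authors' code; matrix functions are computed by eigendecomposition as just defined). For commuting matrices the Karcher mean reduces to the entrywise geometric mean, which gives a quick correctness check:

```python
import numpy as np

def sym_fun(M, f):
    # g(X) = U diag(g(lambda_1), ..., g(lambda_p)) U^T for symmetric M
    w, U = np.linalg.eigh(M)
    return (U * f(w)) @ U.T

def g1(x):
    t = np.log(x)
    return (np.sqrt(t ** 2 + 1) + t) / x

def g2(x):
    t = np.log(x)
    return (np.sqrt(t ** 2 + 1) - t) * x

def karcher_mm(As, iters=100):
    sq = [sym_fun(A, np.sqrt) for A in As]             # A_i^{1/2}
    isq = [sym_fun(A, lambda w: 1 / np.sqrt(w)) for A in As]  # A_i^{-1/2}
    X = sum(As) / len(As)                              # start at the arithmetic mean
    for _ in range(iters):
        f1 = sum(S @ sym_fun(S @ X @ S, g1) @ S for S in isq)
        f2 = sum(S @ sym_fun(T @ X @ T, g2) @ S for S, T in zip(sq, isq))
        h = sym_fun(f2, np.sqrt)
        X = h @ sym_fun(h @ f1 @ h, lambda w: 1 / np.sqrt(w)) @ h
        X = (X + X.T) / 2                              # re-symmetrize against round-off
    return X

# Diagonal (hence commuting) inputs: the Karcher mean is the entrywise geometric mean.
A1, A2 = np.diag([1.0, 4.0]), np.diag([9.0, 1.0])
X = karcher_mm([A1, A2])
assert np.allclose(X, np.diag([3.0, 2.0]), atol=1e-8)
```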
We have the following theorem on the proposed MM algorithm:
\begin{thm}\label{thm:main}
The sequence $\{\bX_k\}_{k\geq 1}$ generated by~\eqref{eq:algorithm} converges to the solution of \eqref{eq:main}, and $\{F(\bX_k)\}_{k\geq 1}$ converges linearly.
\end{thm}
The proof of Theorem~\ref{thm:main} is based on the following Lemmas and will be given in Section~\ref{sec:proof_main}.
\begin{lemma}\label{lemma:major}
There exists $c_0(\bX'): \reals^{p\times p}\rightarrow \mathbb{R}$ such that \begin{equation}\label{eq:define_G}G(\bX,\bX')= \langle f_1(\bX'),\bX \rangle + \langle f_2(\bX'),\bX^{-1} \rangle +c_0(\bX')\end{equation}
satisfies $G(\bX',\bX')=F(\bX')$ and $G(\bX,\bX')\geq F(\bX)$.
\end{lemma}
\begin{lemma}\label{lemma:minimization}The minimizer of $\langle\bC_1,\bX\rangle+\langle\bC_2,\bX^{-1}\rangle$ is
$\bC_2^{\frac{1}{2}}(\bC_2^{\frac{1}{2}}\bC_1\bC_2^{\frac{1}{2}})^{-\frac{1}{2}}\bC_2^{\frac{1}{2}}.$
\end{lemma}
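The closed form in Lemma~\ref{lemma:minimization} (for symmetric positive definite $\bC_1,\bC_2$, with $\langle\cdot,\cdot\rangle$ the trace inner product) is easy to verify numerically. The sketch below, with helper names of our own, checks the equivalent stationarity condition $\bX\bC_1\bX=\bC_2$ and that random symmetric perturbations do not decrease the objective.

```python
import numpy as np

def psd_pow(X, p):
    # X^p for a symmetric positive definite matrix via eigendecomposition.
    lam, U = np.linalg.eigh(X)
    return (U * lam ** p) @ U.T

def lemma_minimizer(C1, C2):
    # C2^{1/2} (C2^{1/2} C1 C2^{1/2})^{-1/2} C2^{1/2}
    S = psd_pow(C2, 0.5)
    inner = S @ C1 @ S
    return S @ psd_pow((inner + inner.T) / 2, -0.5) @ S
```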
\subsection{Proof of Theorem~\ref{thm:main}: Convergence}\label{sec:proof_main}
\begin{proof}
By Lemmas~\ref{lemma:major} and~\ref{lemma:minimization}, the iterative procedure satisfies the definition of MM algorithm in \eqref{eq:minimization} and \eqref{eq:majorization}. By Theorem~\ref{thm:MM_convergence} and the uniqueness of the stationary point of $F(\bX)$, any converging subsequence of $\{\bX_k\}_k$ converges to the unique minimizer of $F(\bX)$.
By the monotonicity of MM algorithms in~\eqref{eq:MM_descendence}, $F(\bX_k)$ is nonincreasing and the sequence $\{\bX_k\}_{k\geq 1}$ is contained in the compact set $\{\bX: F(\bX)\leq F(\bX_1)\}$. Therefore, $\{\bX_k\}_k$ converges to the unique minimizer of $F(\bX)$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:main}: Convergence rate}
Next we will show that the proposed MM algorithm converges linearly.
For the proof we define a few more notations. We denote the geodesic line connecting $\bX_1$ and $\bX_2$ by $[\bX_1,\bX_2]_t=\bX_1^{\frac{1}{2}}(\bX_1^{-\frac{1}{2}}\bX_2\bX_1^{-\frac{1}{2}})^t\bX_1^{\frac{1}{2}}$, and define directional derivatives on the manifold of positive definite matrices by the standard definition along the geodesic line:
\[
f_{\bX_2}'(\bX_1)=\lim_{t\rightarrow 0} \frac{f([\bX_1,\bX_2]_t)-f(\bX_1)}{t\dist(\bX_1,\bX_2)},\,\,\,\,f_{\bX_2}''(\bX_1)=\lim_{t\rightarrow 0} \frac{f_{\bX_2}'([\bX_1,\bX_2]_t)-f_{\bX_2}'(\bX_1)}{t\dist(\bX_1,\bX_2)},
\]
where $\dist(\bX_1,\bX_2)=\|\log(\bX_1^{-\frac{1}{2}}\bX_2\bX_1^{-\frac{1}{2}})\|_F$.
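Since $\bX_1^{-\frac{1}{2}}\bX_2\bX_1^{-\frac{1}{2}}$ is similar to $\bX_1^{-1}\bX_2$, this distance depends only on the eigenvalues of $\bX_1^{-1}\bX_2$. A short numerical check (Python/NumPy, with a helper name of our own) confirms symmetry and congruence invariance:

```python
import numpy as np

def dist(X, Y):
    # dist(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F
    lam, U = np.linalg.eigh(X)
    Xih = (U * lam ** -0.5) @ U.T           # X^{-1/2}
    M = Xih @ Y @ Xih
    mu = np.linalg.eigvalsh((M + M.T) / 2)  # eigenvalues of X^{-1} Y
    return np.sqrt((np.log(mu) ** 2).sum())
```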
The following lemma will be helpful in the proof of the convergence rate:
\begin{lemma}\label{lemma:derivative}
For a function $f(x): \reals\rightarrow\reals$, if $f''(x)>a>0$ then $f(x_0)-\min f(x) \leq \frac{f'(x_0)^2}{2a}$. If $f''(x)\leq b$, then $f(x_0)-\min f(x)\geq \frac{f'(x_0)^2}{2b}$.
\end{lemma}
\begin{proof}
WLOG we may assume that $f'(x_0)>0$ and $\hat{x}=\argmin_x f(x)$. If $f''(x)>a>0$ then $f'(x)\leq f'(x_0)-a(x_0-x)$, and $f(x_0)-\min_x f(x) = \int_{x=\hat{x}}^{x_0}f'(x) \di x\leq \int_{x=\hat{x}}^{x_0}f'(x_0)-a(x_0-x) \di x=f'(x_0)(x_0-\hat{x})-a(x_0-\hat{x})^2/2\leq \frac{f'(x_0)^2}{2a}$.
If $f''(x)<b$, then $f'(x)\geq f'(x_0)-b(x_0-x)$ and $f'(x)>0$ for $x_0-f'(x_0)/b<x<x_0$. Therefore, $f(x_0)-\min_x f(x)\geq f(x_0)-f(x_0-f'(x_0)/b)=\int_{x=x_0-f'(x_0)/b}^{x_0}f'(x)\di x \geq \int_{x=x_0-f'(x_0)/b}^{x_0}f'(x_0)-b(x_0-x) \di x = \frac{f'(x_0)^2}{2b}$.
\end{proof}
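Lemma~\ref{lemma:derivative} can also be checked numerically on a concrete test function; here we take $f(x)=x^2+\cos x$ (our choice, not from the paper), for which $f''(x)=2-\cos x\in[1,3]$, so $a=1$ and $b=3$:

```python
import numpy as np

# Test function: f(x) = x^2 + cos(x), with f''(x) = 2 - cos(x) in [1, 3].
f = lambda x: x ** 2 + np.cos(x)
fp = lambda x: 2 * x - np.sin(x)
a, b = 1.0, 3.0

def gap(x0, xs=np.linspace(-3, 3, 20001)):
    # f(x0) - min f, with the minimum approximated on a fine grid.
    return f(x0) - f(xs).min()
```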
Now we will investigate the second derivatives of $F_{\bX'}(\bX)$ and $G_{\bX'}(\bX,\bX_k)$.
By applying the semiparallelogram law \cite[(6.16)]{bhatia2007positive}, the second directional derivative of $F_{\bX'}(\bX)$ is always greater than or equal to $n$.
Next we show that the second directional derivative of $G(\bX,\bX')$ is well-defined and continuous with respect to $\bX$ and the direction $\frac{\bX^{-\frac{1}{2}}\bX'\bX^{-\frac{1}{2}}}{\|\bX^{-\frac{1}{2}}\bX'\bX^{-\frac{1}{2}}\|_F}$, by proving it for both $\langle\bA,\bX\rangle$ and $\langle\bB,\bX^{-1}\rangle$. We may parametrize the geodesic line by $l(t)=\bX^{\frac{1}{2}}e^{t\bD}\bX^{\frac{1}{2}}$, where $\bD$ is the direction $\frac{\bX^{-\frac{1}{2}}\bX'\bX^{-\frac{1}{2}}}{\|\bX^{-\frac{1}{2}}\bX'\bX^{-\frac{1}{2}}\|_F}$. Then its second directional derivative at $t=0$ is given by $\lim_{t\rightarrow 0}\frac{1}{t^2\|\bD\|_F^2}\langle\bX^{\frac{1}{2}}\bA\bX^{\frac{1}{2}},e^{t\bD}+e^{-t\bD}-2\bI\rangle=\langle\bX^{\frac{1}{2}}\bA\bX^{\frac{1}{2}},\bD^2\rangle$, which is well-defined and continuous with respect to $\bX$ and the direction $\bD$. Similarly, the second directional derivative of $\langle\bA,\bX^{-1}\rangle$ is also well-defined. Therefore, the second directional derivative of $G(\bX,\bX_k)$ is well-defined and continuous.
By continuity, there exists $c>0$ such that for any $\bX$ with $\dist(\bX,\hat{\bX})<c$, the second directional derivative of $G(\bX,\bX')$ is bounded from above by some constant $C>0$.
Denote the geodesic line that connecting $\bX_k$ and $\hat{\bX}$ by $L$. Applying Lemma~\ref{lemma:derivative} to $L$ we have
\[
F(\bX_k)-F(\hat{\bX})\leq F_{\hat{\bX}}'(\bX)|_{\bX=\bX_k}^2/2n.
\]
Due to the convergence, there exists $K$ such that for any $k>K$, $\dist(\bX_k,\hat{\bX})<c$. Applying Lemma~\ref{lemma:derivative} and the properties of $F$ and $G$ we have that for $k>K$,
\[
F(\bX_k)-F(\bX_{k+1})\geq G(\bX_k,\bX_k)-\min_{\bX\in L}G(\bX,\bX_k) \geq G_{\hat{\bX}}'(\bX,\bX_k)\Big|_{\bX=\bX_k}^2/2C.
\]
Note that $F_{\hat{\bX}}'(\bX)|_{\bX=\bX_k}=G_{\hat{\bX}}'(\bX,\bX_k)|_{\bX=\bX_k}$ (since $G(\bX,\bX_k)-F(\bX)$ is minimized at $\bX=\bX_k$), so $F(\bX_k)-F(\bX_{k+1})\geq \frac{n}{C}(F(\bX_k)-F(\hat{\bX}))$ and hence $F(\bX_{k+1})-F(\hat{\bX})\leq (1-\frac{n}{C})(F(\bX_k)-F(\hat{\bX}))$. That is, $F(\bX_k)$ converges linearly.
\subsection{Proof of Lemma~\ref{lemma:major}}\label{sec:lemma1}
We start with the following auxiliary lemma and its proof:
\begin{lemma}\label{lemma:major1}
$\bX'$ is the unique minimizer of
\begin{equation}\label{eq:major1}
g_{\bX'}(\bX)=\big\langle g_1(\bX'),\bX\big\rangle +
\big\langle g_2(\bX'),\bX^{-1}\big\rangle
- \|\log\bX\|_F^2.
\end{equation}
\end{lemma}
\begin{proof}
The proof can be divided into two steps. In the first step, we will show that $\bX'$ is the unique stationary point of $g_{\bX'}(\bX)$. In the second step, we will show that $\bX'$ is also the unique minimizer of $g_{\bX'}(\bX)$.
We start with the first step. By the discussion in Section~\ref{sec:diff}, $g_{\bX'}(\bX)$ is differentiable and the derivative is
\[
g_1(\bX')-\bX^{-1}g_2(\bX')\bX^{-1}-2\bX^{-1}\log\bX.
\]
Let $\bZ=g_2(\bX')$ and recall that $g_1(\bX')=g_2(\bX')^{-1}$; the equation for a stationary point is then
\begin{equation}\label{eq:stationary}
\bZ^{-1}-\bX^{-1}\bZ\bX^{-1}-2\bX^{-1}\log\bX=0.
\end{equation}
Applying the matrix derivatives in Section~\ref{sec:diff}, we see that the LHS of \eqref{eq:stationary} is the derivative of
\[
h(\bZ)=\log\det(\bZ)-\frac{1}{2}\|\bZ\bX^{-1}\|_F^2-\tr(2\bX^{-1}\log\bX\bZ)
\]
with respect to $\bZ$. Since both $-\|\bZ\bX^{-1}\|_F^2$ and $\log\det(\bZ)$ are concave with respect to $\bZ$, and $\tr(2\bX^{-1}\log\bX\bZ)$ is linear with respect to $\bZ$, $h(\bZ)$ is concave. Therefore, the stationary point of $h(\bZ)$ is unique. That is, when $\bX$ is given, there is a unique $\bZ$ such that \eqref{eq:stationary} holds. By direct calculation, one may verify that this unique solution is $\bZ=g_2(\bX)$, so any pair $(\bX,\bZ)$ satisfying \eqref{eq:stationary} has $\bZ=g_2(\bX)$. Since $g_2'(x)=\Big(1-\frac{1}{\sqrt{\log^2 x+1}}\Big)\Big(\sqrt{\log^2 x+1}-\log x\Big)\geq 0$
and $g_2'(x)=0$ holds only when $x=1$, $g_2(x)$ is monotonically increasing and $g_2^{-1}$ is well-defined. Therefore, when $\bZ$ is given, the unique solution to \eqref{eq:stationary} is $\bX=g_2^{-1}(\bZ)=\bX'$, which concludes the first step of the proof.
Second, we will show that $\bX=\bX'$ is the unique minimizer of $g_{\bX'}(\bX)$. By differentiability, any minimizer would be a stationary point and satisfies \eqref{eq:stationary}. Combining it with the uniqueness of the solution to \eqref{eq:stationary}, this solution would be the unique minimizer as long as the minimizer exists.
Then it suffices to show the existence of the minimizer, by showing that $g_{\bX'}(\bX)$ goes to $\infty$ when $\bX$ approaches the boundary of the set of positive definite matrices, i.e., $\lambda_1(\bX)\rightarrow\infty$ or $\lambda_p(\bX)\rightarrow 0$. This holds since $g_{\bX'}(\bX)\geq \lambda_p(g_1(\bX'))\tr(\bX)+\lambda_p(g_2(\bX'))\tr(\bX^{-1})-\|\log(\bX)\|_F^2$, and for any $c_1, c_2>0$, $c_1x+c_2x^{-1}-\log^2 x\rightarrow \infty$ as $x\rightarrow 0$ or $x\rightarrow\infty$.\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:major}]
Applying Lemma~\ref{lemma:major1}, we have
\begin{equation}\label{eq:major6}
g_{\bX'}(\bX)-g_{\bX'}(\bX')\geq 0,\,\,\,g_{\bX'}(\bX')-g_{\bX'}(\bX')=0.
\end{equation}Replacing $\bX$, $\bX'$ in \eqref{eq:major6} by $\bA_i^{-\frac{1}{2}}\bX\bA_i^{-\frac{1}{2}}$, $\bA_i^{-\frac{1}{2}}\bX'\bA_i^{-\frac{1}{2}}$, and summing over $1\leq i\leq n$, we prove Lemma~\ref{lemma:major} with $c_0(\bX')$ in \eqref{eq:define_G} defined by
\begin{equation}\label{eq:major7}
c_0(\bX')=-\sum_{i=1}^n g_{\bA_i^{-\frac{1}{2}}\bX'\bA_i^{-\frac{1}{2}}}(\bA_i^{-\frac{1}{2}}\bX'\bA_i^{-\frac{1}{2}}).
\end{equation}
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:minimization}}\label{sec:lemma2}
Since $\bX^{-1}$ is operator convex~\cite[Theorem 2.6]{Carlen2010}, i.e.,
$(\bX+\bY)^{-1}+(\bX-\bY)^{-1}- 2\bX^{-1}$
is positive semidefinite, $\langle\bC_2,\bX^{-1}\rangle$ is convex:
\begin{align*}
&\langle\bC_2,(\bX+\bY)^{-1}\rangle + \langle\bC_2,(\bX-\bY)^{-1}\rangle - 2\,\langle\bC_2,\bX^{-1}\rangle
\\=& \langle\bC_2,(\bX+\bY)^{-1}+(\bX-\bY)^{-1} -2 \bX^{-1} \rangle\geq 0.
\end{align*}
Since $\langle\bC_1,\bX\rangle$ is linear in $\bX$, $\langle\bC_1,\bX\rangle+\langle\bC_2,\bX^{-1}\rangle$ is convex and the unique minimizer is the root of its derivative, i.e., the solution to \begin{equation}\label{eq:matrix_deri0}
\bC_1-\bX^{-1}\bC_2\bX^{-1}=0.
\end{equation}
Lemma~\ref{lemma:minimization} is then proved by verifying that $\bX=\bC_2^{\frac{1}{2}}(\bC_2^{\frac{1}{2}}\bC_1\bC_2^{\frac{1}{2}})^{-\frac{1}{2}}\bC_2^{\frac{1}{2}}$ satisfies \eqref{eq:matrix_deri0}.
\subsection{Discussion of the majorization function}
The key to the proposed MM algorithm is finding $g_1(\bX)$ and $g_2(\bX)$ in Lemma~\ref{lemma:major1}. In fact, $g_1$ and $g_2$ in Lemma~\ref{lemma:major1} are obtained from an analysis of the case $p=1$.
When $p=1$, the goal is to choose $g_1(x)$ and $g_2(x)$ such that $x'$ is the unique minimizer of $g_1(x')x + g_2(x')/x - \log^2x.$ Let $z=\log x$ and $z'=\log x'$, then it is equivalent to find $g_1(z)$ and $g_2(z)$ such that $z'$ is the unique minimizer of
\[
g_0(z) = g_1(z')e^z+g_2(z')e^{-z}-z^2.
\]
To achieve the goal, it suffices to have $g_0'(z')=0$ and $g_0''(z)\geq 0$ for all $z\in\reals$, that is, \begin{equation}\label{eq:1d_2}
g_1(z')e^{z'}-g_2(z')e^{-z'}-2z'=0,\,\,\,\,\, g_1(z')e^{z}+g_2(z')e^{-z}\geq 2.
\end{equation}
By the AM-GM inequality, the second condition in \eqref{eq:1d_2} is satisfied when $g_1(z')g_2(z') = 1.$ Combining it with the first equation in \eqref{eq:1d_2}, we have
\[
g_1(z')=e^{-z'}(\sqrt{z'^2+1}+z'),\,\,\,g_2(z')=e^{z'}(\sqrt{z'^2+1}-z').
\]
Plugging in $z'=\log x'$ yields $g_1$ and $g_2$ in Lemma~\ref{lemma:major1}.
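These scalar identities are straightforward to verify numerically (Python/NumPy; a quick check, not part of the derivation):

```python
import numpy as np

# g_1 and g_2 as functions of z' = log x', as derived above.
g1 = lambda z: np.exp(-z) * (np.sqrt(z ** 2 + 1) + z)
g2 = lambda z: np.exp(z) * (np.sqrt(z ** 2 + 1) - z)
```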
\section{Simulations}\label{sec:simulations}
In this section we compare the MM algorithm with the gradient descent method (GD)~\cite{Xavier06} with line search, described in Algorithm~\ref{alg:nhbd}, and a linearized gradient descent algorithm with a Richardson-like iteration~\cite{Bini2013}. We use the code available at \url{http://bezout.dm.unipi.it/software/mmtoolbox/}, and we refer to this algorithm as ``Toolbox'' in the simulations.
There are many other algorithms available, but gradient descent and its variants are more commonly used, and GD resembles the MM algorithm in terms of complexity per iteration and convergence rate. Indeed, \cite{JeurVV2012} gave an extensive survey of various algorithms such as the steepest descent method (SD), the conjugate gradient method (CG), the Riemannian BFGS method (RBFGS), and the trust region method (TR) with the Armijo line search technique. It is shown that while CG has a performance similar to SD, the second-order methods, including RBFGS and TR, are outperformed by SD and CG as the size of the matrices increases.
\begin{algorithm}[htbp]
\caption{Implementation of Gradient Descent with Line Search} \label{alg:nhbd}
\begin{algorithmic}
\REQUIRE $\bA_1, \bA_2, \cdots, \bA_n \in
\mathbb{R}^{p\times p}$; $\nu$: initial step size; $c$: step size control parameter
\ENSURE $\bX$: the Karcher mean.\\
\textbf{Steps}:
\STATE
$\bullet$ $\bX_1=\frac{1}{n}\sum_{i=1}^n\bA_i$, $k=1$
\REPEAT \STATE
$\bullet$ Let $\bD=\frac{1}{n}\sum_{i=1}^n\log(\bX_k^{-\frac{1}{2}}\bA_i\bX_k^{-\frac{1}{2}})$\\
$\bullet$ Find the smallest $j>0$ such that $F(\bX_k^{\frac{1}{2}}\exp(c^j\nu \bD)\bX_k^{\frac{1}{2}})<F(\bX_k)$, and let $\bX_{k+1}=\bX_k^{\frac{1}{2}}\exp(c^j\nu \bD)\bX_k^{\frac{1}{2}}$\\
$\bullet$ $k=k+1$
\UNTIL Convergence
\end{algorithmic}
\end{algorithm}
In implementation we use an inner iteration to find $j$ in Algorithm~\ref{alg:nhbd}. For each inner iteration, the computational cost mainly comes from $n$ eigenvalue decompositions. Therefore, each inner iteration has a similar computational complexity and empirically similar running time as one iteration of MM or ``Toolbox''. For a fair comparison, when we talk about the number of iterations, we use the number of inner iterations for the GD algorithm.
For simulations, we generate the data set $\bA_1, \bA_2, \cdots, \bA_n$ by the following scheme: $\bA_i=\bU_i\bS_i\bU_i^T$, where $\bU_i$ are random orthogonal matrices (generated by MATLAB command ``orth(rand(p,p))''), and $\bS_i$ are diagonal matrices with entries sampled differently for different simulations. All algorithms are initialized with the arithmetic mean. The parameters $\nu$ and $c$ in the GD algorithm are set to be $c=1/2$ and $\nu=\frac{1}{4},\frac{1}{2},1,2,4$.
For the first simulation, the diagonal entries of $\bS_i$ are sampled from a uniform distribution on $[1,10]$, so that the condition numbers of the $\bA_i$ are bounded above by $10$. We test two cases, $p=n=10$ and $p=n=40$. The mean logarithmic error at each iteration over $100$ runs, defined by $\log(\|\sum_{i=1}^n\log(\bX_k^{-\frac{1}{2}}\bA_i\bX_k^{-\frac{1}{2}})\|_F)$, is visualized in Figure~\ref{fig:compare1}. From the figure we can see that the rate of convergence of the MM algorithm is slower than but comparable to that of the GD algorithm with $\nu=1$, which is the best performer among all choices of $\nu$. Besides, the MM and Toolbox algorithms give smaller logarithmic errors than the GD algorithm.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{10.eps}
\includegraphics[width=0.45\textwidth]{40.eps}
\caption{\it The performance of algorithms, where $x$-axis and $y$-axis corresponds to the number of iterations and logarithmic error respectively.\label{fig:compare1}}
\end{center}
\end{figure}
That the GD algorithm performs best for $\nu=1$ can be justified intuitively: when all $\bA_i$ are well-conditioned and close to each other, the space of positive definite matrices is locally well approximated by Euclidean space, and to compute the Karcher mean in Euclidean space, $\nu=1$ is the optimal choice because GD would converge in one step. However, this argument does not hold for matrices with larger condition numbers, and $\nu=1$ might not perform best. In the next example, we fix $p=n=10$ and let the diagonal entries of $\bS_i$ be the geometric series $10^0, 10^{a}, 10^{2a},\cdots, 10^{9a}$, and we visualize the convergence rate for $a=0.3, 0.5, 0.7, 0.9$ in Figure~\ref{fig:compare2}. In the figures we can see that for these cases, it is hard to choose a consistent and optimal $\nu$ for GD. In comparison, the MM algorithm is parameter-free and always converges at a reasonable rate.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{03.eps}
\includegraphics[width=0.45\textwidth]{05.eps}
\includegraphics[width=0.45\textwidth]{07.eps}
\includegraphics[width=0.45\textwidth]{09.eps}
\caption{\it The performance of algorithms for matrices with large condition numbers.\label{fig:compare2}}
\end{center}
\end{figure}
However, the proposed MM algorithm does not converge as fast when the matrices $\bA_i$ have different scalings. As a simple example, when the $\bA_i$ are matrices of size $1\times 1$, i.e., scalars, the MM algorithm requires more iterations when $\max_i(\bA_i)/\min_i(\bA_i)$ is large. Next we show two cases with $p=10$, $n=3$. For the first case, $\{\bA_i\}$ are generated in the same way as in Figure~\ref{fig:compare1}. In the second case, $\bA_1$ is multiplied by $10^4$ while $\bA_2$ and $\bA_3$ are kept unchanged. The convergence rates of these two examples are visualized in Figure~\ref{fig:compare3}, and it is clear that MM converges more slowly in the second case. The reason might be that the majorizing function $G(\bX,\bX')$ is not a good approximation of $F(\bX)$, and it is much larger than $F(\bX)$ when $\bX$ is far away from $\bX'$. However, the MM algorithm still has a linear convergence rate and a small logarithmic error at convergence.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{example1.eps}
\includegraphics[width=0.45\textwidth]{example2.eps}
\caption{\it The performance of algorithms for matrices with different scalings: for the second figure, $\bA_1$ is multiplied by $10^4$.\label{fig:compare3}}
\end{center}
\end{figure}
\section{Conclusion}
This paper has presented a novel algorithm for computing the Karcher mean of positive definite matrices based on the majorization-minimization (MM) principle. The MM algorithm is fast, simple to implement, and has a theoretical convergence guarantee.
There are some possible future directions arising from this work. First, the MM algorithm strongly depends on the choice of the majorization function (in our case, the function $G(\bX,\bX')$), and it would be interesting to investigate whether other majorization functions give better performance. Second, it would also be interesting to apply the framework of MM algorithms to other manifold optimization problems.
\bibliographystyle{abbrv}
\bibliography{bib-online}
\end{document}
TITLE: Let $F$ be an extension field over $K$; if $[F(x):K(x)]$ is finite, then is $F$ also a finite extension over $K$?
QUESTION [0 upvotes]: Let $F$ be an extension field over $K$ such that $F(x)$ is a finite extension over $K(x)$; then is it true that $[F:K]$ is also finite? (I know the converse: if $F/K$ is a finite extension then so is $F(x)/K(x)$, and $[F:K]=[F(x):K(x)]$ holds.) Please help. Thanks in advance.
REPLY [0 votes]: In general, if $V$ is a $K$-vector space and $E/K$ is a field extension, then $\dim_E(V\otimes_K E) = \dim_K V$.
Apply that to $V = F$ and $E = K(x)$.
An elementary version : suppose $[F:K]$ is infinite, and $(x_i)_{i\in I}$ is an infinite family of linearly independent elements of $F$. Suppose they have a linear relation in $F(x)$ over $K(x)$, and try to reduce to the case $\sum_i P_ix_i = 0$ with $P_i\in K[x]$ such that the $P_i$ are not all divisible by $X$.
Then try to conclude.
TITLE: What is the least number $n$, such that $n^{2015}+2015$ is prime?
QUESTION [7 upvotes]: What is the least number $n$ such that $n^{2015}+2015$ is prime?
According to my calculations, there is no prime for $n\le 6000$.
It is clear that $n$ must be even, since $n^{2015}+2015$ must be odd.
REPLY [4 votes]: The first one seems to be $n=9462$ (according to Mathematica), i.e.
$$
9462^{2015}+2015
$$
is prime. I have no good mathematical arguments for this, though.
The code I used to get this was:
n=2;
While[Not[PrimeQ[n^2015 + 2015]], Print[n]; n = n + 2]
It stopped at 9460. Just to be sure, I ran
PrimeQ[9462^2015+2015]
and the response was True.
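For reference, here is a rough equivalent of this search in Python using sympy's `isprime` (which runs a BPSW-type probable-prime test for large inputs); the function names are mine. Scanning all even $n$ up to 9462 is slow, since each candidate has roughly 8000 digits, so the snippet only illustrates the loop.

```python
from sympy import isprime

def candidate(n):
    return n ** 2015 + 2015

def search(start=2, stop=9464):
    # Only even n need checking: odd n makes n^2015 + 2015
    # an even number greater than 2, hence composite.
    for n in range(start, stop, 2):
        if isprime(candidate(n)):
            return n
    return None
```

Running `search()` to completion should, in principle, return 9462.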
When Hillary Clinton became Secretary of State in 2009, the Obama White House required her to sign an agreement promising to have her family’s charities, under the umbrella of the Clinton Global Initiative (CGI; now known as the Bill, Hillary and Chelsea Clinton Foundation) submit new donations from foreign countries to the State Department for review.
The Agreement
The agreement was designed to avoid potential conflicts of interest, given her new government role. The arrangement was made by an Obama administration covering its flanks over the appearance, at a minimum, of impropriety, given the significant sums of money the charities pulled in from overseas. Many of the countries and foreign corporations who gave the most money also had issues in front of the State Department, where a positive decision could change the donor’s fortunes.
The Violations
The Clinton Foundation repeatedly violated this agreement with the Obama White House.
The Washington Post reported the Clinton Foundation failed to disclose $500,000 from Algeria at the time the country was lobbying the State Department over human-rights issues. Bloomberg learned the Clinton Giustra Enterprise Partnership, a Clinton Foundation affiliate, failed to disclose 1,100 foreign contributions.
But the Boston Globe’s report on the Clinton Health Access Initiative (CHAI), yet another foundation affiliate (these people have more shell groups than a Mafia crime family), may cover the most notable omissions yet. Tens of millions of dollars went undisclosed to the State Department.
Overall, the Clinton charities accepted new donations from at least six foreign governments while Clinton was Secretary: Switzerland, Papua New Guinea, Swaziland, Rwanda, Sweden and Algeria. Australia and the United Kingdom increased their funding by millions of dollars during this period.
The Lack of Consequences
The Obama White House remains deadly silent over the violations of its own agreement with Hillary and the Clinton charities.
Now the State Department, the organization charged with overseeing the agreement and monitoring its own Secretary for impropriety, says it too will do nothing.
On May 7, State Department spokesperson Jeff Rathke stated the Department “regrets” that it did not get to review the new foreign government funding, but does not plan to look into the matter further. “The State Department has not and does not intend to initiate a formal review or to make a retroactive judgment about items that were not submitted during Secretary Clinton’s tenure.”
The State Department spokesman said the Department was not aware of donations having an undue influence on U.S. foreign policy.
When reporters asked how the Department could know this without reviewing the belated disclosures, he declined to comment further.
The Questions
No one can anticipate what issues may confront a president. No president’s full span of decisions can be made in public. There is, in the electoral process, a huge granting of trust from the people to their leader. In cases like the violation of the ethics agreement by Clinton, undisclosed for eight years, one must ask about that trust — has it been earned?
One must also ask how, and more importantly, why, the White House and the State Department simply wash their hands of this issue. The Congress and elements of the media, so obsessed with events in Benghazi, seem nearly unaware of these financial issues while Clinton held one of the most powerful positions in government.
Lastly, given that Clinton now seeks the most powerful position in government, one must ask why the American voters seem oblivious to the clear trail she has left behind of how she views trust and ethics in government.
Date & Time: Sun, May 26 @ 4:00pm - 9:00pm
Location: Quartyard
Address: 1301 Market St, San Diego, CA 92101
The Grand Artique & The Deep End presents:
Sunshine Jones (Live), Chrysocolla (Deep End), Abouáv + More
Join us this Memorial Day Weekend as we welcome Sunshine Jones to perform live at San Diego’s premier outdoor venue, Quartyard!
There’s no better way to celebrate Memorial Day Weekend than enjoying the San Diego weather while listening to good music. This San Diego event also hosts local talent and will have enticing art from local San Diego artists.
21+ | 4pm-9pm | $15-$20
Music & Art for the Underground
All sales final, no refunds, rain or shine.
Sorel Mizzi has absolutely owned these events so far. He took down the event today for $50k; he came in second on September 9th for $15k; he took down the event on August 12th for $50k; he took down the event the week before that for another $50k; and then to top it all off, he won on July 15th for a whopping $72k. To say that Mizzi owns these heads-up showdowns would be an extreme understatement. I can't think of too many people that I would take over Mizzi in this competition.
I think that Pokerstars needs to change the name of the "High Stakes Showdown" to the "Zangbezan24 Invitational." Whether it is live poker or online poker, Sorel Mizzi seems to leave a trail of bodies in his wake.
--
Filed Under: Tournament Results
TITLE: Subset of Cantor set that isn't compact
QUESTION [5 upvotes]: How can I prove that the Cantor set has a subset that is not compact? Actually, I want to prove that every infinite set $X\subset\mathbb{R}^n$ has a subset $Y$ that is not compact. If $X$ isn't bounded, then $X$ has an unbounded subset $Y$ that is not compact. If $X$ is bounded and includes some ball, then $X$ includes an open ball $Y$ that is not compact. But if $X$ is bounded and includes no ball, like the Cantor set, I don't know. I think it may be easier to start with the Cantor set, but I'm not sure. Can you help me?
Thanks.
REPLY [5 votes]: HINT: If no point of $X$ is a limit point of $X$, then you can take $Y=X$. If there is a point $p\in X$ that is a limit point of $X$, show that there is a sequence $\langle x_n:n\in\Bbb N\rangle$ in $X\setminus\{p\}$ that converges to $p$, and let $Y=\{x_n:n\in\Bbb N\}$.
This works with any infinite $X\subseteq\Bbb R^n$.
Adjustable Floor Lamps Adjustable Floor Lamp Model By For O Adjustable Floor Lamp With Glass Shade
adjustable floor lamps adjustable floor lamp model by for o adjustable floor lamp with glass shade.
December 24, 2012 | Categories: colours, midtown Atlanta | Tags: glass architecture, glass towers, blue windows, retractable roof, white balconies | Comments Off on kind of blue
Prolific filmmaker Balu Mahendra, known for inspiring the audience with his visually uplifting films for over three decades, passed away following a heart attack here Thursday. He was 74.
Balu breathed his last at the Vijaya Hospital.
Born as Benjamin Mahendran on May 20, 1939, in Sri Lanka, Balu Mahendra had a fascination for photography since he was very young.
With a flair for capturing images, Balu started his career as a cinematographer and landed his big break with the 1974 Malayalam film Nellu.
He went on to work as a cinematographer on several award-winning films, such as Prayanam.
In a casual interaction with IANS during the premiere of his last film, he said: "So many rare Tamil films couldn't be restored. You can't make such films again. I wish to see that we take up archiving seriously and restore as many films as possible."
I am afraid. I am afraid of the world I am living in. I fear for my children and the world they will grow up in. I am afraid for my country and for the world. Terrorism is here to stay. People who don’t value life on earth are doing things to innocent people who do value life on earth. I don’t see how you can win a war with people who think it is better to die blowing up innocent people than it is to live a life of love and interdependence.
I am afraid because I see a side of myself that scares me. My grandmother and grandfather both lost relatives in the holocaust. My great grandparents on both sides of my family were all immigrants. Yet I was afraid to have refugees come into my country for fear of what may happen. I was afraid to open my arms and extend to people who have seen such horrific events and have nowhere else to turn to the same opportunity that my great grandparents were afforded. It is the fear that one person in a group of 10,000 may commit an act of terrorism. This fear that grows inside of me sickens me. It is wrong. I see Canada opening up her arms and accepting refugees in a way that makes me proud to be a human. (see this amazing video)
Franklin Delano Roosevelt stated in his first inaugural address, “The only thing we have to fear is fear itself.” I never understood what that really meant until now. I look at the United States political landscape and I see that Donald Trump is running a campaign based on fear. His statements include:
“When Mexico sends its people, they’re not sending the best. They’re not sending you, they’re sending people that have lots of problems and they’re bringing those problems with us. They’re bringing drugs. They’re bringing crime. They’re rapists. …
Donald Trump was asked in an interview whether Muslims should be subject to special scrutiny, a question he answered ambiguously. He later affirmed that Muslims should be required to register in a database. source
He is running a campaign based solely on people’s fears. It is their fear losing the security of the nation, their fear of losing their advantage in life, their fear of the unknown that has created this anomaly that is Donald Trump’s campaign.
How are we allowing a person who is openly sexist, racist, and Islamaphobic to be a viable candidate? My naiveté shows because I thought America was heading toward a future of racial acceptance after Obama was elected. It doesn’t matter what you think of his political views, a black man was elected by the public to the highest office in the land! Jamie Foxx talks about how groundbreaking this was in his most recent interview on the Tim Ferris Show.
As great as I felt that the U.S. had elected a person of color to the White House is as afraid as I am right now that Donald Trump is leading in the polls. Will he really try to create a database of Muslims if he gets elected? Will they be tracked by the government? I wonder why the country does not see that he is willing to trample all over the constitution without a worry.
Fear is creating an atmosphere that is brutal for school age Muslims in this country. Muslim students are being attacked solely because of how they dress.
“The boys, who are in the same grade as the girl, allegedly put her in a headlock and punched her as they tried to take off her hijab, the source said. While beating up the girl, the boys also allegedly called her “ISIS,” the source said. source
The stories go on and on.
I see people making decisions based on great fears all around me. Logic and reasoning have been suspended. Fear has replaced them.
Q1. What content do you teach that causes fear in students? #slowchatpe
Q2. How do you overcome their fear? #slowchatpe
Q3. What do you view as the scariest content to teach? #slowchatpe
Q4. How do you address the fear that our students will face in the future in class? #slowchatpe
Q5. Who helps you on Twitter or Voxer that helps you overcome your fear? #slowchatpe
In my work with teachers I have found that, for many, fear is their primary driving force: fear of losing control of a class, fear of trying something new, fear of student rejection or dissent. Many teachers are so comfortable with their current levels of failure that they resist attempts to improve their practice. For example, when discussing the pros/cons of having students dress out, there is often significant resistance. Many argue that requiring students to change clothes is essential because of safety, participation, tradition, or hygiene. While I have counterpoints to each of those, they are compelling reasons. The fear is exposed, however, when I have gotten teachers to examine their rationale for requiring students to dress out more deeply. Over and over again, when all the other arguments are removed, teachers share that they have kids dress out for two main reasons: (1) if they did not, how would they assign grades, and (2) what would they do with all that extra class time. Those reasons, their real reasons, are based in fear. While we must have grace, space, and compassion for the growth path of everyone, we must also support courage and vulnerability.
I applaud Justin’s perspective and am grateful that he opened the door to this important discussion in a professional context. For as Ira Shor said in Empowering Education, “all education is political”. As a profession we MUST critically examine how our practices reinforce or interrupt oppressive ideologies, such as those shared by Mr. Trump. And, if we are ever to get to a point where we can have large-scale, individual courageous conversations about how our professional practice and personal beliefs/actions impact socially constructed inequities such as race, ethnicity, gender, sexuality and nationalism, it is likely that we need to begin with courageous conversations about fear and job performance.
What a thought provoking post. Fear is the great motivator that invites us to compromise our values in a flutter of self-preservation. Beware an environment that brings people with questionable rhetoric to power. Thank you for making me think. The questions are challenging! Great post, Justin!
TITLE: The vanishing of $MGL^{2n+i,n}(X)$; do spectra of smooth projective varieties generate $SH_{l}$?
QUESTION [11 upvotes]: I have two questions related to the stable motivic homotopy categories of Morel-Voevodsky. The first is probably simple; I wonder what is known on the second one.
For the algebraic cobordism theory $MGL$ and a smooth variety $X$ over a (perfect?) field is it true that $MGL^{2n+i,n}(X)=0$ for any $n\in \mathbb{Z},i>0$? More generally, are there any reasonable restrictions on a (oriented?) ring spectrum $E$ in $SH$ that ensure the vanishing of
$MGL^{2n+i,n}(E)$? In particular, is this question related to some sort of effectivity for spectra?
It is well known that 'shifts and twists' of the spectra $\Sigma(X_+)$ generate $SH$, where $X$ runs through all smooth $k$-varieties. If the characteristic of $k$ is $0$, resolution of singularities yields that it suffices to consider only smooth projective varieties here. Now, what statements of this sort are known for $k$ of characteristic $p>0$? I suspect that one can deduce a similar result for $SH\otimes \mathbb{Z}_{(l)}$ for any prime $l\neq p$ from Gabber's $l'$-alterations theorem. Is this true? If this is too difficult, can one prove a similar statement for the triangulated category of $MGL$-modules?
What are the best references for these questions?
REPLY [1 votes]: It was proved by Riou in Appendix B of http://arxiv.org/abs/1311.2159 that the spectra of smooth projective varieties do (compactly) generate $SH(k)_{\mathbb{Z}_{(l)}}$ for any $l$ distinct from $\operatorname{char} k$.
Note also that $SH(k)_{\mathbb{Z}_{(l)}}$ "differs a little" from $SH(k)\otimes {\mathbb{Z}_{(l)}}$.
Covenant Health has named Patrick J. Birmingham, retired president and publisher of the Knoxville News Sentinel, as the health system’s vice president of philanthropy and governmental relations.
Birmingham’s responsibilities include oversight of the Covenant Health Office of Philanthropy’s comprehensive development program, including major gifts, planned giving, annual gift campaigns and special event fund raising. He will succeed Jeff Elliott, vice president of philanthropy, who is retiring after more than 13 years of service to Covenant Health.
During his tenure Elliott has led community fund raising initiatives for major health system projects such as the construction of LeConte Medical Center in Sevierville and Roane Medical Center in Harriman, which included raising $800,000 for the hospital’s chapel, on-call chaplain program and serenity garden. He also led a successful campaign to raise funds for the robotic surgery program at Methodist Medical Center in Oak Ridge.
Elliott developed the Covenant Health Leadership Academy, which began in 2013 to help area business and community leaders learn about Covenant Health’s medical programs and services, and about current issues facing the healthcare industry. Several Leadership Academy graduates now serve as board members and event leaders for Covenant Health’s foundations.
“When I came to Covenant Health, it was my first opportunity to be part of a large health system,” Elliott said. “I have been privileged to work with knowledgeable medical professionals who have been gracious with their time and expertise, and with strong community leaders in counties throughout East Tennessee.”
“We are grateful for Jeff Elliott’s contributions to Covenant Health, and for his commitment to our patients through many successful fund-raising campaigns,” said Jim VanderSteeg, president and CEO. “In addition to large-scale initiatives, he has enhanced planned giving opportunities and overseen popular foundation events in our local communities.”
The Office of Philanthropy manages five foundations that raise funds for Covenant Health facilities and programs: Fort Sanders Foundation, Thompson Cancer Survival Center Foundation, Methodist Hospital Foundation in Oak Ridge, Morristown-Hamblen Hospital Foundation, and the Dr. Robert F. Thomas Foundation in Sevierville.
Patrick Birmingham also will have responsibilities for developing and maintaining relationships with government officials and legislative representatives, along with establishing health system policies and plans which align with government laws, regulations and standards.
“Patrick is an outstanding community leader who is known as a results-oriented decision maker,” said VanderSteeg. “In the important areas of philanthropy and governmental affairs, Covenant Health will benefit from his first-rate analytical skills, his business expertise, and his extensive experience in the communications industry.”
“Covenant Health is an exceptional organization dedicated to providing superior care to patients. Their ongoing commitment to excellence is focused on improving the quality of healthcare throughout East Tennessee,” Birmingham said. “I feel blessed to be joining this team of healthcare professionals as we continue the foundations’ mission of helping those less fortunate.”
Prior to joining the News Sentinel, Birmingham was president and publisher of the Corpus Christi Caller-Times in Corpus Christi, Texas. During his tenure the Caller-Times was named “Newspaper of the Year” multiple times by the Texas Associated Press Managing Editors and in 2008 Birmingham was named “Newspaper Leader of the Year” by the Texas Daily Newspapers Association.
Birmingham has a long-standing interest in healthcare and its importance to communities. In Corpus Christi he served as board chairman for CHRISTUS Spohn Health System, a six-hospital health care system that is part of CHRISTUS Health, one of the top 10 Catholic health care systems in the U.S. He also served as a member of Covenant Health’s board of directors.
Birmingham studied at the University of Missouri. A strong believer in community involvement, he is past chairman and a current member of the Knoxville Chamber board of directors. He also serves on the boards of Second Harvest, United Way, and Helen Ross McNabb Center Foundation, as well as the University of Tennessee College of Nursing Advisory Board. In 2010 he led a community-wide effort to save the Knoxville Open professional golf tournament. Renamed the News Sentinel Open, the tournament has generated more than $500,000 in contributions to local charities since 2010.
Welcome to the new age of digital printing technology. It’s a revolutionary new product after laser and ink-jet sublimation printing. Yes, you guessed it right: it is the UV printer, with updated and automated technology. The Digital Flatbed UV Printer is the new-era printing revolution, and the trend is here to stay.
The Flatbed Digital Printer, or Flatbed UV Printer, commonly known as a flatbed printer, is capable of printing on a wide range of materials such as wood, acrylic, glass, plastic, cloth, ceramic, leather, etc. Technically, it has a flat surface upon which the material to be printed on is placed. These latest-technology printers use UV-curable inks, which enable easy printing on a wide range of materials. Incorporating the popular gloss, matte and embossing techniques has been easier than ever before.
The UV printers are more viable and preferable, both economically and environmentally. The earlier solvent printers produced much more cartridge wastage and caused pollution too. The new-technology printers are versatile, compact and environmentally friendly. In addition, the UV-cured inks are weather resistant and produce relatively less odor and heat.
How can UV Printers increase profits?
Have you designed a strategy to beat the competitors?
Have you noticed that your profits are waning over time?
Is all your profit going in dead stock?
Are you scared to handle the customer’s recent demands of personalization and customization?
All the above reasons are a major cause of declining profits. The Digital Flatbed UV Printer is a one-stop solution to all your profitability dilemmas. This new environment-friendly technique is sure to help you increase your customer base, retain existing clients and offer you better quality for a cheaper cost of production. Sounds too good to be true! Here are a few benefits that accrue with the UV Printers:
- Customization
The Large Format UV Printer and the Small UV Printer offer customization and personalization, which help attract and retain clients. The plethora of objects that can be printed on includes mobile phone cases, glass, wooden items, ceramic tiles, and practically anything that can be customized. Personalization has been the demand of the evolving market, both for domestic products and for office-use products.
- Cost effectiveness
With enhanced print quality the output improves, and thanks to the cost effectiveness, clients have claimed that profits have risen by 270%. UV-LED technology is highly affordable and the testimonials swear by the rising profits.
- Maintenance costs
This robust equipment has lower maintenance costs and requires less space than its predecessors. The Flatbed Printers are sure to reduce your annual repair and maintenance costs, thereby increasing profits.
- Versatility
UV-LED flatbed printers are highly versatile and perfect for clients who want to address multiple markets. You can print on any item. With the myriad options to print on, you can choose the higher-margin items, couple them with personalization, and hence capture a wider market. A wide range of printing, namely industrial, signs and graphics, has been possible due to the versatility of the Flatbed Printers.
- Faster production
With faster production and specialization in customizing a product, the technology has surely encouraged mass production, hence more profits. The flexibility of printing anything from a single item to a mass-production run has further leveraged higher profits.
The Conclusion:
With the ease of use and the ability to print on wavy and heat-sensitive materials, along with customization, the newfound Digital Printers are the solution to increase profits. Enhanced printing quality and improved output are the sole reasons you can retain existing clients and attract potential clients. Customers will automatically be drawn towards you, as you will have the ability to create objects that reflect their personal style and look.
CASA by the Numbers for Idaho
Key CASA/GAL Numbers For Idaho in 2017
1,863
Number of children and youth in child protection cases
3,323
Number of children in child protection cases
2,677 (81%)
Number of children served by volunteers or staff
395
Number of active volunteers
44
Counties served by 7 Local Programs
CASA is time well spent. It’s not easy or pretty, but it will change the course of a child’s life for the better. It will bring a great amount of satisfaction to a volunteer who is ready to change the community profile for tomorrow. – CASA Volunteer
TITLE: Allowed energies for a particle inside potential barrier
QUESTION [1 upvotes]: The allowed energies for a particle inside a potential well are discrete, and for a particle outside the well the energy spectrum is continuous. But what about a particle inside a potential barrier? Are there allowed values for the energy? And if there are, are they discrete or continuous?
REPLY [2 votes]: So, if I understand it correctly, the context of the question is quantum tunneling. Well, in such a scenario the inside of the barrier is classically forbidden to the particle. Hence, it does not have a spectrum there. What this means is that the propagation constant of a particle with a certain energy becomes imaginary inside the barrier, and its wave function thus becomes a decaying exponential.
Analogous scenarios exist in classical physics. Consider for instance light being launched into an optical fibre while its frequency is below the cut-off of the fibre. Instead of propagating along the fibre, the light decays. That is because there is no band ("energy level") in the fibre at that frequency.
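To make the "imaginary propagation constant" concrete, here is a small numerical sketch (my addition, not part of the original answer; it uses natural units hbar = m = 1, and the barrier numbers are assumptions chosen only for illustration). For E < V the wavevector becomes imaginary, so exp(ikx) turns into the decaying exponential exp(-kappa x) with kappa = sqrt(2m(V - E))/hbar:

```python
import math

def decay_constant(V, E, m=1.0, hbar=1.0):
    """Return kappa = sqrt(2m(V - E))/hbar for E < V: inside the barrier the
    propagation constant is imaginary, so exp(i k x) becomes exp(-kappa x)."""
    if E >= V:
        raise ValueError("E must be below the barrier height V")
    return math.sqrt(2.0 * m * (V - E)) / hbar

# Illustrative (assumed) numbers: barrier height V = 2, particle energy E = 1.
kappa = decay_constant(V=2.0, E=1.0)

# Probability density left after penetrating a depth L = 1 into the barrier,
# relative to the density at the barrier edge: |psi(L)|^2 / |psi(0)|^2.
density_ratio = math.exp(-2.0 * kappa * 1.0)
print(kappa, density_ratio)  # the wave decays; there is no spectrum inside
```

The point of the sketch is only that nothing oscillates inside the barrier: there is a decay rate, not an allowed energy level.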
In 2019:
Shhh. Don’t tell our kids.
- Fake old school cell phones. (They come in a pack of 4 or 5 for about $20.) So fun!! My kids LOVE playing with grownup stuff.
- Chapstick – our 22 month old has taken to putting it in his ear… so we’ll see how that goes #4thchild (This holiday flavor pack is fun!)
- Gum – Somehow, our kids still think gum is like an AMAZING special treat. Kids are awesome.
- Card games – Gonuts for donuts and Phase 10 are this year’s card games of choice. Some other stocking-stuffer sized games our kids have enjoyed are Spot It (we started with the “animals” jr version which is great ages 3-5, and eventually got the regular one.) Flash, Sleeping Queens, Uno, and this fun multi-pack of kid card games. (If you get that last one, they come individually packaged inside so you can totally get the pack of 6 games for $7 and divvy up the games between multiple kids!)
- Ornaments – I usually snag one at Target for each kid depending on what they’re interested in that year
- Rainbow makers/ prism catchers – seriously these are so fun!!! (Those are 2 for $8 ish, and have a hole in the top for a string to go through. Or if you want one with a chain, these are 2 for $12 ish.)
In years past, some other kid stocking stuffer hits were…
- Change purses – this 7 pack for $7 is fun! You could gift one to each kiddo, and save the rest for birthday gifts throughout the year.
- Pencils like this – remember these!?
- Other fun pens/ markers/ oil pastels/ art supplies
- Tiny notebooks
- Wooden train accessories – we did cool train bridges in stockings last year (bridge 1, bridge 2, bridge 3)
- “Real” keys on a keychain – are my kids the only ones who love playing with grownup stuff? Sparkle unicorns, superhero keychains, emoji, pizzas ha! If there are too many in a pack, save some for birthday gifts throughout the year.
- Fake cameras
- Jewelry or hair clips. My 5 year old this year said the only thing she wanted was a “clip collection.” Haha!! She’s getting these & these & a unicorn clip holder. I hope that’s what she meant! 🙂
unicorn | snap clips | butterfly clips
- Picture books like this with printed pictures they’d enjoy. Picture below is from Amazon but you can find them at the dollar store or walgreens for cheaper!
- Light sabers – My sister in law did these for her kids one year. Fun aunt award! These old school ones are classic & my kids have loved borrowing their cousins’ ones. These battery-operated LED ones look SUPER FUN too!
I’m sure I’m missing some – those are just a few ideas off the top of my head! What have been your kids’ favorite stocking stuffers? I’ll add to my idea list with your ideas!
I cannot believe it but I'm basically being handed an opportunity to check a bucket list item off the list. My company is sending me to Italy for work and I will tag on some days in the Alps to ski as well.
I'll only have 2 ski days due to time limitations, so I'm planning to make the most of the experience with what funds I have. I'd like, if snow conditions allow, to ski the Vallee Blanche with a guide. The difficulty I'm having is finding a company that will allow me to join a group, as I don't have the funds to pay for my very own guide. I will stay in Pre-Saint-Didier and can be flexible to do the Vallee Blanche on either February 22 or 23. Does anyone here have any leads on a company or contact?
While I'm at it, anything else I should be aware of or looking into? I'll take my boots and helmet but rent skis/poles. And I did buy insurance.
C and D Motosports
C And D MotoSports
20607 Soledad Canyon Rd
Canyon Country, CA 91351
(661)299-MOTO (6686)
Hours
Tuesday-Friday 9AM-5PM
Saturday 9AM-4PM
There is one motorcycle shop in Santa Clarita that goes above and beyond by offering service and repairs on all makes and models of ATV, UTV, street, and dirt bikes. C and D Motosports has a full line of riding gear, accessories, and parts. You will find the latest gear, boots, and helmets and other apparel.
They want to help keep your vehicle running smoothly. You will be sure that any hard parts, tires, and chemicals will be available. If something is not available, their staff will special order anything you may need that is not in stock.
C and D Motosports is family owned and operated and you are sure to get the best service and products that you can find in the Santa Clarita Valley. They offer all of the top brands and the latest apparel at their full retail shop. They are the go-to motorsports experts in the SCV and have the newest and highest quality parts available.
They have accounts with the biggest companies and race mod shops in the nation and offer full motor rebuilds, full suspension, and race mods. No matter what you need fixed, C and D Motosports offers a variety of services to keep your bikes running at peak performance from minor repairs to tire changes.
They have a strong passion for motorcycles that is matched by their quality maintenance, expertise, and technical skills. The experts are motorcycle enthusiasts and professional mechanics who help you get the most out of your bike.
They believe in building personal relationships, exchanging ideas, and talking shop. They are the most trusted service shop in Santa Clarita, conveniently located across from Home Depot in Canyon Country.
Swing by C and D Motosports and see everything they have to offer or visit their website or Facebook page. Once you get to know them, you will never go anywhere else when you need products or services for your motorbikes.
You can even view the showroom on their site for an inside peek at their apparel before you shop or even get to know background about some of their team members. Give C and D Motosports a call to find the best rates and have all of your questions about motorsports supplies and services answered.
This deal has ended.
A subscription to Woman's Day magazine drops from $9.99 to $4.99 with our code BRADSDEALS at DiscountMags today only. This is $0.41 per issue and the best per-year price we could find by $5. This magazine features advice on nutrition, fitness, fashion, family, and more. Please note that this deal expires at 11:59 PM ET tonight (4/19). No longer available.
TITLE: Distance from a closed set.
QUESTION [2 upvotes]: Let $X$ be a normed linear space and let $Y$ be a closed subspace of $X$. Suppose $(y_n)$ be a sequence in $Y$ such that $d(y_n, A)\to 0$ for a subset $A$ of $X$. Is it true that $d(y_n, A\cap Y)\to 0$? Neither I could prove nor could I find a counterexample. Any hint is appreciated.
REPLY [3 votes]: No.
Suppose $X=\mathbb{R}^2$ with the Euclidean norm. Let $Y=\mathbb{R}\times\{0\}$ and $A=\{(0,\frac{1}{n}):n\in\mathbb{N}\}$.
Now take $y_n = (\frac{1}{n},0)$; then $d(y_n,A) \to 0$ because $y_n$ is arbitrarily close to $(0,0)$, which is arbitrarily close to elements of $A$. But $A\cap Y=\emptyset$, so $d(y_n,Y\cap A)$ is not even defined. (If you want $d(y_n,Y\cap A)$ to be defined, just add to $A$ any point of $Y$ other than $(0,0)$, e.g. $(1,0)$; then we still have $d(y_n,A)\to 0$, but $d(y_n,Y\cap A)\to 1\neq 0$.)
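As a quick numerical sanity check of this counterexample (my own sketch, not part of the original answer; the infinite set $A$ is truncated at m = 10000 purely so the minimum below runs over a finite set):

```python
import math

# Y is the x-axis in R^2; A = {(0, 1/m)} truncated at m = 10000 (an assumption
# made only to keep the computation finite).
A = [(0.0, 1.0 / m) for m in range(1, 10001)]

def dist_to_set(p, S):
    """Euclidean distance from the point p to the finite set S."""
    return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in S)

# d(y_n, A) shrinks toward 0 as n grows ...
for n in (10, 100, 1000):
    print(n, dist_to_set((1.0 / n, 0.0), A))

# ... yet A contains no point of the x-axis, so A ∩ Y is empty.
assert all(q[1] > 0 for q in A)
```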
The Childfree Life
Facebook group
Childfree Resource
Network
Extensive links to childfree and population websites and essays.
Childfree
Webring
Over 40 Project
A source of information. For example, why couples choose to not breed:
The Top Six Motives are:
1) I love our life, our relationship, as it is, and having a child won’t enhance it.
2) I value freedom and independence.
3) I do not want to take on the responsibility of raising a child.
4) I have no desire to have a child, no maternal/paternal instinct.
5) I want to accomplish/experience things in life that would be difficult to do if I was a parent.
6) I want to focus my time and energy on my own interests, needs, or goals.”
Kidfree and Lovin’ It
“The Complete Site for Living as a Happy Non-Parent”
Moral Childfree
Argues that it’s immoral to sentence another human to a life which will consist of more suffering than pleasure.
Childfree Reflections
Living Childfree-by-Choice, with Marcia Drut-Davis. “My goal is to educate, support, and inspire honest communication about the choice not to parent.”
Confessions of a Childfree Woman: a life spent swimming against the mainstream 2013.
Why Women Choose Not to Have Kids
Temma Ehrenfeld
Oct 5, 2012.
My Mother’s Day Gift To The Planet: Not Having Kids
Chris Bolgiano
May 9, 2010
The Myth of Joyful Parenthood
The more kids cost, the more we idealize raising them
Valerie Ross
August 15, 2011
So Cute, So Hard on a Marriage
After Baby, Men and Women Are Unhappy in Different Ways; Pushing Pre-Emptive Steps
Andrea Petersen
April 28, 2011
Numerous studies have shown that a couples’ satisfaction with their marriage takes a nose dive after the first child is born. Sleepless nights and fights over whose turn it is to change diapers can leach the fun out of a relationship.
Pressured To Have Children?
Healthy & Green Living Editors
April 24, 2011
Direct and Subtle Pressure to Have Children—How Can a Childfree Wannabe Cope? Exploring all facets of childfree living.
Complete Without Kids Book By Ellen Walker, Ph.D.
The No-Baby Boom
A growing number of couples are choosing to live child-free. And you might be joining their ranks.
Brian Frazer
April 2011
“And the sheep-to-human ratio in New Zealand, which currently stands at 10 to 1, seems sure to increase, since a staggering 18 percent of adult men there have elected to get vasectomies.”
...of the more than 60 people Laura Scott interviewed for Two Is Enough (some as old as 66), not one expressed qualms about his or her decision. Actually, regret is more common among the breeders. In a 2003 survey of more than 20,000 parents that Dr. Phil conducted for his show, 40 percent reported that they wouldn’t have had kids if they’d realized the difficulties of raising a family.
In the US: 58,410,000 married couples and 26,896,000 childfree married couples [no year given for data]
2010: The year childfree went mainstream (thanks, Oprah!)
Lisa Hymas
December 30, 2010
Childfree Clique
A blog with comments.
Happily Childfree
A blog with links.
TITLE: Convex Function problem
QUESTION [0 upvotes]: How can I prove the following?
Let $f: [0,+\infty)\to\mathbb{R}$ be a convex function of class $C^1$, with $f'(0)> 0$.
Prove that
$$\lim_{x \to +\infty}f(x)=+\infty$$
and that the following limit exists and is finite:
$$\lim_{x \to +\infty}\frac{x}{f(x)}$$
REPLY [0 votes]: A standard result is that a $C^1$ convex function on $\mathbb R$ (or any open interval) has an increasing derivative. So
$$f(x)=\int_0^xf'(t)dt+f(0)\geq\int_0^xf'(0)dt+f(0)=xf'(0)+f(0),$$
and we get the result since $f'(0)>0$.
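As a numerical illustration of this linear lower bound (my own sketch, not part of the original answer; the convex example f(x) = x^2 + x is an assumption, chosen so that f(0) = 0 and f'(0) = 1 > 0):

```python
# Convex example (assumed for illustration): f(x) = x^2 + x.
def f(x):
    return x * x + x

f0, fp0 = f(0.0), 1.0  # f(0) = 0 and f'(0) = 1 for this choice of f

# The tangent-line bound f(x) >= x f'(0) + f(0) forces f(x) -> +infinity.
for x in [0.5, 1.0, 10.0, 1e3, 1e6]:
    lower = x * fp0 + f0
    assert f(x) >= lower
    print(x, f(x), lower)
```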
For the second question, note that $f'$ is non-decreasing, so it has a limit $L=\lim_{x\to+\infty}f'(x)\in(0,+\infty]$, with $L\geq f'(0)>0$. By the first part $f(x)\to+\infty$, so we can fix $x_0>0$ such that $f(x)\geq f(x_0)>0$ for all $x\geq x_0$ (recall $f$ is increasing, since $f'\geq f'(0)>0$). For $x>x_0$,
$$\frac{f(x)}{x}=\frac{f(x_0)}{x}+\frac{1}{x}\int_{x_0}^{x}f'(t)\,dt,$$
and since $f'(t)\to L$, the averages $\frac{1}{x}\int_{x_0}^{x}f'(t)\,dt$ also tend to $L$ (this holds whether $L$ is finite or $+\infty$). Hence $\frac{f(x)}{x}\to L$, and therefore
$$\lim_{x\to +\infty}\frac{x}{f(x)}=\frac{1}{L},$$
which exists and is finite, equal to $0$ when $L=+\infty$.
\section{Questions}\label{question}
We foresee many directions this work can be expanded. In this section we will lay out a few of these possible directions. Our first question is perhaps the most important to expanding these codes to solve any error not just those defined by single qudit errors.
\begin{quest}{1}
Can one insert CNOT gates into the graph avoidance representation to extend these codes to have universal fault tolerance?
\end{quest}
Our next questions come in pairs, to encourage quantum information theorists to collaborate with graph theorists.
\begin{quest}{2(a)}
What properties of the graph, $G_{\mathscr{E}}$, are required for the existence of a Sub-LUC graph?
\end{quest}
\begin{quest}{2(b)}
What properties of the error set $\mathscr{E}$ create a graph, $G_{\mathscr{E}}$, for which a Sub-LUC graph exists?
\end{quest}
\begin{quest}{3(a)}
Which graphs, $G_{\mathscr{E}}$ and $G_{\mathscr{E}'}$, have the same Sub-LUC graph?
\end{quest}
\begin{quest}{3(b)}
Which error sets $\mathscr{E}$ and $\mathscr{E}'$ create graphs, $G_{\mathscr{E}}$ and $G_{\mathscr{E}'}$, that have the same Sub-LUC graph?
\end{quest}
Quantum random walks are a field of study already at the intersection of quantum information and algebraic graph theory. Indeed, quantum random walks have been shown to be universal for quantum computation. An important property to exploit for quantum algorithms is perfect or group state transfer \cite{pst,fracrevival,gst}. The next question expands the possible routes taken to study (extended) reflexive stabilizer codes using quantum random walks.
\begin{quest}{4}
What LUC graphs have state transfer with quantum random walks, continuous or discrete? \cite{cayleywalk}
\end{quest}
The next question is an option to incorporate graph-theory techniques into the study of quantum error-correcting codes.
\begin{quest}{5}
Using limiting properties of graphs or graphons, can one find a GV-Bound for extended reflexive stabilizer codes? \cite{graphons,gvbound}
\end{quest}
Tilbury Port Taxi Service
From £95. Tilbury: the gateway to London and the South East. My-LondonTaxi.com will get you there comfortably and on time.
The Port of Tilbury is known as London’s major port. It provides fast, modern distribution for the south east of England and far beyond. Tilbury is a very dynamic and diverse port.
The Port handles the full range of cargoes and boasts a specialist expertise in handling paper and forest products, containers, grain, bulk commodities, and construction and building materials. Tilbury enjoys a strategic location which makes it a natural point of distribution.
The Port at Tilbury is a multi-modal hub, which covers more than 850 acres and is well positioned for the M25 orbital motorway and the UK’s national motorway network. In addition, there are direct rail connections inside the Port, with access to the whole of the UK and dedicated barge facilities.
Following the acquisition of the Deep Sea container Terminal in 2012, London Container Terminal (LCT) was created providing a unique combination of European Short Sea and Deep Sea services at the same facility. As the UK’s third largest container terminal, LCT handles over half a million containers per year.
As part of an exciting new development, London Distribution Park (LDP) immediately adjacent to the Port at Tilbury is currently under construction – in a joint venture with the specialist industrial development company, Roxhill. This 70 acre facility will complement the port’s successful distribution capability and further develop Forth Ports’ portcentric strategy, making Tilbury London’s leading logistics and distribution hub.
If you need a taxi connection to the Port at Tilbury, look no further than My-LondonTaxi.com. We offer clean and spacious vehicles like the Mercedes Vito – a 6-seat taxi with lots of room for luggage. Let our skilled drivers get you to your port on time.
Kini Biz
Bizphere Brand & Marketing Group, an integrated brand and marketing management company which focuses on SME branding, has released the findings of its online survey titled “Malaysian SMEs’ Brand Potential”.
The objective of the survey was to gauge the branding challenges and potential of Malaysian SMEs to become a global brand.
The survey showed that while most SME brands are not well known, or possess low brand awareness locally, up to 62% of SME owners are confident that in five years their brand will be known regionally or even globally.
Bizsphere head of SME Marketing Support Sherine Ho believes that this is the mindset needed by SME owners seeking to prosper in the global market.
“Malaysian entrepreneurs have begun to recognize the importance of branding. With the liberalisation of the international market and intense competition among local and incoming foreign brands, it is just not enough to produce quality products. People buy a brand which is relevant, differentiated and engaging,” said Ho.
Most respondents in the survey were able to recall global brand names with ease but found it hard to name Malaysian brand names, especially the SMEs. In 20 seconds, respondents were able to name an average of 9.3 global brands but only 5.4 Malaysian SME brands.
The top five most mentioned Malaysian brand names were AirAsia, Proton, Petronas, Maxis and Maybank, whilst Ramly Burger, Nelson’s Corn, OldTown, Secret Recipe and Ayam were the top five most mentioned Malaysian SME brands.
It is an interesting fact that many brands have been mistaken as being local due to their regular presence in our daily lives. For example, many claim that Ayam is a Malaysian brand but the truth is, the brand is owned by Ayam SARL of France.
“Malaysian SMEs have what it takes to be global brands. In addition to trade, our government has done a lot to promote Malaysian SMEs as brands of quality, excellence and distinction which meet international standards. For example, The Malaysian Brand Certification Scheme or National Mark was launched in 2009 with a wide spectrum of benefits. Despite being heavily promoted, only 42% of SME owners are aware of this programme.” explained Ho.
Malaysian SME owners should be more proactive in researching government support programmes which are relevant to them.
The survey also showed that the lack of a comprehensive brand plan and financial support for brand communications are two of the top challenges faced by Malaysian SMEs trying to take their brands to the next level.
Bizsphere associate consultant Erin Foong feels that these findings are not surprising, as eight out of every 10 SME owners she has met brand themselves on an ad-hoc basis with no proper strategy.
“Most entrepreneurs are communicating what they want to say instead of what their target customer would like to hear. As for financial support, the key is not how much budget you have: it is how best to allocate your resources to meet your key brand and business objectives.”
Eight percent of the respondents wanted to have dedicated personnel or a specific unit to take care of brand & marketing communications as well as an annual budget allocation for branding & promotions.
Foong says it is still a common practice among SME owners to prioritise sales over branding. Branding therefore becomes something that an employee or department does on top of something else like sales.
View Dr. Fern’s Hospital Award Display! Posted on September 4, 2013 David Fern, MD, has practiced medicine since 1971 and has been a DeKalb Medical Surgeon since 1978. As part of his mission to serve others, Dr. Fern joined “Heart for Africa” and has traveled to South Africa and Kenya to provide free medical care. His most recent trip was to Kenya in November 2007.
Vets Now are seeking a Veterinary Nurse to thrive in the exciting world of emergency and critical care. The caseload is diverse and will present the perfect platform to utilise your skill set. Ideal for those nurses looking for a second job or a break from the monotony of day practice.
Our supportive UK network of emergency clinics and hospitals work in collaboration to ensure that we can offer the best emergency and critical care. Whether it’s sharing best practice or seeking advice on a particularly complex case, you will never be alone at Vets Now. You will be fully supported in Gillingham by our Principal Nurse Manager, expert Vet Surgeons and District Managers.
We take pride in the work that we do and act with integrity, and that’s why we are committed to investing in the career progression of our people. We offer a generous CPD allowance, with various courses on offer, including enrolment on the accredited Certificate of Veterinary Nursing in Emergency and Critical Care. We will also cover VDS and RCVS fees.
Our clinic in Gillingham boasts a great team and is centrally located with easy access to motorways.
We are looking for a Veterinary Nurse who is able to work Sunday days from 12pm to 12.
I wake feeling better rested than any other night on the trip – perhaps due to a combination of a slightly warmer night and setting up the tent on a comfortable slope.
There is no more sausage and we are forced to fall back to oatmeal. Historically, this is the fork in the road that leads to either a return to civilization or the Donner Party. We will return to civilization later in the day… hopefully.
Camp is broken and we make our way south to the Painter’s Pots. The valley containing these pools of bubbling mud is almost entirely filled with steam.
Making our way to the western exit of the park, we stop at Gibbon Falls to take pictures. Although the falls are impressive, I am disappointed that a.) there do not seem to be any gibbons and b.) that I am not very good at taking pictures of waterfalls.
A final buffalo bids us farewell and we soon find ourselves in a seemingly empty corner of Montana. There is little, if anything, notable about the drive from Yellowstone to Idaho.
Panda Express for lunch in Idaho Falls, then down to Blackfoot to visit the Idaho Potato Museum. My expectations are low and when we arrive they are met appropriately. Aside from some spectacular whitewashing on why Idaho is a great place to grow potatoes even though it’s effectively a desert (“We can more accurately irrigate the crops”), the most entertaining thing about our visit is listening to Terry grill the woman who runs the front desk about the Idaho Potato Council’s executives and their recently launched (and completely bizarre) comic book series.
I buy a coffee mug that depicts a potato-themed version of American Gothic and a Vitruvian Potato magnet.
Our next stop is the Collector’s Corner Museum, a recommendation from the Roadtrippers app. From the outside it looks like a bodega full of Precious Moments figurines. I tell Terry and Matt that I’m going to sit in the car if we find out they charge admission.
An elderly gentleman greets us, yelling towards the back of the building for his wife to turn on the lights because they have visitors. Partially inspired by both their charm and a burbling of sympathy for the obvious lack of foot traffic, I hand over five dollars to buy my entry.
The woman tells us that flash photography isn’t allowed as she eyes our DSLRs. I frown. She says, “But if you want to take pictures of your friends standing in front of something, that’s OK.”
The interior of the museum is larger than it seemed from the outside and is filled with dozens of glass display cases. Some of the collections are moderately interesting – in particular, a knife collection contains several Nazi and Axis blades. I notice a number of swastika-bearing items I’m fairly sure are both difficult and in some cases, illegal, to purchase and collect.
The old man is a WWII vet and shares stories and a neat scrapbook with us that contains news clippings from the war. “Germans attack with robot planes” is one of the more interesting headlines.
His wife chimes in with random facts about each of the collections but seems to have a good sense of when she’s hovering and never becomes annoying. Both seem to love their collecting hobby and I can’t help but feel a little sad that the things they’ve collected (and the surrounding history) that matter so much to them won’t mean much to anyone in a few years.
I loathe junk, junk stores, and junk collections, often looking down my nose at people who are passionate about collecting “things”, but the tiny couple that runs the Collector’s Corner gets a pass (for all that my approval matters). I can’t manage any cynicism or spite towards them, even when I try.
Our home for the evening is a surprisingly clean Super 8. The front desk clerk gives us one of the few, decent, non-apathetic restaurant recommendations I’ve ever received from a hotel clerk.
I am normally dubious of any restaurant claiming to serve “kobe” burgers, but would recommend the Snow Eagle Brewery to anyone passing through Idaho Falls. Given that they serve high-point beer and I am very tired it’s possible that the burger was just OK and my recommendation should be taken with a grain of salt. But it is probably very good, possibly maybe.
There is another point of view though, since Amsterdam is considered by many to be the "sin" city. The free use of cannabis, the smell of which wanders in the city's air and the Red Light District, one of the world's most taboo attractions, make Amsterdam the most alternative city in Europe.
Although Amsterdam is a relatively small city, there are so many things to do that it is very difficult to get bored! It’s a great city for a weekend getaway or for a longer stay for those loving art, history, food and nightlife. We visited Amsterdam for 3 days and what we did was to choose a centrally located hotel in order to be able to explore the majority of the city's sights in our limited time (read more about my stay in Hotel Dwars, here).
Take a canal ride
Amsterdam's beautiful canals are one of the reasons why the city is so famous worldwide but also one of the main reasons of the incredible charm of the capital of the Netherlands. One of the things you have to do if you find yourself in Amsterdam is to choose among the many cruise companies offering canal rides so you can explore and admire the city from another perspective!
Iamsterdam Sign
Iamsterdam is a sign that, in just a few years, has managed to become the landmark of the city, with tourists crowding to take pictures in front, above and next to it! The whole area around the "famous" sign is very beautiful, with lots of museums, while if you visit the city during winter you will have the chance to skate on the ice rink that stands in front of the Iamsterdam sign, which during the spring turns into a beautiful lake with colorful tulips.
Van Gogh Museum
The Van Gogh Museum, located on the Square of the Museums, is one of the most famous museums in the world and of course one of the most popular in the city of Amsterdam. Here, the lovers of Impressionism and the famous Dutch painter, have the opportunity to admire the world's largest collection of Van Gogh paintings and drawings including the famous sunflower painting, and to learn a lot about his life.
Rijksmuseum
Just next to the Van Gogh Museum, there is another great museum, the National Museum of the Netherlands, the imposing and impressive Rijksmuseum. This is the building every one of us has observed behind the Iamsterdam sign. In Rijksmuseum, there are more than one million exhibits, including works from the famous Dutch painters Rembrandt, Stein and Vermeer.
Ride a bike
As you probably already know, Amsterdam is one of the most bicycle-friendly capitals of the world, with 60% of city trips being made by bike. What you may not know is the fact that although cycling is a great experience and a great way to see the city, cycling in Amsterdam's city center can be quite a difficult task, especially if you are a first timer! A wonderful place to ride a bike without leaving the city center while avoiding bike traffic on the streets is the huge and beautiful Vondelpark.
Anne Frank House
The Anne Frank House is one of the most visited museums in the Netherlands. Here, in a secret room at the back of the house, during the Second World War, Anne Frank tried to hide from the Nazis with her family and four other people. Anne Frank did not survive the war, but in 1947 her diary was published and became one of the most widely read books in the world. What you have to do if you plan to visit the Anne Frank House is to go early in the morning and to get your ticket electronically from the official site of the museum in order to avoid the queues.
Albert Cuypmarket
One of the things I love while traveling is to visit open-air markets and try some local street food. Albert Cuyp market is located in De Pijp area and is open every day except Sundays. In fact, this is a road along which there are hundreds of stalls in which anyone can find everything from clothes and shoes to flowers and excellent street food. It is worth a visit to try out, among other things, the delicious stroopwafels and the fluffy poffertjes (read more about what to eat in Amsterdam, here).
Red Light District
Apart from one of the oldest neighborhoods in Amsterdam, the area with the windows and red curtains, known worldwide as the Red Light District, hosts some of the most unique brothels in the world, which are one of the city's main tourist attractions. If you want to walk in the area and see this special sight, start from Oude Kerk (Old Church) and wander around.
Iamsterdam City Card
During my trip to Amsterdam, I purchased the 72 hours, Iamsterdam, City Card. Why? Amsterdam has many interesting museums and attractions, but their entrance fees are quite expensive in proportion to the rest of Europe. For this reason, if you plan to visit some of the city's most important museums as I did, I would recommend you to buy an IAmsterdam City Card. This card includes free admission to the city's most popular museums and attractions (e.g. the Van Gogh Museum, Rijksmuseum, Rembrandt House, NEMO Museum, Amsterdam's Zoo etc), free public transport, free canal cruises and many discounts in restaurants, cafes and shops. You can choose the card that suits you best depending on the days you are going to stay in the city, as there are 24, 48, 72 and 96 hours cards.
Disclaimer: Many thanks to Iamsterdam. As always, all the opinions listed in this article are mine!
Boston (5K, 10K, or 13.1 Half Marathon) or at any destination of your choice.
We will ship a cool shirt, water carabiner, drawstring bag and a large 3" medal to you after you complete your run!
What is a Virtual Run and 10 Reasons Why You Should Run in a Virtual Themed Run Shirt
- 3" Runners Medal
- Free Runners Goodie! You can complete your run anytime and anywhere, before or after your package arrives.
A start-up business plan sample for a coffee shop i executive summary guatemala paradise is a start-up business, scheduled to provide products and services as a sole trade business. Free essay: internet coffee shop marketing plan javanet internet cafe executive summary 10 executive summary the goal of this marketing plan is to outline. The marketing plan of a new starbucks coffee shop includes key aspects: first of all, it's positioning and basic characteristics characteristics of potential. Upload your essay browse editors build your the company profile and competitions of the coffee shop the dark side 4,729 words 11 pages a description of the . Read this essay on coffee shop come browse our large digital warehouse of free sample essays coffee shop marketing plan for fortune coffee and cake shop .
Essay about coffee shop business plan operate a small business to sell coffee to patrons the business started out with one owner hannah being sole proprietorship venture. Need a business plan on starting a coffee shop business plan here is a sample temple coffee house business plan to help you get started if you need help writing a coffee shop business plan, click on order now to place your order. Coffee shop business plan coffee shop business plan essay info: 23365 words the business is a coffee shop and a laundromat a two in one service, intended .
Business plan coffee shop coffee shop for $ 50-100 thousand market situation coffee boom that seized america, and then portugal, has come to us is explained by the fact that the growing popularity of refreshing drink in the population, attracts more and more attention to this business. Below is an essay on coffee shop from anti essays, your source for research papers, essays, and term paper examples business plan - coffee shop. Café vancouver is a new coffee shop at granville and robson street in vancouver downtown we will write a custom essay sample on coffee cafe business plan . Coffee shop business plan essay by shashankvns97, october 2014 download word file, 42 pages, 00 downloaded 3 times keywords marketing strategy . Coffee shop marketing plan essay sample marketing needscoffee break is a unique coffee shop/local bar that is organized to try and facilitate new spiritual friendships and singles can meet.
Management assignment free sample on business expansion plan of a coffee shop made by our phd management assignment help experts call +1(213)438-9854 or livechat now. Coffee shop business plan essay the height’s café marketing plan executive summary the heights café will be a coffee shop located on station avenue in the historic town of haddon heights. Project management and coffee shop essay a+ pages: we will write a custom essay sample on project management and coffee shop coffee shop business plan . Essay outline/plan service introduction to starting a coffee shop marketing essay most people use around 200,000 dollars opening up a coffee shop in the .
Java culture coffee bar is determined to become a daily necessity for local coffee addicts, a place to dream of as you try to escape the daily . The coffee shop essays last night, i was going to my favorite coffee shop in richmond when i entered the coffee shop, there was no one in the shop drinking coffee in this windy day. Internet coffee shop marketing plan javanet internet cafe executive summary 1 0 executive summary the goal of this marketing plan is to outline the strategies, tactics, and programs that will make the sales goals outlined in the javanet business plan a reality in the year 1999.. That cute coffee shop is a business that sells coffee and sweet treats it's a small scale bakery plus coffee shop for locals this will be a place to come hangout, work, and study while enjoying coffee it will be a partnership with abby brindley as 75-25 percent ownership i will own the 75%. . Coffee shop business plan mission palace café a local coffee house that will be along the shoreline of corona del mar, california palace café is a coffee house that will be serving coffee, espressos, food, and pastries to its customers. Coffee shop essays: over 180,000 coffee shop essays, coffee shop term papers, coffee shop research paper, book reports 184 990 essays, term and research papers available for unlimited access.
Have you ever experienced being asked a number of times why God allows not so good things to happen? Hmmmm …. I have. Initially, it catches me by surprise, but when I start to look into it again and again, it actually takes me into a deeper thought with more treasures of revelations unveiled. Indeed, a king’s glory is shown in his ability to explore the facts of the matter of what has been asked for or said.
“God conceals the revelation of His word
in the hiding place of His glory.
But the honor of kings is revealed
by how they thoroughly search out
the deeper meaning of all that God says.” (Proverbs 25:2 TPT)
Apart from Jesus, no one has walked on earth with the full revelation of our Heavenly Father’s ultimate thoughts, ways, plans, purposes and will, let alone the fullness of Who He is. We only know in part, see in part, declare in part and prophesy in part. But, the best part of walking in this journey with Him is that He constantly and continuously reveals Himself to us that adds knowledge and revelation deposited into the reservoir of our hearts. We grow in Him. What we receive from the unveiled truth of His Word powerfully influences us from the inside and transforms us on the outside beyond measure and beyond comprehension. It is done by the grace of God through our faith in the Lord Jesus Christ and our intimate fellowship with the Holy Spirit Who is the Revealer of all truth.
The question about why God allows bad things to happen only arises when we do not know Who He truly is, that He is a GOOD GOD. Even Jesus said to the one who addressed Him as “Good Master”, “Why do you call Me good? NO ONE is [essentially] good [by nature] except GOD ALONE.” (Mark 10:18 AMP) Because God is GOOD ALL THE TIME, there is nothing that can come out of Him that is contrary to Who He is. Otherwise, one can simply say, God is good sometimes. Right? But, praise be to God that He is consistent. He is reliable. He is faithful. “God is not human, that He should lie, not a human being, that He should change His mind. Does He speak and then not act? Does He promise and not fulfill?” (Numbers 23:19) He never changes His mind or plays tricks in the shadows. He is the same yesterday, today and forever. Our God is GOOD! Period!
Thanks be to God that “EVERY GIFT God freely gives us is GOOD and PERFECT, streaming down from the Father of lights, who shines from the heavens with no hidden shadow or darkness and IS NEVER SUBJECT TO CHANGE.” (James 1:17 TPT)
Bad things happen as a result of the fall in the Garden of Eden. When the sin of disobedience happened, it brought curse to the world. God DID NOT curse the earth. Adam brought the curse upon the earth when he turned his dominion over to satan. It was Adam’s disobedience that produced people who were subject to the earth and its circumstances.
But God Who is RICH in MERCY and ABOUNDING in LOVE, gave us His only begotten and beloved Son Jesus Christ, that through His Blood shed at the cross, the remission of sins was made possible. His finished work made us new creations in Christ with His divine nature deposited in us that displaced the sinful nature that we once inherited from Adam. The last Adam, Jesus Christ, produced new men who sit above the natural realm and the day to day circumstances.
Yes, bad things still happen due to the above-mentioned reason, but when we received Jesus Christ as our Lord and Savior, we became His children, REDEEMED from the curse of the law by Him becoming a curse for us. He Who knew no sin became sin so that you and I will become the righteousness of God in Christ Jesus, postured to receive.
We are called to be children and friends of God like Abraham. We are entitled to the same physical, financial and spiritual blessings that Abraham received. We enact these blessings by faith. God makes the promise, we do the believing. We speak His Word, move the mountain and receive the blessing.
Jesus never said that we will have a trial-free life. What He promised and guaranteed was for us not to fear for He has triumphed over this corrupt world order.
If you are a father, would you allow bad things to happen to your children? No way! You will do all that you can to protect them. How then can we ever think that God Who is our Father allows bad things to happen to His children? Have you ever pondered on that? If an earthly father is over protective of his children, HOW MUCH MORE does our Heavenly Father? Don't you think so? If this is one of His promises, which He is faithful to fulfill, does this make you think He allows bad things to happen to His children?
He Who is GOOD is always blamed for the bad things going on around us, that even the calamities written in insurance policies are called “acts of God”, oh well, it used to be. Hopefully it has been revised by now.
God, our Father, NEVER allows bad things to happen to His children. This is the assurance that we have in Christ. He sent us Jesus because He deeply loves you and me. “He sends forth His word and heals us and rescues us from the pit and destruction” (Psalm 107:20). How can He heal and rescue if He was the one who sent the pit and destruction at the same time? Don’t you think it’s odd? We are to take into consideration that because of sin, the enemy gained a foothold of the earth, hence, Jesus came to deal with our sins once and for all. He shed His blood so that forgiveness was made available and its benefit is forever. He not just forgave and washed away our sins but the Word says, “He remembers them NO MORE!”
“For I will be merciful to their unrighteousness, and their sins and their lawless deeds I will remember NO MORE.” (Hebrews 8:12)
“You see, God takes all our crimes—our seemingly inexhaustible sins—and removes them. As far as east is from the west, He removes them from us.” (Psalm 103:12 The Voice)
This is the beauty of the finished work of Jesus Christ and this is the NEW and EVERLASTING COVENANT with better promises. For us to walk in it, it takes faith to believe what Jesus has done. It takes faith to receive the forgiveness that He made available for us. It takes faith to say, “Be it done to me according to Your Word.” If we do not think the way we are described in God’s Word, being THE RIGHTEOUSNESS OF GOD IN CHRIST, being GREATLY BLESSED, HIGHLY FAVORED AND DEEPLY LOVED, we will not walk as His Word says. His resurrection justified us and justification by faith means He sees us just as if we have not sinned. Redemption by the blood of Jesus Christ postures us to walk in the blessings that God has ordained for us to have. It is the kind of blessing that no one can reverse.
“The thief comes only in order to steal and kill and destroy. I CAME that they may HAVE and ENJOY LIFE, and have it in abundance (to the full, till it overflows).” (John 10:10 AMPC)
Jesus came to undo, destroy, loosen, and dissolve the works the devil has done. He came so we can have real and eternal life, more and better life than we ever dreamed of.
“He who sins is of the devil, for the devil has sinned from the beginning. For this purpose the Son of God was manifested, that He might destroy the works of the devil.” (1 John 3:8 NKJV)
Jesus came to reveal the Father. He only does what He sees the Father doing and speaks what He hears Him speak. He demonstrated the love of the Father by His absolute obedience up to the point of death, for the joy that was set before Him, seeing you and I fully restored into our Heavenly Father’s embrace. Yes, Jesus came to restore us who were once lost and is still restoring to this very second whoever believes in Him, until we all become found in Him. Thus, His birth was the JOY to the world for the Lord has come.
Because no one has fully grasped the truth of the goodness of God, the majority still thinks that it is He who authors and allows all the not-so-good things to happen, that these things are meant to purify, prune and discipline us. But what I desire to point out is that He did not author nor allow them. It's just that because of the sin issue, these gained access to our world and now become a part of God's chess pieces that He uses for us to arise and overcome. Mind you, if trials and hardships made people purified and fruitful, all of us should be as a result, because we all go through them. But you see, this is not the case. Many resulted in a downward spiral because of these hard challenges.
God gave us HIS WORD as the primary key to overcome, and Jesus’ Name is the Word of God. He is the Word made flesh and dwelt among us. Our faith in the Lord Jesus Christ which is synonymous to our faith in His Word is what should remain, like gold tested in fire. Think with me on this, if we face these trials apart from the Word of God, apart from the finished work of Jesus Christ, we can never be victorious over it for it is He Who ALWAYS causes us to triumph. It is not by might nor by power, but by the Holy Spirit.
.” (1 Peter 1:5-7 TPT)
We are all a work in progress. We somehow came from such a mindset at one point. But as we go from faith to faith, as we grow in the revelation of His Word line upon line, precept upon precept, from glory to glory, it changes and renews our thinking that brings transformation in our lives. Our confidence rises up to a different level when we come to this knowledge that ONLY and SURELY HIS MERCY AND HIS GOODNESS SHALL PURSUE US ALL THE DAYS OF OUR LIVES, all because we are the righteousness of God in Christ Jesus. It is the kind of righteousness by faith that is established on the finished work of Jesus Christ and not by our self-righteousness. There will be an increment in how we perceive His trustworthiness and reliability knowing that He always delights in the prosperity of His people because He sees us justified. It is not about our works. But it is all about what He has done that changed our being and eventually affects our doing.
Yes, we live in a fallen world with storms in our midst, but we no longer point our fingers to the enemy for he is a defeated foe, disarmed, paralyzed and destroyed through the suffering, death and resurrection of our Lord Jesus Christ. We do not fight a defeated foe. The Word of God says, we fight a good fight of faith wearing the full armor of God. We fight not to gain victory, but we fight FROM the victory that Jesus has won for us. We have been declared victors and no longer victims, on top and not underneath, the head and not the tail, blessed and no longer cursed. We have been declared sons of God and no longer slaves to sin. "We seek (aim at and strive after) first of all His kingdom and His righteousness (His way of doing and being right), and then all these things taken together will be given you besides" (Matthew 6:33 AMPC). We set our minds and keep them on "what is above (the higher things), not on the things that are on the earth" (Colossians 3:2 AMPC). Instead of reacting to what is being thrown at us, we respond to what is being spoken to us by His Word.
Instead of focusing on the lie that God allows bad things to happen in our lives, why not shift and turn our eyes unto Jesus Who is the author and finisher of our faith. Why not open our hearts and make room for Him to live in and through us. Why not allow Him to bring forth through us God’s promises of redemption and restoration of power and authority that we are no longer subject to the rule of the enemy, but he becomes subject to us?
Jesus’ coming to earth is the antidote to the bad things that came with the curse. Jesus is the ANSWER. Jesus is the REDEMPTIVE SOLUTION. Jesus reversed the curse into blessings. Once God blesses, who can put an end to it? His Word says as spoken by Baalam to Balak, “Behold, I have received a command to bless; When HE has blessed, then I cannot revoke it.” (Numbers 23:20)
When I was meditating on this Word in Numbers 23, my heart burst out with joy at this revelation. This was about Balak who wanted Balaam to curse Israel. But when Balaam went into the presence of God and met Him, God put a word in his mouth to be spoken to Balak.
He has not observed iniquity in Jacob, nor has He seen wickedness in Israel. The Lord his God is with him, and the shout of a King is among them. (Numbers 23:18-21)
If you look into the context of these verses, because God has NOT seen iniquity nor wickedness in Israel, instead God saw their unrighteousness covered because of their annual atonement of sins by the blood of the unblemished lamb, God was up to BLESS them with a blessing that cannot be reversed. In addition to that, the next verse mentioned that the Lord his God was with him and the shout of a King is among them.
How much more now that it is not the blood of an animal that brought forgiveness to our unrighteousness, but the Blood of the Lamb of God, Jesus Christ? His precious blood was not shed to cover but to wash away our sins, not yearly, but ONCE AND FOR ALL, and its effect is eternal. Don’t you think this will permanently set us up into the blessings of God that can never be revoked because He lives inside of us Whose mighty shout and power no one and nothing can come against?
Let us learn to walk in the unforced rhythm of His grace. He will not force Himself on us. Instead, He will reveal to us His goodness and when we see it, it brings us to bend our knees and worship the One Who loves us dearly and deeply. It is our personal revelation of God's love that displaces the misconceptions of why He allows bad things to happen in our lives. It is our intimate relationship with Him that allows us to collaborate with Him, bringing blessings that far outweigh the trials that we face. It is our awareness of His presence, His being with us, for us, upon us and in us, that brings fullness of joy in our hearts, and the JOY OF THE LORD becomes our strength to endure every eventuality that this fallen world brings.
“I have loved you with an everlasting love, therefore I have continued my faithfulness to you.” (Jeremiah 31:3)
Prayer: Heavenly Father, thank You for Your love, mercy and grace. Your love always put our hearts to rest that even when we see you contrary to Who You really are, You are always there to reveal to us Your loving patience towards us, because You do not want anyone to perish but all to come to repentance and know You. Thank You, Jesus, that even at the cross, You still cried out to the Father, “Forgive them, Father, for they do not know what they are doing.” Such love and compassion pierces our hearts with utmost thanksgiving for all that You have done for us. Thank You, Holy Spirit, for opening our eyes to see day in and day out what it means to be loved beyond measure and to be blessed forever, without revocation nor cancellation of what God has promised! Amen!
Blears to have final say on new Liverpool stadium
Designed by Texas-based HKS, the 60,000-seat ground is likely to be given the green light later today; however, due to the significance of the scheme, Blears can decide whether she wants to call in the plans for a full public inquiry.
It will be the second time the stadium will be put in front of government, after former deputy prime minister John Prescott had the final say on the previous plans.
But planning approval for that project was torn up and then-architect Atherden Fuller Leng’s designs scrapped following the arrival of Liverpool FC’s new owners George Gillett and Tom Hicks.
A department for communities and local government spokesman said: ‘As this project has more than just local importance, the council is required to put it forward to government to make the final decision. It’s very standard practice.’
The council’s planning decision comes just over a week after it was revealed the new stadium costs had risen by £150 million. Hicks and Gillett are currently seeking a £500 million financial package to cover any further costs.
If Blears does not call the project in, it will pave the way for work to start on site by next March.
| 14,701
|
Terrace Collection Indoor-Outdoor Rug - Vine
These durable, easy-care rugs can be used indoors or outdoors.
Place this solid beauty on your outdoor deck or indoor dining room.
The Terrace Collection is Wilton-woven of durable, easy care, mildew-resistant polypropylene and is great indoors or out. The Collection is affordably priced and UV-stabilized for fade-resistance. Clean spots with soap and water or clean entire rug with the hose.
Add the touch of the Terrace Collection Rugs to any room. Order yours today at Brookstone!
Dimensions:
- 1.92' x 2.92' Rug (35" x 23"); Weight: 1.25 lbs.
- 1.92' x 6.33' Rug (76" x 23"); Weight: 3 lbs.
- 3.26' x 7.92' Rug (59" x 39"); Weight: 3.25 lbs.
- 4.92' x 7.5' Rug (90" x 59"); Weight: 7.15 lbs.
- 7.83' x 7.83' Rug (94" x 94"); Weight: 11.45 lbs.
- 7.83' x 9.83' Rug (118" x 94"); Weight: 14.9 lbs.
796488p
| 280,040
|
Last year after the Oscars, Jimmy Kimmel showed off a trailer for a completely fake movie titled Movie: The Movie that looked funnier than most parody movies you’ve seen (or ignored) over the past decade or so. This year, naturally, Kimmel had to show the trailer for the sequel, Movie: The Movie 2V.
The equally fake sequel stars a slew of celebrities, including Jessica Chastain, Channing Tatum, Samuel L. Jackson, Armie Hammer, Amanda Seyfried, Bradley Cooper, Gerard Butler, Rachel Weisz, Matt Damon, Kerry Washington, John Krasinski, Chris Rock, Salma Hayek, Bryan Cranston, Jude Law, Jason Schwartzman, Topher Grace, and even Oprah.
| 51,832
|
CodePlex: Project Hosting for Open Source Software
All Feedback is appreciated :-) Feel free to post your comments or show off your project!
Thanks!
Aug 24, 2014
3:38 AM(4 posts)
first post: philippedasilva wrote: I've been around for quite a long time reading stuff and asking a c...
Jun 23, 2013
3:21 PM(3 posts)
first post: needshelprenderin wrote: Hi Abuzer,
While I am working on my specular map shader I am also...
Jun 22, 2013
10:37 PM(4 posts)
first post: needshelprenderin wrote: Hi, I've recently gotten back into using Xen. I left off getting Be...
Apr 6, 2013
3:11 AM(4 posts)
first post: Jacon wrote: So I created a little custom control in WPF that hosts an XNA progr...
Jan 28, 2013
1:51 AM(4 posts)
first post: backwardsByte wrote: Im trying to fill in the arguments to a method SetMatrices(ref Mat...
Jan 24, 2013
5:04 AM(3 posts)
first post: backwardsByte wrote: I cannot locate a method or property that I can use to display my ...
Jan 16, 2013
10:12 PM(4 posts)
first post: IvanGrozni wrote: Hi,I've started using Xen's particle system and it works wonderfull...
3:48 AM(1 post)
first post: ledgarl wrote: you still working in this project? is very cool!
Dec 1, 2012
7:19 PM(3 posts)
first post: goras wrote: Hello, We're students in game programming and we just start to use...
Sep 30, 2012
2:56 PM(5 posts)
first post: jsmars wrote: I read that before StatusUnknown stopped working on Xen, he was wo...
| 346,254
|
Thursday is sometimes referred to as “Friday’s Friday,” meaning it’s the herald of Friday, and therefore the weekend.
Many students call Thursdays “Thirsty Thursdays” due to the smaller number of lessons students have on a Friday, making it the day to start their weekend drinking.
Thursday is a day featured heavily in Christian lore, and it’s also the traditional day U.K. elections are held on. So let’s take a look at all the interesting facts about Thursday!
- The name Thursday is derived from the Old English Þūnresdæg and the Middle English Thuresday (with loss of -n-, first in northern dialects, from influence of Old Norse Þorsdagr) meaning “Thor’s Day,” after the Norse God of Thunder and son of Odin, Thor.
- Many Germanic-derived languages name Thursday after Thor, like “Torsdag” in Denmark, Norway and Sweden, “Donnerstag” in Germany, and “Donderdag” in the Netherlands.
- As Jupiter is the Roman equivalent of Thor, the Latin name for Thursday was “Iovis Dies,” meaning “Jupiter’s Day.”
- In Latin, the possessive case of Jupiter was “Iovis” (also written “Jovis”), and therefore most languages derived from Latin reflect this in their naming of Thursday, like the Spanish “jueves,” the French “jeudi,” or the Italian “giovedì.”
- In most of the languages spoken in India, the word for Thursday is “Guruvara,” with “vara” meaning “day” and “Guru” being the style for Brihaspati, who is guru to the gods and a regent of the planet Jupiter.
- In the Judeo-Christian liturgical calendar, Thursday is often abbreviated to Th or Thu.
- The astrological and astronomical sign of the planet Jupiter is sometimes used to represent Thursday.
- Estonians did not work on Thursdays and called their Thursday nights “evenings of Tooru.”
- Some historians say that Estonians would gather in holy woods known as “Hiis” on Thursday nights, where a bagpipe player would sit and play whilst people danced and sung until the dawn.
- Back in the USSR during the 1970’s and 1980’s, Thursday was known as the “Fish Day” of the week, where the nation’s food service institutions would serve fish rather than meat.
- Thursday is the name of a six-piece post-hardcore rock band from America who formed in 1997.
- In the U.K., elections are always held on a Thursday. This may seem a little odd, especially considering there’s no specific reason why other than tradition. The last U.K. election to be contested that did not occur on a Thursday was back in 1931, when everybody voted on a Tuesday.
- In some American high schools during the 1950’s and 1960’s, wearing the color green on a Thursday would lead to people believing you were gay.
- In the Thai Solar Calendar, the color orange is associated with Thursday.
- In Buddhist Thailand, Thursday is considered to be ‘Teacher’s Day,” and it is believed that a person should begin their education on a Thursday.
- Thai students still pay homage to this belief by holding gratitude ceremonies for their teachers that are always held on a Thursday.
- Following on in the same vein, graduations days at universities in Thailand almost always occur on a Thursday.
- In Australia, most movie premiers are often held on a Thursday.
- On Thursday the 20th of June 1782, the fledgling United States of America decided to do some branding and selected the Bald Eagle as their official emblem.
- Leonardo Da Vinci, artist, inventor, pioneer, genius (and probably time traveler) was born on a Thursday, on April 15, 1452.
| 379,676
|
There's a cat on the loose! Just kidding. More like there's a cat on a LEASH! I walked my cat around the back yard this afternoon, to let her have some adventure. She's only been let out of the house two other times before. The first time she attempted to climb the oak tree in our front yard, broke her collar, and almost got away. The second time, she was a crazy kitty cat and was just running all over the place. Today was the third time. It had been well over a year since I let her out, so I was really scared for what was going to happen. She pounced around the yard, had me chasing her a couple times, but other than that it wasn't too bad. It was just such a beautiful day that I could not sit stuck inside.
1 comment:
It's a cat on a leash (held by Luce!)
Glad you and kitty had a good time today :-) Enjoy the beautiful weather, since you have it!
| 261,114
|
TITLE: Defining the Order of the Natural Numbers
QUESTION [0 upvotes]: Is there a way to give the regular partial order of the natural number directly from their definition through the Infinity Axiom? I have only ever seen the partial order of the natural number to be defined after the Peano axioms are proved, in which case a natural number $n$ is defined to be less than a natural number $m$ if there exists a non-zero natural number $k$ such that $n + k = m$.
REPLY [3 votes]: The ordering of the naturals is actually much easier to define if we use the set-theoretic interpretation via the axiom of infinity: the axiom of infinity guarantees the existence of the ordinal $\omega$ - AoI gives us an inductive set, and then Separation applied to this set gives us the smallest inductive set, which is $\omega$ - and natural numbers are just elements of $\omega$. The ordinals are linearly (indeed, well) ordered by "$\in$," and restricted to $\omega$ this gives the usual ordering on the naturals.
Of course, we care about more than just the ordering of the naturals. Addition, multiplication, etc. of natural numbers are then defined recursively - this is tedious but not actually hard.
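As a purely illustrative sanity check (the helper names below are my own, not standard), the von Neumann coding can be simulated in Python with frozensets: each natural is the set of all smaller naturals, so "$m < n$" is literally "$m \in n$".

```python
def zero():
    # the empty set represents 0
    return frozenset()

def succ(n):
    # von Neumann successor: n ∪ {n}
    return n | frozenset([n])

def nat(k):
    # build the k-th natural number as a set
    n = zero()
    for _ in range(k):
        n = succ(n)
    return n

def less_than(m, n):
    # the ordering on omega is literally set membership
    return m in n

assert less_than(nat(2), nat(5))      # 2 < 5, since nat(2) ∈ nat(5)
assert not less_than(nat(5), nat(2))  # and 5 < 2 fails
assert len(nat(4)) == 4               # each natural, as a set, has k elements
```

The last assertion reflects the fact that the natural $k$ is, as a set, exactly $\{0, 1, \dots, k-1\}$.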
| 59,040
|
TITLE: Defective battery problem
QUESTION [1 upvotes]: A flashlight has $6$ batteries, $3$ of which are defective. If $3$ are selected at random without replacement, find the probability that all of them are defective.
I am finding the probability that all of them are defective, which should be the product of the probabilities at each draw: $(3/6) \cdot (2/5) \cdot (1/4) = 6/120 = 0.05$. When submitting this I get an error, but isn't multiplying the probability of each draw in order basically the same as finding the probability for all of them?
REPLY [0 votes]: If three out of the six are defective and you select three without replacement, there is only one way to obtain all three defective batteries. But there are clearly more ways to select three batteries in which one or more is not defective.
To see this, it suffices to label the batteries as follows:
$$\{G_1, G_2, G_3, D_1, D_2, D_3\}.$$ Then there are $$\binom{6}{3} = \frac{6!}{3!3!} = 20$$ ways to select three batteries without replacement. But only one way gives you $\{D_1, D_2, D_3\}$. The full list of $20$ possibilities is:
$$\{G_1, G_2, G_3\}, \{G_1, G_2, D_1\}, \{G_1, G_2, D_2\}, \{G_1, G_2, D_3\}, \{G_1, G_3, D_1\}, \\
\{G_1, G_3, D_2\}, \{G_1, G_3, D_3\}, \{G_1, D_1, D_2\}, \{G_1, D_1, D_3\}, \{G_1, D_2, D_3\}, \\
\{G_2, G_3, D_1\}, \{G_2, G_3, D_2\}, \{G_2, G_3, D_3\}, \{G_2, D_1, D_2\}, \{G_2, D_1, D_3\}, \\
\{G_2, D_2, D_3\}, \{G_3, D_1, D_2\}, \{G_3, D_1, D_3\}, \{G_3, D_2, D_3\}, \{D_1, D_2, D_3\}$$
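The count is small enough to confirm by brute force. Here is a short Python check, using the same hypothetical battery labels as in the list above:

```python
from itertools import combinations

batteries = ["G1", "G2", "G3", "D1", "D2", "D3"]

# every way to pick 3 of the 6 batteries without replacement
draws = list(combinations(batteries, 3))
# keep only the draws in which every battery is defective
all_defective = [d for d in draws if all(b.startswith("D") for b in d)]

print(len(draws))                       # 20 possible draws
print(len(all_defective))               # exactly 1 all-defective draw
print(len(all_defective) / len(draws))  # 0.05, matching (3/6)*(2/5)*(1/4)
```

So both routes agree: the sequential-product computation and the $1/\binom{6}{3}$ count each give $0.05$.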
| 136,538
|
This is ironic! The set of Prison Break was robbed on Thursday night.
Burglars broke into trailers and stole cash, credit cards, documents, and a computer of the show’s actors. Jodi Lyn O’Keefe and Dom Purcell were hit the hardest.
The robbers charged $16,000 on O’Keefe’s debit card and another $9,000 on credit. Ouch! Good thing that’s probably pocket change for them, but still!
The actors said they always kept their trailers locked for fear of something like this happening, and O’Keefe was reported as being really shaken up by the incident.
Also, when police arrived on the scene, they thought she had been assaulted as well, because she was in full makeup for the episode she was about to shoot, in which she gets beaten up. We hope they catch them!
| 204,113
|
\begin{document}
\baselineskip = 16pt
\newcommand \C{{\mathbb C}}
\newcommand \ZZ {{\mathbb Z}}
\newcommand \NN {{\mathbb N}}
\newcommand \QQ{{\mathbb Q}}
\newcommand \RR {{\mathbb R}}
\newcommand \PR {{\mathbb P}}
\newcommand \AF {{\mathbb A}}
\newcommand \GG {{\mathbb G}}
\newcommand \bcA {{\mathscr A}}
\newcommand \bcC {{\mathscr C}}
\newcommand \bcD {{\mathscr D}}
\newcommand \bcF {{\mathscr F}}
\newcommand \bcG {{\mathscr G}}
\newcommand \bcH {{\mathscr H}}
\newcommand \bcM {{\mathscr M}}
\newcommand \bcJ {{\mathscr J}}
\newcommand \bcL {{\mathscr L}}
\newcommand \bcO {{\mathscr O}}
\newcommand \bcP {{\mathscr P}}
\newcommand \bcQ {{\mathscr Q}}
\newcommand \bcR {{\mathscr R}}
\newcommand \bcS {{\mathscr S}}
\newcommand \bcV {{\mathscr V}}
\newcommand \bcW {{\mathscr W}}
\newcommand \bcX {{\mathscr X}}
\newcommand \bcY {{\mathscr Y}}
\newcommand \bcZ {{\mathscr Z}}
\newcommand \goa {{\mathfrak a}}
\newcommand \gob {{\mathfrak b}}
\newcommand \goc {{\mathfrak c}}
\newcommand \gom {{\mathfrak m}}
\newcommand \gon {{\mathfrak n}}
\newcommand \gop {{\mathfrak p}}
\newcommand \goq {{\mathfrak q}}
\newcommand \goQ {{\mathfrak Q}}
\newcommand \goP {{\mathfrak P}}
\newcommand \goM {{\mathfrak M}}
\newcommand \goN {{\mathfrak N}}
\newcommand \uno {{\mathbbm 1}}
\newcommand \Le {{\mathbbm L}}
\newcommand \Spec {{\rm {Spec}}}
\newcommand \Gr {{\rm {Gr}}}
\newcommand \Pic {{\rm {Pic}}}
\newcommand \Jac {{{J}}}
\newcommand \Alb {{\rm {Alb}}}
\newcommand \Corr {{Corr}}
\newcommand \Chow {{\mathscr C}}
\newcommand \Sym {{\rm {Sym}}}
\newcommand \Prym {{\rm {Prym}}}
\newcommand \cha {{\rm {char}}}
\newcommand \eff {{\rm {eff}}}
\newcommand \tr {{\rm {tr}}}
\newcommand \Tr {{\rm {Tr}}}
\newcommand \pr {{\rm {pr}}}
\newcommand \ev {{\it {ev}}}
\newcommand \cl {{\rm {cl}}}
\newcommand \interior {{\rm {Int}}}
\newcommand \sep {{\rm {sep}}}
\newcommand \td {{\rm {tdeg}}}
\newcommand \alg {{\rm {alg}}}
\newcommand \im {{\rm im}}
\newcommand \gr {{\rm {gr}}}
\newcommand \op {{\rm op}}
\newcommand \Hom {{\rm Hom}}
\newcommand \Hilb {{\rm Hilb}}
\newcommand \Sch {{\mathscr S\! }{\it ch}}
\newcommand \cHilb {{\mathscr H\! }{\it ilb}}
\newcommand \cHom {{\mathscr H\! }{\it om}}
\newcommand \colim {{{\rm colim}\, }}
\newcommand \End {{\rm {End}}}
\newcommand \coker {{\rm {coker}}}
\newcommand \id {{\rm {id}}}
\newcommand \van {{\rm {van}}}
\newcommand \spc {{\rm {sp}}}
\newcommand \Ob {{\rm Ob}}
\newcommand \Aut {{\rm Aut}}
\newcommand \cor {{\rm {cor}}}
\newcommand \Cor {{\it {Corr}}}
\newcommand \res {{\rm {res}}}
\newcommand \red {{\rm{red}}}
\newcommand \Gal {{\rm {Gal}}}
\newcommand \PGL {{\rm {PGL}}}
\newcommand \Bl {{\rm {Bl}}}
\newcommand \Sing {{\rm {Sing}}}
\newcommand \spn {{\rm {span}}}
\newcommand \Nm {{\rm {Nm}}}
\newcommand \inv {{\rm {inv}}}
\newcommand \codim {{\rm {codim}}}
\newcommand \Div{{\rm{Div}}}
\newcommand \sg {{\Sigma }}
\newcommand \DM {{\sf DM}}
\newcommand \Gm {{{\mathbb G}_{\rm m}}}
\newcommand \tame {\rm {tame }}
\newcommand \znak {{\natural }}
\newcommand \lra {\longrightarrow}
\newcommand \hra {\hookrightarrow}
\newcommand \rra {\rightrightarrows}
\newcommand \ord {{\rm {ord}}}
\newcommand \Rat {{\mathscr Rat}}
\newcommand \rd {{\rm {red}}}
\newcommand \bSpec {{\bf {Spec}}}
\newcommand \Proj {{\rm {Proj}}}
\newcommand \pdiv {{\rm {div}}}
\newcommand \CH {{\it {CH}}}
\newcommand \wt {\widetilde }
\newcommand \ac {\acute }
\newcommand \ch {\check }
\newcommand \ol {\overline }
\newcommand \Th {\Theta}
\newcommand \cAb {{\mathscr A\! }{\it b}}
\newenvironment{pf}{\par\noindent{\em Proof}.}{\hfill\framebox(6,6)
\par\medskip}
\newtheorem{theorem}[subsection]{Theorem}
\newtheorem{conjecture}[subsection]{Conjecture}
\newtheorem{proposition}[subsection]{Proposition}
\newtheorem{lemma}[subsection]{Lemma}
\newtheorem{remark}[subsection]{Remark}
\newtheorem{remarks}[subsection]{Remarks}
\newtheorem{definition}[subsection]{Definition}
\newtheorem{corollary}[subsection]{Corollary}
\newtheorem{example}[subsection]{Example}
\newtheorem{examples}[subsection]{Examples}
\title{On the kernel of the push-forward homomorphism between Chow groups.}
\author{ Kalyan Banerjee, Jaya NN Iyer}
\address {Indian Statistical Institute, Bangalore Centre, Bangalore 560059, India}
\address{The Institute of Mathematical Sciences, CIT
Campus, Taramani, Chennai 600113, India}
\email{kalyanb$_{-}$vs@isibang.ac.in}
\email{jniyer@imsc.res.in}
\footnotetext{Mathematics Classification Number: 14C25, 14D05, 14D20,
14D21}
\footnotetext{Keywords: Pushforward homomorphism, Theta divisor, Jacobian varieties, Chow groups, higher Chow groups.}
\begin{abstract}
In this paper, we prove that the kernel of the push-forward homomorphism on $d$-cycles modulo rational equivalence, induced by the closed embedding of an ample divisor linearly equivalent to some multiple of the theta divisor inside the Jacobian variety $J(C)$ is trivial. Here $C$ is a smooth projective curve of genus $g$.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
In this paper, we investigate the kernel of the push-forward homomorphism on Chow groups induced by the closed embedding of a smooth irreducible ample divisor $D$ inside a smooth projective variety $X$, over the field of complex numbers. Assume $\dim(X)=n$ and let $j: D\hookrightarrow X$ be the closed embedding. This question is motivated by the following results and conjecture.
When Chow groups are replaced by the singular homology of a smooth projective variety over $\C$, the (dual of the) Lefschetz hyperplane theorem says that the pushforward map
$$
j_*:H_k(D,\ZZ) \lra H_k(X,\ZZ)
$$
is an isomorphism for $k<n-1$ and surjective for $k=n-1$.
M. Nori \cite[Conjecture 7.2.5]{Nori} gave improved bounds on the degrees of singular cohomology for the standard Lefschetz restriction maps, and when $D$ is a very general ample divisor of large degree on $X$.
Furthermore, he conjectured the following on the restriction maps on the rational Chow groups:
\begin{conjecture}\label{Nor}
Suppose $D$ is a very general smooth ample divisor on $X$, of sufficiently large degree. Then the restriction map:
$$
j^*:CH^p(X)\otimes \QQ \rightarrow CH^p(D)\otimes \QQ
$$
is an isomorphism, for $p<n$ and is injective, for $p=n$.
\end{conjecture}
More generally, we have (see \cite[Conjecture 1.5]{Paranjape}):
\begin{conjecture}\label{Par}
Let $D$ be a smooth ample divisor in $X$. Then the restriction map for the inclusion of $D$ in $X$:
$$
CH^p(X)\otimes \QQ \rightarrow CH^p(D)\otimes \QQ
$$
is an isomorphism, for $p\leq \frac{\dim D-1}{2}$.
\end{conjecture}
It seems reasonable to pose the following dual of above Chow Lefschetz questions:
\begin{conjecture}
The pushforward map on the rational Chow groups, for a very general ample divisor $D\subset X$ of sufficiently large degree:
$$
j_*:CH_k(D)\otimes \QQ \rightarrow CH_k(X)\otimes \QQ
$$
is injective, whenever $k>0$.
\end{conjecture}
Similarly, we could pose the dual version of Conjecture \ref{Par}.
Our aim is to verify these conjectures when $D$ is the theta divisor on the Jacobian of a smooth projective curve, and \textit{special} smooth divisors linearly equivalent to a multiple of the theta divisor.
Let $C$ be a smooth projective curve of genus $g$ and let $\Th$ denote a theta divisor inside the Jacobian $J(C)$ of $C$.
Suppose $\pi:\tilde{C}\rightarrow C$ is a ramified finite Galois covering of degree $n$, for $n\geq 1$. Let $G$ denote the Galois group such that $C=\tilde{C}/G$. Then the induced morphism $\pi^*: J(C)\rightarrow J(\tilde{C})$ is injective. Furthermore, for a suitable translate $\Theta_{\tilde{C}}$ of the theta divisor in $J(\tilde{C})$, its restriction to $J(C)$ is a smooth, irreducible, ample divisor $H_C$ which is linearly equivalent to $n\Th$.
Then we show the following.
\begin{theorem}\label{mainT}
Suppose $C$ is a smooth projective curve of genus $g$ and $H_C$ be as mentioned above. Let $j_C$ denote the closed embedding of $H_C$ inside $J(C)$. Then the kernel of the push-forward homomorphism $j_{C*}:\CH_k(H_C)\otimes {{\mathbb Q}}\rightarrow \CH_k(J(C))\otimes {{\mathbb Q}}$ is trivial, for $k\geq 0$.
\end{theorem}
Note that $H_C$ is a special ample divisor in the linear system $|n\Th|$, since it is the restriction of the theta divisor $\Theta_{\tilde{C}}$ on $J(\tilde{C})$. It will be interesting to look at the situation when $H_C$ is a general smooth divisor in $|n\Th|$. However, as pointed out by C. Voisin, we cannot expect injectivity of $CH_0(H_C)_{{\mathbb Q}}\rightarrow CH_0(J(C))_{{\mathbb Q}}$ when $H_C$ is very general.
The proof utilises the localization sequence of higher Chow groups, applied to $G$-fixed subvarieties of the Jacobian of $\tilde{C}$. An application of a theorem of Collino \cite[Theorem 1]{Collino}, which shows injectivity for $k$-cycles under the inclusion of a lower dimensional symmetric product $\Sym^m(C)$ of a curve $C$ inside $\Sym^n(C)$, for $m\leq n$, gives us the required injectivity. In the final section \S \ref{Collino}, we also extend his theorem to the pushforward map on higher Chow groups of symmetric powers of a curve, and of any of their open subsets. This is crucial in the proof of Theorem \ref{mainT}.
Instead of rational Chow groups, the group $A_k(X)$ of algebraically trivial $k$-cycles on $X$ modulo rational equivalence can be considered. A weaker problem is posed in the following.
See \cite[Exercise 1, Chapter 10]{Voisin}. Let $S$ be a smooth, connected, complex, projective, algebraic surface embedded inside some $\PR^N$. Let $C_t$ be a general smooth hyperplane section of $S$ and $j_t$ be the closed embedding of $C_t$ into $S$. Let $H_t$ be the Hodge structure
$$\ker(j_{t*}:H^1(C_t,\ZZ)\to H^3(S,\ZZ))$$
which is a sub-Hodge structure of $H^1(C_t,\ZZ)$, and let $A_t$ be the abelian variety corresponding to $H_t$ inside $J(C_t)$. Then
the kernel of the push-forward homomorphism $j_{t*}$ from $A_0(C_t)$ to $A_0(S)$ is a countable union of translates of an abelian subvariety $A_{0,t}$ of $A_t$.
For a very general $C_t$, the abelian variety $A_{0,t}$ is either $0$ or $A_t$.
If the albanese map from $A_0(S)$ to $\Alb(S)$ is not an isomorphism, then for a very general $t$, the kernel of the push-forward homomorphism $j_{t*}$ is countable.
In \cite{BG}, the first author and V. Guletskii extended the problem to even dimensional smooth projective varieties over any algebraically closed, uncountable ground field. For a smooth cubic fourfold in $\PR^5$, and for a very general hyperplane section on it, it is shown that the kernel of the push-forward homomorphism on algebraically trivial algebraic $1$-cycles modulo rational equivalence, induced by the closed embedding of the hyperplane section into the cubic fourfold, is countable.
We consider the Jacobian variety $J(C)$ of a smooth projective curve $C$ and the associated Kummer variety $K(J(C)):=\frac{J(C)}{<i>}$. Here $i$ is the inverse map on $J(C)$.
We consider the image of a symmetric theta divisor $\Th$, i.e. $i(\Th)=\Th$.
We show:
\begin{theorem}
Let $C$ be a hyperelliptic curve of genus four and $D$ denotes the image of a symmetric theta-divisor $\Th$ under the natural morphism $q:J(C)\to K(J(C))$. Let $j'$ denote the closed embedding of $D$ into $K(J(C))$. Then $A^2(D)$ is trivial and hence the kernel of the push-forward homomorphism $j'_*$ from $A^2(D)$ to $A^3(K(J(C)))$ is trivial.
\end{theorem}
{\small \textbf{Acknowledgements:} We thank C. Voisin, R. Sebastien for pointing out an inaccuracy.
The first named author is grateful to Department of Atomic Energy, India for funding this project.}
\textbf{Notations}:
Here $k$ is an uncountable, algebraically closed field and all the varieties are defined over $k$. Denote
$$
CH_d(X)_{\mathbb Q}:= CH_d(X)\otimes {\mathbb Q}.
$$
Here $X$ is a variety of pure dimension $n$, defined over $k$ and $CH_d(X)$ denotes the Chow group of $d$-dimensional cycles modulo rational equivalence.
We denote by $A_d(X)$ the group of algebraically trivial $d$-cycles on $X$ modulo rational equivalence. Let
$$
A^d(X):= A_{dim\,X-d}(X),\, \,CH^d(X):=CH_{dim\,X-d}(X).
$$
We write
$$
CH_d(X,s)_{\mathbb Q}:= CH^{dim\,X -d}(X,s) \otimes \mathbb Q
$$
the Bloch's higher Chow groups with ${\mathbb Q}$-coefficients.
When $X$ is a singular variety, we replace above Chow groups by Fulton's operational Chow groups.
This will be essential in the proof of Theorem \ref{prop2}, where we consider the operational Chow groups of the theta divisor $\Theta_C$, which is a singular variety.
\section{Kummer variety of a hyperelliptic curve}
In this section we consider a hyperelliptic curve $C$ of genus $4$ and the Kummer variety $K(J(C))$ associated to the Jacobian $J(C)$ of the curve $C$. Let $\Th$ denote a symmetric theta divisor inside $J(C)$ and let $D$ denote the image of $\Th$ inside $K(J(C))$, under the natural morphism from $J(C)$ to $K(J(C))$. We would like to investigate the kernel of the push-forward homomorphism at the level of Chow groups of one cycles, induced by the closed embedding of $D$ in $K(J(C))$.
First we prove a lemma and a proposition which hold for any smooth projective curve of genus $g$. Define the map $\wt{i}$ from $\Pic^{g-1}C$ to itself, given by
$$
\wt{i}(\bcO(D))=K_C\otimes \bcO(-D)\;,
$$
where for a divisor $D$, $\bcO(D)$ denote the line bundle associated to $D$ and $K_C$ is the canonical line bundle on $C$.
Consider a theta characteristic $\tau$ such that $\tau^2=K_C$. Consider the following map
$$
\otimes \tau^{-1}:\Pic^{g-1}C\to J(C)
$$
given by
$$
\bcO(D)\mapsto \bcO(D)\otimes \tau^{-1}\;.
$$
\begin{lemma}
The following square is commutative.
$$
\diagram
\Pic^{g-1}C \ar[dd]_-{\otimes \tau^{-1}} \ar[rr]^-{\wt{i}} & & \Pic^{g-1}C \ar[dd]^-{\otimes \tau^{-1}} \\ \\
J(C)\ar[rr]^-{i} & & J(C)
\enddiagram
$$
\end{lemma}
\begin{proof}
First observe that $i\circ (\otimes \tau^{-1})$ sends $\bcO(D)$ to $\bcO(-D)\otimes \tau$. On the other hand, $(\otimes \tau^{-1})\circ\wt{i}(\bcO(D))$ is equal to
$$
K_C\otimes \bcO(-D)\otimes\tau^{-1}
$$
that is nothing but
$$
\bcO(-D)\otimes \tau^2\otimes \tau^{-1}
$$
which is equal to
$$
\bcO(-D)\otimes \tau\;.
$$
So the above diagram is commutative.
\end{proof}
The commutativity of the above diagram gives us a map from $\Pic^{g-1}C$ to the Kummer variety $K(J(C))$.
Now assume that $C$ is hyperelliptic, and let $h:C\to\PR^1$ be the hyperelliptic map. So we have the following commutative triangle
$$
\xymatrix{
C \ar[rr]^-{i}
\ar[ddrr]_-{h} & &
C \ar[dd]^-{h} \\ \\
& & \PR^1
}
$$
where $i$ is the hyperelliptic involution induced by the degree $2$ morphism $h$.
We will use the following.
\begin{theorem}
\label{theorem1}(\cite[IV, 5.4]{Hartshorne}) Let $D$ be an effective special divisor on a smooth curve $C$. Then
$$
\dim(|D|)\leq \frac{1}{2}\deg(D)\;.
$$
Furthermore equality occurs if and only if $D=0$ or $D=K_C$ or $C$ is hyperelliptic and $D$ is a multiple of the unique $g^1_2$ on $C$.
\end{theorem}
In the above theorem $g^r_d$ denotes a linear system of dimension $r$ and degree $d$. Also, by a special divisor we mean a divisor $D$ such that $h^0(K_C-D)>0$.
Let $l$ be the map from $\Sym^{g-1}C$ to $\Pic^{g-1}C$ given by
$$
l(D)=\bcO(D)\;,
$$
where $\bcO(D)$ denote the line bundle associated to the divisor $D$. Consider $i_C:\Sym^{g-1}C\to \Sym^{g-1}C$ given by
$$
i_C(P_1+\cdots+P_n)=i(P_1)+\cdots+i(P_n)\;,
$$
where $i$ is the hyperelliptic involution on $C$.
We show:
\begin{proposition}
\label{prop1}
The following diagram is commutative.
$$
\diagram
\Sym^{g-1}C \ar[dd]_-{i_C} \ar[rr]^-{l} & & \Pic^{g-1}C \ar[dd]^-{\wt{i}} \\ \\
\Sym^{g-1}C \ar[rr]^-{l} & & \Pic^{g-1}C
\enddiagram
$$
In other words, the involution $\wt{i}$ lifts to the $(g-1)$-st symmetric power of the curve.
\end{proposition}
\begin{proof}
First, \cite[Proposition 2.3]{Hartshorne} gives:
$$
K_C=h^*K_{\PR^1}+R
$$
where $R$ is the ramification divisor of the morphism $h$ and the degree of $R$ is $2g+2$. Now we have to show that $\bcO(i_C(D))$ is $K_C\otimes\bcO(-D)$. In other words, we have to prove that
$$
\bcO(i_C(D))\otimes \bcO(D)=K_C
$$
that is
$$\bcO(D+i_C(D))=K_C\;.$$
Here $i_C$ is the involution induced on the symmetric powers of $C$ defined above, by the involution $i$ on $C$. Observe that $D+i_C(D)$ is invariant under the involution $i_C$.
Now consider the morphism $h:C\to \PR^1$.
We compute $h^0(K_C-D-i_C D)$, that is the dimension of the vector space of global sections of the line bundle $K_C-\bcO(D+i_C D)$. By Riemann-Roch theorem we have that
$$
h^0(\bcO(D+i_C D))-h^0(K_C-\bcO(D+i_C D))=2g-2-g+1=g-1\;.
$$
Observe that $\deg(K_C-\bcO(D+i_C D))=0$.
Now, for a divisor $D$ of degree zero there are two cases: either $D$ is linearly equivalent to zero, in which case $h^0(D)=1$, or it is not, in which case $h^0(D)=0$, since otherwise the line bundle associated to $D$ would be trivial.
So we have two cases
$$
K_C=\bcO(D+i_C D)
$$
or
$$
h^0(K_C-\bcO(D+i_C D))=0\;.
$$
Suppose that $h^0(K_C-\bcO(D+i_C D))=0$.
So by the Riemann-Roch theorem we get that
$$
h^0(\bcO(D+i_C D))=2g-2-g+1=g-1\;.
$$
By Theorem \ref{theorem1} we get that $\bcO(D+i_C D)$ is equal to $L^{g-1}$ for a line bundle $L\in g^1_2$ on $C$. We have
$$
K_C=h^*K_{\PR^1}+R
$$
where $R$ is the ramification divisor of $h$, and also by Theorem \ref{theorem1} we get that any two such divisors of degree $2g-2$ on a hyperelliptic curve $C$ are linearly equivalent, that is, the corresponding line bundles on $C$ are isomorphic. This tells us that $h^*\bcO_{\PR^1}(g-1)$ and $L^{g-1}$ are isomorphic. By the projection formula we get that
$$
h_*L^{g-1}=h_*h^*(\bcO_{\PR^1}(g-1))
$$
which is nothing but
$$
\bcO_{\PR^1}(g-1)\oplus \bcO_{\PR^1}(g-1)\otimes \bcO_{\PR^1}(-g-1)\;.
$$
Since $H^0(C,L^{g-1})$ is isomorphic to $H^0(\PR^1,h_*L^{g-1})$, we have $h^0(L^{g-1})=g$, which is greater than $g-1$; this is a contradiction. So the possibility that $h^0(K_C-\bcO(D+i_C D))=0$ is ruled out and we have the only possibility
$$
K_C=\bcO(D+i_C D)\;.
$$
This gives us the commutativity of the diagram.
$$
\diagram
\Sym^{g-1}C \ar[dd]_-{i_C} \ar[rr]^-{l} & & \Pic^{g-1}C \ar[dd]^-{\wt{i}} \\ \\
\Sym^{g-1}C \ar[rr]^-{l} & & \Pic^{g-1}C
\enddiagram
$$
This ends the proof.
\end{proof}
Next, for Chow group computations, we identify $\Pic^{g-1}C$ with $J(C)$ using a base point $P_0\in C$. The image of $\Sym^{g-1}C$ in $\Pic^{g-1}C$ is denoted by $\Th$, and it is symmetric under $\wt{i}$ by Proposition \ref{prop1}.
\begin{theorem}
Let $C$ be a hyperelliptic curve of genus $4$ and let $K(\Pic^{3}C)$ denote the Kummer variety associated to $\Pic^{3}C$. Let $D$ denote the image of a symmetric theta-divisor $\Th$ under the natural morphism from $\Pic^{3}C$ to $K(\Pic^{3}C)$. Let $j'$ denote the closed embedding of $D$ into $K(\Pic^{3}C)$. Then $A^2(D)$ is trivial and hence the kernel of the push-forward homomorphism $j'_*$ from $A^2(D)$ to $A^3(K(\Pic^{3}C))$ is trivial.
\end{theorem}
\begin{proof}
The commutativity of the diagram in Proposition \ref{prop1} gives us a map from
$\Sym^{3}C/i$ to $\Pic^{3}C/\wt{i}\cong K(\Pic^{3}C)$, where the first morphism is birational and the second one is finite. Now $\Sym^{3}C/i$ is isomorphic to $\Sym^{3}\PR^1$, which is isomorphic to the projective space $\PR^{3}$. Note that $A^2(\PR^{3})$ is trivial, hence weakly representable. Since weak representability of $A^2$ is a birational invariant, we get that $A^2(D)$ is isomorphic to an abelian variety $A$. By Proposition 6 in \cite{BG} we get that the kernel of the push-forward homomorphism $j'_*$ from $A^2(D)$ to $A^3(K(\Pic^{3}C))$ is a countable union of translates of an abelian subvariety $A_0$ of the abelian variety $A$ representing $A^2(D)$. Since $H^{3}(\PR^{3},\ZZ)$ is trivial, we get that the abelian variety $A$ is trivial. So the kernel of the push-forward homomorphism $j'_*$ is trivial.
\end{proof}
\section{Inclusion of theta divisor into the Jacobian}
In this section we investigate the kernel of the push-forward homomorphism, induced by the closed embedding of the theta divisor inside the Jacobian of a smooth projective curve $C$ of genus $g$. More precisely we prove the following theorem.
\begin{theorem}
\label{prop2}
Let $C$ be a smooth projective curve of genus $g$. Let $\Th$ be a symmetric theta-divisor embedded inside $J(C)$ and let $j$ denote the embedding. Then the kernel of the push-forward homomorphism $j_*$ from ${\CH_d(\Th)}_{\QQ}$ to ${\CH_d(J(C))}_{\QQ}$ is trivial.
\end{theorem}
Since $\Th$ is in general a singular variety, $\CH_d(\Th)$ will denote Fulton's operational Chow groups. In particular, pullback homomorphisms on these groups are defined for arbitrary morphisms $X\rightarrow \Th$. Note that the operational Chow groups of a smooth variety agree with the usual Chow groups.
\begin{proof}
It is well known that the map from $\Sym^{g-1} C$ to $\Th$ is surjective and birational. Let us fix a point $P$ in $C$. Consider the following map $j_C$ from $\Sym^{g-1}C$ to $\Sym^g C$
defined by
$$
P_1+\cdots+ P_{g-1}\mapsto P_1+\cdots+P_{g-1}+P\;.
$$
Here the sums denote unordered tuples of points, of lengths $g-1$ and $g$ respectively.
With this definition of $j_C$ we observe that the following diagram is commutative.
$$
\diagram
\Sym^{g-1} C \ar[dd]_-{j_C} \ar[rr]^-{q_{\Th}} & & \Th \ar[dd]^-{j} \\ \\
\Sym^{g} C \ar[rr]^-{q} & & \Pic^g(C)
\enddiagram
$$
We prove that commutativity of this diagram gives us the following formula at the level of $\CH_d$.
$$
j_*=q_*\circ j_{C*}\circ q_{\Th}^*
$$
To prove this we notice that for a prime $d$-cycle $V$ in $\CH_d(\Th)$ we have
$$(q\circ j_C)(q_{\Th}^{-1}(V))=(j\circ q_{\Th})(q_{\Th}^{-1}(V))$$
by the commutativity of the above diagram. Now $q_{\Th}$ is surjective, so
$$
q_{\Th}(q_{\Th}^{-1}(V))=V\;.
$$
Now let $E$ be the exceptional locus of $q_{\Th}$ in $\Sym^{g-1} C$, so that $q_{\Th}$ is injective from $\Sym^{g-1} C\setminus E$ into $\Th$. Suppose $\alpha$ is a non-zero cycle class in $\CH_d(\Th)$ supported on the complement of $E$; then $q_{\Th}^*(\alpha)$ is non-zero by birationality. Suppose instead that $\alpha$ is supported on $E$ and $q_{\Th}^*(\alpha)$ is zero. Then we prove that $\alpha$ is torsion.
\begin{lemma}
Let $E$ be a closed subscheme in $\Sym^m C$. Then the closed embedding of $E$ into $\Sym^n C$ for $m\leq n$ induces a push-forward homomorphism at the level of Chow groups which has torsion kernel.
\end{lemma}
\begin{proof}
Let $E$ be a closed subscheme of $\Sym^m C$ and consider the embedding of $\Sym^m C$ into $\Sym^n C$. We want to prove that $j:E\to \Sym^n C$ gives rise to a push-forward homomorphism with torsion kernel at the level of Chow groups. Consider the projection from $\pi_n^{-1}(E)$ to $C^m$. The elements of $\pi_n^{-1}(E)$ are of the form $(x_1,\cdots,p,\cdots,p,\cdots,x_m)$ or $(x_1,\cdots,x_m,p,\cdots,p)$. Therefore the elements of the image of the projection from $\pi_n^{-1}(E)$ look like $(x_1,p,\cdots,x_j)$ or $(x_1,\cdots,x_m)$. So the image is a union of disjoint Zariski closed subsets of $C^m$, one of which is $\pi_m^{-1}(E)$. So consider the correspondence $\Gamma'$ given by the graph of the projection from $\pi_n^{-1}(E)$ to $\Sym^m C$. Then define $\Gamma$ to be $\pi_n\times \pi_m(\Gamma')$, which gives us a correspondence on $\Sym^n C\times E$. Then by Lemma \ref{lemma1} we get that the homomorphism $\Gamma_*j_*$ is induced by the cycle $(j\times id)^*(\Gamma)$. Now we compute this cycle.
So $(j\times id)^{-1}(\Gamma)$ is nothing but
$$\{([e_1,\cdots,e_m],[e_1',\cdots,e_m'])|
([e_1,\cdots,e_m,p\cdots,p],[e_1',\cdots,e_m'])\in \Gamma\}\;.$$
This means that $(e_1,\cdots,p,\cdots,e_m)$ or $(e_1,\cdots,e_m,p,\cdots,p)$ lies in $\pi_n^{-1}(E)$ and that
$(e_1',\cdots,e_m')$
lies in the image of the projection. So we have
$$(e_1',\cdots,e_m')=(e_1,\cdots,e_m)$$
or
$$e_i'=p$$
for some $i$. That would mean that $(j\times id)^{-1}(\Gamma)=\Delta_E\cup Y$ where $Y$ is supported on $\Sym^{m-1}C\cap E$. Arguing as in \cite{Collino} we get that
$$(j\times id)^*(\Gamma)=d\Delta_E+D$$
where $D$ is supported on $\Sym^{m-1}C\cap E$. Then consider $\rho$ to be the open immersion of the complement of $\Sym^{m-1}C\cap E$ in $E$. Since $D$ is supported on $\Sym^{m-1}C\cap E$ we get that
$$\rho^*\Gamma_*j_*(Z)=\rho^*(dZ)\;.$$
As before, consider the diagram.
$$
\xymatrix{
\CH_*(\Sym^{m-1}C\cap E) \ar[r]^-{j'_{*}} \ar[dd]^-{}
& \CH_*(E) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}}
& \CH_*(X_0(m)) \ar[dd]_-{} \
\\ \\
\CH_*(\Sym^{m-1}C\cap E) \ar[r]^-{j''_*}
& \CH_*(\Sym^{n} C) \ar[r]^-{}
& \CH_*(U)
}
$$
Here $X_0(m)$ and $U$ are the complements of $\Sym^{m-1}C\cap E$ in $E$ and $\Sym^{n} C$ respectively. Then suppose that we have
$$j_*(z)=0$$
that gives us that
$$\rho^*\Gamma_*j_*(z)=\rho^*(dz)=0$$
so there exists some $z'$ such that $j'_*(z')=dz$. But by the above diagram we have $j''_*(z')=0$. So, if by induction we assume that $j''_*$ has torsion kernel, we get that $d'z'=0$ for some non-zero integer $d'$, and hence $dd'z=0$. So the kernel of the map $\CH_*(E)\to \CH_*(\Sym^n C)$ is torsion, and consequently $\CH_*(E)\to \CH_*(\Sym^m C)$ has torsion kernel.
\end{proof}
Since $\Sym^{g}C$ is a blow-up of $J(C)$, the inverse image $E'$ of $E$ is a projective bundle over $E$. By the projective bundle formula the map $\CH_*(E)\to \CH_*(E')$ is injective, and by the above lemma $\CH_*(E')\to \CH_*(\Sym^{g-1}C)$ has torsion kernel. It follows that if $\alpha$ is supported on $E$ and $q_{\Th}^*(\alpha)$ is zero, then $\alpha$ is torsion. So, considering Chow groups with $\QQ$-coefficients, we get that $q_{\Th}^*$ is injective.
Since $j_{C*}$ from ${\CH_d(\Sym^{g-1} C)}_{\QQ}$ to ${\CH_d(\Sym^g C)}_{\QQ}$ is injective by Theorem 1 in \cite{Collino}, we get that $j_{C*}q_{\Th}^*$ is injective. Since $q_{\Th}^*(\alpha)$ is non-zero, we get that $j_{C*}\circ q_{\Th}^*(\alpha)$ is not supported on the exceptional locus of $q$; this is because $q_{\Th}$ is the restriction of $q$. Now consider the following fiber square,
$$
\diagram
\Sym^{g} C \setminus E'\ar[dd]_-{} \ar[rr]^-{} & & \Sym^g C \ar[dd]^-{q} \\ \\
U \ar[rr]^-{} & & J(C)
\enddiagram
$$
where $E'$ is the exceptional locus of the map $\Sym^{g} C\to J(C)$, and $U$ is the open subscheme of $J(C)$ such that $\Sym^g C\setminus E'$ is isomorphic to $U$.
This fiber square gives us the following commutative square at the level of Chow groups.
$$
\diagram
{\CH_d(\Sym^g C)}_{\QQ} \ar[dd]_-{q_*} \ar[rr]^-{} & & {\CH_d(\Sym^g C \setminus E')}_{\QQ}\ar[dd]^-{} \\ \\
{\CH_d(J(C))}_{\QQ} \ar[rr]^-{} & & {\CH_d(U)}_{\QQ}
\enddiagram
$$
Since $j_{C*}q_{\Th}^*(\alpha)$ is not supported on $E'$, its image under the pullback homomorphism ${\CH_d(\Sym^g C)}_{\QQ}\to {\CH_d(\Sym^g C\setminus E')}_{\QQ}$ is non-zero. Also observe that the right vertical homomorphism above is an isomorphism, so $j_{C*}q_{\Th}^*(\alpha)$ is mapped to a non-zero element of ${\CH_d(U)}_{\QQ}$. By the commutativity of the above diagram, we get that $q_*(j_{C*}q_{\Th}^*(\alpha))$ is non-zero. In other words $j_*(\alpha)$ is non-zero. So $j_*$ is injective.
\end{proof}
\subsection{Finite group quotients of $J(C)$}
Now we prove that the kernel of the push-forward homomorphism from $\CH_d(D)$ to $\CH_d(K(J(C)))$ is trivial. More generally let $G$ be a finite group acting on $J(C)$, where $C$ is a smooth projective curve of genus $g$. Let $\Th$ denote the theta divisor of $J(C)$ such that $G(\Th)=\Th$. Then we prove the following.
\begin{proposition}
Let $j_G$ denote the embedding of $\Th/G$ into $J(C)/G$. Then the kernel of the push-forward homomorphism $j_{G*}$ from $\CH_d(\Th/G)_{{\mathbb Q}}$ to $\CH_d(J(C)/G)_{{\mathbb Q}}$ is trivial.
\end{proposition}
\begin{proof}
By Theorem \ref{prop2}, it suffices to check that the action of $G$ intertwines with $j_*$. That is, we have to show that
$$
g.j_*(a)=j_*(g.a)
$$
for any $a$ in $\CH_d(\Th)$ and any $g\in G$. For that, write $a$ as $\sum n_i V_i$. Then
$$
g.(j_*(a))=g.\Big(\sum n_i j(V_i)\Big)=\sum n_i g(j(V_i))\;.
$$
Since $j$ is a closed embedding and $G(\Th)=\Th$, each $g(V_i)$ lies in $\Th$, so
$$
\sum n_i g(j(V_i))=j_*\Big(\sum n_i g(V_i)\Big)\;,
$$
which is the same as
$$
j_*(g.a)\;.
$$
By \cite[Example 1.7.6]{Fulton}, we have
$$
\CH_d(\Th/G)_{{\mathbb Q}}={\CH_d(\Th)^G}_{{\mathbb Q}}
$$
where $\CH_d(\Th)^G_{{\mathbb Q}}$ denotes the $G$-invariants in $\CH_d(\Th)_{{\mathbb Q}}$. By the above intertwining of the action of $G$, the restriction ${j_*}|_{\CH_d(\Th)^G_{{\mathbb Q}}}$ takes its values in $\CH_d(J(C))^G_{{\mathbb Q}}$. Since $j_*$ is injective, so is ${j_*}|_{\CH_d(\Th)^G_{{\mathbb Q}}}$, and
${j_*}|_{\CH_d(\Th)^G_{{\mathbb Q}}}$ is nothing but $j_{G*}$. So we get that $j_{G*}$ is injective.
\end{proof}
\section{Special ample smooth divisors on $J(C)$}
Let $n\Th_C$ denote the $n$-th multiple of $\Th_C$, that is,
$$\Th_C+\cdots+\Th_C$$
($n$ times), inside the Jacobian of a smooth projective curve $C$ of genus $g$.
Since $h^0(n\Th_C)=n^g$, we can choose $H_C$, a smooth, irreducible, ample divisor on $J(C)$ linearly equivalent to $n\Th_C$. We investigate the kernel of the push-forward homomorphism, at the level of Chow groups with rational coefficients, induced by the closed embedding of $H_C$ into $J(C)$.
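For the reader's convenience, we recall where the dimension count $h^0(n\Th_C)=n^g$ comes from: it is the Riemann--Roch theorem for abelian varieties. Since $\Th_C$ is a principal polarization we have $\Th_C^g=g!$, and ample line bundles on abelian varieties have no higher cohomology, so
$$
h^0(J(C),\bcO_{J(C)}(n\Th_C))=\chi(\bcO_{J(C)}(n\Th_C))=\frac{(n\Th_C)^g}{g!}=n^g\,\frac{\Th_C^g}{g!}=n^g\;.
$$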
Consider a Galois covering
$$\pi: \wt{C}\lra C$$
of degree $n$ branched along $r$ points where $r\geq 1$. In particular let $G$ be a finite group acting on $\wt{C}$ such that $C=\wt{C}/G$.
Let $\pi^*$ denote the morphism induced by $\pi$ from $J(C)$ to $J(\wt{C})$. Since $\pi^*$ is injective by \cite[Corollary 11.4.4]{BL}, we identify $J(C)$ with its image under $\pi^*$, so that the polarized pair $(J(C), H_C)$ sits inside $J(\wt{C})$.
Let us denote the genus of $\wt{C}$ by $\wt{g}$. Note that for a general translate of $\Th_{\wt{C}}$, the restriction of the translate to $J(C)$ is smooth and irreducible. Since by \cite[Lemma 12.3.1]{BL},
$$
(\pi^*)^*(\Th_{\wt{C}})\equiv n\Th_C\equiv H_C\;,
$$
we may take $H_C$ to be $J(C)\cap \Th_{\wt{C}}$, which is smooth and irreducible.
Note that $H_C$ is special in the linear system $|n\Th_C|$: it is the restriction of $\Th_{\wt{C}}$, which does not happen for a general member of $|n\Th_C|$.
Denote ${\CH_*(H_C)}_{\mathbb Q}:= \CH_*(H_C)\otimes {\mathbb Q}$ and ${\CH_*(J(C))}_{\mathbb Q}:= \CH_*(J(C))\otimes {\mathbb Q}$. In the following, we identify $\Pic^g C= J(C)$ and $\Pic^{\wt{g}}\wt{C}= J(\wt{C})$ (without specifying a choice of base point).
\begin{theorem}
Let $C$ be a curve of genus $g$ and let $H_C$ be as above. Let $j_C$ denote the closed embedding of $H_C$ inside $J(C)$. Then the kernel of the push-forward homomorphism $j_{C*}$ from ${\CH_d(H_C)}_{{\mathbb Q}}$ to $\CH_d(J(C))_{{\mathbb Q}}$ is trivial for all $d$.
\end{theorem}
\begin{proof}
By the above discussion we have the following commutative diagram
$$
\diagram
H_C\ar[dd]_-{j_C} \ar[rr]^-{} & & \Th_{\wt{C}} \ar[dd]^-{j_{\wt{C}}} \\ \\
J(C) \ar[rr]^-{} & & J(\wt{C}).
\enddiagram
$$
This diagram gives us the following commutative diagram at the level of $\CH_*$.
$$
\diagram
\CH_d(H_C)_{{\mathbb Q}}\ar[dd]_-{j_{C*}} \ar[rr]^-{} & & \CH_d(\Th_{\wt{C}})_{{\mathbb Q}} \ar[dd]^-{j_{\wt{C}*}} \\ \\
\CH_d(J(C))_{{\mathbb Q}} \ar[rr]^-{} & & \CH_d(J(\wt{C}))_{{\mathbb Q}}
\enddiagram
$$
Using Theorem \ref{prop2} we get that $j_{\wt{C}*}$ is injective. To prove that the homomorphism $j_{C*}$ is injective we use the localization exact sequence for Bloch's higher Chow groups \cite{Bloch}. First note that $\Sym^{\wt{g}-1} \wt{C}$ is birational to $\Th_{\wt{C}}$, and $\Sym^{\wt{g}} \wt{C}$ is birational to $J(\wt{C})$. Consider the natural morphism from $\Sym^{\wt{g}}\wt{C}$ to $J(\wt{C})$. Let $H_C'$ denote the scheme-theoretic inverse image of $H_C$ in $\Sym^{\wt{g}-1} \wt{C}$ and $J(C)'$ the scheme-theoretic inverse image of $J(C)$ in $\Sym^{\wt{g}} \wt{C}$. Now fix a base-point $P_0$ in $\wt{C}$ and consider the inclusion $\Sym^{\wt{g}-1}\wt{C}\hookrightarrow \Sym^{\wt{g}}\wt{C}$ given by
$$
P_1+\cdots+P_{\wt{g}-1}\mapsto P_1+\cdots+P_{\wt{g}-1}+P_0\;.
$$
Then by using the localization exact sequence at the level of higher Chow groups we have the following commutative diagram:
$$
\xymatrix{
\CH_d(\Sym^{\wt{g}-1} \wt{C},1)\ar[r]^-{}\ar[dd]_-{} & \CH_d(\Sym^{\wt{g}-1} \wt{C}\setminus H_C',1) \ar[r]^-{} \ar[dd]_-{}
& \CH_d(H_C') \ar[r]^-{} \ar[dd]_-{}
& \CH_d(\Sym^{\wt{g}-1} \wt{C})\ar[dd]_-{}\
\\ \\
\CH_d(\Sym^{\wt{g}} \wt{C},1) \ar[r]^-{} & \CH_d(\Sym^{\wt{g}} \wt{C}\setminus J(C)',1) \ar[r]^-{}
& \CH_d(J(C)') \ar[r]^-{}
& \CH_d(\Sym^{\wt{g}} \wt{C})
}
$$
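For the reader's convenience, each row above is a segment of Bloch's localization sequence \cite{Bloch}: for a closed subscheme $W$ of a quasi-projective variety $X$ with open complement $U=X\setminus W$, there is a long exact sequence
$$
\cdots\lra \CH_d(W,1)\lra \CH_d(X,1)\lra \CH_d(U,1)\lra \CH_d(W)\lra \CH_d(X)\lra \CH_d(U)\lra 0\;,
$$
applied here to $W=H_C'$, $X=\Sym^{\wt{g}-1}\wt{C}$ in the first row and to $W=J(C)'$, $X=\Sym^{\wt{g}}\wt{C}$ in the second.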
Since $C=\wt{C}/G$, consider the induced action of $G$ on symmetric powers of $\wt{C}$.
The group $G$ acts componentwise, and $\Sym^{\wt{g}-1} \wt{C}/G$ is isomorphic to $\Sym^{\wt{g}-1} C$ and similarly $\Sym^{\wt{g}} \wt{C}/G$ is isomorphic to $\Sym^{\wt{g}} C$. Since $G$ acts trivially on $C$, we get that $G$ acts trivially on $J(C)'$ and $H_C'$ respectively. So $G$ acts trivially on $\CH_d(H_C')$ and $\CH_d(J(C)')$. Now consider the $G$-invariant part of the above $G$-equivariant commutative diagram with ${\mathbb Q}$-coefficients. That is, consider
$$
\xymatrix{
\CH_d(\Sym^{\wt{g}-1} \wt{C},1)^G_{{\mathbb Q}}\ar[r]^-{}\ar[dd]_-{} & \CH_d(\Sym^{\wt{g}-1} \wt{C}\setminus H_C',1)^G_{{\mathbb Q}} \ar[r]^-{} \ar[dd]_-{}
& \CH_d(H_C')^G_{{\mathbb Q}} \ar[r]^-{} \ar[dd]_-{}
& \CH_d(\Sym^{\wt{g}-1} \wt{C})^G_{{\mathbb Q}}\ar[dd]_-{}\
\\ \\
\CH_d(\Sym^{\wt{g}} \wt{C},1)^G_{{\mathbb Q}} \ar[r]^-{} & \CH_d(\Sym^{\wt{g}} \wt{C}\setminus J(C)',1)^G_{{\mathbb Q}} \ar[r]^-{}
& \CH_d(J(C)')^G_{{\mathbb Q}} \ar[r]^-{}
& \CH_d(\Sym^{\wt{g}} \wt{C})^G_{{\mathbb Q}}
}
$$
and it becomes
$$
\xymatrix{
\CH_d(\Sym^{\wt{g}-1} C,1)_{{\mathbb Q}}\ar[r]^-{}\ar[dd]_-{} & \CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}} \ar[r]^-{} \ar[dd]_-{}
& \CH_d(H_C')_{{\mathbb Q}} \ar[r]^-{} \ar[dd]_-{}
& \CH_d(\Sym^{\wt{g}-1} C)_{{\mathbb Q}}\ar[dd]_-{}\
\\ \\
\CH_d(\Sym^{\wt{g}} C,1)_{{\mathbb Q}} \ar[r]^-{} & \CH_d(\Sym^{\wt{g}}C \setminus J(C)',1)_{{\mathbb Q}} \ar[r]^-{}
& \CH_d(J(C)')_{{\mathbb Q}} \ar[r]^-{}
& \CH_d(\Sym^{\wt{g}} C)_{{\mathbb Q}}
}
$$
The formula in \cite[Example 1.7.6]{Fulton} also holds for the higher Chow groups.
Since $C$ is of genus $g$, and $\Sym^{\wt{g}-1} C$ and $\Sym^{\wt{g}} C$ are of dimension $\wt{g}-1$ and $\wt{g}$ respectively, we get that $\Sym^{\wt{g}-1} C$ and $\Sym^{\wt{g}} C$ are projective bundles $\PR^{\wt{g}-g-1}_{J(C)}$ and $\PR^{\wt{g}-g}_{J(C)}$, that is, a $\PR^{\wt{g}-g-1}$-bundle and a $\PR^{\wt{g}-g}$-bundle over $J(C)$ respectively; note that $\wt{g}\geq g+1$. So by the projective bundle formula as in \cite{Bloch} we have, when $d=1$,
$$
\CH_1(\PR^{\wt{g}-g-1}_{J(C)},1)_{{\mathbb Q}}=H^{\wt{g}-g-2}.\CH_0(J(C),1)_{{\mathbb Q}}\oplus H^{\wt{g}-g-1}.\CH_1(J(C),1)_{{\mathbb Q}}
$$
and
$$
\CH_1(\PR^{\wt{g}-g}_{J(C)},1)_{{\mathbb Q}}=H^{\wt{g}-g-1}.\CH_0(J(C),1)_{{\mathbb Q}}\oplus H^{\wt{g}-g}.\CH_1(J(C),1)_{{\mathbb Q}}
$$
where $H$ denote the class of the line bundle $\bcO_{\PR_{J(C)}}(1)$ in $\Pic(\PR_{J(C)})$
and we have
$$
\CH_1(\PR^{\wt{g}-g-1}_{J(C)})_{{\mathbb Q}}\,\simeq \,\CH_1(\PR^{\wt{g}-g}_{J(C)})_{{\mathbb Q}}.
$$
Similarly, we can apply the projective bundle formula for all $d\geq 0$ to deduce the analogous isomorphisms.
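For the reader's convenience, the projective-bundle structure used above is the classical consequence of the Riemann--Roch theorem: if $d\geq 2g-1$, then $h^1(C,L)=0$ for every line bundle $L$ of degree $d$ on $C$, so
$$
h^0(C,L)=d-g+1\;,
$$
and the Abel--Jacobi morphism $\Sym^d C\lra \Pic^d C\cong J(C)$, whose fibre over $L$ is $\PR(H^0(C,L))\cong \PR^{d-g}$, is a $\PR^{d-g}$-bundle.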
By Corollary \ref{openCollino} in section \ref{Collino}, proved for higher Chow groups with ${\mathbb Q}$-coefficients, the homomorphism
$$
\CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}}\rightarrow \CH_d(\Sym^{\wt{g}} C\setminus J(C)',1)_{{\mathbb Q}}
$$
is injective. Also, by Collino's theorem \cite[Theorem 1]{Collino}, the homomorphism
$$
\CH_d(\Sym^{\wt{g}-1}C)_{{\mathbb Q}} \rightarrow \CH_d(\Sym^{\wt{g}}C)_{{\mathbb Q}}
$$
is injective. Now suppose that we start with a non-zero element of $\CH_d(H_C')_{{\mathbb Q}}$. If it goes to a non-zero element of $\CH_d(\Sym^{\wt{g}-1}C)_{{\mathbb Q}}$, then, since the homomorphism from
$\CH_d(\Sym^{\wt{g}-1}C)_{{\mathbb Q}}$ to $\CH_d(\Sym^{\wt{g}}C)_{{\mathbb Q}}$ is injective, the element we started with in $\CH_d(H_C')_{{\mathbb Q}}$ goes to a non-zero element of $\CH_d(J(C)')_{{\mathbb Q}}$. Now suppose instead that the element we chose from $\CH_d(H_C')_{{\mathbb Q}}$ goes to zero under the homomorphism from $\CH_d(H_C')_{{\mathbb Q}}$ to $\CH_d(\Sym^{\wt{g}-1}C)_{{\mathbb Q}}$. Then, by the localization exact sequence, it is in the image of the homomorphism
$$
\CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}}\to \CH_d(H_C')_{{\mathbb Q}}\;.
$$
Suppose the corresponding element of $\CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}}$ is non-zero. Since the map
$$
\CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}}\rightarrow \CH_d(\Sym^{\wt{g}} C\setminus J(C)',1)_{{\mathbb Q}}
$$
is injective, the image of that element in $\CH_d(\Sym^{\wt{g}} C\setminus J(C)',1)_{{\mathbb Q}}$ is non-zero. Either this image goes to zero in $\CH_d(J(C)')_{{\mathbb Q}}$ or it is mapped to a non-zero element there. If it goes to a non-zero element, then the element we started with in $\CH_d(H_C')_{{\mathbb Q}}$ goes to a non-zero element of $\CH_d(J(C)')_{{\mathbb Q}}$. Suppose instead that the element of $\CH_d(\Sym^{\wt{g}} C\setminus J(C)',1)_{{\mathbb Q}}$ goes to zero in $\CH_d(J(C)')_{{\mathbb Q}}$. Then it is in the image of the map
$$
\CH_d(\Sym^{\wt{g}} C,1)_{{\mathbb Q}}\to \CH_d(\Sym^{\wt{g}} C\setminus J(C)',1)_{{\mathbb Q}}\;.
$$
Then, using the isomorphism of $\CH_d(\Sym^{\wt{g}-1} C,1)_{{\mathbb Q}}$ with $\CH_d(\Sym^{\wt{g}} C,1)_{{\mathbb Q}}$, this element comes from an element of $\CH_d(\Sym^{\wt{g}-1} C,1)_{{\mathbb Q}}$; composing the two maps
$$\CH_d(\Sym^{\wt{g}-1} C,1)_{{\mathbb Q}}\to \CH_d(\Sym^{\wt{g}-1} C\setminus H_C',1)_{{\mathbb Q}}\to \CH_d(H_C')_{{\mathbb Q}}
$$
we get that the element we started with in $\CH_d(H_C')_{{\mathbb Q}}$ is zero, contradicting the fact that we started with a non-zero element. So the map from $\CH_d(H_C')_{{\mathbb Q}}$ to $\CH_d(J(C)')_{{\mathbb Q}}$ is injective.
Now $H_C'$ is birational to $H_C$ and $J(C)'$ is birational to $J(C)$. So we have the commutative diagram
$$
\diagram
\CH_d(H_C')_{{\mathbb Q}}\ar[dd]_-{} \ar[rr]^-{} & & \CH_d(J(C)')_{{\mathbb Q}} \ar[dd]^-{} \\ \\
\CH_d(H_C)_{{\mathbb Q}} \ar[rr]^-{j_{C*}} & & \CH_d(J(C))_{{\mathbb Q}}
\enddiagram
$$
Then, arguing as in Theorem \ref{prop2} and noting that the support of a cycle on $H_C'$ does not lie in the exceptional locus of the birational map from $J(C)'$ to $J(C)$, we prove that the homomorphism $j_{C*}$ at the level of Chow groups of $d$-cycles with rational coefficients is injective.
\end{proof}
Now we want to prove the following.
\begin{proposition}
Let $H_C$ be as in the previous theorem. Then the push-forward homomorphism from $B_*(H_C)$ to $B_*(J(C))$ is injective, where $B_*$ denotes the group of algebraic cycles modulo algebraic equivalence.
\end{proposition}
\begin{proof}
First we note that Collino's argument in \cite{Collino} goes through for $B_*$; that is, if we consider the closed embedding of $\Sym^m C$ into $\Sym^n C$, then the push-forward at the level of $B_*$ is injective. So let $\wt{C}$ be a curve which is a ramified Galois cover of $C$, such that $J(C)$ is embedded in $J(\wt{C})$. Let $\Th_{\wt{C}}$ be the theta divisor of $J(\wt{C})$ and let $H_C=\Th_{\wt{C}}\cap J(C)$.
Let $H_C'$ be the inverse image of $H_C$ in $\Sym^{\wt{g}-1}\wt{C}$ and $J(C)'$ the inverse image of $J(C)$ in $\Sym^{\wt{g}}\wt{C}$, where $\wt{g}$ is the genus of $\wt{C}$. First we prove that the map $B_*(H_C')\to B_*(J(C)')$ is injective. Then we consider the Cartesian square
$$
\diagram
H_C' \ar[dd]_-{} \ar[rr]^-{} & & J(C)' \ar[dd]^-{} \\ \\
H_C \ar[rr]^-{} & & J(C)
\enddiagram
$$
and argue as in Theorem \ref{prop2} to get that the map $B_*(H_C)\to B_*(J(C))$ is injective. For that we prove the following lemma.
\begin{lemma}
\label{basechange}
Consider a Cartesian square
$$
\diagram
{\Sym^n X}\times_Z Y\ar[dd]_-{} \ar[rr]^-{} & & Y \ar[dd]^-{} \\ \\
\Sym^n X \ar[rr]^-{} & & Z
\enddiagram
$$
where $Y\to Z$ is an embedding. For $m\leq n$, the inclusion of $\Sym^m X$ into $\Sym^n X$ induces an embedding $j$ of $\Sym^m X\times_Z Y$ into $\Sym^n X\times_Z Y$. Then $j_*$ is injective at the level of $B_*$.
\end{lemma}
\begin{proof}
Let $i$ be the embedding of $\Sym^m X$ into $\Sym^n X$, where $m\leq n$. Let $j$ denote the embedding of $\Sym^m X\times_Z Y$ into $\Sym^n X\times_Z Y$. Let $\Gamma$ be as before,
$$\Gamma=\pi_n\times \pi_m(Graph(pr_{n,m}))$$
where $pr_{n,m}$ is the projection from $X^n$ to $X^m$ and $\pi_i$ is the natural morphism from $X^i$ to $\Sym^i X$. Let $\pi$ denote the projection morphism from $(\Sym^n X\times_Z Y)\times (\Sym^m X\times_Z Y)$ to $\Sym^n X\times \Sym^m X$. Then consider the correspondence
$$\pi^*(\Gamma)=\Gamma'$$
supported on $(\Sym^n X\times_Z Y)\times (\Sym^m X\times_Z Y)$. Arguing as in Lemma \ref{lemma1} we can prove that $\Gamma'_*j_*$ is induced by $(j\times id)^*\Gamma'$, which is equal to $(j\times id)^*\pi^*\Gamma=(\pi\circ (j\times id))^*\Gamma$. Now we have the following commutative diagram.
$$
\diagram
{\Sym^m X\times_Z Y}\times {\Sym^m X\times_Z Y}\ar[dd]_-{j\times id} \ar[rr]^-{\pi'} & & {\Sym^n X\times_Z Y}\times {\Sym^m X\times_Z Y} \ar[dd]^-{\pi} \\ \\
\Sym^m X\times \Sym^m X \ar[rr]^-{i\times id} & & \Sym^n X\times \Sym^m X
\enddiagram
$$
So we get that
$$\pi\circ (j\times id)=(i\times id)\circ \pi'\;,$$
and therefore
$$(\pi\circ (j\times id))^*\Gamma=\pi'^*(i\times id)^*\Gamma\;.$$
Now
$$(i\times id)^*\Gamma=\Delta+Y_1$$
where $\Delta$ is the diagonal in $\Sym^m X\times \Sym^m X$ and $Y_1$ is supported on $\Sym^m X\times \Sym^{m-1}X$. Now we compute $\pi'^*(\Delta)$, which is
$$\{(([x_1,\cdots,x_m],y),([x_1',\cdots,x_m'],y'))|
[x_1,\cdots,x_m]=[x_1',\cdots,x_m']\}\;,$$
but by the definition of the fibered product we have
$$f([x_1,\cdots,x_m])=g(y)=g(y')\;,$$
where $f:\Sym^m X\to Z$ and $g:Y\to Z$ are the structure morphisms; since $g$ is an embedding we get that $y=y'$. So
$$\pi'^*\Delta=\Delta_{\Sym^m X\times_Z Y}\;.$$
Now
$$\pi'^*(Y_1)=\{(([x_1,\cdots,x_m],y),([x_1',\cdots,x_m'],y'))\}$$
where $[x_1',\cdots,x_m']=[y_1,\cdots,y_{m-1},p]$, which means that
$$\pi'^* Y_1$$ is supported on
$$(\Sym^m X\times_Z Y)\times (\Sym^{m-1}X\times_Z Y)\;.$$
So
$$\pi'^*(\Delta+Y_1)=\Delta_{\Sym^m X\times_Z Y}+Y_2$$
where $Y_2$ is supported on
$$(\Sym^m X\times_Z Y)\times (\Sym^{m-1}X\times_Z Y)\;.$$
Now consider
$$\rho:\Sym^m X\times_Z Y\setminus \Sym^{m-1}X\times_Z Y\to \Sym^m X\times_Z Y\;,$$
then
$$\rho^*\Gamma'_*j_*(V)=\rho^*\,\pr_{(\Sym^m X\times_Z Y)*}\big[(V\times (\Sym^m X\times_Z Y)).(\Delta_{\Sym^m X\times_Z Y}+Y_2)\big]=\rho^*(V+V_1)=\rho^*(V)\;.$$
Here $V_1$ is supported on $\Sym^{m-1}X\times_Z Y$.
Now, to prove that $j_*$ is injective, we apply induction on $m$. If $m=0$, then since $Y\to Z$ is an embedding, $\Sym^0 X\times_Z Y$ is a point. Since $\Sym^n X\times_Z Y$ is projective, the inclusion of this point into $\Sym^n X\times_Z Y$ induces an injective $j_*$.
Consider the following commutative diagram,
$$
\xymatrix{
B_*(\Sym^{m-1}X\times_Z Y) \ar[r]^-{j'_{*}} \ar[dd]_-{}
& B_*({\Sym^m X\times_Z Y}) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}}
& B_*(X_0(m)) \ar[dd]_-{} \
\\ \\
B_*(\Sym^{m-1}X\times_Z Y) \ar[r]^-{j''_*}
& B_*(\Sym^{n} X\times_Z Y) \ar[r]^-{}
& B_*(U)
}
$$
Here $X_0(m)$ is the complement of $\Sym^{m-1}X\times_Z Y$ in $\Sym^m X\times_Z Y$ and $U$ is the complement of $\Sym^{m-1}X\times_Z Y$ in $\Sym^n X\times_Z Y$.
Now suppose that
$$j_*(z)=0\;;$$
this implies that
$$\rho^*\Gamma'_*j_*(z)=0\;,$$
that is,
$$\rho^*(z)=0\;.$$
So by the exactness of the first row there exists $z'$ such that
$$j'_*(z')=z\;.$$
Now we have
$$j''_*(z')=j_*(j'_*(z'))=j_*(z)=0\;.$$
By the induction hypothesis, $j''_*$, induced by the embedding of $\Sym^{m-1}X\times_Z Y$ into $\Sym^{n} X\times_Z Y$, is injective; hence $z'=0$ and consequently $z=0$. So $j_*$ is injective.
\end{proof}
\begin{lemma}
Let $Y$ be a closed subscheme of $\Sym^n X$. Let $i$ denote the closed embedding of $\Sym^m X$ into $\Sym^n X$. Consider $j:Y\cap \Sym^m X\to Y$. Then $j_*$ is injective at the level of $B_*$.
\end{lemma}
\begin{proof}
This follows from Lemma \ref{basechange} with $Z=\Sym^n X$.
\end{proof}
So we get that the push-forward homomorphism from $B_*(H_C')$ to $B_*(J(C)')$ is injective. Hence, arguing as in Theorem \ref{prop2}, we get that the map $B_*(H_C)\to B_*(J(C))$ is injective.
\end{proof}
\section{Collino's theorem for higher Chow groups}
\label{Collino}
Let $C$ be a smooth projective curve over an algebraically closed field. Let $\Sym^n C$ denote the $n$-th symmetric power of $C$. Let us fix a point $p$ in $C$. Consider the closed embedding $i_{m,n}$ of $\Sym^m C$ to $\Sym^n C$, given by
$$[x_1,\cdots,x_m]\mapsto [x_1,\cdots,x_m,p,\cdots,p]$$
where $[x_1,\cdots,x_m]$ denotes the unordered $m$-tuple of points in $\Sym^m C$. Then the push-forward homomorphism $i_{m,n*}$ from $\CH_*(\Sym^m C)$ to $\CH_*(\Sym^n C)$ is injective, as proved in \cite[Theorem 1]{Collino}. In this section we prove that the same holds for the higher Chow groups; that is, the push-forward homomorphism $i_{m,n*}^s$ from $\CH_*(\Sym^m C,s)$ to $\CH_*(\Sym^n C,s)$ is injective. To prove this we follow Collino's approach in \cite{Collino}; the argument presented here is a minor modification of the arguments in \cite{Collino}, but we write it out for completeness.
Let $\Gamma^s$ be the correspondence given by
$$\pi_n\times \pi_m(\Gamma')$$
supported on $(\Sym^n C\times_{\Spec(k)} \Delta^s)\times_{\Spec(k)}(\Sym^m C\times_{\Spec(k)}\Delta^s)$, where $\Gamma'$ is the graph of the projection $pr_{n,m}^s$ from $C^n\times_{\Spec(k)}\Delta^s$ to $C^m\times_{\Spec(k)} \Delta^s$ and $\pi_{n}$ is the natural morphism from
$C^n\times_{\Spec(k)}\Delta^s$ to $\Sym^n C\times_{\Spec(k)}\Delta^s$. Let $g^s_*$ be the homomorphism induced by $\Gamma^s$ at the level of algebraic cycles.
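For the reader's convenience, we recall how $\Gamma^s$ acts: for a cycle $Z$ on $\Sym^n C\times_{\Spec(k)}\Delta^s$ one sets
$$
g^s_*(Z)=pr_{(\Sym^m C\times\Delta^s)*}\big((Z\times \Sym^m C\times \Delta^s).\Gamma^s\big)\;,
$$
where $pr_{\Sym^m C\times\Delta^s}$ is the projection of $(\Sym^n C\times\Delta^s)\times(\Sym^m C\times\Delta^s)$ onto its second factor; this is the formula used in the proof of the next lemma.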
First we prove the following lemma.
\begin{lemma}
\label{lemma3}
The homomorphism $g^s_{*}\circ i^s_{m,n*}$ at the level of the group of algebraic cycles, is induced by the cycle $(i_{m,n}^s\times id)^*\Gamma^s$ on $(\Sym^m C\times_{\Spec(k)} \Delta^s)\times (\Sym^m C\times_{\Spec(k)}\Delta^s)$.
\end{lemma}
\begin{proof}
Let us denote $i_{m,n*}^s$ by $i^s_*$.
We have
$$g^s_*i^s_*(Z)=pr_{(\Sym^m C\times\Delta^s)*}(i^s_*(Z)\times \Sym^m C\times \Delta^s.\Gamma^s)\;.$$
The above expression can be written as
$$pr_{(\Sym^m C\times\Delta^s)*}((i^s\times id)_*(Z\times \Sym^m C\times \Delta^s).\Gamma^s)\;.$$
By the projection formula the above is equal to
$$pr_{(\Sym^m C\times\Delta^s)*}\circ(i^s\times id)_*
((Z\times \Sym^m C\times \Delta^s). (i^s\times id)^*\Gamma^s)\;.$$
Since $pr_{\Sym^m C\times\Delta^s}\circ(i^s\times id)$
is the projection $pr_{\Sym^m C\times\Delta^s}$ we get that the above is equal to
$$pr_{(\Sym^m C\times\Delta^s)*}((Z\times \Sym^m C\times \Delta^s). (i^s\times id)^*\Gamma^s)\;.$$
Here the above two projections are taken respectively on $(\Sym^n C\times\Delta^s)\times(\Sym^m C\times\Delta^s)$ and on $(\Sym^m C\times\Delta^s)\times(\Sym^m C\times\Delta^s)$.
So we get that $g^s_*\circ i^s_*$ is induced by $(i^s\times id)^*\Gamma^s$.
\end{proof}
Now consider a closed subscheme $W$ of $\Sym^n C$. Let $i_{m,n}$ denote the embedding of $\Sym^m C$ into $\Sym^n C$. Consider the morphism $i_{m,n}^s$ from $(\Sym^m C\setminus i_{m,n}^{-1}W)\times \Delta^s$ to $(\Sym^n C\setminus W)\times \Delta^s$. Consider the restriction of $\Gamma^s$ to $((\Sym^n C\setminus W)\times \Delta^s) \times ((\Sym^m C\setminus i_{m,n}^{-1}W)\times \Delta^s)$ and denote it by $\Gamma^{s}{'}$. Let $g^{s}{'}_*$ denote the homomorphism induced by $\Gamma^{s}{'}$. Then, arguing as in Lemma \ref{lemma3}, we get the following.
\begin{corollary}
The homomorphism $g^{s}{'}_*\circ i^s_{m,n*}$ is induced by the cycle $(i^s_{m,n}\times id)^*\Gamma^{s}{'}$ on $((\Sym^m C \setminus i_{m,n}^{-1}W)\times \Delta^s)\times ((\Sym^m C \setminus i_{m,n}^{-1}W)\times \Delta^s)$.
\end{corollary}
\begin{proof}
It follows by arguing as in Lemma \ref{lemma3}, with $g^s_*$ and $\Gamma^s$ replaced by $g^s{'}_*$ and $\Gamma^s{'}$.
\end{proof}
Now let us consider the closed embedding $\Sym^{m-1}C\times \Delta^s$ into $\Sym^m C\times \Delta^s$, induced by the embedding $\Sym^{m-1}C$ into $\Sym^m C$. Let $\rho^s$ be the embedding of the complement of $\Sym^{m-1}C\times \Delta^s$ in $\Sym^m C\times \Delta^s$. Then we have the following proposition.
\begin{proposition}
\label{prop3}
At the level of the group of algebraic cycles we have
$$\rho^{s*}\circ g^s_*\circ i^s_*=\rho^{s*}\;.$$
\end{proposition}
\begin{proof}
To prove the proposition we prove that
$$(i^s\times \id)^{-1}\Gamma^s=\Delta \cup D$$
where $\Delta $ means the diagonal in $(\Sym^{m}C\times\Delta^s)\times (\Sym^{m}C\times\Delta^s)$ and $D$ is a closed subscheme of $(\Sym^{m}C\times\Delta^s)\times (\Sym^{m-1}C\times\Delta^s)$.
For that we write out
$$(i^s\times \id)^{-1}\Gamma^s\;,$$
that is equal to
$$(i^s\times \id)^{-1}(\pi_n\times\pi_m)Graph(pr^s_{n,m})\;.$$
The above is equal to
$$(i^s\times \id)^{-1}(\pi_n\times\pi_m)
\{((x_1,\cdots,x_n,\delta^s),(x_1,\cdots,x_m,\delta^s))|x_i\in C, \delta^s\in \Delta^s\}$$
that is
$$(i^s\times \id)^{-1}\{([x_1,\cdots,x_n,\delta^s],[x_1,\cdots,x_m,\delta^s])|x_i\in C,\delta^s\in \Delta^s\}\;.$$
Call the set
$$\{([x_1,\cdots,x_n,\delta^s],[x_1,\cdots,x_m,\delta^s])|x_i\in C,\delta^s\in \Delta^s\}$$
$B$, and call the set
$$(i^s\times \id)^{-1}(B)$$
$A$. The set $A$ is of the form
$$\{([x'_1,\cdots,x'_m,\delta^s],[y'_1,\cdots,y'_m,\delta^s])|
([x_1',\cdots,x_m',p,\cdots,p,\delta^s],[y_1',\cdots,y_m',\delta^s])\in B\}\;.$$
So the set $A$ can be written as the union of
$$\{([x_1'\cdots,x_m',\delta^s],[x_1'\cdots,x_m',\delta^s])|x_i\in C,\delta^s\in \Delta^s\}$$
and
$$\{([x_1'\cdots,x_m',\delta^s],[x_1'\cdots,p,x_m',\delta^s])|x_i\in C,\delta^s\in \Delta^s\}\;,$$
that is the union
$$\Delta\cup D$$
where $\Delta $ is the diagonal in the scheme $(\Sym^{m}C\times\Delta^s)\times (\Sym^{m}C\times\Delta^s)$ and $D$ is a closed subscheme in $(\Sym^m C\times\Delta^s)\times (\Sym^{m-1}C\times\Delta^s)\;.$
Therefore we get that
$$(i^s\times id)^*(\Gamma^s)=\Delta+Y$$
where $Y$ is supported on $(\Sym^m C\times\Delta^s)\times (\Sym^{m-1}C\times\Delta^s)$.
So $g^s_*i^s_*(Z)$ is equal to
$$\pr_{\Sym^m C\times \Delta^s*}[(\Delta+Y).(Z\times \Sym^m C\times \Delta^s)]=Z+Z_1$$
where $Z_1$ is supported on $\Sym^{m-1}C\times\Delta^s$. So
$$\rho^{s*}g^s_*i^s_*(Z)=\rho^{s*}(Z+Z_1)=\rho^{s*}(Z)$$
since $\rho^{s*}(Z_1)=0$. Hence the proposition is proved.
\end{proof}
Now we want to run the same argument as in proposition \ref{prop3} but for open varieties. That is let $W$ be a closed subscheme in $\Sym^n C$.
Let us consider the embedding of $(\Sym^{m-1}C\setminus i_{m-1,n}^{-1}W)\times \Delta^s$ into $(\Sym^m C\setminus i_{m,n}^{-1}W)\times \Delta^s$, induced by the embedding $\Sym^{m-1}C$ into $\Sym^m C$. Let $\rho^s{'}$ be the embedding of the complement of $(\Sym^{m-1}C \setminus i_{m-1,n}^{-1}W)\times \Delta^s$ in $(\Sym^m C \setminus i_{m,n}^{-1}W)\times \Delta^s$. Then arguing as in proposition \ref{prop3} we prove that
\begin{corollary}
\label{corollary1}
At the level of algebraic cycles we have
$$
\rho{^s{'}}{^*}\circ g^s{'}_*\circ i_{m,n*}^s=\rho{^s{'}}{^*}\;.
$$
\end{corollary}
\begin{proof}
We argue as in Proposition \ref{prop3}, with $g^s_{*}$ replaced by $g^s{'}_*$ and $\Gamma^s$ by $\Gamma^{s}{'}$, noting that
$$(i_{m,n}^s\times id)^*(\Gamma^{s}{'})=((i_{m,n}^s\times id)^*\Gamma^{s})\cap ((\Sym^m C\setminus i_{m,n}^{-1}W\times\Delta^s)\times (\Sym^m C\setminus i_{m,n}^{-1}W\times \Delta^s))\;.$$
\end{proof}
Now we prove that the push-forward homomorphism $i^s_*$ from $\CH_*(\Sym^m C,s)$ to $\CH_*(\Sym^n C,s)$ is injective. This involves several steps; the first is to verify that the push-forward homomorphism $i^s_*$ is well defined at the level of higher Chow groups.
Here $\bcZ$ denotes the group of admissible cycles, as defined by S. Bloch \cite{Bloch}.
\begin{lemma}
\label{lemma1}
${i_{m,n*}^s}$ is well defined from $\CH^*(\Sym^m C,s)$ to $\CH^*(\Sym^n C,s)$.
\end{lemma}
\begin{proof}
The morphism $i_{m,n}$ is defined from $\Sym^m C$ to $\Sym^n C$. That will give us a morphism $i^s_{m,n}$ from $\Sym^m C\times \Delta^s$ to $\Sym^n C\times \Delta^s$. So consider the face morphisms
$$\partial_i:\Delta^{s-1}\to \Delta^s$$
given by
$$(t_0,\cdots,t_{s-1})\mapsto (t_0,\cdots,t_{i-1},0,t_i,\cdots,t_{s-1})\;.$$
These face morphisms give rise to morphisms from $\Sym^m C\times \Delta^{s-1}$ to $\Sym^m C\times \Delta^s$; we continue to denote them by $\partial_i$.
Consider the following commutative diagram
$$
\diagram
\Sym^m C\times \Delta^{s-1}\ar[dd]_-{i_{m,n}^{s-1}} \ar[rr]^-{\partial_i} & & \Sym^{m}C\times \Delta^s \ar[dd]^-{i_{m,n}^s} \\ \\
\Sym^n C\times \Delta^{s-1}\ar[rr]^-{\partial_i} & & \Sym^n C\times \Delta^s
\enddiagram
$$
From the above commutative diagram we get the commutativity of the following diagram:
$$
\diagram
\bcZ^*(\Sym^m C\times \Delta^{s})\ar[dd]_-{i_{m,n*}^{s}} \ar[rr]^-{\partial_i^*} & & \bcZ^*(\Sym^{m}C\times \Delta^{s-1}) \ar[dd]^-{i_{m,n*}^{s-1}} \\ \\
\bcZ^*(\Sym^n C\times \Delta^{s})\ar[rr]^-{\partial_i^*} & & \bcZ^*(\Sym^n C\times \Delta^{s-1})
\enddiagram
$$
The commutativity of this diagram and induced maps on admissible cycles shows that $i_{m,n*}^s$ is well defined at the level of higher Chow groups.
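Spelled out, with Bloch's boundary map $\partial:=\sum_{i=0}^{s}(-1)^i\partial_i^*$, the commutativity above reads
$$\partial\circ i_{m,n*}^{s}=i_{m,n*}^{s-1}\circ\partial\;,$$
so $i_{m,n*}$ is a morphism of cycle complexes and therefore descends to the homology groups $\CH^*(-,s)$.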
\end{proof}
\begin{corollary}
Let $W$ be a closed subscheme of $\Sym^n C$. Consider the morphism $i$ from $\Sym^m C\setminus i^{-1}(W)$ to $\Sym^n C\setminus W$. Then the homomorphism $i_{m,n*}^s$ is well defined from $\CH^*(\Sym^m C\setminus i^{-1}(W),s)$ to $\CH^*(\Sym^n C\setminus W,s).$
\end{corollary}
\begin{proof}
The proof follows by arguing as in lemma \ref{lemma1}, with $\Sym^m C,\Sym^n C$ replaced by $\Sym^m C\setminus i^{-1}(W),\Sym^n C\setminus W$ respectively.
\end{proof}
\begin{lemma}
\label{lemma2}
Let $\rho$ be the inclusion from $\Sym^m C\setminus \Sym^{m-1}C=C_0(m)$ to $\Sym^m C$. Then the homomorphism $\rho^{s*}$ is well defined at the level of higher Chow groups.
\end{lemma}
\begin{proof}
To prove this consider the diagram at the level of schemes.
$$
\diagram
C_0(m)\times \Delta^{s-1}\ar[dd]_-{\partial_i} \ar[rr]^-{\rho^{s-1}} & & \Sym^{m}C\times \Delta^{s-1} \ar[dd]^-{\partial_i} \\ \\
C_0(m)\times \Delta^{s}\ar[rr]^-{\rho^s} & & \Sym^m C\times \Delta^s
\enddiagram
$$
That gives the following commutative diagram at the level of $\bcZ^*$.
$$
\diagram
\bcZ^*(\Sym^m C\times \Delta^{s})\ar[dd]_-{\partial_i^*} \ar[rr]^-{\rho^{s*}} & & \bcZ^*(C_0(m)\times \Delta^{s}) \ar[dd]^-{\partial_i^*} \\ \\
\bcZ^*(\Sym^m C\times \Delta^{s-1})\ar[rr]^-{\rho^{s-1*}} & & \bcZ^*(C_0(m)\times \Delta^{s-1})
\enddiagram
$$
Therefore $\rho^{s*}$ is well defined at the level of higher Chow groups.
\end{proof}
\begin{corollary}
Let $W$ be a closed subscheme in $\Sym^m C$. Denote the complement of $\Sym^{m-1} C\setminus i_{m,n}^{-1}W$ in $\Sym^m C\setminus W$ as $W_0(m)$. Let $\rho$ be the inclusion of $W_0(m)$ into $\Sym^m C\setminus W$. Then the homomorphism $\rho^{s*}$ is well defined from $\CH^*(\Sym^m C\setminus W,s)$ to $\CH^*(W_0(m),s)$.
\end{corollary}
\begin{proof}
The proof follows by arguing as in lemma \ref{lemma2}, with $C_0(m),\Sym^m C$ replaced by $W_0(m),\Sym^m C\setminus W$.
\end{proof}
\begin{proposition}
\label{prop4}
The push-forward homomorphism $i^s_*$ from $\CH^*(\Sym^m C,s)$ to $\CH^*(\Sym^n C,s)$ is injective.
\end{proposition}
\begin{proof}
We prove this by induction on $m$. First, $\Sym^0 C$ is a single point and the morphism $i^s_{0,n}=(p,\cdots,p)$, so the push-forward induced by this morphism is injective. Assume now that $i^s_*$ is injective for $m-1$ and for any $n$ greater than or equal to $m-1$. Then consider the following commutative diagram
$$
\xymatrix{
0 \ar[r]^-{} & \CH^*(\Sym^{m-1} C,s) \ar[r]^-{i^s_{m-1,m*}} \ar[dd]_-{}
& \CH^*(\Sym^m C,s) \ar[r]^-{\rho^{s*}} \ar[dd]_-{i^s_{m,n*}}
& \CH^*(C_0(m),s) \ar[dd]_-{}
\\ \\
0 \ar[r]^-{} & \CH^*(\Sym^{m-1} C,s) \ar[r]^-{i^s_{m-1,n*}}
& \CH^*(\Sym^n C,s) \ar[r]^-{}
& \CH^*((\Sym^{m-1}C)^c,s)
}
$$
In the above $(\Sym^{m-1}C)^c$ is the complement of $\Sym^{m-1}C$ in $\Sym^n C$.
In this diagram the rows are exact at the left-hand terms by the induction hypothesis, and exact at the middle terms by the localization exact sequence for higher Chow groups.
Now suppose that $z$ belongs to $\CH^*(\Sym^m C,s)$, such that $$i^s_{m,n*}(z)=0$$
and let $Z$ be a cycle whose class is $z$. Let $cl(Z)$ denote the cycle class in the higher Chow group corresponding to the algebraic cycle $Z$.
Then we have
$$cl(\rho^{s*}g^s_*i^s_*(Z))=0$$
which means by the proposition \ref{prop3}
$$cl(\rho^{s*}(Z))=0\;,$$
hence
$$\rho^{s*}(cl(Z))=\rho^{s*}(z)=0\;.$$
So by the localization exact sequence there exists $z'$ in $\CH^*(\Sym^{m-1}C,s)$, such that
$$z=i^s_{m-1,m*}(z')\;.$$
By the commutativity of the left square of the above commutative diagram we get that
$$i^s_{m-1,n*}(z')=0\;.$$
By the injectivity of $i^s_{m-1,n*}$ we get that $z'=0$, so $z=0$, hence $i^s_{m,n*}$ is injective.
\end{proof}
\begin{corollary}\label{openCollino}
Let $W$ be a closed subscheme inside $\Sym^n C$. Consider the embedding $i_{m,n}$ from $\Sym^m C\setminus i_{m,n}^{-1}(W)$ to $\Sym^n C\setminus W$. Then the homomorphism $i_{m,n*}^s$ from $\CH_*(\Sym^m C\setminus i_{m,n}^{-1}(W),s)$ to $\CH_*(\Sym^n C\setminus W,s)$ is injective.
\end{corollary}
\begin{proof}
The proof follows by arguing as in proposition \ref{prop4} with $\Sym^m C,\Sym^n C$ replaced by $\Sym^m C\setminus i_{m,n}^{-1}(W), \Sym^n C\setminus W$ respectively and by corollary \ref{corollary1}.
\end{proof}
Staffing Firms
Pharmaceuticals
Bsc
United Kingdom, Slough
sas, sas programmer, SDTM, CDISC, London, statistics, statistical programmer
£35 - £55 per hour + benefits
16 Aug, 2018
Key People
Slough
SAS Jobs
Fresher
Contract
negotiable
Within this role you will be responsible for developing specifications and program derived datasets, statistical analyses, tables, figures and listings. You will work closely with the statisticians and data managers and provide ad hoc statistical programming and supportive SAS programming for the data management group.
Home based working is an option for the right person!
The ideal person will have excellent SAS programming skills and have strong problem-solving skills.
If you are looking for a challenging and rewarding role in a company which will provide you with the scope for growth and development, we would love to hear from you.
Responsibilities:
Program and QC datasets (CDISC or other format), tables, figures and listings using SAS to summarise and analyse clinical data.
Develop SAS Macros.
Ad hoc programming tasks as required.
Contribute to all phases of clinical development.
Ensure that clinical trial outputs included in the study report are reliable and accurate.
Liaise with Data Management to ensure accurate derivation of statistical output from Case Report Form (CRF) data.
Maintain up to date knowledge of relevant regulatory guidance and requirements.
Review and update Standard Operating Procedures (SOPs).
Contribute towards technical leadership of studies and process improvement initiatives within the statistical programming team.
Represent the statistics team at sponsor meetings and data review meetings as required.
Provide solutions to issues that arise during the conduct and analysis of the study.
Manage the flow of work for allocated studies, adhere to agreed project timelines and flag potential problems.
Provide in-house training, technical support and mentoring for colleagues.
Provide business development support and attend bid defence meetings as required.
Perform any other duty as allocated by the department head.
Education:
*Statistics, Mathematics, Computer Science or Science degree.
Experience:
*Production and QC of datasets, tables, figures and listings using SAS to summarise and analyse clinical data.
*CDISC SDTM and/or ADaM
*4 years (plus) relevant experience within the pharmaceutical industry or a clinical research organisation.
*Working in multi-disciplinary teams.
*Knowledge of the clinical development process and its critical paths.
*Knowledge of ICH GCP.
This was the homily given at the Dallas March for Life Mass. It was truly inspiring.
Blessings,
tc
Tom Clark
President, Hike for Life Inc.
22 Wednesday Jan 2014
This was the homily given at the Dallas March for Life Mass. It was truly inspiring.
Blessings,
tc
Tom Clark
President, Hike for Life Inc.
21 Tuesday Jan 2014
2014 is here and so are the Hikes. Come on out to Weatherford and join hundreds of Pro-Lifers marching down Main Street. It will be great! Register today
Blessings!
02 Wednesday Oct 2013
Everyone –.
27 Friday Sep 2013
Hope you’ll be there
Plano, Dallas, and Waxahachie will all have Hikes going on tomorrow – Rain or Shine. Come on out and take a stand for the unborn!
15 Sunday Sep 2013
If you haven’t registered for the 2013 Hike for Life now is the time. Simply go to and choose the Hike Site you wish to go to… once on the Site webpage simply sign up and start looking for sponsors.
Remember, if you get up to $25 you will receive a 2013 H4L 40th Anniversary t-shirt
$150 and you get the t-shirt plus a meditation cross that has a relic that was touched to Christ’s tomb
And $625 gets you into the 625 Club… you get the shirt, cross, and a special 625 pin that means that you saved a baby from abortion… for each $625 you raise I will send you a pin. I wear 4 of them… I have saved 4 lives in 4 years… I hope to save 2 this year!
Blessings always,
Tom Clark
President – Hike for Life Inc
10 Wednesday Apr 2013
Save the Date
The Brownsville Hike for Life is scheduled for Saturday May 18th
For more information, or to sign up, please go to:
Blessings,
tc
The movie is undermined because these shark scenes, including the opening one, lack power, danger, thrills or scariness. Fuller often liked a grabber of an opener to set the movie hurtling into motion. In light of that it seems very likely story structure-wise that the opening was Fuller’s idea (it is certainly unconventional, opening with quiet, calm underwater scenes with no music, no credits, no story, no explanation or context for what we are seeing) and that the blame for the weakness of its execution this time must rest with him. (I’ll report back if I can research further the specific interference by the producers.)
I’ll have to hunt for an other-region DVD that may be of better quality (perhaps DVD Beaver knows of one).
Heritage
Heritage in Granite Belt
Uncover the rich history of Granite Belt by exploring the heritage buildings and celebrated venues. Browse the listings to delve into our story.
Apple Blossom Cottage
Phone 07 4684 3376
Granite Belt
Beverley Homestead
Phone 07 4683 5100
Granite Belt
Donnelly’s Castle
Stevie Johnson Played Sunday After Learning His Mother Died On Saturday
Stevie Johnson catches a lot of flak for some of the silly things he does on and off the field, but huge kudos go out to Johnson for playing Sunday when he probably shouldn’t have.
Johnson was informed by Buffalo Bills head coach Doug Marrone shortly after the team arrived in Jacksonville that his mother had just died unexpectedly in California.
According to the Bills’ official website, “Marrone and I just sat and talked,” Johnson said.
“We talked about everything. It wasn’t about football. I felt good about it. He talked to me like I was his son or something. He made me feel comfortable with my decision whichever way it went, whether it was to fly right back or stay. I felt like we’ve got business here and I’m here already so let’s take care of business and we’ll deal with everything else after the game.”
Johnson is expected to travel back to California this week.
We at BlackSportsOnline offer our condolences to Johnson and his family on their loss.
Saturday, 2 May 2015
Groovin The Moo 2015
As a sea of onesies, ponchos and gumboots streamed into Bendigo’s Prince Of Wales Showgrounds over the weekend, you could be forgiven for thinking that the Groovin the Moo festivalgoers had finally read the weather forecast ahead of time. Alas, you’d be (partially) wrong – the predicted wind, rain and cloud still didn’t deter the female population from the obligatory short-shorts and crop tops. Hideously inappropriate for a late-autumn day, but who cares? It’s Groovin the Moo and with a stellar lineup for 2015, it’s time to party!
Opening a music festival is never an easy task, but local act The Pierce Brothers took it in stride, drawing a surprisingly large crowd with their catchy folk music and energetic live show.
In the Moolin Rouge tent, The Delta Riggs took to the stage with their signature swagger. Surrounded by a haze of smoke emanating suspiciously from the audience, and with the help of an on-stage tiki bar, frontman Elliott Hammond captivated the crowd with his Jagger-esque dance moves and raw vocals.
There was something almost serendipitous about the sun finally making an appearance as Peace took to the triple j stage. Hailing from the UK, the band is building a solid reputation for their impressive live shows and this was no exception as they both teased and pleased the crowd.
They were followed by the sultry, soaring vocals of Meg Mac who was an absolute standout; almost everyone in the vicinity stopped and stared in sheer awe at both her talent and her incredible velvet cape. Her soulful voice was captivating as she took the crowd through her impressive repertoire, including 'Known Better' and 'Roll Up Your Sleeves'.
Local indie pop darlings, The Preatures, put on a fast-paced, rocking set. Frontwoman Isabella Manfredi was reminiscent of the late Chrissie Amphlett in her pinafore and white blouse. A comparison brought eerily to life when the band launched into an energetic cover of the Divinyls’ 'Boys In Town'.
Despite a disappointing turnout by the crowd, UK rockers You Me At Six hit the stage with fierce determination and a catalogue of catchy tunes. Opening with 'Room To Breathe' and 'Loverboy', diehard fans and passing festivalgoers alike were immediately drawn to the charismatic stage presence of frontman Josh Franceschi.
Back to the Moolin Rouge tent and Peaches made her appearance, resplendent in costumes that wouldn’t seem out of place in a production of Priscilla, Queen Of The Desert. The only artist who could successfully rhyme ‘vaginoplasty’ in a song, her R-rated performance (complete with oversized genitals) was both incredible and mesmerising.
Charli XCX and her all-girl band kicked off their set with a defiant “put your fucking middle fingers in the air!” The heaving crowd responded in kind, almost drowning out the lead vocals as Charli led them into hit song, 'I Don’t Care'. The rest of the set was an exhilarating ride! One festivalgoer was overheard repeating “Wow” as the band left the stage at the end of the set.
Wolfmother drew a sizeable crowd but frontman Andrew Stockdale’s attempts at humour in between his self-indulgent guitar solos fell flat for this reviewer. Older hits, 'Woman' and 'Joker And The Thief', were well received and were among the few highlights of an otherwise lacklustre set.
Having been tasked with closing out the triple j stage, The Hilltop Hoods kicked off with 'Chase That Feeling' to an adoring crowd. Their set was predictable, filled with a mix of old and new, but it kept everyone happy, with songs from their latest album Walking Under Stars receiving almost as much enthusiasm as classics like 'Nosebleed Section'.
As the last act in the Moolin Rouge, Flight Facilities had the smoke-filled tent buzzing with anticipation. Introducing themselves with a very formal "Thank you for choosing to fly with Flight Facilities", they were joined by guest vocalist, the always-popular Owl Eyes. Their set was pure party and kept everyone dancing until the very end.
Despite the weary legs and tired eyes befalling punters as they trudged out the gates, there was an overwhelming sense of achievement and electricity in the air. Groovin the Moo had once again rocked Bendigo to its very core.
Follow The Dwarf on Facebook
In 2014 we made the unprecedented move of allowing cameras into the force 24/7 to film a fly-on-the-wall documentary capturing some of the issues faced by police officers today.
The series continues to grow in popularity, with further series aired in 2016 and 2017 featuring people from previous episodes.
If you would like to be a part of 'Team Bedfordshire' visit our recruitment section to view the latest vacancies.
TITLE: $A$ and $B$ are closed subset of $\mathbb R$. Show that $A\cap B$ is also closed in $\mathbb R$.
QUESTION [1 upvotes]: Definition - A subset $S$ of $\mathbb R$ is said to be closed provided that if ${\{a_n}\}$ is a sequence in $S$ that converges to a number $a$, then the limit $a$ also belongs to $S$.
Actually, the exercise was two-part; first part was proof of the closedness of $A\cup B$ which is easy, but I can't prove for $A\cap B$.
Suppose ${\{a_n}\}$ is a sequence in $A$. Its limit $a\in A$, so is $a\in A\cup B$; and if ${\{b_n}\}$ is a sequence in $B$ then its limit $b\in B$, so is $b\in A\cup B$. Q.E.D.
Would someone please guide me how to prove it only based on the mentioned definition.
Thank you.
REPLY [1 votes]: HINT
For the intersection, you have to start with a (converging) sequence $\{a_n\} \subseteq A\cap B$, and you have to show that $\lim a_n \in A\cap B$.
Now, remember that by the definition of intersection we have both $\{a_n\} \subseteq A$ and $\{a_n\} \subseteq B$.
Can you take it from here?
For the union: let $\{a_n\} \subseteq A\cup B$. This sequence has infinitely many elements, some of the elements are in $A$ and some are in $B$. One of the sets must contain infinitely many elements from the sequence (why?), and let's assume (WLOG) that it's $A$.
So there are infinitely many indices $j$ for which $a_j \in A$. Thus we can construct a subsequence $a_{n_k}$ of $a_n$ that lies entirely in $A$. Now, since $a_n$ converges, so does its subsequence $a_{n_k}$; and since $A$ is closed...
REPLY [0 votes]: Hint for $A\cap B$: let $(a_n)_{n\in\mathbb N}$ be a sequence in $A\cap B$ with $a_n\rightarrow a~(n\to\infty)$. We need to show: $a\in A\cap B$.
We can definitely say that $a_n\in A$, so...can you take it from here?
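For completeness, one possible way to finish the intersection argument from either hint: since $\{a_n\} \subseteq A\cap B$, we have both $\{a_n\} \subseteq A$ and $\{a_n\} \subseteq B$. Because $A$ is closed and $a_n \to a$, the definition gives $a \in A$; the same reasoning with $B$ gives $a \in B$. Hence $a \in A\cap B$, so $A\cap B$ is closed.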
We at KTEN love our soldiers. That's why we're helping spread the word of how you can put a smile on a soldier's face who may be thousands of miles away from home this holiday season. Through the American Red Cross, Holiday Mail for Heroes program, you could make the holidays so much brighter for a soldier who can't be home for the holidays.
Sara Jerome with our local chapter of the American Red Cross joined Lisanne on Good Morning Texoma to talk more about how you can send your love with a holiday card. Also, don't forget, our local folks are in need this time of year too. With cold weather moving in, the threat of house fires goes up. If you'd like to donate you can contact your local Red Cross. Monetary donations are easiest, but they're also in need of new sweats, all sizes. Those help families who've been forced out of their homes with only the clothes on their backs.
For information on the Holiday Mail for Heroes Program go to
Every card received will be screened for hazardous materials and then reviewed by Red Cross volunteers working around the country.
Please observe the following guidelines to ensure a quick reviewing process:
All holiday greetings should be addressed and sent to:
Holiday Mail for Heroes
P.O. Box 5456
Capitol Heights, MD 20791-5456
The deadline for having cards to the P.O. Box is December 6th. Holiday cards received after this date cannot be guaranteed delivery.
For more information on the Texoma Area Red Cross:
Whether in person or virtual, fusion will forge ahead: the 28th FEC Conference (FEC2020) will be maintained on 10-15 May 2021 with the now finalized scientific program.
The IAEA, together with the Local Organizing Committee (LOC), is currently reviewing the evolving Covid-19 situation and its impact on the conference. A decision whether to maintain a physical conference or move to a virtual event will be taken in early 2021. You will be kept informed.
So be ready to join us at the FEC2020, as the Fusion show must go on. We look forward to seeing you there!
For more information about the conference and updates regarding the program, you can visit
The internet loves to laugh, and while that usually means silly pictures of cats and toddlers being adorable, sometimes it takes on more serious topics. In this case, a felony theft. It all started when the Reynoldsburg Police decided to share the photo of a suspect that they believed was involved in a theft. She was accused of helping two other people steal gaming consoles…but it turned out that people who saw the picture were more concerned about the woman’s eyebrows than anything else.
“The real question is who stole her eyebrows,” one woman wrote.
“I’m guessing their next stop was the Apple Store so she could steal some iBrows,” another wrote.
Thanks to her viral fame as the “thief with no brows,” the story spread far and wide, making it easier for the police to learn who the suspects were. Arrests were made and the saga ended…but the jokes did not!
“Why they gotta make fun of the car for being rusty? What that car ever do to them?” one said.
Other comments advised future thieves not to steal from a store best known for its high-quality cameras and surveillance equipment!
To see more inspiring articles and uplifting content, check out Happy Tango every day! If you loved what you saw here then like and share this with the links below!
Founded by Dick Conway and Doug Pedersen, Conway Pedersen
Economics, Inc., is an economic forecasting service for the Puget
Sound region. The goal of the company is to provide insight into
current and future economic conditions of the region, presenting
forecasts and analyses in a clear, informative, and timely manner.
We strongly believe that good economic forecasting and analysis
involve both sound methodology and expert judgment. Our work
makes use of a sophisticated regional econometric model that
over the years has proven to be an effective forecasting tool. Our
expert judgment comes from a combined fifty years of experience
studying and forecasting the Puget Sound economy.
Since 1993 Conway Pedersen Economics has published The Puget
Sound Economic Forecaster, a quarterly forecast and commentary.
The newsletter is designed for business executives, marketing
directors, investors, government managers, and researchers who need
a professional and objective view on the economic prospects for the
region. Recognized for its accurate forecasts and perceptive
analyses, the newsletter is read by a virtual "who's who" of local
business and government. The Puget Sound Economic Forecaster
and its principals are often quoted in newspapers and magazines and
on radio and television.
With the addition of The Puget Sound Economic Forecaster web
site, Conway Pedersen Economics makes available to subscribers a
new online regional economic resource. By providing more
information and putting it at one's fingertips, the web site is a
valuable aid to anyone conducting business, making investments, or
in other ways concerned about the economy in the Puget Sound
region.
\begin{document}\setul{2.5ex}{.25ex}
\title[]{Resonances near thresholds in slightly twisted waveguides}
\author[V.\ Bruneau]{Vincent Bruneau}
\address{Universit\'e de Bordeaux, IMB, UMR 5251, 33405 TALENCE cedex, France}\email{Vincent.Bruneau@u-bordeaux.fr}
\author[P.\ Miranda]{Pablo Miranda}\address{Departamento de Matem\'atica y Ciencia de la Computaci\'on, Universidad de Santiago de Chile, Las Sophoras 173. Santiago, Chile.}\email{pablo.miranda.r@usach.cl}
\author[N.\ Popoff]{Nicolas Popoff}
\address{Universit\'e de Bordeaux, IMB, UMR 5251, 33405 TALENCE cedex, France}
\email{Nicolas.Popoff@u-bordeaux.fr}
\maketitle
\begin{abstract}
We consider the Dirichlet Laplacian in a straight three dimensional waveguide with non-rotationally invariant cross section, perturbed by a twisting of small amplitude. It is well known that such a perturbation does not create eigenvalues below the essential spectrum. However, around the bottom of the spectrum, we provide a meromorphic extension of the weighted resolvent of the perturbed operator, and show the existence of exactly one resonance near this point. Moreover, we obtain the asymptotic behavior of this resonance as the size of the twisting goes to 0. We also extend the analysis to the upper eigenvalues of the transversal problem, showing that the number of resonances is bounded by the multiplicity of the eigenvalue and obtaining the corresponding asymptotic behavior.
\end{abstract}
\noindent {\bf AMS 2000 Mathematics Subject Classification:} 35J10, 81Q10,
35P20.\\
\noindent {\bf Keywords:}
Twisted waveguide, Dirichlet Laplacian, Resonances near thresholds. \\
\section{Introduction}
Let $\omega$ be a bounded domain in $\R^2$ with Lipschitz boundary.
Set $\Omega:=\omega\times\R$ and $(x_1,x_2,x_3)=:(x_t,x_3)$.
Define $H_{0}$ as the Laplacian in $\Omega$ with Dirichlet boundary conditions. Consider $-\Delta_\omega$ (the Laplacian in $\omega$ with Dirichlet boundary conditions). Since $\omega$ is bounded, the spectrum of the operator $-\Delta_\omega$
is a discrete sequence of values converging to infinity, denoted by $\{\lambda_n\}_{n=1}^\infty$. Then, the spectrum of $H_0$ is given by
$$\sigma(H_0)=\bigcup_{n=1}^\infty[\lambda_n,\infty)=[\lambda_1,\infty),$$
and is purely absolutely continuous.
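To fix ideas (this example plays no role in what follows), if $\omega=(0,a)\times(0,b)$ is a rectangle, then the transversal eigenvalues are
$$\lambda_{p,q}=\pi^2\Big(\frac{p^2}{a^2}+\frac{q^2}{b^2}\Big),\qquad p,q\geq 1,$$
so that $\lambda_1=\pi^2(a^{-2}+b^{-2})$ and $\sigma(H_0)=[\pi^2(a^{-2}+b^{-2}),\infty)$.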
Geometric deformations of such a straight waveguide have been widely studied in recent years, and have numerous applications in quantum transport in nanotubes. The spectrum of the Dirichlet Laplacian in
waveguides provides information about the quantum transport of spinless particles with {\it hardwall} boundary conditions. In particular, the existence of eigenvalues describes the occurrence of {\it bound states} corresponding to trapped trajectories created by the geometric deformations. For a review
we refer to \cite{Kre07}, where bending against twisting is discussed, and to \cite{Gru04} for a general differential
approach.
Without being exhaustive we recall some well known situations: a local bending of the waveguide creates eigenvalues below the essential spectrum, as does a local enlarging of its width (\cite{DuEx95, Gru04}). On the contrary, it has been proved, under general assumptions, that a twisting of the waveguide does not lower the spectrum (\cite{EkKovKre08}); in particular a twisting going to 0 at infinity will not modify the spectrum (\cite{Gru04}).
In such a situation it is natural to introduce the notion of resonance and to analyze the effect of the twisting on the resonances near the real axis. There already exist studies of resonances in waveguides: resonances in a thin curved waveguide (\cite{DuExMel97,Ned97}), or more recently in a straight waveguide with an electric potential, perturbed by a twisting (\cite{KovSacc07}). In both of these cases, however, the resonances appear as perturbations of embedded eigenvalues of a reference operator, and follow the {\it Fermi golden rule} (see \cite{Ha07} for references and for an overview of such resonances). As we will see, in our case the origin of the resonances is rather the presence of thresholds appearing as branch points created by a 1d Laplacian. Our analysis will be close to studies of the 1d Laplacian near 0 (see for instance \cite{Si76,BulGesReSi97} where, even if resonances are not discussed, the ``threshold'' behavior appears). A similar phenomenon of threshold resonances was already studied for a magnetic Hamiltonian in \cite{BonBruRai07}, where the thresholds are eigenvalues of infinite multiplicity of some transversal problem.
In this article we will consider a small twisting of the waveguide: let $\varepsilon: \R\to \R$ be a non-zero function of class $C^1$ with exponential decay, i.e., for some $\alpha >2(\lambda_2-\lambda_1)^{1/2}$ (this hypothesis can be relaxed, see Remark \ref{2oct17a}), $\varepsilon$ satisfies
\bel{hypeps}
\varepsilon(x_3)=O(e^{-\alpha\langle x_3\rangle}), \quad \varepsilon'(x_3)=O(e^{-\alpha\langle x_3\rangle}),
\ee
where $\langle x_3\rangle:=(1+x_3^2)^{1/2}$.
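For instance, any non-zero compactly supported function of class $C^1$ satisfies \eqref{hypeps} for every $\alpha>0$, as does
$$\varepsilon(x_3)=e^{-\beta\langle x_3\rangle},\qquad \beta\geq\alpha,$$
since then $|\varepsilon'(x_3)|=\beta\frac{|x_3|}{\langle x_3\rangle}e^{-\beta\langle x_3\rangle}\leq \beta e^{-\alpha\langle x_3\rangle}$.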
Then, for $\delta>0$ we define $\Omega_{\delta}$ as the waveguide obtained by twisting $\Omega$ with $\theta_\delta$, where $\theta_\delta'(x_3)=\delta\varepsilon(x_3)$, i.e., we define
$$\Omega_{\delta}:=\{ (r_{\theta_\delta (x_{3})}(x_{t}),x_{3}),\ (x_{t},x_{3})\in \Omega\},$$
where $r_{\theta}$ is the rotation of angle $\theta$ in $\R^{2}$. Set $$W(\delta):=-\delta\partial_\varphi\varepsilon\partial_3-\delta\partial_3\varepsilon\partial_\varphi-\delta^2\varepsilon^2\partial_\varphi^2=-2\delta\varepsilon\partial_\varphi\partial_3-\delta\varepsilon'\partial_\varphi-\delta^2\varepsilon^2\partial^2_\varphi,$$
with the notation $\partial_\varphi$ for $x_1\partial_2-x_2\partial_1$. Then, it is standard (see for instance \cite[Section 2]{Gru04})
that the Dirichlet Laplacian in $\Omega_{\delta}$ is unitarily equivalent to the operator
$$H(\delta):=H_{0}+W(\delta),$$
defined in $\Omega$ with a Dirichlet boundary condition. Since the perturbation is a second order differential operator, $H(\delta)$ is not a relatively compact perturbation of $H_{0}$. However the resolvent difference $H(\delta)^{-1}-H_{0}^{-1}$ is compact (\cite[Section 4.1]{BriKoRaSo09}), and therefore $H(\delta)$ and $H_{0}$ have the same essential spectrum.
Moreover,
the spectrum of $H(\delta)$ coincides with $[\lambda_{1},+\infty)$; see \cite{EkKovKre08}.
In this article we will show that around
$\lambda_{1}$ there exists, for $\delta$ small enough, a meromorphic extension of the weighted resolvent of $H(\delta )$ with respect to the variable $k:= \sqrt{z-\lambda_1}$, \Bk where $z$ is the spectral parameter. In other words, the resolvent $(H(\delta)- z)^{-1}$, first defined for $z$ in $\C\setminus [0, + \infty)$, admits a meromorphic extension on a weighted space (space of functions with exponential decay along the tube), for values in a neighborhood of $\lambda_1$ in a 2-sheeted Riemann surface. We will identify the resonances \Bk
around $\lambda_1$ with the poles of this meromorphic extension in the parameter $k$. We will prove in Theorem \ref{T:main} that in a neighborhood independent of $\delta$, there is exactly one pole $k(\delta)$, whose behavior as $\delta\to0$ is explicit:
\bel{E:MT}
k(\delta)=-i\mu\delta^2+O(\delta^3),
\ee
where $\mu>0$ is given by \eqref{E:Defmu} below, and moreover, $k(\delta)$ is on the imaginary axis.
The fact that $k(\delta)$ is on the negative imaginary axis means that in the spectral variable the resonance is on the second sheet of the 2-sheeted Riemann surface, far from the real axis (it is sometimes called an antibound state \cite{Sim2000}). In particular such a resonance cannot be detected using dilations (a dilation of angle larger than $\pi$ would be needed) and is completely different in nature from those created by perturbations of embedded eigenvalues. For this reason we define resonances as the poles of weighted resolvents, assuming that $\varepsilon$ is exponentially decaying.
However, a difficulty comes from the fact that the perturbation $W(\delta)$ is not relatively compact.
This problem will be
overcome by exploiting the smallness of the perturbation and the local nature of our problem.
Our analysis provides an analogous result for higher thresholds, in Section \ref{R:higher}: Around each $\lambda_{q_{0}}$ there are at most $m_{0}$ resonances (for all $\delta$ small enough), where $m_{0}$ is the multiplicity of $\lambda_{q_{0}}$ as
eigenvalue of $-\Delta_{\omega}$. Moreover,
under an additional assumption, each of these resonances has an asymptotic behavior of the form \eqref{E:MT},
where the constant $\mu$
is an eigenvalue of a $m_0\times m_0$ explicit matrix (not necessarily Hermitian).
Although Theorem \ref{T:higher} may be viewed as a generalization of Theorem \ref{T:main}, we preferred to push forward the proof for the
first threshold for the following reasons: it is easier to follow, it contains all the main ingredients needed for the proof at the upper thresholds, and the eigenvalues of $-\Delta_{\omega}$ are generically simple, as we know the first eigenvalue is.
\begin{remark}
Independent of the size of the perturbation $W(\delta)$, a more global definition of resonances would be possible by showing that a generalized determinant (as in \cite{BouBru08} or in \cite[Definition 4.3]{Sjo14}) is well defined on $\C\setminus [0, + \infty)$ and admits an analytic extension. Then the resonances would be defined as the zeros of this determinant on an infinite-sheeted Riemann surface (as in \cite[Definitions 1-2]{BonBruRai07}).
\end{remark}
\section{Preliminary decomposition of the free resolvent}\label{S1}
Let us describe the singularities of the free resolvent. Setting $D_3:=-i\partial_3$, we have that
\bel{24aug17aa}
H_{0}-\lambda_{1}=(-\Delta_{\omega}-\lambda_{1})\otimes I_{x_3}+I_{x_t}\otimes D_{3}^2.
\ee
For $k\in \C^{+}:= \{k \in \C; \; {\rm Im \,} k >0 \}$, define
$$R_{0}(k):=(H_{0}-\lambda_{1}-k^2)^{-1},$$
and $R$ similarly for $H(\delta)$. If for $n\in\N$, $\pi_n$
is the orthogonal projection onto $\ker(-\Delta_\omega-\lambda_n)$, using
\eqref{24aug17aa} for $ k^2 \in \C\setminus [0, + \infty)$, we have
that
\bel{24aug17a}R_0(k)=(H_0-\lambda_1-k^2)^{-1}=\sum_{q\geq1} \pi_q\otimes(D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}.\ee
The integral kernel of $(D_3^2-k^2)^{-1}$ is explicitly given
by
\bel{3sep17a}
\frac{i}{2k}e^{i\,k|x_3-x_3'|}.
\ee
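As a quick sanity check, the kernel above can be verified numerically to invert $D_{3}^2-k^2$ when ${\rm Im \,} k>0$. The following Python sketch (the grid, the choice $k=i$ and the Gaussian right-hand side are illustrative assumptions, not taken from the paper) applies the integral kernel by quadrature and checks the resulting ODE residual by finite differences.

```python
import numpy as np

# Numerical sanity check of the kernel i/(2k) exp(i k |x - x'|) of (D_3^2 - k^2)^{-1}
# for Im k > 0.  The grid, the choice k = i and the Gaussian data are illustrative.
k = 1j                        # k^2 = -1, so we are inverting -d^2/dx^2 + 1
n = 1201
x = np.linspace(-12.0, 12.0, n)
h = x[1] - x[0]
f = np.exp(-x**2)             # smooth, rapidly decaying right-hand side

# u = (D_3^2 - k^2)^{-1} f, applying the integral kernel by quadrature
K = (1j / (2 * k)) * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))
u = (K * f[None, :]).sum(axis=1) * h

# check (-d^2/dx^2 - k^2) u = f on interior points by central differences
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
residual = np.max(np.abs(-upp - k**2 * u[1:-1] - f[1:-1]))
```

Because the kernel has a kink at $x=x'$, the discrete residual is only of size $O(h^2)$ rather than machine precision, but it confirms the formula.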
Let $\eta$ be an exponential weight of the form $\eta(x_{3})=e^{-N \left\langle x_{3}\right \rangle}$, for $(\lambda_2-\lambda_1)^{1/2}<N<\alpha/2$.
Also, for $a\in\C$ and $r>0$ set $B(a,r):=\{z\in\C;|a-z|<r\}$.
Then, as in \cite[Lemma 1]{BonBruRai07} it can be seen that the operator-valued function $k\mapsto (R_{0}(k):\eta^{-1}L^2(\Omega)\to\eta L^2(\Omega)) $, initially defined on $\C^+$, has a meromorphic extension in $B(0,r)$ for any $0<r<(\lambda_2-\lambda_1)^{1/2}$, with a unique pole, of multiplicity one, at $k=0$. More precisely,
\bel{6sep17a}
\eta R_{0}(k)\eta = \frac{1}{k} \pi_1 \otimes \alpha_0 + A_0(k),
\ee
where $\alpha_0$ is the rank one operator $\alpha_0 = \frac{i}{2} |\eta \rangle\langle \eta|$ and
$k\mapsto(A_0(k)$: $L^2(\Omega)\to L^2(\Omega))$ is the analytic operator-valued function
\bel{7sep17a}
A_0(k) := \pi_1 \otimes r_1(k) + \sum_{q\geq2} \pi_q\otimes \eta (D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}\eta,
\ee
with $r_1$ being the operator in $L^2(\R)$ with integral kernel given
by
$$i\eta(x_3) \frac{ (e^{i\,k|x_3-x_3'|}-1)}{2k} \eta(x_3').$$
Clearly, for $0<r<(\lambda_2-\lambda_1)^{1/2}$, the family of operators $A_0(k)$ is uniformly bounded on $B(0,r)$.
\begin{remark}\label{2oct17a}
Note that the condition $\alpha >2(\lambda_2-\lambda_1)^{\frac12}$ on the function $\varepsilon$, enters here in order to have analytic properties in the ball $B(0,r)$, $0<r<(\lambda_2-\lambda_1)^{1/2}$. This assumption can be relaxed to $\alpha >0$, but the results will be restricted to $B(0,r)$ with $0<r< \frac{\alpha}{2}$.
\end{remark}
In order to define and study the resonances, we will consider a suitable meromorphic extension of $R(k)$, using the identity
\bel{4sept17e}
\eta R(k)\eta=\eta R_{0}(k)\eta\left({\rm Id}+\eta^{-1} W(\delta)R_{0}(k)\eta\right)^{-1}.\ee
Since $H(\delta)$ has no eigenvalue below $\lambda_1$ (see \cite{EkKovKre08}), the above relation is initially well defined and analytic for $k \in \C^+$.
It is then necessary to understand under which conditions this formula can be used to define such an extension.
Since we cannot directly apply the meromorphic Fredholm theory ($W(\delta)$ is not $H_{0}$-compact), we will need to show explicitly that $\left({\rm Id}+\eta^{-1} W(\delta)R_{0}(k)\eta\right)^{-1}$ is meromorphic in some region around zero.
Let $\psi_1$ be such that $-\Delta_\omega \psi_1=\lambda_1 \psi_1$, $\|\psi_1\|_{L^2(\omega)}=1$ (then $\pi_1=| \psi_1 \rangle \langle \psi_1 |$), and define
\bel{E:defPhi}\Phi_\delta:= -\frac{i}{2}((\partial_\varphi \psi_1 \otimes \eta^{-1} \varepsilon' )
+
\delta(\partial^2_\varphi \psi_1 \otimes \eta^{-1} \varepsilon^2 )).
\ee
\begin{lemma}\label{Lholo0}
Let $0<r<(\lambda_2-\lambda_1)^{1/2}$. There exists $\delta_0>0$ such that
for any $0<\delta\leq\delta_0$ and $k\in B(0,r)\setminus\{0\}$
$$\eta^{-1} W(\delta)R_{0}(k)\eta=\frac{\delta}{k} K_0 + \delta T(\delta,k),$$
where $K_0$ is the rank one operator
\bel{4sep17} K_0:= |\Phi_\delta\rangle\langle \psi_1 \otimes \eta|,
\ee
and
$B(0,r)\ni k\mapsto(T(\delta,k)$: $L^2(\Omega)\to L^2(\Omega))$ is an analytic operator-valued function. Moreover,
\bel{2sep17}\sup_{0<\delta\leq\delta_0, \, k \in B(0,r)}||T(\delta,k)||<\infty.\ee
\end{lemma}
\begin{proof}
Thanks to \eqref{6sep17a},
\bel{E:devWR}
\eta^{-1} W(\delta)R_{0}(k)\eta= \frac{1}{k} \eta^{-1} W(\delta) \eta^{-1} ( \pi_1 \otimes \alpha_0) + \eta^{-1} W(\delta)\eta^{-1} A_0(k).
\ee
Since the range of the operator $\eta^{-1} \alpha_0=\frac{i}{2}|1\rangle \langle \eta|$ is spanned by constant functions, we have $\partial_3 \eta^{-1} \alpha_0=0$, and therefore
$$\eta^{-1} W(\delta) \eta^{-1}( \pi_1 \otimes \alpha_0)=\frac{i}{2} | \eta^{-1} (-\delta\varepsilon'\partial_\varphi-\delta^2\varepsilon^2\partial^2_\varphi) \eta^{-1} (\psi_1 \otimes \eta)\rangle\langle \psi_1 \otimes \eta| = \delta |\Phi_\delta\rangle\langle \psi_1 \otimes \eta|=\delta K_{0}.$$
We now treat the last term of \eqref{E:devWR}: Setting $ \delta T(\delta,k)= \eta^{-1} W(\delta)\eta^{-1} A_0(k)$ we immediately get
\begin{align*}T(\delta,k)
=&-2
\Big(\partial_\varphi \pi_1 \otimes \eta^{-1} \varepsilon \partial_3 \eta^{-1} r_1(k) + \sum_{ q\geq 2 } \partial_\varphi \pi_q\otimes \eta^{-1} \varepsilon \partial_3 (D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}\eta\Big)\\
&- \Big(\partial_\varphi \pi_1 \otimes \eta^{-1} \varepsilon' \eta^{-1} r_1(k) + \sum_{q\geq 2} \partial_\varphi \pi_q\otimes \eta^{-1} \varepsilon' (D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}\eta \Big)\\&
- \delta \Big( \partial_\varphi^2 \pi_1 \otimes \eta^{-1} \varepsilon^2 \eta^{-1} r_1(k) + \sum_{q\geq 2} \partial_\varphi^2 \pi_q\otimes \eta^{-1} \varepsilon^2 (D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}\eta\Big).\end{align*}
It is clear that the two last terms are analytic and uniformly bounded in $B(0,r)$. For the first one, we note that the kernel of $\partial_{3}\eta^{-1}r_{1}(k)$ is $(x_{3},x_{3}') \mapsto -\tfrac{1}{2}\eta(x_{3}')\mathrm{sign} (x_{3}-x_{3}')e^{ik|x_{3}-x_{3}'|}$, and therefore $\partial_\varphi \pi_1 \otimes \eta^{-1} \varepsilon \partial_3 \eta^{-1} r_1$ admits an analytic expansion which is uniformly bounded. The same arguments run for $\sum_{q\geq 2} \partial_\varphi \pi_q\otimes \eta^{-1} \varepsilon \partial_3 (D^2_{3}+(\lambda_q-\lambda_1)-k^2)^{-1}\eta$.
\end{proof}
\section{Meromorphic extension of the resolvent and study of the resonance}\label{S3}
\begin{proposition}\label{propinv}
Let $\mathcal{D}\subset B(0,\sqrt{\lambda_2-\lambda_1})$ be a compact neighborhood of zero. With the notation of Lemma \ref{Lholo0}, for $\delta$ sufficiently small, let us introduce the functions $\tilde{\Phi}_{\delta}=({\rm Id}+ \delta T(\delta,k) )^{-1} \Phi_\delta$ and
\bel{4sept17c}
w_{\delta}(k)= \delta \langle \tilde{\Phi}_{\delta}| \psi_1 \otimes \eta \rangle.
\ee
Then:
\begin{enumerate}
\item[i)] There exists $\delta_0$ such that for any $k \in \mathcal{D}$, $\delta \in (0,\delta_0)$,
\bel{E:expw}
w_\delta(k)= i\mu \delta^2+O(\delta^3)+\delta^2 k g_\delta(k)
\ee
where
\bel{E:Defmu}
\mu:=\tfrac{1}{2}\sum_{q\geq2}(\lambda_{q}-\lambda_{1})\langle \partial_{\varphi}\psi_{1}|\pi_{q}\partial_{\varphi}\psi_{1}\rangle\langle \varepsilon|(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}\varepsilon\rangle
\ee is a positive constant, and
$g_{\delta}$ is an analytic function in $\mathcal{D}$ satisfying
$$\sup_{\delta \in (0,\delta_{0})}\sup_{k\in\cD}|g_{\delta}(k)|<+\infty.$$
\item[ii)] When $\alpha\in \R$, there holds $w_{\delta}(i\alpha)\in i\R$.
\end{enumerate}
\end{proposition}
\begin{proof}
We use the Taylor expansion and Lemma \ref{Lholo0} to see that
\bel{12oct17}({\rm Id}+\delta T(\delta,k))^{-1} ={\rm Id}-\delta T(\delta, 0 )+\delta k G_{\delta}(k) +O(\delta^2),\ee where $G_{\delta}(k)$ is a holomorphic operator-valued function that is uniformly bounded for $ k\in\mathcal{D}$ and $\delta$ small.
By definition of $\Phi_\delta$, we have:
$$ \langle \Phi_\delta | \psi_1 \otimes \eta\rangle
= -\tfrac{i}{2}\left( \langle \partial_\varphi \psi_1 | \psi_1\rangle_{L^2(\omega)} \, \langle \eta^{-1} \varepsilon' | \eta \rangle_{L^2(\R)} + \delta \langle \partial^2_\varphi\psi_1 | \psi_1 \rangle_{L^2(\omega)} \, \langle \eta^{-1} \varepsilon^2 | \eta \rangle_{L^2(\R)}\right).
$$
The first term is zero because $\varepsilon$ tends to zero at infinity.
Using integration by parts, since $\psi_1$ satisfies a Dirichlet boundary condition, we deduce
$$\langle \Phi_\delta | \psi_1 \otimes \eta\rangle = \delta\frac{i}{2} \|\partial_{\varphi}\psi_{1}\|^2\| \varepsilon \|^2.$$
Noticing that $\| \Phi_{\delta} \|=O(1)$, from \eqref{12oct17} we get
\bel{E:expandwstart}
w_\delta(k)=\delta^2\frac{i}{2}\|\partial_{\varphi}\psi_{1}\|^2\| \varepsilon \|^2-\delta^2 \langle T(\delta,0 ) \Phi_\delta | \psi_{1} \otimes \eta \rangle+ \delta^2 k g_\delta(k) + O(\delta^3),
\ee
where $g_\delta(k)$ is holomorphic and uniformly bounded for $ k\in\mathcal{D}$ and $\delta$ small.
We now compute $\langle T(\delta,0) \Phi_\delta | \psi_{1} \otimes \eta \rangle$.
First recall that $ T(\delta,k)=\delta^{-1} \eta^{-1} W(\delta)\eta^{-1} A_0(k)$.
Next, note that since $ \langle \partial_\varphi \psi_1 | \psi_1\rangle=0$,
$$\pi_{1}\partial_{\varphi}\psi_{1}=0,$$
and therefore,
using the definition of $\Phi_\delta$ in \eqref{E:defPhi}, we get
$$(\pi_{1}\otimes r_{1}(0))\Phi_{\delta}=-\delta \frac{i}{2} \pi_{1} \partial^2_\varphi \psi_1 \otimes r_{1}(0) \eta^{-1} \varepsilon^2, $$
which in turn implies that
$$\langle (\delta^{-1}\eta^{-1}W(\delta) \eta^{-1}) (\pi_{1}\otimes r_{1}(0))\Phi_{\delta} | \psi_{1}\otimes \eta \rangle=O(\delta).$$
In consequence, having in mind \eqref{E:defPhi} again,
we deduce
\begin{equation}
\label{E:expandTscalar}
\langle T(\delta,0 ) \Phi_{\delta}\,|\,\psi_{1}\otimes \eta\rangle
\end{equation}
\begin{align*}=&
\langle \eta^{-1} \left(-2\varepsilon\partial_{\varphi}\partial_{3}-\varepsilon' \partial_{\varphi}-\delta \varepsilon^2\partial_{\varphi}^2\right) \eta^{-1} \big(\sum_{q \geq 2} \pi_q\otimes \eta (D^2_{3}+(\lambda_q-\lambda_1))^{-1} \eta\big)\Phi_{\delta} | \psi_{1}\otimes \eta \rangle+O(\delta)
\\
=&\frac{i}{2}\langle \eta^{-1} \big(2\varepsilon\partial_{\varphi}\partial_{3}+\varepsilon' \partial_{\varphi}\big) \big(\sum_{q \geq 2} \pi_q\otimes (D^2_{3}+(\lambda_q-\lambda_1))^{-1}\big)\partial_\varphi \psi_1 \otimes \varepsilon' | \psi_{1}\otimes \eta \rangle+O(\delta).
\end{align*}
We compute the main term of the last expression using integration by parts, both in the $\varphi$ and the $x_{3}$ variables:
\begin{align}
\label{E:interTdelta}\begin{split}&
\langle \eta^{-1} \left(2\varepsilon\partial_{\varphi}\partial_{3}+\varepsilon' \partial_{\varphi}\right)\left(\sum_{q \geq 2} \pi_q\otimes (D^2_{3}+(\lambda_q-\lambda_1))^{-1}\right)\partial_\varphi \psi_1 \otimes \varepsilon' | \psi_{1}\otimes \eta \rangle
\\=&\sum_{q\geq2}\langle \partial_{\varphi}\psi_{1}|\pi_{q}\partial_{\varphi}\psi_{1}\rangle \times \langle \varepsilon'|(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}\varepsilon'\rangle.
\end{split}\end{align}
Now, we notice that
\begin{align*}
\langle \varepsilon'|(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}\varepsilon'\rangle&=\langle \varepsilon |(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}D_{3}^2\varepsilon\rangle
\\&= \|\varepsilon\|^2-(\lambda_{q}-\lambda_{1})\langle \varepsilon|(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}\varepsilon\rangle.
\end{align*}
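The resolvent identity used in the last display is easy to confirm numerically. The Python sketch below (the Gaussian profile playing the role of $\varepsilon$, the value $c=\lambda_q-\lambda_1=2$ and the grid are illustrative assumptions) evaluates both sides with an FFT-based Fourier multiplier; on the discrete Fourier side the identity is exact up to rounding.

```python
import numpy as np

# Check of  <eps'|(D^2+c)^{-1} eps'> = ||eps||^2 - c <eps|(D^2+c)^{-1} eps>
# with c = lambda_q - lambda_1 > 0.  The Gaussian eps and c = 2.0 are illustrative.
c = 2.0
n, L = 4096, 60.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
h = L / n
eps = np.exp(-x**2)
xi = 2 * np.pi * np.fft.fftfreq(n, d=h)

def resolvent(g):
    # (D^2 + c)^{-1} g, realized as the Fourier multiplier 1/(xi^2 + c)
    return np.real(np.fft.ifft(np.fft.fft(g) / (xi**2 + c)))

deps = np.real(np.fft.ifft(1j * xi * np.fft.fft(eps)))   # spectral derivative eps'
lhs = np.sum(deps * resolvent(deps)) * h
rhs = np.sum(eps * eps) * h - c * np.sum(eps * resolvent(eps)) * h
```

In Fourier variables both sides equal $\int \xi^2|\widehat{\varepsilon}(\xi)|^2/(\xi^2+c)\,d\xi$, which is why the agreement is to machine precision.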
In addition, since $\pi_{1}\partial_{\varphi}\psi_{1}=0$ and $\sum_{q\geq1}\pi_{q}=\mathrm{Id}$, we have that
\bel{E:sumpositiv}
\sum_{q\geq 2}\langle \partial_{\varphi}\psi_{1}|\pi_{q}\partial_{\varphi}\psi_{1}\rangle =\|\partial_{\varphi}\psi_{1}\|^2.
\ee
Then, from \eqref{E:expandTscalar} and \eqref{E:interTdelta} we get
\bel{26sep17}\ba{ll}
&\langle T(\delta,0) \Phi_{\delta}\,|\,\psi_{1}\otimes \eta\rangle
\\[.5em]
=&\displaystyle{\tfrac{i}{2}\|\varepsilon\|^2\|\partial_{\varphi}\psi_{1}\|^2-\tfrac{i}{2}\sum_{q\geq2}(\lambda_{q}-\lambda_{1})\langle \partial_{\varphi}\psi_{1}|\pi_{q}\partial_{\varphi}\psi_{1}\rangle\langle \varepsilon|(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}\varepsilon\rangle +O(\delta)}.
\ea\ee
Putting together \eqref{E:expandwstart} and \eqref{26sep17}, we deduce \eqref{E:expw}.
Moreover, $\mu$ is clearly non-negative, and from \eqref{E:sumpositiv}, there exists $q \geq 2$ such that $\langle \partial_{\varphi}\psi_{1}|\pi_{q}\partial_{\varphi}\psi_{1}\rangle>0$. Since $(D_{3}^2+\lambda_{q}-\lambda_{1})^{-1}$ is a positive operator, we get $\mu>0$.
Let us prove ii). For all $\alpha\in \R$, $A_{0}(i\alpha)$ has a real integral kernel, see \eqref{7sep17a}. Therefore if $u\in L^{2}(\Omega)$ is real valued, so is $({\rm Id}+\delta T(\delta,i\alpha))^{-1}u$. In consequence, since $\Phi_{\delta}$ takes values in $i\R$, so does $\tilde{\Phi}_{\delta}=({\rm Id}+\delta T(\delta,i\alpha))^{-1}\Phi_{\delta}$, and we deduce that $w_{\delta}(i\alpha)$ takes values in $i\R$ as well.
\end{proof}
\begin{theorem}
\label{T:main}
Let $\varepsilon: \R\to \R$ be a non-zero $C^1$-function satisfying \eqref{hypeps} and let $\mathcal{D}\subset B(0,\sqrt{\lambda_2-\lambda_1})$ be a compact neighborhood of zero. Then, for $\delta$ sufficiently small, $ k\mapsto R(k)=(H(\delta)-\lambda_{1}-k^2)^{-1}$, initially defined in $\C^{+}$, admits a meromorphic extension on $\cD$, with values in the space of bounded operators from $\eta^{-1}L^{2}(\Omega)$ into $\eta L^{2}(\Omega)$. This function has exactly one pole $k(\delta)$ in $\cD$, called a resonance of $H(\delta)$, and it is of multiplicity one. Moreover, we have the asymptotic expansion
$$k(\delta)=-i\mu\delta^2+O(\delta^3),$$
with $\mu$ given by \eqref{E:Defmu} and $\mathrm{Re}(k(\delta))=0$.
\end{theorem}
\begin{proof}
Consider the identity \eqref{4sept17e}, and note that from Lemma \ref{Lholo0}
for $k\in \mathcal{D}\setminus\{0\}$ and $\delta$ sufficiently small we can write
\bel{27sep17}\Big( {\rm Id} + \eta^{-1} W(\delta)R_{0}(k)\eta \Big)=
\Big( {\rm Id} +\delta T(\delta,k) \Big)
\Big( {\rm Id} +\frac{\delta}{k}({\rm Id}+ \delta T(\delta,k) )^{-1} K_0 \Big).\ee
For $k\in \cD\setminus\{0\}$ let us set
$$K:=\frac{\delta}{k}({\rm Id}+ \delta T(\delta,k) )^{-1} K_0=\frac{\delta}{k} |\tilde{\Phi}_{\delta}\rangle\langle\psi_{1}\otimes \eta|,$$
which is a rank one operator. Then, we need to study the inverse of $({\rm Id}+K)$.
Let us consider $\Pi_{\delta}^{\perp}$, the projection onto $(\mbox{span }\{\psi_1 \otimes \eta\})^{\perp}$ along the direction $\tilde{\Phi}_\delta$, and $\Pi_{\delta}={\rm Id}-\Pi_{\delta}^{\perp}$, the projection onto $\mbox{span }\{\tilde{\Phi}_\delta\}$ along the directions orthogonal to $\psi_1 \otimes \eta$.
We can easily see that $$({\rm Id}+K) \Pi_{\delta}^{\perp} =\Pi_{\delta}^{\perp} \qquad \mbox{and} \qquad ({\rm Id}+K) \Pi_{ \delta} =(1+\frac{\delta}{k}\langle \tilde{\Phi}_{\delta} | \psi_{1}\otimes \eta \rangle) \Pi_{ \delta}=\frac{k+w_{\delta}(k)}{k}\Pi_{\delta}.$$
Therefore, ${\rm Id}+K$ is invertible if and only if $k+w_{\delta}(k)\neq 0$, and
\bel{29sept17a}
({\rm Id}+K)^{-1}=\Pi_{\delta}^{\perp}+\frac{k}{k+w_{\delta}(k)} \Pi_{\delta}.
\ee
Let us consider the equation $k+w_{\delta}(k)=0$. Using \eqref{E:expw}, for all $\kappa\in (0,\sqrt{\lambda_{2}-\lambda_{1}})$ and $\delta$ small enough, the equation has no solution with $k\in \cD$ and $|k|\geq \kappa$. We then apply Rouch\'e's theorem inside the ball $B(0,\kappa)$: consider the analytic functions $h_{\delta}(k)=i\mu\delta^2 + k$ and $f_{\delta}(k)=w_{\delta}(k)+k$. The function $h_{\delta}$ has exactly one root, and on the circle $C(0,\kappa)$, using again \eqref{E:expw}, there holds $|h_{\delta}-f_{\delta}| < |h_{\delta}|$ for $\delta$ small enough.
Thus, we deduce that the equation $k+w_{\delta}(k)=0$ has exactly one solution $k(\delta)$ in $\cD$, for each fixed $\delta$ small enough.
In consequence, putting together \eqref{4sept17e}, \eqref{6sep17a}, \eqref{27sep17} and \eqref{29sept17a} we have that for all $k\in \cD\setminus \{ 0, k(\delta)\}$
$$\eta R(k)\eta = \Big( \frac{1}{k} \pi_1 \otimes \alpha_0 + A_0(k) \Big)
\Big( \Pi_{\delta}^{\perp} + \frac{k}{k+w_\delta(k)} \Pi_{\delta} \Big) \; ({\rm Id}+ \delta T(\delta,k) )^{-1}.$$
By the definition of $\Pi_{\delta}^{\perp}$,
we have that $ (\pi_1 \otimes \alpha_0) \Pi_{\delta}^{\perp}=0$ and then:
$$\eta R(k)\eta = \frac{1}{k+w_\delta(k)} (\pi_1 \otimes \alpha_0)\Pi_{\delta} ({\rm Id}+ \delta T(\delta,k) )^{-1} $$
$$+ \frac{k}{k+w_\delta(k)} A_0(k)\Pi_{\delta} ({\rm Id}+ \delta T(\delta,k) )^{-1} + A_0(k) \Pi_{\delta}^{\perp} ({\rm Id}+ \delta T(\delta,k) )^{-1}.$$
Therefore, for $\delta$ sufficiently small, $k \mapsto \eta R(k)\eta $ admits a meromorphic extension to $\cD$, whose unique pole $k(\delta)$ is given by the solution of
$k+w_\delta(k)=0$.
Using \eqref{E:expw}, the asymptotic expansion of $k(\delta)$ follows immediately.
Further, the multiplicity of this resonance is the rank of the residue of $\eta R(k)\eta$, which coincides with the rank of
$(\pi_1 \otimes \alpha_0)\Pi_{\delta} + k(\delta)A_0(k(\delta)) \Pi_{\delta}$. It is one because $\Pi_{\delta}$ is of rank one with its range in span$\{\tilde{\Phi}_\delta\}$ and
$$\Big( (\pi_1 \otimes \alpha_0) + k(\delta)A_0(k(\delta))\Big)\tilde{\Phi}_\delta= \frac{i}{2} \langle \tilde{\Phi}_{\delta}| \psi_1 \otimes \eta \rangle (\psi_1 \otimes \eta) + O(\delta^2)= - \frac{\delta\mu}2 (\psi_1 \otimes \eta) + O(\delta^2)$$
does not vanish for $\delta$ sufficiently small.
Finally, let us prove that $k(\delta)\in i \R$. As a consequence of Proposition \ref{propinv}.ii), the function $s_{\delta}$, defined for small real $\alpha$ by $s_{\delta}(\alpha)=i( i\alpha+w_{\delta}(i\alpha))$, is real valued. Moreover, using \eqref{E:expw}, for $\delta$ small we have $s_{\delta}(0)<0$ and $s_{\delta}(-\delta)>0$. In consequence, this function admits a real root $\alpha(\delta)$. By uniqueness, $k(\delta)=i\alpha(\delta)$.
\end{proof}
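To see the mechanism of the proof in isolation, here is a toy Python computation (the values of $\mu$, $g_0$, $\delta$ and the model remainder are illustrative assumptions, not derived from an actual waveguide): for the model expansion $w(k)=i\mu\delta^2+\delta^2 g_0 k$ with $g_0$ real, so that $w(i\alpha)\in i\R$ as in Proposition \ref{propinv}.ii), the unique solution of $k+w(k)=0$ can be found by fixed-point iteration and sits on the negative imaginary axis, within $O(\delta^4)$ of $-i\mu\delta^2$.

```python
# Toy version of the resonance equation k + w(k) = 0 from the proof.
# Model expansion: w(k) = i*mu*delta^2 + delta^2*g0*k with g0 real, so that
# w(i*alpha) is purely imaginary.  mu, g0, delta are illustrative values.
mu, g0, delta = 2.0, 0.3, 1e-2

def w(k):
    return 1j * mu * delta**2 + delta**2 * g0 * k

k = 0.0 + 0.0j
for _ in range(50):   # fixed-point iteration k -> -w(k); contraction rate delta^2*g0
    k = -w(k)

# exact root: -i*mu*delta^2/(1 + delta^2*g0) = -i*mu*delta^2 + O(delta^4)
```

Every iterate stays purely imaginary, mirroring the reality argument for $s_\delta$ at the end of the proof.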
\section{Upper Thresholds}
\label{R:higher}
We now extend our analysis to the upper thresholds. We will show that if $\lambda_{q_0}$ is an eigenvalue of multiplicity $m_{0} \geq 1$ of $(-\Delta_{\omega})$, then $m_0$ is a bound for the number of resonances around $\lambda_{q_0}$.
Let $(\psi_{q_0,{j}})_{j=1,\ldots, m_{0}}$ be an orthonormal basis of $\ker(-\Delta_{\omega}-\lambda_{q_{0}})$. In analogy with \eqref{E:Defmu}, for $1\leq j,l\leq m_0$ define
\bel{6oct17}\ba{ll}\mu_{j,l}&= \displaystyle{\langle \partial_\varphi\psi_{q_0,j}|\pi_{q_0} \partial_\varphi\psi_{q_0,l}\rangle\,||\varepsilon||^2}\\[1em]
&+ \frac12 \displaystyle{\sum_{q, \, \,\lambda_q\neq\lambda_{q_0}}(\lambda_q-\lambda_{q_0})\langle \partial_\varphi\psi_{q_0,j}| \pi_q \partial_\varphi\psi_{q_0,l}\rangle\langle (D^2_3+\lambda_q-\lambda_{q_0})^{-1}\varepsilon|\varepsilon\rangle,}\ea\ee
and let $\Upsilon_{q_0}$ be the matrix $(\mu_{j,l})$.
Set $r_{0}:=\min(\sqrt{|\lambda_{q_0}-\lambda_{q_{0}-1}|},\sqrt{|\lambda_{q_0+1}-\lambda_{q_{0}}|})$ and $\C^{++}:= \{k \in \C^{+}; \; {\rm Re \,} k >0 \}$.
\begin{theorem}
\label{T:higher}
Suppose that $\lambda_{q_0}$ is an eigenvalue of multiplicity $m_{0}\geq 1$ of $(-\Delta_{\omega})$, that $\varepsilon: \R\to \R$ is a non-zero $C^1$-function satisfying \eqref{hypeps} with $\alpha >2r_{0}$, and that $\mathcal{D}\subset B(0,r_0)$ is a compact neighborhood of zero. Then, for all $\delta$ sufficiently small, the operator-valued function $k\mapsto (H(\delta)-\lambda_{q_{0}}-k^2)^{-1}$, initially defined in $\C^{++}$, admits a meromorphic extension on $\cD$. This extension has at most $m_{0}$ poles, counted with multiplicity. These poles are among the zeros $(k_l(\delta))_{1 \leq l \leq m_0}$ of some determinant, which satisfy
$$k_l(\delta)= -i\nu_{q_0,l}\,\delta^2+ o(\delta^2) , \quad \delta\downarrow 0,$$
where $(\nu_{q_0,l})_{1 \leq l \leq m_0}$ are the eigenvalues of the matrix $\Upsilon_{q_0}$.
\end{theorem}
\begin{proof}
Some points in this proof are close to what has been done for the first threshold. We will keep the same notation and explain how to modify the arguments of the previous sections.
In analogy with section \ref{S1} set
$$\Phi_{{j},\delta}:= -\frac{i}{2}((\partial_\varphi \psi_{q_0,j} \otimes \eta^{-1} \varepsilon' )
+
\delta(\partial^2_\varphi \psi_{q_0,j} \otimes \eta^{-1} \varepsilon^2 )) \ \ \mbox{and} \ \
K_0:= \sum_{j}|\Phi_{j,\delta}\rangle\langle \psi_{q_0,j} \otimes \eta|.
$$
Then, the analog of Lemma \ref{Lholo0} still holds. Here, since $\lambda_{q_0}$ is in the interior of the essential spectrum, the resolvent $(H(\delta)-z)^{-1}$ is initially defined for ${\rm Im \,} z>0$ near $z=\lambda_{q_0}$ and the extension of the weighted resolvent is done with respect to $k= \sqrt{z-\lambda_{q_0}}$ from $\C^{++}$ to a neighborhood of $k=0$.
Also, as in the proof of Theorem \ref{T:main}, we have for $k \in \C^{++}$, with $R_{0}(k):=(H_{0}-\lambda_{q_{0}}-k^2)^{-1}$ (and similar notation for $R(k)$):
\bel{E:KeyId}
\eta R(k) \eta =\eta R_{0}(k)\eta ({\rm Id}+K)^{-1}({\rm Id}+\delta T(\delta,k))^{-1},
\ee
where $$K:=\frac{\delta}{k}({\rm Id}+\delta T(\delta,k))^{-1}K_{0}=\frac{\delta}{k}\sum_{j=1}^{m_{0}}|\tilde{\Phi}_{{j},\delta}\rangle\langle \psi_{q_0,{j}} \otimes \eta|$$
is now of rank $m_{0}$, with obvious notation for $\tilde{\Phi}_{{j},\delta}$.
Next, let $\Pi_{\delta}^{\perp}$ be the projection onto $\Big( \ker(-\Delta_{\omega}-\lambda_{q_{0}}) \otimes\, \mathrm{span}\{\eta\} \Big)^\perp$ in the direction of $\mathrm{span}\{\tilde{\Phi}_{{1},\delta},...,\tilde{\Phi}_{{m_{0}},\delta}\}$
and $\Pi_{\delta}:={\rm Id}-\Pi_{\delta}^{\perp}$.
Then, the matrix of $({\rm Id}+K)\Pi_{\delta}$
in the basis $\{\tilde{\Phi}_{j,\delta}\}_{1}^{m_{0}}$ is given, for $k\neq0$, by
\bel{29sep17}\frac{1}{k}\begin{bmatrix}k+w_{1,1,\delta}(k)&\dots&w_{m_{0},1,\delta}(k)\\
\vdots&\ddots&\vdots\\
w_{1,m_{0},\delta}(k)&\dots&k+w_{m_{0},m_{0},\delta}(k)
\end{bmatrix}:=\frac{1}{k}M_{\delta}(k)\ee
where we have set
$w_{j,l,\delta}(k)= \delta \langle \tilde{\Phi}_{{j},\delta}| \psi_{q_0,l} \otimes \eta \rangle.
$
If $M_{\delta}(k)$ is invertible, then by \eqref{E:KeyId}
\begin{align*}
\eta R(k)\eta =& \Big(\frac{i}{2k} \sum_{j}|\psi_{q_0,j} \otimes \eta\rangle\langle \psi_{q_0,j} \otimes \eta|+ A_0(k) \Big)
\Big( \Pi_{\delta}^{\perp} + kM_{\delta}(k)^{-1} \Pi_{\delta}\Big) \; ({\rm Id}+ \delta T(\delta,k) )^{-1}
\\
=&\left( \frac{i}{2} \sum_{j}|\psi_{q_0,j} \otimes \eta\rangle\langle \psi_{q_0,j} \otimes \eta| M_{\delta}(k)^{-1} \Pi_{\delta} +A_{0}(k)\left(\Pi_{\delta}^{\perp}+k M_{\delta}(k)^{-1}\Pi_{\delta}\right) \right)\; ({\rm Id}+ \delta T(\delta,k) )^{-1}.
\end{align*}
In consequence, since the $w_{j,l,\delta}$ are holomorphic, $\eta R \eta$ admits a meromorphic extension to $\cD$, and the poles of this extension are among the poles of $\Big(\frac{i}{2} \sum_{j}|\psi_{q_0,j} \otimes \eta\rangle\langle \psi_{q_0,j} \otimes \eta| + k A_{0}(k) \Big)M_{\delta}(k)^{-1} \Pi_{\delta} $. Evidently, the poles are included in the set of zeros of the determinant of $M_{\delta}(k)$.
Define
$$\Delta(k,\delta):={\rm det}(M_{\delta}(k)).$$
We can check as in Proposition \ref{propinv} that
\bel{1}w_{j,l,\delta}(k)= i\mu_{j,l} \delta^2+O(\delta^3)+\delta^2 k g_{j,l}(k,\delta),\ee
where the $\mu_{j,l}$ are given by \eqref{6oct17}. Then
$$\Delta(k,\delta)= \delta^{2m_0} {\rm det}(k\delta^{-2}+ i\mu_{j,l}+O(\delta)+ k g_{j,l}(k,\delta)),
$$
and the zeros of $\Delta(\cdot,\delta)$ are the complex numbers of the form $k=u\delta^2$, with $u$ being a zero of
$$
\tilde{\Delta}(u,\delta): = {\rm det}(u+ i\mu_{j,l}+O(\delta)+ \delta^2 u g_{j,l}(\delta^2u,\delta)).
$$
Since
\bel{5oct17}\tilde{\Delta}(u,\delta)=\tilde{\Delta}(u,0)+\delta h(u,\delta)={\rm det}(u+ i\mu_{j,l})+\delta h(u,\delta),\ee where $h$ is an analytic function of $u$ and $\delta$, taking the ball $B(0,C)$
with $C$ larger than the modulus of the largest eigenvalue of $\Upsilon_{q_0}$ and applying Rouch\'e's theorem,
we conclude that all the zeros of $\tilde{\Delta}( \cdot ,\delta)$ are inside this ball for $\delta$ sufficiently small. Moreover,
if we denote by $\nu_{q_0,l}$ the eigenvalues of $\Upsilon_{q_0}$, \eqref{5oct17} yields
$u_{q_0,l}(\delta)= -i (\nu_{q_0,l}+o(1)).$ This immediately implies that all the zeros of $\Delta( \cdot,\delta)$ in $\cD$, denoted by $k_l$, are inside the ball $B(0,C\delta^2),$ and satisfy
$$k_{l}(\delta)=- i \delta^2 (\nu_{q_0,l}+o(1)).$$
\end{proof}
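The last step of the proof can be illustrated numerically. In the Python sketch below (the matrix standing in for $\Upsilon_{q_0}$, the perturbation, and $\delta$ are random illustrative choices; the matrix need not be Hermitian, matching the statement of the theorem), the zeros of $\det(k\,{\rm Id}+i\delta^2\Upsilon+\delta^3 E)$ are obtained as minus the eigenvalues of the matrix inside the determinant, and compared with the prediction $-i\delta^2\nu_l$.

```python
import numpy as np

# Zeros of det(k*Id + i*delta^2*Upsilon + O(delta^3)) versus -i*delta^2*nu_l,
# with nu_l the eigenvalues of Upsilon.  Upsilon, E and delta are illustrative.
rng = np.random.default_rng(0)
m0, delta = 3, 1e-2
Upsilon = rng.standard_normal((m0, m0)) + 1j * rng.standard_normal((m0, m0))
E = rng.standard_normal((m0, m0))

# det(k*Id + A) = 0  iff  k = -(eigenvalue of A)
A = 1j * delta**2 * Upsilon + delta**3 * E
roots = -np.linalg.eigvals(A)
pred = -1j * delta**2 * np.linalg.eigvals(Upsilon)

# distance from each root to the nearest predicted value, measured relative to delta^2
err = max(min(abs(r - p) for p in pred) for r in roots) / delta**2
```

The relative error is of order $\delta$, in agreement with the $o(1)$ correction in the statement.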
\begin{remark}\label{4oct17}
In the last theorem, if $m_0=1$, we are able to obtain extra information. For instance, as in Theorem \ref{T:main}, for the unique zero of the determinant, $k_{1}(\delta)$, we have that
$k_{1}(\delta)= -i \mu_{q_0}\delta^2+O(\delta^3)$
with
$$
\mu_{q_0}:=\mu_{1,1}= \tfrac{1}{2}\sum_{q\neq q_{0}}(\lambda_{q}-\lambda_{q_{0}})\langle \partial_{\varphi}\psi_{q_0}|\pi_{q}\partial_{\varphi}\psi_{q_{0}}\rangle\langle \varepsilon|(D_{3}^2+\lambda_{q}-\lambda_{q_{0}})^{-1}\varepsilon\rangle.$$ Then, as in the proof of Theorem \ref{T:main}, $k_{1}(\delta)$ is a pole of rank one when $\mu_{q_{0}}\neq 0$.
It is also important to notice that, for $q<q_{0}$, the operator $(D_{3}^2+\lambda_{q}-\lambda_{q_{0}})^{-1}$ has to be understood as the limit of $(D_{3}^2+\lambda_{q}-\lambda_{q_{0}}-k^2)^{-1}$, acting in weighted spaces, as $k\to 0$. It is no longer a self-adjoint operator, and therefore $\mu_{q_0}$ is not necessarily real. Actually, in general, it has a non-zero imaginary part coming from the terms with $q<q_0$. Indeed, thanks to \eqref{3sep17a}, for $q<q_0$, the imaginary part of $ 2 (\lambda_{q_{0}}-\lambda_{q})^{-1/2} \, \langle \varepsilon|(D_{3}^2+\lambda_{q}-\lambda_{q_{0}})^{-1}\varepsilon\rangle$ is given by:
$$
-\Big(\int_\R \cos( \sqrt{\lambda_{q_0}-\lambda_q} x_3) \, \varepsilon (x_3) \, d{x_3} \Big)^2 - \Big(\int_\R \sin( \sqrt{\lambda_{q_0}-\lambda_q} x_3) \, \varepsilon (x_3) \, d{x_3} \Big)^2 = - \sqrt{2\pi} | \widehat{\varepsilon} ( \sqrt{\lambda_{q_0}-\lambda_q}) |^2.
$$
where $\widehat{\varepsilon}$ is the Fourier transform of $\varepsilon$. Then, the imaginary part of $\mu_{q_0}$ is:
$${\hbox{Im}}(\mu_{q_0})= - \tfrac{\sqrt{2\pi}}{4}\sum_{q < q_{0}} \sqrt{\lambda_{q_0}-\lambda_q}\; \|\pi_{q}\partial_{\varphi}\psi_{q_{0}}\|^2 \, | \widehat{\varepsilon} ( \sqrt{\lambda_{q_0}-\lambda_q}) |^2.
$$
This identity allows one to give sufficient conditions on the eigenfunctions of $-\Delta_{\omega}$ and on $\widehat{\varepsilon}$ ensuring that $\mu_{q_{0}}\neq0$, giving rise to a unique resonance of multiplicity one, with ${\rm Re\,} k_1(\delta)<0$.
\end{remark}
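The trigonometric computation behind the formula for ${\hbox{Im}}(\mu_{q_0})$ rests on the identity $(\int\cos(ax)\varepsilon)^2+(\int\sin(ax)\varepsilon)^2=|\int e^{iax}\varepsilon(x)\,dx|^2$, which the following Python sketch confirms numerically (the profile playing the role of $\varepsilon$, the frequency $a$ and the grid are illustrative assumptions; the relation to $\widehat{\varepsilon}$ then depends only on the chosen Fourier normalization).

```python
import numpy as np

# Check of (int cos(a x) eps)^2 + (int sin(a x) eps)^2 = |int e^{i a x} eps dx|^2
# for a real profile eps.  The profile, the frequency a and the grid are illustrative.
a = 1.3
x = np.linspace(-30.0, 30.0, 6001)
h = x[1] - x[0]
eps = np.exp(-x**2) * (1.0 + 0.5 * np.sin(x))

Ic = np.sum(np.cos(a * x) * eps) * h
Is = np.sum(np.sin(a * x) * eps) * h
If = np.sum(np.exp(1j * a * x) * eps) * h   # If = Ic + i*Is for real eps

lhs = Ic**2 + Is**2
rhs = abs(If)**2
```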
\bibliographystyle{plain}
\bibliography{biblio,bibliopof}
\end{document}
Are you taking the electrical exam tomorrow?
Here are some last minute tips for the electrical exam. If you are taking the journeyman license exam, you can skip tip #3. Hope these help!
Electrical exam prep tip #1
The one thing you do need to remember is to get some good rest tonight. So at some point you need to cut this off and put it down, because if you spend a lot of time looking at the material you'll lose sleep and won't be able to think well at the test tomorrow. You've got to trust in yourself now, especially if you attended the seminar. At this point, give yourself enough time to relax.
Electrical exam prep tip# 2
When you get to the testing center don't be the first person in the door. Be one of the last persons. Don't get me wrong, don't be late. But don't be the first person there because if you're the first person and there's a bunch of other people that are waiting behind you, they're going to be checking your book page by page and being as strict as they can be. But if you're the last person they're about to go on break and couldn't care less. Not that you have anything to hide but this is going to help minimize your anxiety levels.
Electrical exam prep tip#3
Also, if they give you plans when you go in for the master license exam, definitely follow this tip. When you first sit down, the math is the first thing you'll go through. But don't start the math part of your test until you've spent some time going through those plans.
In other words, if you wait for the time clock to start and then dig through those plans when you haven't looked at them yet, you're giving yourself kind of a disadvantage. They have a little pretest at the beginning of the exam where they ask random questions like how many square miles there are in the US and things of that nature; ignore that. That section doesn't matter as long as you put your social security number in there and hit the start button. That first little pre-test, which does not count for you or against you, is like a minute window. Use that time to look at those plans and get familiar with them. You won't necessarily know what questions they are going to ask you, but that's the time to invest in figuring out their layout.
Bonus: Electrical exam prep tip
Get your mind ready for the math portion after you've reviewed the plans. When you end the math portion it will automatically start the code section. That code section will be so fast you'll run out of time if you're not careful. So on the math part, if you have to use the restroom or get up or stretch or anything you have to do, do it while the clock is still running on your math. Because if you don't, you just won't have any time after that second part starts.
We hope these last minute tips help. If you aren't taking the exam tomorrow, make sure to check out the 2-day seminars in Texas. This is truly the best way to pass the exam.
Thanks for reading!
Prepare. Pass. Excel.
PS. These Texas Live Seminars are selling out super fast. If you see one you want - sign up asap. They won't last long. Learn more or sign up at ElectricalExcel.com
\section{General background}
\subsection{Elliptic curves}
\subsubsection{General notions}
Throughout this paper we consider most of our geometric objects to
be defined over the field $\mathbb{Q}$ of rational numbers. Sometimes
we argue geometrically and regard the objects to be defined over an
algebraic closure $\overline{\mathbb{Q}}$ or over some number field.
\smallskip
An elliptic curve defined over $\mathbb{Q}$ is a plane projective
curve $E$ whose affine part is given by a Weierstra\ss{} equation of the
form $y^{2}=P(x)$ where $P\in\mathbb{Q}[x]$ is a polynomial of degree $3$
with nonzero discriminant. Note that such a curve always has a
single smooth point at infinity which is defined over $\mathbb{Q}$. The
geometric points of $E$ carry the structure of an algebraic group, the
group law being given by Newton's well-known secant and tangent construction.
We fix the structure such that the point at infinity is the neutral
element of this group law. The group law restricts to the set $E(\mathbb{Q})$
of rational points on $E$, the so-called Mordell-Weil group of the
elliptic curve $E$. By the important theorem of Mordell and Weil (cf. [13],
[22]) this group is a finitely generated abelian group, hence is isomorphic
(as an abstract group) to $\mathbb{Z}^{r}\times T$ where $r=\hbox{rank}
(E(\mathbb{Q}))$ is the rank of $E(\mathbb{Q})$ and $T$ is a torsion group,
i. e., a finite abelian group. \smallskip
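To make the secant-and-tangent construction concrete, here is a minimal Python sketch of the affine group law on a short Weierstra\ss{} curve $y^2=x^3+ax+b$ over $\mathbb{Q}$ (the curve $y^2=x^3-x$ and the points used below are illustrative choices; the point at infinity is represented by \texttt{None}).

```python
from fractions import Fraction as F

# Secant/tangent group law on the affine part of y^2 = x^3 + a*x + b over Q.
# None stands for the point at infinity, the neutral element.
a, b = F(-1), F(0)            # y^2 = x^3 - x = x(x - 1)(x + 1)

def add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None           # P + (-P) = O (also covers doubling a 2-torsion point)
    if P == Q:
        lam = (3 * x1 * x1 + a) / (2 * y1)   # slope of the tangent at P
    else:
        lam = (y2 - y1) / (x2 - x1)          # slope of the secant through P and Q
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

P, Q = (F(0), F(0)), (F(1), F(0))
R = add(P, Q)                 # sum of two points of order two
```

Here $R=(-1,0)$, the third point of order two, and $P+P=O$, exhibiting a subgroup isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ in the Mordell-Weil group of this curve.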
If $E$ is explicitly given then it is relatively easy to compute
the torsion part of the Mordell-Weil group by the theorem of Lutz-Nagell.
Moreover, the famous theorem of Mazur tells us that there are only very
few possibilities for a finite abelian group to occur as the
torsion subgroup of the Mordell-Weil group of an elliptic curve. In fact, $T$ is
either isomorphic to $\mathbb{Z}/n\mathbb{Z}$ where $1\leq n\leq 12$, $n\neq 11$,
or else is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2n\mathbb{Z}$
where $1\leq n\leq 4$. \smallskip
In contrast to the torsion part, the determination of the rank of the
Mordell-Weil group is a much deeper problem. Moreover, it is very
difficult to find non-torsion solutions even if one knows in advance
that the rank is positive, i.e., that there exist infinitely many rational
points on $E$. This is due to the fact that even if the coefficients
of the elliptic curve are small (measured for example in terms of a naive
notion of height in projective space) the smallest non-torsion point on $E$
may be very large.
\subsubsection{Elliptic curves corresponding to concordant forms}
In the sequel we will deal with elliptic curves $E$ defined over
the rationals with torsion subgroup containing $\mathbb{Z}/2\mathbb{Z}
\times\mathbb{Z}/2\mathbb{Z}$. This is equivalent to saying that $E$ is
defined by an affine equation of the form $y^2 = (x-e_1)(x-e_2)(x-e_3)$
with pairwise different rational numbers $e_1,e_2,e_3\in\mathbb{Q}$.
We denote this elliptic curve by $E_{e_1,e_2,e_3}$. By simple
rational transformations we can clear denominators and translate the
$x$-coordinates such that the equation becomes $y^2=x(x+M)(x+N)$
with different nonzero integers $M,N\in\mathbb{Z}$; we denote this curve
by $E_{M,N}$. (In projective notation we use homogeneous coordinates
$(T,X,Y)$ and write the points of the affine part of $E_{M,N}$ as
$(x,y) = (X/T, Y/T)$.) In this form the elliptic curve corresponds to Euler's
concordant form problem (cf. [3], [17]), which is the problem of finding
nontrivial integral solutions of the system of two quadratic equations
$X_0^2+MX_1^2=X_2^2$ and $X_0^2+NX_1^2=X_3^2$. (See [2], Ch. 8, for the
representation of elliptic curves as intersections of quadrics.) The intersection
of these two quadrics in projective three-space $\mathbb{P}^{3}(\overline{
\mathbb{Q}})$ will be denoted by $Q_{M,N}$. For future reference, let us
fix the notations\\
\parbox{.03\textwidth}{(1)}
\parbox{.97\textwidth}{
\begin{align*}
E_{M,N} & = \{ (T:X:Y)\in\mathbb{P}^2(\mathbb{\overline{Q}})\mid
TY^2=X(X\! +\! TM)(X\! +\! TN)\}, \\
Q_{M,N} & = \{ (X_0:X_1:X_2:X_3)\in\mathbb{P}^3(\mathbb{\overline{Q}})\mid
X_0^2\! +\! MX_1^2=X_2^2,\ X_0^2\! +\! NX_1^2=X_3^2\}.
\end{align*} }\\
We note that $Q_{M,N}$ defines an abstract elliptic curve (i.e., a smooth projective
curve of arithmetic genus one) which is isomorphic to the curve $E_{M,N}$. In fact,
the mappings $F:Q_{M,N}\rightarrow E_{M,N}$ given by
$$F\left(\begin{array}{c} X_0\\ X_1\\ X_2\\ X_3\end{array}\right) =
\left(\begin{array}{c} NX_2-MX_3+(M-N)X_0\\ MN(X_3-X_2)\\ MN(M-N)X_1
\end{array}\right) \leqno{(2)}$$
and $G:E_{M,N}\rightarrow Q_{M,N}$ given by
$$G\left(\begin{array}{c} T\\ X\\ Y\end{array}\right) = \left(\begin{array}{c}
-(X+MT)(Y^2-M(X+NT)^2)\\ 2Y(X+NT)(X+MT)\\ -(X+MT)(Y^2+M(X+NT)^2)\\
-(X+NT)(Y^2+N(X+MT)^2) \end{array}\right)\leqno{(3)}$$
extend to completely defined biregular mappings which are mutually
inverse; cf. [19]. Note that for the equation of $E_{M,N}$ we may assume that
$M>0$ and $N<0$ after a trivial change of
variables. Furthermore, we may write $M=pk$ and $-N=qk$ with coprime natural
numbers $p,q\in\mathbb{N}$ and a squarefree natural number $k\in\mathbb{N}$.
In this form the equation is closely related to the $\theta$-congruent number
problem in the sense of Fujiwara (cf. [4]; see also [7], [24]). \smallskip
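A quick machine check of the map (2) is instructive. The following Python sketch (the helper names \texttt{F} and \texttt{on\_E} are ours, not from the text) verifies on a small example that a point of $Q_{M,N}$ is carried to a projective point satisfying the homogeneous Weierstra\ss{} equation of $E_{M,N}$.

```python
def F(M, N, X0, X1, X2, X3):
    """Formula (2): map a point (X0 : X1 : X2 : X3) on the quadric
    intersection Q_{M,N} to projective coordinates (T : X : Y) on E_{M,N}."""
    T = N * X2 - M * X3 + (M - N) * X0
    X = M * N * (X3 - X2)
    Y = M * N * (M - N) * X1
    return T, X, Y

def on_E(M, N, T, X, Y):
    """Homogeneous Weierstrass equation T*Y^2 = X*(X + T*M)*(X + T*N)."""
    return T * Y**2 == X * (X + T * M) * (X + T * N)

# (1 : 1 : 2 : 0) lies on Q_{3,-1}: 1 + 3*1 = 4 and 1 - 1*1 = 0.
M, N = 3, -1
P = F(M, N, 1, 1, 2, 0)   # gives (2, 6, -12), i.e. the affine point (3, -6)
assert on_E(M, N, *P)
```

For the trivial solution $(1,0,1,1)$ the raw formulas give $(0,0,0)$; this is precisely the kind of degenerate case handled by the biregular extension mentioned above.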
Note that the two-torsion points $(0,0)$, $(-M,0)$, $(-N,0)$ together
with the point at infinity on $E_{M,N}$ correspond to the trivial
solutions $(1,0,\pm1,\pm1)$ of the concordant form equations and
to the trivial solution (the degenerated triangle) of the $\theta$-congruent
number problem. Let us point out that by Mazur's Theorem the only further
torsion points could be either $4$- or $8$-torsion points or else
$3$- or $6$-torsion points. Moreover, these additional torsion points
can occur only in very rare cases. In fact, in the form $E_{pk,-qk}$
of the elliptic curve such torsion points can only occur when $k=1,2,3$
or $6$ (cf. [5], [19]). Since the torsion points can be calculated easily
we will ignore them in the subsequent considerations in which we develop methods
for finding non-torsion points on elliptic curves of the considered form.
\smallskip
\subsection{Quadratic Forms}
In the following sections we frequently consider ternary quadratic
forms, in most cases of a rather special form. We will collect some
facts about these forms needed later. The quadratic forms which occur
in the following considerations are of the form
$F(X_0,X_1,X_2)=a_{00}X_0^{2}+a_{01}X_0X_1+a_{11}X_1^2+a_{22}X_2^2$
with coprime integer coefficients $a_{ij}\in\mathbb{Z}$. In most cases
the forms are already in diagonal form (i.e., $a_{01}=0$) or are
transformed to diagonal form in the course of our investigations.
\subsubsection{Solvability criterion}
Let $F(X_0,X_1,X_2)=a_{00}X_0^2+a_{11}X_1^2+a_{22}X_2^2$ be a ternary
diagonal form with integer coefficients. Then by standard techniques we
can transform this equation into one for which the product $a_{00}
a_{11}a_{22}$ is squarefree (equivalently, such that the three coefficients
are squarefree and pairwise coprime). The following criterion for the
existence of a nonzero integer solution $(x_0,x_1,x_2)\in\mathbb{Z}^{3}$
dates back to Legendre (cf. [10]). \bigskip
{\bf Legendre Criterion:} The equation $a_{00}X_0^2+a_{11}X_1^2+a_{22}X_2^2
=0$ has a nonzero solution if and only if not all the coefficients have the
same sign (which is equivalent to saying that there is a solution over the
real numbers) and, in addition, for all permutations $(i,j,k)$ of $(0,1,2)$
the number $-a_{ii}a_{jj}$ is a quadratic residue modulo $a_{kk}$.\bigskip
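For small coefficients the criterion can be checked mechanically. The Python sketch below (the function names are ours; the quadratic-residue test is brute force and only intended for small moduli) implements it for a diagonal form with squarefree, pairwise coprime coefficients.

```python
from itertools import permutations

def is_qr(r, m):
    """Is r a quadratic residue modulo |m|?  (Brute force, small m only.)"""
    m = abs(m)
    if m <= 1:
        return True
    r %= m
    return any((x * x) % m == r for x in range(m))

def legendre_solvable(a, b, c):
    """Legendre's criterion for a*X0^2 + b*X1^2 + c*X2^2 = 0, assuming
    the coefficients are squarefree and pairwise coprime."""
    if all(t > 0 for t in (a, b, c)) or all(t < 0 for t in (a, b, c)):
        return False                      # no nontrivial real solution
    return all(is_qr(-ai * aj, ak) for ai, aj, ak in permutations((a, b, c)))

assert legendre_solvable(1, 1, -2)        # (1, 1, 1) is a solution
assert not legendre_solvable(1, 1, -3)    # -1 is not a square modulo 3
```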
With this criterion one can easily check whether or not a given ternary
quadratic form has a nontrivial solution. In addition, by the work of Holzer
and his followers (cf. [6], [14], [23]) there are algorithms to find explicit
solutions which terminate with a complexity which is known {\it a priori}
depending on the coefficients. (Incidentally, Holzer's Theorem occurs both
as a tool within our algorithm and as a model for our overall approach, namely,
to produce explicit solutions to certain types of equations in which the mere
existence of solutions is known by other, more abstract, methods.) \bigskip
{\bf Holzer's Theorem:} If $F(X_0,X_1,X_2)=a_{00}X_0^2+a_{11}X_1^2+a_{22}X_2^2$
is a quadratic form with pairwise coprime and squarefree coefficients such
that the equation $F(X_0,X_1,X_2)=0$ has a nontrivial solution, then
there exists such a solution $(x_0,x_1,x_2)$ satisfying the inequalities
$|x_0|<\sqrt{|a_{11}a_{22}|}$, $|x_1|<\sqrt{|a_{00}a_{22}|}$ and
$|x_{2}|<\sqrt{|a_{00}a_{11}|}$.\bigskip
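Because the Holzer bounds are explicit, a naive search over the bounded box is guaranteed to terminate, and to find a solution whenever one exists. The sketch below (our naming, only practical for small coefficients) makes this concrete; since the form depends only on the squares of the variables, it suffices to search nonnegative values.

```python
from math import isqrt

def holzer_search(a, b, c):
    """Search for a nontrivial solution of a*x0^2 + b*x1^2 + c*x2^2 = 0
    inside the Holzer bounds |x0| <= sqrt(|b*c|), |x1| <= sqrt(|a*c|),
    |x2| <= sqrt(|a*b|); returns None if no solution exists there."""
    B0, B1, B2 = isqrt(abs(b * c)), isqrt(abs(a * c)), isqrt(abs(a * b))
    for x0 in range(B0 + 1):
        for x1 in range(B1 + 1):
            for x2 in range(B2 + 1):
                if (x0, x1, x2) != (0, 0, 0) and a*x0*x0 + b*x1*x1 + c*x2*x2 == 0:
                    return (x0, x1, x2)
    return None

assert holzer_search(1, 1, -2) == (1, 1, 1)   # x^2 + y^2 = 2*z^2
assert holzer_search(1, 1, -3) is None        # consistent with Legendre
```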
\subsubsection{Parametrization of quadratic forms}
In the sequel we will make use of the fact that there is a systematic
approach to finding all solutions of a quadratic form, provided one
fixed solution is known. The technique is a well-known construction,
dating back to Newton, which geometrically uses the fact that any
line through the fixed point has a uniquely determined second point
of intersection with the quadric, whose coordinates are rational
expressions in the coefficients of the given quadric and the coordinates
of the fixed point, where the slope of the line serves as a parameter. We
summarize the calculations, writing down the parametrization using projective
coordinates.\bigskip
{\bf Lemma:} Consider the projective quadric $Q=\{(X_0,X_1,X_2)\in\mathbb{P}^{2}
\,|\,F(X_0,X_1,X_2)=0\}$ where $F(X_0,X_1,X_2)=a_{00}X_0^2+a_{01}X_0X_1+a_{11}X_1^2
+a_{22}X_2^2$, and let $(x_0,x_1,x_2)\in Q$ be a fixed point. If $x_2\not= 0$, then $Q$
can be parametrized by the rational mapping $\varPhi:\mathbb{P}^1\rightarrow Q$ given by
$\varPhi(\xi_0,\xi_1) = (\varphi_0(\xi_0,\xi_1),\varphi_1(\xi_0,\xi_1),\varphi_2(\xi_0,\xi_1))$
where
\parbox{.05\textwidth}{(4a)}
\parbox{.95\textwidth}{
\begin{alignat*}{3}
& X_0\ &&=\ \varphi_0(\xi_0,\xi_1)\ &&=
\ a_{11}x_0\xi_0^2 - 2a_{11}x_1\xi_0\xi_1 - (a_{01}x_1+a_{00}x_0)\xi_1^2, \\
& X_1\ &&=\ \varphi_1(\xi_0,\xi_1)\ &&=
\ (-a_{11}x_1-a_{01}x_0)\xi_0^2 - 2a_{00}x_0\xi_0\xi_1 + a_{00}x_1\xi_1^2, \\
& X_2\ &&=\ \varphi_2(\xi_0,\xi_1)\ &&=
\ a_{11}x_2\xi_0^2 + a_{01}x_2\xi_0\xi_1 + a_{00}x_2\xi_1^2.
\end{alignat*} } \\
whereas if $x_2=0$ and $x_1\not= 0$ the following parametrization can be used: \\
\parbox{.05\textwidth}{(4b)}
\parbox{.95\textwidth}{
\begin{alignat*}{3}
& X_0\ &&=\ \varphi_0(\xi_0,\xi_1)\ &&=
\ -(a_{01}x_1+a_{00}x_0)\xi_0^2+a_{22}x_0\xi_1^2, \\
& X_1\ &&=\ \varphi_1(\xi_0,\xi_1)\ &&=
\ a_{00}x_1\xi_0^2 + a_{22}x_1\xi_1^2, \\
& X_2\ &&=\ \varphi_2(\xi_0,\xi_1)\ &&=
\ -(a_{01}x_1+2a_{00}x_0) \xi_0\xi_1.
\end{alignat*} } \\
The proof is an elementary and simple calculation, which we omit.
\bigskip
{\bf Remark:} We did not specify a base field over which the projective
spaces and all the coordinates are defined, since the lemma is valid for any
field. We will apply this lemma in the situation that the quadric is defined
over the rational numbers; i.e., the coefficients $a_{00},a_{01},a_{11},a_{22}$
as well as the coordinates of the fixed point $(x_0,x_1,x_2)$ and the parameters
$(\xi_0,\xi_1)$ are rational numbers. In this situation we can always assume that
all these data are, in fact, integers such that the coefficients $a_{00}, a_{01},
a_{11}, a_{22}$ are coprime, the coordinates $x_0,x_1,x_2$ are coprime and the
parameters $\xi_0,\xi_1$ are coprime.\\
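The parametrization (4a) is easy to verify by machine. In the Python sketch below (our own naming; exact integer arithmetic throughout) every parameter value is checked to land on the conic again.

```python
def phi(a00, a01, a11, a22, p, xi0, xi1):
    """Parametrization (4a) of a00*X0^2 + a01*X0*X1 + a11*X1^2 + a22*X2^2 = 0,
    projecting from a fixed point p = (x0, x1, x2) on the conic with x2 != 0."""
    x0, x1, x2 = p
    X0 = a11*x0*xi0**2 - 2*a11*x1*xi0*xi1 - (a01*x1 + a00*x0)*xi1**2
    X1 = (-a11*x1 - a01*x0)*xi0**2 - 2*a00*x0*xi0*xi1 + a00*x1*xi1**2
    X2 = a11*x2*xi0**2 + a01*x2*xi0*xi1 + a00*x2*xi1**2
    return X0, X1, X2

def F(a00, a01, a11, a22, X0, X1, X2):
    return a00*X0**2 + a01*X0*X1 + a11*X1**2 + a22*X2**2

# (1, 1, 1) lies on X0^2 + X0*X1 + X1^2 - 3*X2^2 = 0; every parameter
# value (xi0 : xi1) must give a point on the conic again.
coeffs = (1, 1, 1, -3)
for xi0 in range(-3, 4):
    for xi1 in range(-3, 4):
        assert F(*coeffs, *phi(*coeffs, (1, 1, 1), xi0, xi1)) == 0
```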
\subsubsection{Pairs of quadrics with separated variables}
In the situation of the following discussion we will sometimes be
given two quadrics in projective three-space of a special form in which
the variables of the quadrics will be separated in some sense. More precisely,
the quadrics $Q_{1}$ and $Q_{2}$ will have equations of the form \\
\parbox{.05\textwidth}{(5)}
\parbox{.95\textwidth}{
\begin{alignat*}{3}
& Q_1: && \quad a_{00}X_0^2 + a_{01}X_0X_1 + a_{11}X_1^2 + a_{22}X_2^2\ &&=\ 0, \\
& Q_2: && \quad b_{00}X_0^2 + b_{01}X_0X_1 + b_{11}X_1^2 + b_{33}X_3^2\ &&=\ 0
\end{alignat*} } \\
so that the equations for the quadrics share two of the four variables such that
the other two variables occur only as squares. We always assume the quadrics to have
rational solutions, so by means of a fixed point $(x_0,x_1,x_2)$ on $Q_1$ we can
parametrize the points on the first quadric $Q_1$, viewed as a quadric in the projective
plane with coordinates $(X_0,X_1,X_2)$, and we can substitute the parametrizations for
the common variables $X_0,X_1$ into the second quadric, thus yielding a function of the
variables $\xi_0,\xi_1,X_3$ which is of degree $4$ in the parameter variables $\xi_0,\xi_1$
and of pure degree $2$ in the third variable $X_3$. Let us summarize the result of the
calculations. \\
{\bf Lemma:} Let $Q_1,Q_2$ be two quadratic forms as above, let $(x_0,x_1,x_2)$ be a
point on $Q_1$ and let \\
\parbox{.05\textwidth}{(6)}
\parbox{.95\textwidth}{
\begin{alignat*}{3}
& X_0\ &&=\ \varphi_0(\xi_0,\xi_1)\ &&=\ \alpha_{00}\xi_0^2 + \alpha_{01}\xi_0\xi_1
+ \alpha_{11}\xi_1^2 \\
& X_1\ &&=\ \varphi_1(\xi_0,\xi_1)\ &&=\ \beta_{00}\xi_0^2 + \beta_{01}\xi_0\xi_1
+ \beta_{11}\xi_1^2 \\
& X_2\ &&=\ \varphi_2(\xi_0,\xi_1)\ &&=\ \gamma_{00}\xi_0^2 + \gamma_{01}\xi_0\xi_1
+ \gamma_{11}\xi_1^2
\end{alignat*} } \\
be the parametrization of $Q_1$ projecting from $(x_{0},x_{1},x_{2})$.
Then the substitution of $\varphi_0$ and $\varphi_1$ into the quadric
$Q_2$ gives the equation
$$Q_{2}:\quad B_{40}\xi_0^4 + B_{31}\xi_0^3\xi_1 + B_{22}\xi_0^2\xi_1^2
+ B_{13}\xi_0\xi_1^3 + B_{04}\xi_1^4 + b_{33}X_3^2\ =\ 0 \leqno{(7)}$$
where \\
\parbox{.03\textwidth}{(8)}
\parbox{.97\textwidth}{
\begin{alignat*}{2}
& B_{40} &&= b_{00}\alpha_{00}^{2} + b_{01}\alpha_{00}\beta_{00} + b_{11}\beta_{00}^{2} \\
& B_{31} &&= 2b_{00}\alpha_{00}\alpha_{01} + b_{01}(\alpha_{00}\beta_{01}\! +\!\alpha_{01}
\beta_{00}) + 2b_{11}\beta_{00}\beta_{01} \\
& B_{22} &&= b_{00}(2\alpha_{00}\alpha_{11}\! +\!\alpha_{01}^{2}) + b_{01}(\alpha_{00}\beta_{11}
\! +\!\alpha_{01}\beta_{01} + \alpha_{11}\beta_{00}) + b_{11}(2\beta_{00}\beta_{11}
\! +\!\beta_{01}^{2}) \\
& B_{13} &&= 2b_{00}\alpha_{01}\alpha_{11} + b_{01}(\alpha_{01}\beta_{11}\! + \!\alpha_{11}
\beta_{01})+2b_{11}\beta_{01}\beta_{11} \\
& B_{04} &&= b_{00}\alpha_{11}^{2}+b_{01}\alpha_{11}\beta_{11}+b_{11}\beta_{11}^{2}
\end{alignat*} } \\
Again, the proof is just an easy calculation and is omitted.\\
\\
{\bf Corollary:} If the two quadrics $Q_1$ and $Q_2$ are diagonal
(i.e., if $a_{01}=b_{01}=0$) and if one of the coordinates $x_0$ or
$x_1$ of the fixed point is zero, then the substituted form of
$Q_2$ is biquadratic in $(\xi_0,\xi_1)$.\smallskip
In fact, in this situation we have $\alpha_{00}=\alpha_{11}=\beta_{01}=0$
if $x_{0}=0$ or else $\alpha_{01}=\beta_{00}=\beta_{11}=0$ if $x_{1}=0$.
In both cases the two coefficients $B_{31}$ and $B_{13}$ vanish.
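The coefficient formulas (8) can likewise be cross-checked numerically against a direct substitution; the Python sketch below (our naming, with arbitrary small integer data) does exactly that.

```python
def subst_coeffs(b00, b01, b11, alpha, beta):
    """Formulas (8): coefficients of the binary quartic obtained by
    substituting X0 = phi_0, X1 = phi_1 into b00*X0^2 + b01*X0*X1 + b11*X1^2.
    alpha, beta are the coefficient triples of phi_0, phi_1 as in (6)."""
    p00, p01, p11 = alpha    # alpha_00, alpha_01, alpha_11 in (6)
    q00, q01, q11 = beta     # beta_00,  beta_01,  beta_11  in (6)
    B40 = b00*p00**2 + b01*p00*q00 + b11*q00**2
    B31 = 2*b00*p00*p01 + b01*(p00*q01 + p01*q00) + 2*b11*q00*q01
    B22 = (b00*(2*p00*p11 + p01**2)
           + b01*(p00*q11 + p01*q01 + p11*q00)
           + b11*(2*q00*q11 + q01**2))
    B13 = 2*b00*p01*p11 + b01*(p01*q11 + p11*q01) + 2*b11*q01*q11
    B04 = b00*p11**2 + b01*p11*q11 + b11*q11**2
    return B40, B31, B22, B13, B04

# Cross-check against direct numerical substitution at several points.
b00, b01, b11 = 2, -1, 3
alpha, beta = (1, 2, -1), (0, 1, 4)      # arbitrary quadratics in xi0, xi1
B = subst_coeffs(b00, b01, b11, alpha, beta)
for xi0, xi1 in [(1, 0), (0, 1), (1, 1), (2, -3), (5, 7)]:
    X0 = alpha[0]*xi0**2 + alpha[1]*xi0*xi1 + alpha[2]*xi1**2
    X1 = beta[0]*xi0**2 + beta[1]*xi0*xi1 + beta[2]*xi1**2
    direct = b00*X0**2 + b01*X0*X1 + b11*X1**2
    quartic = sum(c * xi0**(4 - k) * xi1**k for k, c in enumerate(B))
    assert direct == quartic
```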
\subsection{Two-descent}
\subsubsection{General theory}
We are interested in explicitly finding rational points on the elliptic
curves $E_{e_1,e_2,e_3} = \{ (x,y)\in\mathbb{Q}^2\mid y^2=(x-e_1)(x-e_2)(x-e_3)\}$
as defined in 2.1.2. We may restrict our considerations to the case that the
numbers $e_i$ are integers. According to Silverman (cf. [20], Chap. X, Remark
3.4) the \emph{existence} of rational points may be checked by deciding whether
certain homogeneous spaces over $E_{e_1,e_2,e_3}$ are trivial in the sense of
[20], Chap. X, §3, Prop. 3.3. But these homogeneous spaces are also useful for
calculating rational points explicitly. Since for our purposes we do not make
use of the important homological criteria for the existence of solutions
(local-global principle, Selmer group, Tate-Shafarevich group etc.), we only recall
the principal facts on the determination of homogeneous spaces belonging to rational
points on $E_{e_1,e_2,e_3}$. Here we may restrict our attention to the isogeny given
by the multiplication-by-$2$-map on $E_{e_1,e_2,e_3}$. We use the notations of [20]
(see Chap. X, Section 1 and Example 4.5.1). The same calculations can be found in [8]
(see Chap. IV, Section 3). \\
First we observe that any element of $\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{2}$ can be
uniquely represented by a squarefree integer. Any pair $(b_1,b_2)\in\mathbb{Q}^{*}/(
\mathbb{Q}^{*})^{2}\times\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{2}$ defines the intersection
of two quadrics $Q_{e_1,e_2,e_3,b_1,b_2} = Q_1\cap Q_2$ in projective three-space by \\
\parbox{.05\textwidth}{(9)}
\parbox{.95\textwidth}{
\begin{alignat*}{3}
& Q_1: &&\quad b_1X_1^2 - b_2X_2^2 + (e_1-e_2)X_0^2 &&=\ 0, \\
& Q_2: &&\quad b_1X_1^2 - b_1b_2X_3^2 + (e_1-e_3)X_0^2\ &&=\ 0.
\end{alignat*} } \\
(Cf. [20], Chap. X, Section 1.) This intersection defines an abstract elliptic curve which
in general does not have any rational point, but which is a twist of the given curve; i.e.,
it becomes isomorphic to $E_{e_1,e_2,e_3}$ over an algebraic closure $\overline{\mathbb{Q}}$.
This curve defines a homogeneous space over $E_{e_1,e_2,e_3}$, hence an element in the
Weil-Ch\^atelet group $WC(E_{e_1,e_2,e_3},\mathbb{Q})$. More precisely, since the above
construction stems from the multiplication by $2$ on $E_{e_1,e_2,e_3}$, it defines an element
of the 2-torsion part of $WC(E_{e_1,e_2,e_3},\mathbb{Q})$ (cf. [20], Ex. 4.5.1). \smallskip
The assignment $(x_0,x_1,x_2,x_3) \mapsto \bigl( (b_1x_1^2/x_0^2)+e_1, \, b_1b_2x_1x_2x_3/x_0^3
\bigr)$ extends to a well-defined regular mapping $Q_{e_1,e_2,e_3,b_1,b_2} \rightarrow
E_{e_1,e_2,e_3}$ of degree 4. In particular, if $(x_0,x_1,x_2,x_3)$ is a rational point on
the homogeneous space $Q_{e_1,e_2,e_3,b_1,b_2}$, one gets a rational point on the given curve.
Note that the logarithmic height of the point $(b_1x_1^2/x_0^2+e_1,b_1b_2x_1x_2x_3/x_0^3)$
on $E_{e_1,e_2,e_3}$ is about three times the height of the point $(x_0,x_1,x_2,x_3)$ on
$Q_{e_1,e_2,e_3,b_1,b_2}$. (Recall that the logarithmic height of a point
$P\in{\mathbb{P}}^N(\mathbb{Q})$ is the logarithm of $\max (|x_0|,\ldots ,|x_N|)$ where
$P=(x_0:x_1:\cdots :x_N)$ is represented with coprime integers $x_i$.) This explains why it
is reasonable to look for
points on $Q_{e_1,e_2,e_3,b_1,b_2}$ instead of finding rational points on $E_{e_1,e_2,e_3}$
directly. The problem is to determine the appropriate homogeneous spaces which have
rational points. The most important approach to this question is contained in the following
two-descent procedure (cf. [20], Chap. X, Prop. 1.4; [8], Chap. IV, Section 3). \\
Let $S$ be the set of integers consisting of $-1$, $2$ and all the divisors of the discriminant
of $E_{e_1,e_2,e_3}$. Let $\mathbb{Q}(S,2)$ be the subgroup of $\mathbb{Q}^{*}/(\mathbb{Q}^{*}
)^{2}$ generated by the elements of $S$. Thus any element in $\mathbb{Q}(S,2)$
has a unique representation by an integer which is squarefree and which has prime factors
only in $S$. Let us consider the mappings $\varphi_{i}:E_{e_1,e_2,e_3}(\mathbb{Q})/2E_{
e_1,e_2,e_3}(\mathbb{Q})\rightarrow\mathbb{Q}(S,2)$ defined by
$$\varphi_{i}(P):=\begin{cases}
x-e_{i} & \hbox{if}\ P=(x,y)\ \hbox{with}\ x\neq e_i; \\
(e_i-e_j)(e_i-e_k) & \hbox{if}\ P=(e_i,0)\ \hbox{where}\ \{ i,j,k\} = \{ 1,2,3\}; \\
1 & \hbox{if}\ P=\infty.
\end{cases}\leqno{(10)}$$
Here the values are taken as representatives modulo $(\mathbb{Q}^{*})^{2}$.
Then $\varphi_{i}$ is well-defined and a homomorphism of groups. Furthermore,
the homomorphism
$$\varphi:
\begin{matrix}
E_{e_1,e_2,e_3}(\mathbb{Q})/2E_{e_1,e_2,e_3}(\mathbb{Q}) & \rightarrow & \mathbb{Q}(S,2)
\times\mathbb{Q}(S,2) \\
P & \mapsto & \bigl(\varphi_1(P), \varphi_2(P)\bigr)
\end{matrix}\leqno{(11)}$$
is injective (see [8], Prop. 4.8, or [20], Chap. X, Prop. 1.4).\\
\\
{\bf Consequence:} The homogeneous spaces $Q_{e_1,e_2,e_3,b_1,b_2}$ which contain a rational
point (and hence are trivial in the sense of [20], Chap. X, Section 3) are exactly those for
which $(b_1,b_2)$ is contained in the image of the mapping $\varphi$. Since $\mathbb{Q}(S,2)
\times\mathbb{Q}(S,2)$ is finite, we therefore have to check only a finite number of homogeneous
spaces for the existence of a rational point.\\
\\
{\bf Remark:} If the curve is given in the Weierstra\ss{} form for $E_{e_1,e_2,e_3}$, the two-descent
procedure (and thus the meaning of the mapping $\varphi$) can be explained in a very elementary
way. The affine part of $E_{e_1,e_2,e_3}$ is given by the equation $y^2=(x-e_1)(x-e_2)(x-e_3)$.
If $p=(x,y)\in\mathbb{Q}^2$ is a point on $E_{e_1,e_2,e_3}(\mathbb{Q})$ such that all three
factors $(x-e_i)$ are rational squares, then $p=[2]\cdot q$ for some rational point $q\in E_{
e_1,e_2,e_3}(\mathbb{Q})$, where $[2]$ denotes the multiplication-by-two map on $E_{e_1,e_2,e_3}$.
This halving procedure stops after a finite number of steps (because $E_{e_1,e_2,e_3}(\mathbb{Q})$
is finitely generated), yielding a point $p\in E_{e_1,e_2,e_3}(\mathbb{Q})$ which is not twice
another rational point on $E_{e_1,e_2,e_3}(\mathbb{Q})$. For this point $p=(x,y)$ we know that
not all the factors $(x-e_i)$ can be rational squares, whereas the product $y^2=(x-e_1)(x-e_2)
(x-e_3)$ is a rational square. Writing
$$x-e_1=A_1\alpha_1^2,\qquad x-e_2=A_2\alpha_2^2,\qquad x-e_3=A_3\alpha_3^2\leqno{(12)}$$
with squarefree integers $A_i$, we can eliminate $x=A_1\alpha_1^2+e_1$ to arrive at
the equations
$$A_1\alpha_1^2-A_2\alpha_2^2 = e_2-e_1,\qquad A_1\alpha_1^2-A_3\alpha_3^2=e_3-e_1.\leqno{(13)}$$
Moreover, the product $A_1A_2A_3$ is a perfect square, which implies that $A_3=A_1A_2$
up to a square factor. In this way we obtain the equations of the homogeneous space in
the above considerations. \medskip
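The decomposition (12) amounts to extracting squarefree parts. The following Python sketch (our naming; restricted to integer arguments for simplicity, whereas (10) is defined on rational values modulo squares) computes the representatives $\varphi_i(P)$ for an affine point with $x\neq e_i$.

```python
from math import isqrt

def squarefree_part(n):
    """Squarefree integer representing the nonzero integer n modulo squares."""
    assert n != 0
    sign, n = (1 if n > 0 else -1), abs(n)
    d = 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return sign * n

def phi_values(e, P):
    """Representatives of phi_i(P) from (10) for P = (x, y) with x != e_i."""
    x, _ = P
    return tuple(squarefree_part(x - ei) for ei in e)

# On y^2 = x*(x+3)*(x-1), i.e. (e1, e2, e3) = (0, -3, 1), take P = (3, 6):
vals = phi_values((0, -3, 1), (3, 6))
assert vals == (3, 6, 2)
prod = vals[0] * vals[1] * vals[2]
assert isqrt(prod) ** 2 == prod   # the product is a square, as it must be
```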
\subsubsection{Reducing the number of possibilities}
Let us first observe that, given a set $P\subseteq E_{e_1,e_2,e_3}(\mathbb{Q})$ of known
rational points on $E_{e_1,e_2,e_3}$, and given a homogeneous space $Q_{e_1,e_2,e_3,b_1,b_2}$
by coefficients $(b_1,b_2)\in\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{2}\times\mathbb{Q}^{*}/(
\mathbb{Q}^{*})^{2}$, then $Q_{e_1,e_2,e_3,b_1,b_2}$ contains a rational point (i.e., is
trivial in the Weil-Ch\^atelet group) if and only if for any point $p\in P$ the homogeneous
space $Q_{e_1,e_2,e_3,c_1,c_2}$ with $(c_1,c_2)=(b_1,b_2)\cdot\varphi(p)$ has a rational point.
So we may say that $(b_1,b_2),(c_1,c_2)\in\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{2}\times
\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{2}$ are $P$-equivalent if and only if there is a point
$p\in P$ such that $(c_1,c_2)=(b_1,b_2)\cdot\varphi(p)$. Using this relation, we may
restrict our considerations to special representatives of homogeneous spaces whenever we
{\it a priori} know some rational points on $E_{e_1,e_2,e_3}$. Note that a well-known set
of rational points on $E_{e_1,e_2,e_3}$ is always given by the set of 2-torsion points.\\
\\
Now we consider either a single curve $E_{e_1,e_2,e_3}$ or else a family of curves
$E_{e_1,e_2,e_3}$ depending on some parameter(s) such that we know in advance that the
curve(s) contain rational points other than the 2-torsion points. To find appropriate
homogeneous spaces for $E_{e_1,e_2,e_3}$ possessing rational points, we may pursue the
following strategy:
\begin{itemize}
\item determine the (finite) set $\mathbb{Q}(S,2)$;
\item build equivalence classes of pairs $(b_1,b_2)\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)$
which are $P$-equivalent with respect to the set $P$ of 2-torsion points of $E_{e_1,e_2,e_3}$;
\item exclude those classes of pairs $(b_{1},b_{2})\in\mathbb{Q}(S,2)\times\mathbb{Q}(S,2)$
for which a rational solution cannot exist for one of the following reasons:
\begin{itemize}
\item the quadric intersection $Q_{e_1,e_2,e_3,b_1,b_2}$ has no rational points because of
obstructions due to properties of quadratic residues;
\item at least one of the quadratic equations defining $Q_{e_1,e_2,e_3,b_1,b_2}$ has no
solution (for example because of Legendre's criterion).
\end{itemize}
\end{itemize}
~\\
Then the homogeneous spaces associated with the remaining classes are good candidates for
having rational points and thus giving rational points on the original elliptic curve(s)
$E_{e_1,e_2,e_3}$. Examples for the application of this method will be presented in the
next section.
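The first step of the strategy is purely mechanical. As an illustration, the Python sketch below (our naming; it uses only the prime divisors of the discriminant, which generate the same subgroup as all its divisors, and the standard discriminant $16\,(MN(M-N))^2$ of $y^2=x(x+M)(x+N)$) enumerates the squarefree representatives of $\mathbb{Q}(S,2)$ for a curve $E_{M,N}$.

```python
from itertools import combinations

def Q_S_2(M, N):
    """Squarefree representatives of the group Q(S,2), where S consists of
    -1, 2 and the primes dividing the discriminant 16*(M*N*(M-N))^2."""
    n = abs(16 * (M * N * (M - N)) ** 2)
    primes, d = [], 2
    while d * d <= n:                     # trial division
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    gens = [-1] + sorted(set([2] + primes))
    elems = set()
    for r in range(len(gens) + 1):        # all squarefree subset products
        for sub in combinations(gens, r):
            prod = 1
            for g in sub:
                prod *= g
            elems.add(prod)
    return sorted(elems, key=abs)

# For E_{3,-1} the discriminant is supported on {2, 3}, so Q(S,2) has
# 2^3 = 8 elements and 64 pairs (b1, b2) before any reduction.
assert sorted(Q_S_2(3, -1)) == [-6, -3, -2, -1, 1, 2, 3, 6]
```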
\subsubsection{Notations }
The notations used up to this point were chosen to be consistent with the ones used in
[20] and [8]. For the use in later sections, it will be convenient to modify the notations
slightly. As mentioned in 2.1, we always may assume that the elliptic curve under
consideration is one of the curves $E_{M,N}$ with different nonzero integers $M,N\in\mathbb{Z}
\setminus\{ 0\}$, given in affine form by a Weierstra\ss{} equation of the special form $y^2 =
x(x+M)(x+N)$. In addition, we may assume $M=pk>0$ and $N=-qk<0$ with coprime natural numbers
p,q\in\mathbb{N}$ and a squarefree natural number $k$. We write the equations for the
homogeneous spaces in the form
$$AX_0^2+MX_1^2-BX_2^2=0,\qquad AX_0^2+NX_1^2-CX_3^2=0\leqno{(14)}$$
where $A,B,C\in\mathbb{Q}(S,2)$ are represented by squarefree integers, with $C$ depending on
$A$ and $B$ by the condition that $ABC$ is a perfect square. We call $(A,B,C)$ a triplet defining
a homogeneous space for $E_{M,N}$. The approach sketched in 2.3.2 provides us with means to
reduce the number of potential triplets $(A,B,C)$ by ruling out impossible ones and by
identifying triplets with respect to 2-torsion equivalence. (In the next section some specific
examples are provided which explicitly show how this is done.) \smallskip
Let us choose any of the possible 2-descent parameter sets
$(A,B,C)$. Then finding the expected rational point on $E_{M,N}$ is
equivalent to finding a solution of the system (12), which in our situation
reads
$$x=A\alpha^2, \qquad x+M=B\beta^2, \qquad x+N=C\gamma^2\leqno{(15)}$$
where $x,\alpha,\beta,\gamma\in\mathbb{Q}$. With such a solution
we get a rational point $(x,y)$ on $E_{M,N}$ by setting $y=\sqrt{ABC}\alpha\beta\gamma$.
The system (15) is equivalent to the system (14), and an integer solution $(x_0,x_1,x_2,x_3)$
of (14) yields a rational solution of (15) via
$$x = \frac{Ax_0^2}{x_1^{2}}, \qquad y=\sqrt{ABC}\,\frac{x_0x_2x_3}{x_1^3}, \qquad
(\alpha,\beta,\gamma)=\left(\frac{x_0}{x_1},\frac{x_2}{x_1},\frac{x_3}{x_1}\right).
\leqno{(16)}$$
In projective notation, the sought rational point $(x,y)$ on $E_{M,N}$ is given by the
expression $(x_1^3 : Ax_0^2x_1 : \sqrt{ABC}\,x_0x_2x_3)$. We see that the logarithmic height
of this point is about 3 times the logarithmic height of the solution $(x_0,x_1,x_2,x_3)$.
\smallskip
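Formula (16) is straightforward to implement. The Python sketch below (our naming; exact rational arithmetic via the standard library) recovers a point on $E_{M,N}$ from an integer solution of the system (14).

```python
from fractions import Fraction
from math import isqrt

def point_from_descent(M, N, A, B, C, sol):
    """Recover a rational point (x, y) on E_{M,N}: y^2 = x*(x+M)*(x+N)
    from an integer solution (x0, x1, x2, x3) of the homogeneous space
    A*x0^2 + M*x1^2 = B*x2^2,  A*x0^2 + N*x1^2 = C*x3^2   (formula (16))."""
    x0, x1, x2, x3 = sol
    assert A*x0**2 + M*x1**2 == B*x2**2
    assert A*x0**2 + N*x1**2 == C*x3**2
    s = isqrt(A * B * C)
    assert s * s == A * B * C            # ABC must be a perfect square
    x = Fraction(A * x0**2, x1**2)
    y = Fraction(s * x0 * x2 * x3, x1**3)
    return x, y

# Triplet (A, B, C) = (3, 6, 2) with solution (1, 1, 1, 1) for E_{3,-1}:
# 3 + 3 = 6 and 3 - 1 = 2, and ABC = 36 is a square.
x, y = point_from_descent(3, -1, 3, 6, 2, (1, 1, 1, 1))
assert y**2 == x * (x + 3) * (x - 1)     # (x, y) = (3, 6) lies on the curve
```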
In the sequel we thus may restrict our considerations to systems of quadratic equations
of the form (14). We may also assume that this system has a solution at all, which in
particular implies that each of the two individual quadratic equations has a solution.
The total energy required to quarry and process limestonecement rock is 29,932 Btutonne. stone limestone Limestone quarry industrial cause of May 20, 2011 Almost all of the stone mining quarrying industrial process can produce a lot of wastewater, Limestone quarry industry is no exception.Learn More
Quarry amp Aggregates. In todays quarry and aggregates industry, efficiency and production matter more than ever. It matters in the equipment you buy and the partners you choose. But it doesnt stop there. To compete, you need to leverage the available technology andLearn More
QUARRY EQUIPMENT. Quarry equipment thats fully equipped to conquer the most difficult projects you encounter and yet humble enough to let you be the boss, our product line includes everything from Wheel Loaders to Excavators and can meet all the challenges of the quarry, aggregates and cement industry.Learn More
machines required for granite quarry. ... diamond wire saws, and large granite quarry machinery for granite granite industry and stone industry from DIACO Innovations Elberton, Georgia. ... quarry equipment,quarry plant,Aggregate Processing Line ... quarry equipment,quarry plant,Aggregate Processing Line,Sand Production Line, all of those ...Learn More
Sep 10, 2015 Selecting equipment for washing and classifying can seem like a daunting task. There are countless solutions using a variety of equipment that can be put together based on a producers needs. Although some of this equipment was detailed earlier, following is a more-complete overview of wet processing equipment available to aggregate producers.Learn More
quarry stone industry. Application ... With its process know-how, high quality - reliable machines and services, HAZEMAG plays an important and highly contributing role in the crushing and processing of the raw materials that ultimately result in this essential product we all call aggregate.Learn More
aggregate processing in quarry at malaysia. aggregate quarry in gambang pahang malaysia - YouTube 14 May 2014 crusher grinding for aggregate sand Quarry equipment in Malaysia like rock crusher and grinding mill are used for Flow Chart Of Granite Quarry Processing In Malaysia processFlow Chart Of Safety award for Malaysian industry - Quarry MagazineLearn More
A Sample Stone Quarry Business Plan Template 1. Industry Overview. A stone quarry business is a business that involves the excavation of different dimension of stones, rocks, ripraps, construction aggregates, slates and gravels for the constructions industry.Learn More ...Learn More.Learn More
Welcome to the grinding mill, grinding machine and grinder. How does a Drotsky hammermill work? ... This rotor rotates at a high speed inside the milling chamber, and the material is crushed by repeated hammer impacts.
Bead Mill For Calcium Carbonate Slurry. The ELE bead mill is suitable for calcium carbonate slurry; it is an efficient wet grinding machine whose optimized design of accelerating turbo wheels provides more kinetic energy for the beads in the chamber.
Jul 17, 2012 To obtain ammonium or potassium perchlorate, we will use what is called a double decomposition. Introduce, either ammonium or potassium chloride in the solution and the corresponding salt will precipitate out as their solubility is very low compared to sodium perchlorate. To finish, extract the perchlorate and let it dry in the sun.
As an equipment manufacturer, Fives designs and supplies solutions for best performance and easy operation. To answer its clients' daily maintenance needs, Fives proposes a wide range of spare parts for crushing, grinding, pyroprocess and combustion equipment, as well as emission treatment systems.
Crusher For Sale Ireland 180T. Used crushers in Ireland ... used crusher sale Ireland, crushers for sale, screens for sale, quarry mobile cone crusher, crushers for sale Ireland 180t. Used and new fixed crusher/shredder ... used crusher for sale in Ireland ...
From applications that require vibration to feed, convey, pack, screen and more, to tasks that include bulky or very fine materials, GK has the necessary equipment. Accomplish your business needs with the seamless integration of General Kinematics low-energy consumption equipment..
Screening and vibratory equipment take in various shapes, separate the material into consistent sizes and dispense a uniform product from as many as five unique discharge points. It's why we like to compare Superior screening equipment to the highest paying slot machines in Las Vegas.
The Boulanger Project is located 60 km by paved road to the south of Cayenne, the capital city of French Guiana and is accessible by all-weather roads. Reunion Gold Corporation is a gold explorer with a portfolio of projects in Guyana, Suriname and French Guiana, all located in the Guiana Shield, South America. We seek safe harbor.
Everything Attachments 72 Pulverizer Yard Tool Version 2.0. The Everything Attachments pulverizer yard tool is a top of the line attachment that features other manufacturers' upgrades as standard equipment. Most manufacturers of pulverizers offer each size of pulverizer with two options of teeth: railroad style teeth, or big teeth for an ...
Conservative estimates report that 80 percent (over 22 tons) of ASM gold mined in the Democratic Republic of the Congo is smuggled out; other estimates report that the number is as high as 98 percent.
Mar 07, 2014 Theres gold in them thar circuit boards -- laptops, phones, cameras and other devices use the precious metal to connect components, and it can
Sand bricks are helpful in a number of different construction scenarios. Actually, with the latest developments in technology, sand bricks have become increasingly common in various projects throughout the country. As a result, we have seen a spike in interest among construction worke ...
Liner For Crushing And Grinding Machine. Liner for crushing and grinding machine; liner face grinding machine LFGM 500. Manufacturer of crankshaft grinding machine, liner landing spring. The spring cone crusher is an efficient piece of rock crushing equipment.
| 113,872
|
\begin{document}
\title{Scaling of the Thue--Morse diffraction measure}
\author{Michael Baake$^{a}$, Uwe Grimm$^{b}$ and Johan Nilsson$^{a}$}
\affiliation{{}$^{a}\!$Fakult\"{a}t f\"{u}r Mathematik,
Universit\"{a}t Bielefeld,
Postfach 100131, 33501 Bielefeld, Germany\\
{}$^{b}\!$Department of Mathematics and Statistics,
The Open University,
Walton Hall, Milton Keynes MK7 6AA, UK}
\begin{abstract}
We revisit the well-known and much studied Riesz product
representation of the Thue--Morse diffraction measure, which is also the
maximal spectral measure for the corresponding dynamical spectrum in
the complement of the pure point part. The known scaling relations
are summarised, and some new findings are explained.
\end{abstract}
\pacs{61.05.cc,
61.43.-j,
61.44.Br
}
\maketitle
\section{Introduction}
The Thue--Morse (TM) sequence is defined via the binary substitution
$1\mapsto 1\bar{1}$, $\bar{1}\mapsto \bar{1}1$; see \cite{AS,tao} and
references therein for general background. The corresponding dynamical
system is known to have mixed (pure point and singular continuous)
spectrum \cite{Q,ME,Kea}, with a pure point part on the dyadic points and
a singular continuous spectral measure in the form of a Riesz
product. The latter coincides with the diffraction measure
$\widehat{\gamma}$ of the TM Dirac comb with weights $\pm 1$; compare
\cite{BG08} for details.
The Riesz product representation of the TM diffraction measure reads
\begin{equation}\label{eq:TM-Riesz}
\widehat{\gamma}\, =
\prod_{n\ge 0} \bigl( 1 - \cos(2^{n+1}\pi k)\bigr),
\end{equation}
with convergence (as a measure, not as a function) in the vague
topology; see \cite{Z} for general background. The singular continuous
nature of $\widehat{\gamma}$ is traditionally proved \cite{Q,Nat} by
excluding pure points by Wiener's criterion \cite{Wie,Mah} and
absolutely continuous parts by the Riemann--Lebesgue lemma
\cite{Kaku}; compare \cite{BG08,tao} and references therein for
further material.
Since diffraction measures with singular continuous components do
occur in practice \cite{Withers}, it is of interest to study such
measures in more detail. Below, we use the TM paradigm to rigorously
explore the scaling properties of `singular peaks' in a diffraction
measure, combining methods from harmonic analysis and number theory;
for further results of a similar type, we refer to
\cite{CSM,GL,Zaks,ZPK1,ZPK2} and references therein.
\section{Uniform distribution properties}
In what follows, arguments around uniform distribution will be
important. Let us thus begin with a summary of equivalent
characterisations.
\begin{prop}\label{prop:unidist}
Let\/ $(x_{n})^{}_{n\in\NN}$ be a sequence of real numbers
in the interval $[0,1]$. Then, the
following properties are equivalent.
\begin{itemize}
\item[$1.$] The sequence $(x_{n})^{}_{n\in\NN}$ is uniformly
distributed in the interval\/ $[0,1]$, also known as
uniform distribution mod\/ $1$.
\item[$2.$] For every pair\/ $a,b$ of real numbers subject to the
condition\/ $0\le a<b\le 1$, one has\newline
$\lim_{N\to\infty}\frac{1}{N}\ts\mathrm{card} \bigl(\{x_{n}\mid n\le
N\}\cap [a,b)\bigr) = b-a$.
\item[$3.$] For every real-valued continuous function on the closed unit
interval, one has\newline
$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}f(x_{n}) =
\int_{0}^{1} f(x) \dd x$.
\item[$4.$] One has\/ $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}
e^{2\pi i k x_{n}} = 0$ for all wave numbers\/ $k\in\ZZ\setminus\{0\}$.
\end{itemize}
\end{prop}
For details and proofs, we refer to \cite[Chs.~1.1 and 1.2]{KN}. Note
that the continuity condition in property $(3)$ can be replaced by
Riemann integrability \cite[Cor.~1.1.1]{KN}, but not by Lebesgue
integrability (for obvious reasons).
Below, we particularly need uniform distribution properties of the
sequence defined by $x_{n}=2^{n}$, which was extensively studied in
this context by Kac \cite{Kac}. This is a special case of a classic
family of sequences considered by Hardy and Littlewood \cite{HL}, and
by Weyl; see also \cite[Thm.~1.4.1 and Notes to Ch.~1.4]{KN}.
\begin{lemma}\label{lem:unidist}
Let\/ $(x_{n})^{}_{n\in\NN}$ be a sequence of distinct integers. Then,
the sequence\/ $(x_{n} k)^{}_{n\in\NN}$ is uniformly distributed\/
${}\bmod 1$ for Lebesgue-almost all\/ $k\in\RR$. In particular, this
holds for\/ $x_{n}=\ell^{n}$ with any fixed integer\/ $\ell\ge 2$.
\end{lemma}
Consider the function $f(x)=\log\bigl(1-\cos(2\pi x)\bigr)$ on $[0,1]$.
It has singularities at $x=0$ and $x=1$, which are both integrable (via
standard arguments). In fact, one has
\begin{equation}\label{eq:int}
\int_{0}^{1} \log\bigl(1-\cos(2\pi x)\bigr)\dd x
\, = \, -\log(2)\, .
\end{equation}
We also need a discrete analogue of this formula. Via
$1-\cos(2\vartheta)=2\bigl(\sin(\vartheta)\bigr)^{2}$ together with
the well-known identity $\prod_{m=1}^{n-1} \sin(\pi\frac{m}{n}) =
n/2^{n-1}$, one can derive that
\begin{equation}\label{eq:qsum}
\sum_{m=1}^{n-1} \log \bigl(1-\cos(2\pi \tfrac{m}{n})\bigr)\, = \,
\log\biggl(\frac{n^{2}}{2_{\phantom{T}}^{n-1}}\biggr)
\end{equation}
holds for all $n\ge 1$.\smallskip
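As a quick numerical sanity check (a Python sketch of our own, not part of the original derivation), the discrete identity in Eq.~\eqref{eq:qsum} can be verified directly for several values of $n$:

```python
import math

def lhs(n):
    # sum_{m=1}^{n-1} log(1 - cos(2*pi*m/n)); empty sum for n = 1
    return sum(math.log(1 - math.cos(2 * math.pi * m / n)) for m in range(1, n))

def rhs(n):
    # log(n^2 / 2^(n-1))
    return math.log(n ** 2 / 2 ** (n - 1))

for n in (1, 2, 3, 7, 12, 25):
    assert abs(lhs(n) - rhs(n)) < 1e-9, n
```

For $n=2$ both sides equal $\log 2$, and for $n=1$ both sides vanish, consistent with the claim that the identity holds for all $n\ge 1$.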
In order to apply uniform distribution results, we set
\[
f^{}_{\!\scriptscriptstyle\circ}(x) \, = \,
\begin{cases} f(x), & \text{if $0<x<1$,}\\
0, & \text{if $x=0$ or $x=1$,}\end{cases}
\]
which is Riemann integrable. By Proposition~\ref{prop:unidist},
we have
\[
\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N}
f^{}_{\!\scriptscriptstyle\circ}(x_{n}) \, = \, -\log(2)
\]
for any real-valued sequence $(x_{n})^{}_{n\in\NN}$ that is uniformly
distributed ${}\bmod 1$. Alternatively, one may directly work with the
function $f$ itself if the sequence $(x_{n})^{}_{n\in\NN}$ avoids the
points $x=0$ and $x=1$.
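These facts are easy to illustrate numerically. The Python sketch below (our own illustration) uses the classical Weyl sequence $x_n = n\sqrt{2} \bmod 1$ rather than $2^n k$, since repeated doubling exhausts double precision after about fifty steps; this sequence is uniformly distributed mod $1$ and never hits the singular points $x=0,1$, so the average of $f$ converges to the integral $-\log(2)$ of Eq.~\eqref{eq:int}:

```python
import math

alpha = math.sqrt(2)          # n*alpha mod 1 is uniformly distributed mod 1 (Weyl)
N = 100_000
xs = [(n * alpha) % 1.0 for n in range(1, N + 1)]

# Property 2: empirical frequency of [a, b) approaches b - a
a, b = 0.2, 0.5
freq = sum(a <= x < b for x in xs) / N
assert abs(freq - (b - a)) < 0.01

# Average of f(x) = log(1 - cos(2*pi*x)) approaches the integral -log(2);
# the sequence never visits the singular points x = 0, 1
avg = sum(math.log(1 - math.cos(2 * math.pi * x)) for x in xs) / N
assert abs(avg - (-math.log(2))) < 0.05
```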
\section{Riesz product}
A direct path to the Riesz product of the TM diffraction measure can
be obtained as follows. Consider the recursion $v^{(n+1)}= v^{(n)}
\bar{v}^{(n)}$ with initial condition $v^{(0)}=1$, which gives an
iteration towards the one-sided fixed point $v$ of the TM substitution
on the alphabet $\{1,\bar{1}\}$. If we define the exponential sum
\[
g_{n}(k) \, = \, \sum_{\ell=0}^{2^{n}-1} v^{}_{\ell}\, e^{-2\pi ik \ell},
\]
where $v = v^{}_{0} v^{}_{1} v^{}_{2} \cdots$, the function $g_{n}$ is
then the Fourier transform of the weighted Dirac comb for $v_{n}$,
when it is realised with Dirac measures (of weight $\pm 1$) on the
left endpoints of the unit intervals that represent the symbolic
sequence of $v_{n}$. In particular, one has $g^{}_{0}(k)=1$ and
\[
g_{n+1}(k) \, = \, \bigl(1- e^{-2\pi ik 2^{n}}\bigr)\, g_{n}(k)
\]
for $n\ge 0$, so that
\[
\bigl| g_{n+1}(k)\bigr|^{2} \, = \, 2\ts \bigl| g_{n}(k)\bigr|^{2}
\bigl(1 - \cos(2^{n+1}\pi k)\bigr).
\]
One can then explicitly check that $f_{n}(k) = \frac{1}{2^{n}}\bigl|
g_{n}(k)\bigr|^{2} = \prod_{\ell=0}^{n-1} \bigl( 1 - \cos(2^{\ell+1}
\pi k)\bigr)$, which reproduces the Riesz product of
Eq.~\eqref{eq:TM-Riesz} in the sense that $\lim_{n\to\infty} f_{n} =
\widehat{\gamma}$ as measures in the vague topology.
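The equality $f_{n}(k) = \frac{1}{2^{n}}\bigl| g_{n}(k)\bigr|^{2}$ is easy to confirm numerically. The following Python sketch (our own illustration; helper names are ours) builds the $\pm 1$ weights of $v_{n}$ via the recursion $v^{(n+1)}=v^{(n)}\bar{v}^{(n)}$ and compares the exponential sum with the finite Riesz product:

```python
import cmath
import math

def tm_weights(n):
    # Iterate v -> v vbar from v = [1]; returns the +-1 weights of v_n (length 2^n)
    v = [1]
    for _ in range(n):
        v = v + [-w for w in v]
    return v

def f_n_direct(k, n):
    # f_n(k) = |g_n(k)|^2 / 2^n with g_n the exponential sum over v_n
    g = sum(w * cmath.exp(-2j * math.pi * k * l)
            for l, w in enumerate(tm_weights(n)))
    return abs(g) ** 2 / 2 ** n

def f_n_product(k, n):
    # Finite Riesz product: prod_{l=0}^{n-1} (1 - cos(2^{l+1} pi k))
    p = 1.0
    for l in range(n):
        p *= 1 - math.cos(2 ** (l + 1) * math.pi * k)
    return p

for k in (0.3, 1 / 3, 0.123):
    assert abs(f_n_direct(k, 8) - f_n_product(k, 8)) < 1e-8
```

For $k=\frac{1}{3}$, every factor equals $\frac{3}{2}$, so `f_n_product(1/3, n)` returns $(3/2)^{n}$.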
As $g_{n}$ corresponds to a chain of length $2^{n}$, the growth
rate $\beta (k)$ (when it is well-defined) is obtained as
\[
\beta (k) \, = \, \lim_{n\to\infty}
\frac{\log \bigl(f^{}_{n} (k)\bigr)}{n \ts \log (2)} \ts .
\]
Let us now consider the growth rate for various cases of the wave
number $k$.
\emph{Case A.} When $k=\frac{m}{2^{r}}$ with $r \ge 0$ and $m\in\ZZ$,
all but finitely many factors of the Riesz product \eqref{eq:TM-Riesz}
vanish, so that no contribution can emerge from such values of $k$. In
fact, these are the dyadic points, which support the pure point part
of the dynamical spectrum. They are extinction points for the
diffraction measure of the balanced weight case considered here;
compare \cite{ME} for a discussion of this connection.
\emph{Case B.} When $k\in\RR$ is such that the sequence
$(2^{n}k)^{}_{n\in\NN}$ is uniformly distributed ${}\bmod 1$, which is
true for Lebesgue-almost all $k\in\RR$ by Lemma~\ref{lem:unidist}, one
obtains the growth rate
\[
\begin{aligned}
\beta(k)\, & = \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}
\frac{\log\bigl(1-\cos(2^{n+1}\pi k)\bigr)}{\log(2)}\\[1mm]
& = \int_{0}^{1}\frac{\log\bigl(1-
\cos(2 \pi x)\bigr)}{\log(2)} \dd x
\, = -1
\end{aligned}
\]
by uniform distribution and Eq.~\eqref{eq:int}. Note that none of
these sequences ever visits a dyadic point, so that the limit
according to Proposition~\ref{prop:unidist} applies. These values of
the wave number $k$ thus do not contribute to the TM measure.
Note that this argument shows that $\lim_{n\to\infty}f_{n}(k)=0$
pointwise for almost all $k\in\RR$ and thus provides an alternative
proof for the fact that the measure from Eq.~\eqref{eq:TM-Riesz} does
not comprise an absolutely continuous part; compare the Introduction
as well as \cite{Kaku,BG08}.
\emph{Case C.} When $k=\frac{m}{3}$ with $m$ not divisible by $3$, one
finds $f_{n}(k)=(3/2)^{n}$. Since this corresponds to a system (or
sequence) of length $2^{n}$, we have a growth rate of
\[
\beta(k) \, = \, \frac{\log(3/2)}{\log (2)}\, \approx\,
0.584963\, .
\]
The same growth rate applies to all numbers of the form
$k=\frac{m}{2_{\vphantom{I}}^{r} \cdot \, 3}$ with $r \ge 0$ and $m$
not divisible by $3$, because the factor $2^{r}$ in the denominator
has no influence on the asymptotic scaling, due to the structure of
the Riesz product \eqref{eq:TM-Riesz}. Note that the points of this
form are dense in $\RR$, but countable.
Similarly, when $k=\frac{m}{2_{\vphantom{I}}^{r}\cdot\, 5}$ with $r\ge
0$ and $m$ not a multiple of $5$, one finds
\[
\beta(k)\, = \, \frac{\log(5/4)}{2\ts\log(2)} \, \approx \, 0.160964\, .
\]
\emph{Case D.} More generally, when ${k=\frac{p}{2_{\vphantom{I}}^{r}q}}$
with ${r\ge 0}$, ${q\ge 3}$ odd and $\gcd(p,q)=1$, one can
determine the growth rate explicitly. Recall that ${U_{\nts q}}
:= {(\ZZ/q\ZZ)^{\times}} = {\{ 1 \le p < q \mid \gcd (p,q)=1 \}}$ is
the unit group of the residue class ring $\ZZ/q\ZZ$. If $S_{q} = {\{
2^n \bmod q \mid n\ge 0 \}}$ is the subgroup of $U_{\nts q}$
generated by the unit $2$, one finds
\begin{equation}\label{eq:betapq}
\beta (k) \, = \, \frac{1}{ \mathrm{card} (p\ts S_{q})}
\sum_{n\in pS_{q}} \frac{\log
\bigl(1-\cos (2 \pi \frac{n}{q}) \bigr)}{\log (2)}
\end{equation}
by an elementary calculation. When $q=2m+1$, the integer
$\mathrm{card} (S_{q})$ is the multiplicative order of $2$ mod $q$,
which is sequence \textsf{A\ts 002326} in \cite{Sloane}.
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{beta1q.eps}}
\caption{Exponents $\beta(1/q)$ according to Eq.~\eqref{eq:betapq} for
all odd $3\le q< 1050$. Apart from $q=3$ and $q=5$, the exponents
are negative. The solid line is the function $g$ from
Eq.~\eqref{eq:top}. Exponents with large negative values emerge for
$q=2^{r}\pm 1$.\label{fig:betafig}}
\end{figure}
When $\gcd(p,q)=1$, one has $\mathrm{card}(p\ts S_{q}) =
\mathrm{card}(S_{q})$, even when the set $p\ts S_{q}$ is considered
mod $q$. Note that formula \eqref{eq:betapq} is written in such a way
that it also holds for all $p$ not divisible by $q$. If $\gcd(p,q)>1$,
the set $p\ts S_{q}$ may be reduced mod $q$, which shows that the
formula consistently gives $\beta(p/q)$ in such cases.
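Eq.~\eqref{eq:betapq} is straightforward to evaluate. The Python sketch below (function names are our own) computes $S_{q}$ and the coset average; it reproduces the closed forms of Case~C for $q=3,5$ and the first entry $\beta(3/17)\approx 0.266$ of Table~\ref{qtab}:

```python
import math

def S(q):
    # Subgroup of (Z/qZ)^x generated by the unit 2 (q odd)
    s, x = set(), 1
    while x not in s:
        s.add(x)
        x = (2 * x) % q
    return s

def beta(p, q):
    # Eq. (betapq): average of log_2(1 - cos(2*pi*n/q)) over the coset p*S_q
    coset = {(p * n) % q for n in S(q)}
    return sum(math.log2(1 - math.cos(2 * math.pi * n / q))
               for n in coset) / len(coset)

assert abs(beta(1, 3) - math.log2(1.5)) < 1e-12       # Case C, q = 3
assert abs(beta(1, 5) - math.log2(1.25) / 2) < 1e-12  # Case C, q = 5
assert abs(beta(3, 17) - 0.266) < 5e-3                # first entry of the table
```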
\emph{Case E.}
When $\mathrm{card}(S_{q})=q-1$, Eq.~\eqref{eq:qsum} leads to
$\beta(1/q)=g(q)$ with
\begin{equation}\label{eq:top}
g(q) \, = \, \frac{\log\Bigl(
\frac{q^{2}}{2_{\vphantom{I}}^{q-1}}\Bigr)}
{\log\bigl( 2_{\vphantom{I}}^{q-1}\bigr)} \, = \,
\frac{2\ts\log(q)}{(q-1)\log(2)} - 1\ts .
\end{equation}
For odd $q\ge 3$, the function $g(q)$ is positive precisely for $q=3$
and $q=5$, and negative otherwise; compare Figure~\ref{fig:betafig}.
In fact, also $\beta(1/q)$ seems to be negative for all odd $q\ge 7$,
though this does not hold for general $\beta(p/q)$. Indeed,
$\beta(3/17)>0$, and all positive exponents for odd $7\le q<1000$ are
listed in Table~\ref{qtab}.
More generally, for any odd $q\ge 3$, one obtains (from Case D) the
formula
\begin{equation}
\frac{1}{q-1} \sum_{1<d|q} \mathrm{card}(S_{d})
\sum_{p\in U_{\nts d}/S_{d}} \beta\bigl(\tfrac{p}{d}\bigr)
\, = \, g(q)\ts .
\end{equation}
Now, M\"{o}bius inversion (with the M\"{o}bius function $\mu$) leads
to
\begin{equation}
\sum_{p\in U_{\nts q}/S_{q}} \beta\bigl(\tfrac{p}{q}\bigr)
\, = \, \frac{1}{\mathrm{card}(S_{q})} \sum_{1\ne d|q}
\mu\bigl(\tfrac{q}{d}\bigr)\, (d-1)\, g(d)\ts ,
\end{equation}
while a simpler formula than Eq.~\eqref{eq:betapq} for the individual
exponents seems difficult in general.
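The M\"{o}bius inversion formula above can be checked numerically for small odd $q$. In the Python sketch below (our own illustration), both sides agree to machine precision for $q=9,15,21$:

```python
import math

def S(q):
    # Subgroup of (Z/qZ)^x generated by 2 (q odd)
    s, x = set(), 1
    while x not in s:
        s.add(x)
        x = (2 * x) % q
    return s

def beta(p, q):
    # Coset average of log_2(1 - cos(2*pi*n/q)) over p*S_q
    coset = {(p * n) % q for n in S(q)}
    return sum(math.log2(1 - math.cos(2 * math.pi * n / q))
               for n in coset) / len(coset)

def g(q):
    return 2 * math.log2(q) / (q - 1) - 1

def mobius(n):
    # Moebius function via trial division
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def coset_reps(q):
    # One representative per coset of S_q in U_q
    reps, seen = [], set()
    for p in (p for p in range(1, q) if math.gcd(p, q) == 1):
        if p not in seen:
            reps.append(p)
            seen |= {(p * n) % q for n in S(q)}
    return reps

for q in (9, 15, 21):
    lhs = sum(beta(p, q) for p in coset_reps(q))
    rhs = sum(mobius(q // d) * (d - 1) * g(d)
              for d in range(2, q + 1) if q % d == 0) / len(S(q))
    assert abs(lhs - rhs) < 1e-9, q
```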
\emph{Case F.} As is shown in \cite{CSM} (by way of an explicit
example), there are wave numbers $k$ for which the exponent $\beta(k)$
does not exist. The construction is based on a suitable mixture of
binary expansions for wave numbers with different exponents. Clearly,
there are uncountably many such examples, though they still form a
null set. Here, one can define a `spectrum' of exponents via the
limits of all converging subsequences.
\begin{table}\renewcommand{\arraystretch}{1.3}
\caption{Wave numbers $k=\frac{p}{q}$ with positive exponents, for all odd
integers $5<q<1000$. For a given $q$, all $p\in U_{\nts q}/S_{q}$ are
considered (we choose the smallest element of the set $p\ts S_{q}$ mod $q$
as representative).\label{qtab}}
\begin{footnotesize}
\begin{tabular}{|@{\,}cl@{\,}|@{\,}cl@{\,}|@{\,}cl@{\,}|@{\,}cl
@{\,}|@{\,}cl@{\,}|@{\,}cl@{\,}|}\hline
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$ &
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$ &
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$ &
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$ &
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$ &
$\frac{p}{q}$ & $\,\beta\bigl(\frac{p}{q}\bigr)$\rule[-6pt]{0pt}{6pt}\\ \hline
$\frac{3}{17}$ & 0.266 &
$\frac{25}{117}$ & 0.172 &
$\frac{37}{255}$ & 0.150 &
$\frac{65}{381}$ & 0.067 &
$\frac{47}{565}$ & 0.144 &
$\frac{65}{771}$ & 0.140 \\
$\frac{5}{31}$ & 0.272 &
$\frac{19}{127}$ & 0.108 &
$\frac{43}{255}$ & 0.318 &
$\frac{47}{451}$ & 0.127 &
$\frac{81}{565}$ & 0.113 &
$\frac{161}{771}$ & 0.140 \\
$\frac{11}{31}$ & 0.272 &
$\frac{21}{127}$ & 0.373 &
$\frac{53}{255}$ & 0.318 &
$\frac{65}{451}$ & 0.127 &
$\frac{61}{585}$ & 0.126 &
$\frac{69}{775}$ & 0.101 \\
$\frac{5}{33}$ & 0.105 &
$\frac{27}{127}$ & 0.108 &
$\frac{91}{255}$ & 0.150 &
$\frac{67}{455}$ & 0.128 &
$\frac{97}{585}$ & 0.126 &
$\frac{83}{775}$ & 0.127 \\
$\frac{7}{43}$ & 0.267 &
$\frac{43}{127}$ & 0.373 &
$\frac{37}{257}$ & 0.049 &
$\frac{69}{455}$ & 0.128 &
$\frac{53}{595}$ & 0.031 &
$\frac{111}{775}$ & 0.127 \\
$\frac{11}{63}$ & 0.244 &
$\frac{19}{129}$ & 0.143 &
$\frac{43}{257}$ & 0.404 &
$\frac{53}{511}$ & 0.028 &
$\frac{87}{595}$ & 0.031 &
$\frac{117}{775}$ & 0.101 \\
$\frac{13}{63}$ & 0.244 &
$\frac{11}{151}$ & 0.012 &
$\frac{45}{257}$ & 0.221 &
$\frac{75}{511}$ & 0.163 &
$\frac{51}{601}$ & 0.042 &
$\frac{57}{785}$ & 0.085 \\
$\frac{11}{65}$ & 0.350 &
$\frac{35}{151}$ & 0.012 &
$\frac{23}{275}$ & 0.117 &
$\frac{83}{511}$ & 0.239 &
$\frac{63}{601}$ & 0.042 &
$\frac{137}{819}$ & 0.359 \\
$\frac{11}{73}$ & 0.165 &
$\frac{25}{171}$ & 0.220 &
$\frac{49}{275}$ & 0.117 &
$\frac{85}{511}$ & 0.422 &
$\frac{53}{657}$ & 0.061 &
$\frac{145}{819}$ & 0.359 \\
$\frac{13}{73}$ & 0.165 &
$\frac{19}{185}$ & 0.126 &
$\frac{25}{331}$ & 0.067 &
$\frac{87}{511}$ & 0.028 &
$\frac{101}{657}$ & 0.061 &
$\frac{67}{825}$ & 0.089 \\
$\frac{13}{89}$ & 0.229 &
$\frac{17}{195}$ & 0.001 &
$\frac{35}{337}$ & 0.149 &
$\frac{107}{511}$ & 0.239 &
$\frac{51}{673}$ & 0.038 &
$\frac{173}{825}$ & 0.089 \\
$\frac{19}{89}$ & 0.229 &
$\frac{41}{195}$ & 0.001 &
$\frac{57}{337}$ & 0.149 &
$\frac{109}{511}$ & 0.163 &
$\frac{57}{683}$ & 0.131 &
$\frac{67}{889}$ & 0.050 \\
$\frac{9}{91}$ & 0.075 &
$\frac{17}{205}$ & 0.047 &
$\frac{49}{341}$ & 0.060 &
$\frac{171}{511}$ & 0.422 &
$\frac{71}{683}$ & 0.087 &
$\frac{95}{889}$ & 0.012 \\
$\frac{19}{91}$ & 0.075 &
$\frac{31}{205}$ & 0.183 &
$\frac{57}{341}$ & 0.369 &
$\frac{43}{513}$ & 0.033 &
$\frac{103}{683}$ & 0.179 &
$\frac{129}{889}$ & 0.012 \\
$\frac{11}{105}$ & 0.060 &
$\frac{19}{217}$ & 0.073 &
$\frac{71}{341}$ & 0.369 &
$\frac{77}{513}$ & 0.122 &
$\frac{111}{683}$ & 0.226 &
$\frac{157}{889}$ & 0.050 \\
$\frac{17}{105}$ & 0.060 &
$\frac{37}{217}$ & 0.073 &
$\frac{73}{341}$ & 0.060 &
$\frac{83}{513}$ & 0.272 &
$\frac{113}{683}$ & 0.335 &
$\frac{83}{993}$ & 0.124 \\
$\frac{17}{117}$ & 0.172 &
$\frac{35}{241}$ & 0.194 &
$\frac{31}{381}$ & 0.067 &
$\frac{85}{513}$ & 0.343 &
$\frac{55}{753}$ & 0.054 &
$\frac{149}{993}$ & 0.172\rule[-5pt]{0pt}{5pt}\\ \hline
\end{tabular}
\end{footnotesize}
\end{table}
\emph{Case G.} So far, we have identified countably many values of
$k$, for which the scaling exponents can be calculated, while
(due to Case B) Lebesgue-almost all $k\in\RR$ carry no singular
peak. The remaining problem is to cope with the uncountably many wave
numbers (of zero Lebesgue measure) that belong to the supporting set
of the TM measure and may possess well-defined exponents.
The existence of such numbers can be understood via Diophantine
approximation. Again, it is useful to start with the binary expansion
of a wave number $k$, and then modify it in a suitable way. Consider
first the example
\[
k \, = \, \tfrac{1}{3} \, = \, 0.0101010101\ldots
\]
If we now switch the binary digits at positions $2^r$, with $r\in\NN$,
we obtain a different wave number $k'$ that is irrational but
nevertheless still has the same scaling exponent $\beta$ as
$k=\frac{1}{3}$, as longer and longer stretches of the binary
expansion of $k'$ agree with that of $k$. Clearly, via similar
modifications, we can obtain uncountably many distinct irrational
numbers with $\beta = \beta(1/3)$.
The same strategy works for all other rational wave numbers $k$, and
underlies the nature of the TM measure. In particular, this explains
the existence of uncountably many `singular peaks', which together (in
view of Case~B) still form a Lebesgue null set. These
scaling exponents are accessible via our above arguments. It remains
to decide in which sense the above analysis is complete.
\section{Concluding remarks}
An analogous approach works for all measures of the form of a classic
Riesz product. In particular, the generalised Thue--Morse sequences
from \cite{BGG} can be analysed along these lines; compare also
\cite{Kea}. Likewise, the choice of different interval lengths is
possible, though technically more complicated; compare \cite{Wolny}
for some examples.
Higher-dimensional examples with purely singular continuous spectrum,
such as the squiral tiling \cite{squiral} or similar bijective block
substitutions \cite{Nat}, may still lead to classic Riesz products,
though they are now in more than one variable, and the analysis is
hence more involved. Nevertheless, the scaling analysis will still
lead to a better understanding of such measures.
\section*{Acknowledgements}
This work was supported by the German Research Foundation (DFG) within
the CRC~701.
| 99,274
|
Definition of Cordaitaceae
1. Noun. Chiefly Paleozoic plants; Cordaites is the chief and typical genus.
Generic synonyms: Gymnosperm Family
Group relationships: Cordaitales, Order Cordaitales
Member holonyms: Cordaites, Genus Cordaites
Cordaitaceae Pictures
Click the following link to bring up a new window with an automated collection of images related to the term: Cordaitaceae Images
Lexicographical Neighbors of Cordaitaceae
Literary usage of Cordaitaceae
Below you will find example usage of this term as found in modern and/or classical literature:
1. Strasburger's Text-book of Botany by Eduard Strasburger, Hans Fitting (1921)
"The cordaitaceae were lofty, branched trees with linear or broad and lobed leaves with parallel venation. Their flowers differ considerably from those of ..."
2. Report of the Annual Meeting (1900)
"... in the Carboniferous epoch there is a striking falling-off in the Devonian, in which, however, plants of high organisation, such as the cordaitaceae, ..."
3. Geologisches Zentralblatt (1908)
"... 1 Cordaitaceae and 7 Semina. Like the flora of the Erzgebirge basin generally, it is essentially an equivalent of the middle and upper ..." (translated from German)
4. Essentials of College Botany by Charles Edwin Bessey, Ernst Athearn Bessey (1914)
"cordaitaceae. Order GINKGOALES. Maidenhair Trees. Branching trees with fan-shaped, parallel-veined leaves. (All extinct but one species.) Family 10. ..."
| 191,234
|
Google shares hit $1,000 for the first time
Google shares have reached the $1,000 milestone for the first time after the company reported better-than-expected earnings.
Google posted a 36% jump in net profits to $2.97 billion for the July-to-September period.
Shares in the giant online search and ads company rose more than 13% to $1,006, and are now up 41% since the start of 2013.
Google’s revenues also beat forecasts with a 12% rise year-on-year.
“We are closing in on our goal of a beautiful, simple, and intuitive experience regardless of your device,” Google’s chief Larry Page said in a conference call with analysts.
The strong earnings report also helped other online companies, with Facebook shares adding 4.4% to a new high of more than $55. Amazon rose 3.4%.
At $1,000 a share, Google’s market value is about $334 billion, which is still well below Apple’s $461bn.
Google was floated in August 2004 at $85 a share, giving the company a market value at the time of $23 billion.
“We view solid paid clicks growth to be a good indicator of demand, driven by the continued shift to mobile,” JP Morgan analysts said in a note.
| 408,031
|
We are pleased to announce that a number of departments, such as Research Computing, the Committee for the Protection of Human Subjects, the Library, and the Health Promotion Research Center at Dartmouth, will be on site. Please visit our website for a list of all confirmed participants. We expect the list to grow, so please monitor the site for updates.
This event will be made possible thanks to generous sponsorship from Dartmouth SYNERGY: Clinical and Translational Science Institute, and will be organized by the Biomedical Libraries and Research Computing.
| 252,943
|
\section{ADAPTIVE CONTROL WITH A HIGH-ORDER TUNER} \label{sec:ht_adaptive_control}
We now state the main result of this paper. For the plant given in \eqref{eqn:plant_simplified}, we propose the adaptive controller given in Algorithm \ref{alg:adaptive_control}, with Algorithm \ref{alg:high_order_tuner} in place of the ADAPTIVE\_LAW in line 11. Algorithm \ref{alg:high_order_tuner} summarizes the high-order tuner adaptive law \cite{gaudio2020accelerated}.
\begin{algorithm}[b]
\caption{ADAPTIVE\_LAW (High Order Tuner) \cite{gaudio2020accelerated}}
\label{alg:high_order_tuner}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} time step $k$, current parameter estimate $\theta_k$, regressor $\phi_k$, prediction error $\varepsilon_{k+1}$, gains $\gamma$, $\beta$
\IF{$k == 0$}
\STATE $\vartheta_0 \gets \theta_0$
\ELSE
\STATE \textbf{Receive} $\vartheta_k$ from previous iteration
\ENDIF
\STATE Let $\N_k = 1 + \|\phi_k\|^2$, \\
$\nabla\bar{f}_k(\theta_k) = \frac{1}{\N_k}\phi_k\varepsilon_{k+1}$, \\
$\bar{\theta}_{k} = \theta_k - \gamma\beta\nabla\bar{f}_k(\theta_k)$
\STATE $\theta_{k+1} \gets \bar{\theta}_{k} - \beta(\bar{\theta}_{k} - \vartheta_k)$
\STATE Let $\nabla\bar{f}_k(\theta_{k+1}) = \frac{1}{\N_k}\phi_k(\phi_k^\top(\theta_{k+1} - \theta_k) + \varepsilon_{k+1})$
\STATE $\vartheta_{k+1} \gets \vartheta_k - \gamma\nabla\bar{f}_k(\theta_{k+1})$
\STATE \textbf{Return} $\theta_{k+1}$
\end{algorithmic}
\end{algorithm}
The specific updates that define the evolution of $\theta_k$ are summarized as
\begin{align}
\vartheta_{k+1} &= \vartheta_k - \gamma\nabla\bar{f}_k(\theta_{k+1}) \label{eqn:4} \\
\bar{\theta}_{k} &= \theta_k - \gamma\beta\nabla\bar{f}_k(\theta_k) \label{eqn:5} \\
\theta_{k+1} &= \bar{\theta}_{k} - \beta(\bar{\theta}_{k} - \vartheta_k) \label{eqn:6}
\end{align}
where $\nabla\bar{f}_k(\theta_k) = \frac{\nabla L_k(\theta_k)}{\N_k}$ and $\nabla\bar{f}_k(\theta_{k+1}) = \frac{\nabla L_k(\theta_{k+1})}{\N_k}$, and the gradients of $L_k$ are given by
\begin{align}
\nabla L_k(\theta_k) &= \phi_k\phi_k^\top\tilde{\theta}_k = \phi_k\varepsilon_{k+1} \label{eqn:ht_L_gradient_k} \\
\nabla L_k(\theta_{k+1}) &= \phi_k\phi_k^\top\tilde{\theta}_{k+1} \nonumber \\
&= \phi_k(\phi_k^\top(\theta_{k+1} - \theta_k) + \varepsilon_{k+1}) \label{eqn:ht_L_gradient_k+1}
\end{align}
When the regressors are constants (i.e. $\phi_k = \phi =$ constant), \eqref{eqn:4}-\eqref{eqn:6} reduce to Nesterov's algorithm \cite{gaudio2020accelerated},
\begin{equation} \label{eqn:nesterov}
\theta_{k+1} = \theta_k + \bar{\beta}(\theta_k - \theta_{k-1}) - \bar{\gamma}\nabla L(\theta_k + \bar{\beta}(\theta_k - \theta_{k-1}))
\end{equation}
where
\begin{equation}
\bar{\beta} = 1 - \beta, \,\, \bar{\gamma} = \gamma\beta
\end{equation}
\eqref{eqn:4}-\eqref{eqn:6} are therefore a high-order counterpart to the adaptive law in \eqref{eqn:gd_adaptive_law}, and include momentum components (second term in \eqref{eqn:nesterov}) and acceleration components (third term in \eqref{eqn:nesterov}).
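For concreteness, the updates \eqref{eqn:4}--\eqref{eqn:6} can be sketched numerically. The following is a minimal illustration, not the implementation used in the paper: the true parameters $\theta_*$, the Gaussian regressors, and the gains $\beta = \gamma = 0.5$ are demo choices within the admissible range of Proposition \ref{pro:ht_Lyapunov}.

```python
import numpy as np

def ho_tuner_step(theta, vartheta, phi, eps, gamma, beta):
    """One iteration of the high-order tuner, eqns (4)-(6)."""
    N = 1.0 + phi @ phi                        # normalizing signal N_k = 1 + ||phi_k||^2
    grad = phi * eps / N                       # normalized gradient at theta_k
    theta_bar = theta - gamma * beta * grad    # eq. (5)
    theta_next = theta_bar - beta * (theta_bar - vartheta)   # eq. (6)
    eps_next = phi @ (theta_next - theta) + eps              # error re-evaluated at theta_{k+1}
    vartheta_next = vartheta - gamma * phi * eps_next / N    # eq. (4)
    return theta_next, vartheta_next

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])        # unknown plant parameters (demo values)
theta = np.zeros(3)
vartheta = theta.copy()
beta, gamma = 0.5, 0.5                         # gamma < 4b(1-b)/(1+(3-5b)(1-b)) = 0.8 here
for k in range(3000):
    phi = rng.standard_normal(3)               # regressor (persistently exciting here)
    eps = phi @ (theta - theta_star)           # eps_{k+1} = phi_k^T (theta_k - theta_*)
    theta, vartheta = ho_tuner_step(theta, vartheta, phi, eps, gamma, beta)
print(np.linalg.norm(theta - theta_star))      # small after adaptation
```

With these persistently exciting regressors the parameter estimate converges to $\theta_*$; the theorem itself only guarantees $e_k \tends 0$ without excitation assumptions.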
\subsection{Stability of the High-Order Tuner Adaptive Law} \label{subsec:ht_stability}
As before, we first prove the boundedness of the parameter error $\tilde{\theta}_k$ and the auxiliary parameter estimate $\vartheta_k$ in the following proposition.
\begin{proposition} \label{pro:ht_Lyapunov}
The adaptive law in \eqref{eqn:4}-\eqref{eqn:6} results in a bounded parameter error $\tilde{\theta}_k$ and a bounded auxiliary parameter estimate $\vartheta_k$ for all $k$ if $0 < \beta < 1$ and $0 < \gamma < \frac{4\beta(1 - \beta)}{1 + (3 - 5\beta)(1 - \beta)}$ with
\begin{equation} \label{eqn:ht_Lyapunov_function}
V_k = \|\vartheta_k - \theta_*\|^2 + \|\theta_k - \vartheta_k\|^2
\end{equation}
as a Lyapunov function.
\end{proposition}
\begin{proof}
See Subsection \ref{subsec:ht_Lyapunov_proof} in the Appendix.
\end{proof}
The main result of this paper, that the high-order tuner algorithm leads to global boundedness of the closed-loop system and guarantees the control objective of $e_k \tends 0$ as $k \tends \infty$, is now given in the following theorem.
\begin{theorem} \label{thm:ht_error_bounded}
For the plant given in \eqref{eqn:plant_simplified}, Algorithm \ref{alg:adaptive_control} with Algorithm \ref{alg:high_order_tuner} as the adaptive law and $0 < \beta < 1$, $0 < \gamma < \frac{4\beta(1 - \beta)}{1 + (3 - 5\beta)(1 - \beta)}$ results in $e_k \tends 0$ as $k \tends \infty$.
\end{theorem}
\begin{proof}
See Subsection \ref{subsec:ht_stability_proof} in the Appendix.
\end{proof}
\begin{remark}
In Proposition \ref{pro:ht_Lyapunov} and Theorem \ref{thm:ht_error_bounded} (as in Proposition \ref{pro:gd_Lyapunov} and Theorem \ref{thm:gd_error_bounded}), we make no assumptions on the level of excitation in the input or on the initial parameter estimate $\theta_0$. Therefore, this adaptive law can be applied with any bounded input $\{r_k\}$ and any initial parameter estimate.
\end{remark}
| 129,747
|
TITLE: Checking when a linear isomorphism is natural
QUESTION [1 upvotes]: I am thinking about what it means for linear maps to be "natural" and in particular I am thinking about the following example. Let $V$ be a finite-dimensional real vector space with dimension $n$. Then Hom$_{\mathbb{R}}(V\otimes\mathbb{C},\mathbb{R})\cong$ Hom$_{\mathbb{R}}(V,\mathbb{R})\otimes\mathbb{C}$ since they both have real dimension $2n$. My question is whether or not there is a definition for this isomorphism that is independent of choice of basis and why. It seems to me that this cannot be the case since we'd like to define the image of any $\mathbb{R}$-linear map $f$ on $V\otimes\mathbb{C}$ in Hom$_{\mathbb{R}}(V,\mathbb{R})\otimes\mathbb{C}$ as $f|_{V}\otimes \lambda$ where $\lambda$ is either $1$ or $i$, but how would you know which to choose? How can I prove this more rigorously?
Edit: I am comfortable with the meaning of "natural" and "independent of choice of basis". At this time I would just like to know whether or not such a natural isomorphism exists.
REPLY [1 votes]: The idea that a map is 'natural' is sometimes informally stated as being independent of choice of basis, but what it really means is that it's a component of a natural transformation between two functors.
Here, the functors are $\operatorname{Hom}(-\otimes \mathbb{C},\mathbb{R})$ and $\operatorname{Hom}(-,\mathbb{R})\otimes\mathbb{C}$. The question of whether they're naturally isomorphic requires you to produce a natural isomorphism between these two functors, which means a choice of isomorphism $$\alpha_V:\operatorname{Hom}(V\otimes \mathbb{C},\mathbb{R})\to\operatorname{Hom}(V,\mathbb{R})\otimes\mathbb{C}$$ for every choice of vector space $V$ such that for any map $f:V\to W$ (both functors being contravariant), you obtain a commutative square
$$\begin{matrix}&\alpha_W&\\\operatorname{Hom}(W\otimes \mathbb{C},\mathbb{R})&\cong&\operatorname{Hom}(W,\mathbb{R})\otimes\mathbb{C}\\\downarrow &&\downarrow\\\operatorname{Hom}(V\otimes \mathbb{C},\mathbb{R})&\cong&\operatorname{Hom}(V,\mathbb{R})\otimes \mathbb{C}\\&\alpha_V&
\end{matrix}$$
These two functors may be naturally isomorphic, but, at least to me, not in an obvious way.
| 49,216
|
Salzgittersee
Details about this bathing water:
The bathing place Salzgittersee is located by the lake in the municipality/city Salzgitter, Stadt in the region of Niedersachsen, Salzgitter, Krfr.st. and was rated in 2017 with the best water quality (three stars - excellent bathing water quality).
current rating: excellent quality
country: Germany
region: Niedersachsen
province: Salzgitter, Krfr.st.
commune: Salzgitter, Stadt
category: lake
water flows to North-east Atlantic Ocean
| 69,098
|
There is a sunflower field on Calkins Road that I was determined to get out and photograph.
It was getting late in the evening when I got there, and I missed the best light but I still took a few shots.
Aaron took a few pics too and we both shot some video.
I recommend if you don't like bees you skip the part between 2:25 and 3:54... Tina this means you. Although Tina did pretty well when I showed up at work with this necklace on.
I like honey bees (maybe because I like honey so much?)... especially the big fat lazy ones that graze on sunflowers. Here is the video.
Sunflower Field from Jennifer Cisney on Vimeo.
Very Cool. One doesn't see sunflower farms that often in the Bay Area. Do the petals close up in the evening? I noticed this also happening in the sunflower photo you took awhile back.
And what I really want to know is whether the farm sells sunflower seeds, oils or any products?
I love sunflowers! The sunflower field is stunning!
Gorgeous!!
Can you tell us where you got the necklace?
Ok, CMON! That's just way too much BEE video!
Seriously though, very creative and beautiful.
PS - you might like this website:
Linda - the only thing I have seen the farm sell is cut sunflowers.
Becky - I got the necklace at Fossil... the watch company!
Tina - har har.
Sunflowers are my favorite. I've seen a lot of photos of sunflowers but none as beautiful as yours!!!
What beautiful, cheerful pictures! I'm so glad you took the time to share them. Thank you!
(Love your necklace.)
Sunflowers and Regina Spektor - that stellar combination totally cheered me up!!
beautiful photos!
brightened my day...thanks!
i am also fond of bees (see my screenname).
Love the pics. Such a nice pop of color. I like the one photo in particular where it looks like the sunflower is trying to hide. It's like it's thinking "oh no not another picture of me. I know I am beautiful." Lol.
LJC - what is the cross street on calkins for the sunflower farm? Interested in checking it out myself :)
Calkins and Clover
Where were these taken exactly? I LOVE sunflowers and would love to plan a trip out to see this field when it's in bloom this year.
Ashley - the farm moves the locations every year but it's always off of Clover past Jefferson.
| 89,136
|
The red planet and the sun will be on exact opposite sides of Earth.
Look up Friday night and you won’t see only a full moon, but also an orange-red orb right next to it. That’s Mars.
Mars is in opposition Friday, which means that the red planet and the sun will be on opposite sides of Earth. It’s a phenomenon that happens about every two years, but this time Mars will be closer to Earth than it has been in 15 years.
As Mars makes its glowing appearance, some parts of the world will also see what astronomers say will be the longest total lunar eclipse of the century. The eclipse, which will last 1 hour and 43 minutes, won’t be visible in North America, according to EarthSky.org.
Mars will be closest to Earth around 1 a.m. July 31, at 35.8 million miles away.
Since Mars will rise to its highest point later at night, the Goldendale Observatory southeast of Yakima is staying open until 2 a.m. July 27 to July 29.
Though an observatory will allow for a more detailed view, Mars is so bright and close that a basic telescope will do.
“Find an amateur with a telescope,” said Guy Worthey, an associate professor of astrophysics at Washington State University. “It does not have to be a big one. The two-inch department store one works pretty well for Mars.”
“I always get a thrill, because you can see features. You can see the dark patterns” on Mars, Worthey said.
| 376,792
|
Resolving Conflicts (Team Foundation Source Control)
Visual Studio 2005
This section contains topics that describe how to solve conflicts that occur during a merge, during check-in, and when you use Get from Team Foundation source control.
In This Section
- Understanding Conflict Types
Explains both check-in and Get conflict types.
- How to: Resolve Conflicts
Describes the procedure used to resolve conflicts.
| 37,269
|
We provide an individual, integrated approach to solving problems
Develop your customer and employee happiness with our intelligent service. Our approachable, anonymous and easy-to-use system helps you listen to your customers and employees in all business environments. It encourages interaction, increases customer and employee engagement, and gives you 20 times more feedback compared to our competitors’ solutions.
Effective feedback, both positive and negative, is very helpful. Feedback is valuable information that will be used to make important decisions.
Report incidents in real time directly to the correct operative and improve your SLAs. The application also has a built-in helpdesk and ticketing system.
New QR code on every template. Users can scan the QR code from any device and enter their feedback on the go, allowing them to complete the survey anytime.
Up to date information on numerous areas within your business. Display floor plans and facilities in 3d on the screen. Help visitors locate facilities within your business.
Live streaming of local traffic news, weather information and much more.
Display today's guest Wi-Fi password or today's restaurant menu.
A poorly designed app can be damaging for your brand because this is the first thing your potential customers see and judge you on. The app is your representative in the market and your direct link to your customers and it should therefore reflect nothing but the best for your company.
| 397,452
|
TITLE: Trace of matrices is equal to zero or one
QUESTION [0 upvotes]: I want to prove that if matrices $E_{i,j}$ in $\mathbb{K}^{m,n}$ are matrices with all zeros except for one 1 on the $i,j$ index, then $\text{tr}((E_{i,j})^TE_{k,l})$ equals $1$ if $E_{i,j}=E_{k,l}$ and 0 in other cases.
I have an intuition that this is true but I don't know how to formally prove it.
REPLY [0 votes]: Given a matrix $A \in {\rm Mat}(n\times m, \Bbb R)$, let $\widetilde{A} \in \Bbb R^{nm}$ be the vector whose entries are the entries of $A$, listed row after row. You can show (using the definitions of trace and matrix multiplication) that ${\rm tr}(A^TB)$ equals the inner product of $\widetilde{A}$ and $\widetilde{B}$ in $\Bbb R^{nm}$. Very obviously the matrices $E_{ij}$ correspond to the standard basis of $\Bbb R^{nm}$, and the standard basis is orthonormal.
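The identity is also easy to confirm by brute force; a small NumPy sketch (the dimensions 3×4 are arbitrary demo values, not from the question):

```python
import numpy as np

m, n = 3, 4  # arbitrary demo dimensions

def E(i, j):
    """Matrix unit: all zeros except a 1 in position (i, j)."""
    A = np.zeros((m, n))
    A[i, j] = 1.0
    return A

# tr(E_ij^T E_kl) = 1 exactly when (i, j) == (k, l), else 0
for i in range(m):
    for j in range(n):
        for k in range(m):
            for l in range(n):
                t = np.trace(E(i, j).T @ E(k, l))
                assert t == (1.0 if (i, j) == (k, l) else 0.0)
print("verified: tr(E_ij^T E_kl) = delta_ik * delta_jl")
```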
| 59,008
|
The main law governing the incorporation of companies in Gibraltar is the Companies Act, first enacted in 1930. The Government amended the old legislation in order to create a more suitable and attractive law for foreign investors coming to Gibraltar. As a result of the amendments brought to the new law, Gibraltar has now become one of the most attractive venues for companies operating in the financial and gaming industries. The Companies Act came into effect on November 1st, 2014.
The new Companies Law was updated to provide for simpler company registration procedures. To accomplish that, the new law has given a new, shortened form to the Memorandum of Association. Foreign investors registering businesses under the Companies Act 2014 of Gibraltar are no longer required to declare the object of activity of the business.
The Articles of Association have also been altered, as a new simplified form has been drafted. The new Articles have a standard form and can be customized. The law also specifies that Gibraltar companies using an old template for these documents are not required to replace them.
The 2014 Companies Act also provides for a new type of company: the micro-entity. This type of company can have an average number of 10 employees, a net turnover of £362,000, and total assets valued at £316,000.
Our company formation agents in Gibraltar can provide you with more information about the Companies Act of 2014.
You can also watch our video on the Gibraltar Companies Act 2014:
The second and most important amendment brought to the Companies Law 2014 refers to auditing and accounting standards. Under the new legislation, the Gibraltar Trade Register will accept electronically filed accounts and balance sheets. The company must obtain an unique identifier in order to gain access to the system.
Private companies in Gibraltar may now file their annual accounts within 12 months from the end of the financial year. The new accounting regulations have been enforced at the beginning of 2016.
If you want to register a company in Gibraltar and need assistance, do not hesitate to contact our local consultants.
| 217,808
|
\section{The equivariant universal cover}\label{distex2}
In \autoref{distex1} we defined the equivariant component space and proved
it was dualizable. In Sections \ref{ge_i_sec} and \ref{gl_i_sec} we used this distributor
to define the geometric and global Lefschetz numbers. We also used
this distributor to compare these invariants.
In this section we define a generalization of the equivariant component space
and prove that it is dualizable. We will use this distributor to define the geometric
and global Reidemeister traces and also to compare these invariants.
If $x(H)$ is an object in the equivariant fundamental category of $X$, let
$\widetilde{X^H_x}$ be the universal cover of $X^H_x$ thought
of as homotopy classes of paths that start at $x(eH)$.
If $\gamma\colon
I\rightarrow X^K$ represents an element of $\widetilde{X^K_y}$
and $(R_a,w)$ is an element of \[\efuncat{G}{X}(x(H),y(K)),\] let
$(R_a,w)\circ \gamma$ be \[(\gamma a)w(eH).\] This is a path in $X^H$ based at $x(eH)$.
This composition defines an action of $\efuncat{G}{X}(x(H),y(K))$ on
$\tilde{X}(y(K))$.
\begin{defn} \cite[I.10.13]{tomDieck}
The \emph{equivariant universal cover} of a $G$-space $X$ is the
functor \[\tilde{X}\colon \efuncat{G}{X}^\op\rightarrow {\sf Top}_*\] defined on objects by
\[\tilde{X}(x(H))=\widetilde{X^H_x}.\] The action of morphisms is defined above.
\end{defn}
This functor is the starting point of our definition of the equivariant geometric Reidemeister
trace. As in \autoref{ge_i_sec}, we will modify this functor before we define
the Reidemeister trace.
Replace the functor $\tilde{X}$ with the functor \[\hat{X}:\efuncat{G}{X}^\op\rightarrow \Top_*\] defined by
\[\hat{X}(x(H))=\widetilde{X^H_x}\cup C
(\widetilde{X^{>H}_x})\] where $\widetilde{X^{>H}_x}=\{\gamma\in \widetilde{X^H_x}|
\gamma(1)\in X^{>H}\}$.
The action of a morphism in $\efuncat{G}{X}$ is that induced by the action on $\tilde{X}$.
\begin{prop}\label{geodual}
If $X$ is a compact $G$-ENR then $\hat{X}$ is dualizable as a right $\efuncat{G}{X}$-module.
\end{prop}
We first prove a preliminary lemma.
\begin{lem}\label{geodualparts} For each object $x(H)$ in $\efuncat{G}{X}$, $\hat{X}(x(H))$ is dualizable as a
module over $\efuncat{G}{X}(x(H),x(H))$.
\end{lem}
\begin{proof} The group $\efuncat{G}{X}(x(H),x(H))$ is a discrete group
and $\hat{X}(x(H))$ is cocompact. Then \autoref{ranicki_duality} implies
$\hat{X}(x(H))$ is Ranicki dualizable.
\end{proof}
\begin{proof}[Proof of \autoref{geodual}]
The dual spaces, $D\hat{X}(x(H))$, of $\hat{X}(x(H))$ from
\autoref{geodualparts}
assemble to define a functor \[D\hat{X}:\efuncat{G}{X}
\rightarrow \Top_*.\]
The action of morphisms is induced by the action of each of the
$D\hat{X}(x(H))$.
The evaluation for the dual pair $(\hat{X},D\hat{X})$
is a natural transformation
\[D\hat{X}(y(K))\wedge \hat{X}(x(H))\rightarrow S^n\wedge \efuncat{G}{X}(y(K),x(H))_+.\]
If $K$ is not subconjugate to $H$, $\efuncat{G}{X}(x(H),y(K))$ is empty and there is nothing to
define. If $K$ is subconjugate to $H$ but not conjugate to $H$, naturality implies
this map is the constant map. It only remains to consider the case where $H$ and $K$ are conjugate.
In that case we can use the evaluation defined above.
The coevaluation is a map \[S^n\rightarrow B(\hat{X},\efuncat{G}{X},D\hat{X}).\]
The space \[\vee_{\ob\efuncat{G}{X}}B(\hat{X}(x(H)),\efuncat{G}{X}(x(H),x(H)),D\hat{X}(x(H)))\]
is a subspace of $B(\hat{X},\efuncat{G}{X},D\hat{X})$. The coevaluation is the composite
of the pinch map $S^n\rightarrow \vee S^n$, the coevaluation maps constructed in
\autoref{geodualparts}
and this inclusion.
Since many of the components of the evaluation map are the constant map, checking
that the required diagrams commute reduces to checking that each pair
$(\hat{X}(x(H)),D\hat{X}(x(H)))$ is a dual pair. This was checked in \autoref{geodualparts}.
\end{proof}
\begin{rmk} For this proposition $G$ does not have to be finite. It is enough that
$X^H/WH$ is a compact ENR for each subgroup $H$ of $G$ as in \autoref{xgdual}.
\end{rmk}
| 108,673
|
Berenice
Round Purple/Silver Eyeglasses
Size: M 52 - 20 - 147 Size guide
Promotion: First Pair Free
Good Housekeeping magazine rates GlassesShop.com Best Eyewear Website for Budget Buys.
Details
- SKU: FT0309
- Frame: Full
- Shape: Round
- Weight: 7.8 g (0.28 oz), lightweight frame
- Gender: Women
- Material: Titanium
- Bifocal: Yes
- Progressive: Yes
Description
GlassesShop Berenice Round eyeglasses are made of super-light, anti-allergic titanium. Three color options: Black, Purple/Silver and Rose Gold. Featuring adjustable nose pads and slender temples, they weigh only 7.8 g. They are a good choice for most people. Computer eyeglasses and reading eyeglasses are available.
Reviews (2)
Overall Rating: 5/5
June 11, 2020
- Great looking frames
- Color: Purple/Silver
- Size: M
I love these lightweight frames. They are so understated and they feel so comfortable on my face. I never knew until now how comfortable titanium frames are. I love the color which is a light purple with a silver accent. I recommend these frames highly.
January 25, 2020
- Alba Marie
- Color: Black
- Size: M
These glasses are my favorites so far; I use them every day, all day, because comfort, comfort, comfort.
| 101,034
|
\begin{document}
\title[Groups, polynomials, matrices, and
automorphisms]{Walks on groups, counting reducible matrices, polynomials, and surface and free group automorphisms}
\author{Igor Rivin}
\address{Department of Mathematics, Temple University, Philadelphia}
\email{rivin@math.temple.edu}
\thanks{I would like to thank Ilya Kapovich for asking the
questions on the irreducibility of automorphisms of surfaces and
free groups, and for suggesting that these questions may be
fruitfully attacked by studying the action on homology.
I would also like to thank Nick Katz for making me aware of Nick
Chavdarov's work, Sinai Robins for making me aware of Morris Newman's
classic book, and Akshay Venkatesh, Peter Sarnak, and Alex Lubotzky for enlightening
conversations. The author would also like to thank Benson Farb, Ilya Kapovich, and Lee Mosher for
comments on previous versions of this note.}
\curraddr{Mathematics Department, Stanford University, Stanford, California}
\email{rivin@math.stanford.edu}
\date{\today}
\keywords{irreducible, reducible, matrices, polynomials, surfaces,
automorphisms}
\begin{abstract}
We prove sharp limit theorems on random walks on graphs with values in
finite groups. We then apply these results (together with some
elementary algebraic geometry, number theory, and representation
theory) to finite
quotients of lattices in semisimple Lie groups (specifically
$\SL(n,\integers)$ and $\Sp(2n, \integers)$) to show that a ``random''
element in one of these lattices has irreducible characteristic
polynomial (over $\integers$). The term ``random'' can be defined in
at least two ways (in terms of height and also in terms of word length
in terms of a generating set) -- we show the result using both
definitions.
We use the above results to show that a random (in terms of word
length) element of the mapping class group of a surface is
pseudo-Anosov, and that a random free group automorphism is
irreducible with irreducible powers (or \emph{strongly irreducible}
\footnote{terminology due to Lee Mosher}).
\end{abstract}
\maketitle
\section*{Introduction}
This paper was inspired by the following question, first brought to
the author's attention by Ilya Kapovich:
\begin{quotation}
Is it true that a random element of the mapping class group of a
surface is pseudo-Anosov?
\end{quotation}
The definition of \emph{random} in the question above is not
explicitly given, but a reasonable way to define it is to fix a
generating set of the mapping class group, and look at all the words
of bounded length.\footnote{Another way is to look at a combinatorial
ball of radius $N$ around identity; it turns out that our results,
together with the results announced recently by Ursula Hammenstadt
answer both questions.}
Kapovich had suggested that a reasonable way to attack this question
was to study the action of the mapping class group $\mathcal{M}_g$ on
homology, which gives a symplectic representation of $\mathcal{M}_g.$
Results of Casson and Bleiler (see \cite{CassonBleiler}) then give a
set of sufficient conditions for a surface automorphism to be
pseudo-Anosov in terms of its image under that representation.
This paper, then, is the embodiment of the program as described
above. Along the way, we show a number of results not explicitly
related to low dimensional topology or geometric group theory.
Here is a summary of the results:
In my old preprint \cite{walks} I state, and sketch the proof of a
general equidistribution theorem for products of elements of finite
(and, more generally, compact) groups along long paths in finite
graphs. In the current paper I give complete arguments. In the
follow-up paper \cite{rirred2} much sharper results are given
(with speed of convergence bounds). In particular, the
convergence bounds are \emph{uniform} for families of finite quotients
of a group satisfying Lubotzky's property $\tau$ (or the stronger
Kazhdan's property $T$).
We then show that the set of polynomials reducible
over $\integers$ with constant coefficient $1$ is an algebraic
subvariety of the set of polynomials, and using elementary theory of
algebraic groups show that the set of matrices in $\SL(n, R)$ (where
$R$ is the coefficient ring) with reducible characteristic polynomials
is a finite union of Zariski-closed sets, so in particular, when
$R=\integers_p,$ the proportion of matrices in $\SL(n, R)$ with
reducible characteristic polynomials decreases as $O(1/p),$ with
constants independent of $p$ (Section \ref{matrices}). Using these
methods together with Weil's estimate for the number of points on a
curve over $\mathbb{F}_p$ we show that the probability that the
characteristic polynomial of a matrix in $\SL(n, \integers_p)$ has the
full symmetric group $S_n$ as Galois group is bounded (uniformly)
away from $0.$ Along the way, we show that the probability that a
polynomial with fixed constant term $1$ has a prescribed splitting
type $T$ modulo $p$ is the same as the probability that a
\emph{random} polynomial has splitting type $T$ (for large
$p$). (Section \ref{galoisrestricted}).
We quote results of Borel (proofs can be found in Nick Chavdarov's
thesis \cite{chavdarov}) which show that the probability that a matrix
in
$\Sp(2n, \integers_p)$ has reducible characteristic polynomial is
bounded away from $1$ (uniformly in $p$). (Section \ref{matrices2})
Combining these results with the results on walks on graphs,
together with elementary local-to-global estimates, we show that for a given
\emph{undirected} graph $G$ whose vertices are decorated with a
symmetric generating set of $\Gamma$ (where $\Gamma$ is either $\SL(n,
\integers)$ or $\Sp(2n, \integers)$), the probability that the product
of generators along a random long walk of length $N$ gives a matrix
with reducible characteristic polynomial (or, in the case of $\SL(n,
\integers),$ a polynomial with non-generic Galois group) goes to $0$
as $N$ goes to infinity.
From a number-theoretic standpoint, using the length of the
representing word in generators is a less natural way to define the
size of an integral matrix than a(ny) matrix norm. It turns out that
with that definition of size the result follows by combining our
results on finite matrix groups with the work of \cite{DRS}, and
effective bounds can be obtained using
the uniformity results of \cite{NevoSarnak} (see Theorem \ref{slnzgro}).
Finally, we apply the above-mentioned results to automorphisms of
surfaces (Section \ref{mcg}) and free group automorphisms (Section
\ref{free}). For surface automorphisms we use the observations of
Casson and Bleiler (\cite{CassonBleiler}) to show that for an
arbitrary generating set, most products of up to $N$ generators are
pseudo-Anosov, and if the generating set happens to be symmetric, then
the fraction of the non-pseudo-Anosov products goes to zero
exponentially fast with $N.$ It can be argued that it is more natural
to consider all elements in a combinatorial ball of radius $N.$ Since
our limit theorems for graphs are for undirected graphs, such a result
will follow (for certain generating sets) once we know that the
mapping class groups are bi-automatic.
Similarly, it is shown that most words in a generating set of the
outer automorphism group of a free group $F_n$ are irreducible with
irreducible powers, or what Lee Mosher calls strongly irreducible.
\section{Generalities on algebraic geometry and algebraic groups}
\label{closure}
First, recall the following:
\begin{definition}[{\cite[p.~23]{geck2004}}]
Let $V\subseteq k^n,$ and $W \subseteq k^m$ be non-empty algebraic
sets. We say that $\phi: V \rightarrow W$ is a \emph{regular} map, if
there exist $f_1, \dotsc, f_m \in k[X_1, \dotsc, X_n],$ (where $X_1,
\dotsc, X_n$ are indeterminates) such that
\[
\phi(x) = (f_1(x), \dotsc, f_m(x))
\]
for all $x \in V.$
\end{definition}
\begin{example}
\label{firsteg}
Let $V = \SL(n, k) \subset k^{n^2},$ and let $W = k^n.$
Then the map associating to each matrix the coefficients of its characteristic polynomial
is a regular map.
\end{example}
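A quick numerical illustration of Example \ref{firsteg} (a sketch; the matrix below is an arbitrary element of $\SL(2,\mathbb{R})$, not from the text): for a $2\times 2$ matrix, the characteristic-polynomial coefficients are $[1, -\operatorname{tr}(A), \det(A)]$, visibly polynomial in the entries, which is what makes the map regular.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, so A lies in SL(2, R)
coeffs = np.poly(A)          # characteristic polynomial coefficients of A
# Expected: [1, -tr(A), det(A)] = [1, -3, 1], polynomial in the entries of A
print(coeffs)
```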
Consider the following setup: we have a parametrized set $S$ in $k^n$
(for $k$ an algebraically closed field), that is:
\begin{gather*}
x_1 = f_1(s_1, \dotsc, s_m),\\
x_2 = f_2(s_1, \dotsc, s_m),\\
\vdots\\
x_n = f_n(s_1, \dotsc, s_m),
\end{gather*}
where $f_1, \dotsc, f_n$ are polynomials in $s_1, \dotsc, s_m.$
By the ``Implicitization Theorem'' \cite[Chapter 3]{CLO_IVA}, the
Zariski closure of $S$ is an affine variety, whose dimension is
bounded above by $m$ (by considering the dimension of the tangent
space, see \cite[Theorem 1.4.11]{geck2004}).
\subsection{Fibers of dominant morphisms}
\label{domfib}
The following results are standard; the statement is taken from
\cite{geck2004}[pp.~116,118]. First, let $A[X]$ be the algebra of
regular functions on $X,$ and second, for $g \in A[X],$ let $X_f =
\{x\in X \vert f(x) \neq 0\}.$ Now
\begin{theorem}
\label{geck1}
Let $\phi: X \rightarrow Y$ be a dominant morphism of irreducible
affine varieties. In particular, $\phi^*:A[Y] \rightarrow A[X]$ is
injective, and $d = \dim X - \dim Y \geq 0.$ Then we have a
factorization
\[
\begin{CD}
\phi\vert_{X_{\phi^*(g)}} : X_{\phi^*(g)}
@>{\overline{\phi}}>> Y_g \times k^d @>{p_1}>> Y_g,
\end{CD}
\]
where $\overline{\phi}$ is a finite dominant morphism, and $p_1$ is
the projection on the first factor.
\end{theorem}
\begin{corollary}
\label{borel}
Let $\alpha: X \rightarrow Y$ be a dominant morphism of irreducible
varieties, and put $r = \dim X - \dim Y.$ Then, there exists a
non-empty open set $U \subseteq Y,$ such that $U \subseteq \phi(X),$
and such that $\dim \phi^{-1}(y) = r$ for all $y \in U.$
\end{corollary}
\begin{remark}
\label{geckrem}
$U$ can be taken to be of the form $Y_f,$ for some $f \in A[Y].$
\end{remark}
A \emph{dominant morphism $\alpha$} is a morphism such that the
preimage of any open dense set in the image variety is dense. In
particular, a surjective regular map is dominant.
\begin{example}
\label{secondeg}
Consider $V\subset k^{n^2},$ where $V = \SL(n, k).$ The map $\chi,$
which associates to each matrix the coefficients of its characteristic
polynomial is a dominant morphism onto $k^{n-1}$ (by the companion matrix
construction, it is surjective). By the results (and notation) above,
there is a $g \in A[V],$ such that the fibers of $\chi$ restricted to
$V_g$ are $n^2-n$-dimensional varieties.
Similarly, let $\Sp(2n, k) = W \subset k^{4n^2},$ and let
$\chi^\prime$ be the map which associates to each matrix $m$ the
coefficients of $x, \dotsc, x^n$ of the characteristic polynomial of
$m.$
By Theorem \ref{kirby}, $\chi^\prime$ is a dominant morphism, and by
Fact \ref{dickfact} and the results of this section, there is an $h
\in A[W],$ such that the fibers of $\chi^\prime$ restricted to $W_h$
are $2n^2$-dimensional varieties.
\end{example}
\section{Counting points on varieties}
\label{counting}
Let $S$ be a variety of dimension $m$ over $\mathbb{C}$ (or
$\overline{\mathbb{Q}}$). Consider a reduction of $S$
modulo $p.$
\begin{theorem}[Lang-Weil, \cite{langweil}]
\label{langweil}
The number of $\mathbb{F}_p$ points on $S$ grows as $O(p^m).$ The implied
constant is uniform (that is, it is a function of the dimension and
codimension of the variety \emph{only}).
\end{theorem}
It should be noted that we are using this theorem for the upper bound
only, and the upper bound is an easy result (see \cite[Lemma
4.1.3]{geck2004}), unlike the full Lang-Weil result which gives both
upper and lower bounds. The Lang--Weil theorem is deduced from
A.~Weil's estimate on the number of
$\mathbb{F}_p$ points on a curve defined over $\mathbb{F}_p:$
\begin{theorem}[A. Weil,\cite{weilcourbes}]
\label{weilcourbesthm}
Let $f \in \mathbb{F}_p[X, Y]$ be an absolutely irreducible (that is,
irreducible in $\overline{\mathbb{F}}_p[X, Y]$) polynomial of degree $d$. Then if
\[\mathcal{C} = \{(x, y) \in \mathbb{F}_p^2 \mid f(x, y) = 0\},\]
we have the estimate
\[\bigl||\mathcal{C}| - p\bigr| \leq 2 g \sqrt{p} + d^2,\] where $g$ is the genus of the
curve defined by $f$ (which satisfies $g \leq (d-1)(d-2)/2$). This estimate is sharp.
\end{theorem}
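As a numerical illustration (not part of the proof), the estimate can be checked directly for a concrete curve. The curve $y^2 = x^3 + x + 1$ is our own choice of example: its plane model has degree $d = 3$ and genus at most $(d-1)(d-2)/2 = 1$, so the bound reads $2\sqrt{p} + 9$.

```python
# Empirical check of Weil's bound ||C| - p| <= 2*g*sqrt(p) + d^2 for the
# affine curve y^2 = x^3 + x + 1 over F_p (degree d = 3, genus g <= 1).
# The curve is a hypothetical example chosen for this sketch.

def affine_points(p):
    """Count solutions (x, y) in F_p^2 of y^2 = x^3 + x + 1."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    return sum(squares.get((x**3 + x + 1) % p, 0) for x in range(p))

checks = []
for p in [101, 211, 307, 401]:
    n = affine_points(p)
    bound = 2 * 1 * p**0.5 + 3**2   # 2*g*sqrt(p) + d^2 with g = 1, d = 3
    checks.append(abs(n - p) <= bound)
```

For each of these primes the deviation $|\,|\mathcal{C}| - p\,|$ indeed stays within the bound.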
Theorem \ref{langweil}, together with the results of Section
\ref{domfib} imply:
\begin{theorem}
\label{domcount}
With the situation as described in Corollary \ref{borel}, if the
ground field is $\ffp{p},$ and $S \subseteq Y,$ then $|\alpha^{-1}(S)| \leq
c_1 |S| p^r + c_2 p^{\dim X - 1},$ where the constants $c_1, c_2$
depend only on the dimensions of $X, Y.$
\end{theorem}
\section{Classical groups}
\label{classgroups}
In this paper we will be primarily concerned with the special linear
group and the symplectic group over various domains. We will need the
following facts:
\begin{fact}[See \cite{borelgroups,geck2004}]
\label{irredgp}
The groups $\GL(n, k), \SL(n, k)$ and $\Sp(n, k)$ over an algebraically closed
field $k$ are \emph{irreducible} as algebraic varieties. This is
equivalent to saying that the groups in question are connected.
\end{fact}
\begin{fact}
\label{dimgp}
The dimension of $\SL(n, k)$ equals $n^2-1.$ The dimension of $\Sp(2n,
k)$ equals $2n^2 + n.$ In both cases, dimension is meant in the sense
of algebraic geometry.
\end{fact}
Let $\ffp{p} = \mathbb{Z}/p \mathbb{Z}.$
The following goes back to Dickson:
\begin{fact}
\label{dickfact}
The order of $\SL(n, \ffp{p})$ equals
\[
p^{(n^2-n)/2} (p^2-1)(p^3-1) \dots (p^n - 1) = p^{n^2-1} +
O(p^{n^2-3}).
\]
The order of $\GL(n, \ffp{p})$ equals
\[
p^{(n^2-n)/2} (p-1)(p^2-1)(p^3-1) \dots (p^n - 1) = p^{n^2} +
O(p^{n^2-1}).
\]
The order of $\Sp(2n, \ffp{p})$ equals
\[
p^{n^2}(p^2-1)(p^4 - 1) \dots (p^{2n} - 1) = p^{2n^2+n} + O(p^{2n^2 +
n - 2}).
\]
\end{fact}
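The formulas in Fact \ref{dickfact} are easy to confirm by brute force for small parameters; the following sketch (with $n = 2$ and $p = 5$, our own choice of test case) is purely a sanity check of the $\SL$ and $\GL$ counts.

```python
# Sanity check of Dickson's order formulas for n = 2:
#   |GL(2, F_p)| = p (p - 1)(p^2 - 1),   |SL(2, F_p)| = p (p^2 - 1).

def gl2_order_formula(p):
    return p * (p - 1) * (p**2 - 1)

def sl2_order_formula(p):
    return p * (p**2 - 1)

def brute_count(p):
    """Count invertible and determinant-1 matrices by direct enumeration."""
    gl = sl = 0
    for a in range(p):
        for b in range(p):
            for c in range(p):
                for d in range(p):
                    det = (a * d - b * c) % p
                    if det != 0:
                        gl += 1
                    if det == 1:
                        sl += 1
    return gl, sl

p = 5
gl, sl = brute_count(p)
```

Here the brute-force counts agree with the formulas: $480$ matrices in $\GL(2, \ffp{5})$ and $120$ in $\SL(2, \ffp{5})$.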
\section{Applications to polynomials}
\label{poly}
Let $\mathcal{P}_d(k, \beta)$ be the set of all monic polynomials in one
variable of
degree $d$ over a field
$k$ with constant coefficient $\beta,$ and let $\mathcal{P}_d^m(k,
\beta, \alpha)\subseteq \mathcal{P}_d(k, \beta)$ be the
set of those polynomials which have a polynomial factor (over
$k$) of degree $m$ with constant term $\alpha.$
Let us identify $\mathcal{P}_d(k, \beta)$ with the affine space $k^{d-1}.$
Then, we have the following:
\begin{theorem}
\label{hyper1}
The set $\mathcal{P}_d^m(k, \beta, \alpha)$ is contained in an affine hypersurface of
$\mathcal{P}_{d}(k, \beta).$
\end{theorem}
\begin{proof}
Let \[p(x) = x^{d} + \sum_{i=0}^{d-1}a_{i} x^{i} \in
\mathcal{P}_d(k, \beta).\] By assumption, $p(x) = q(x) r(x).$ Assume that the
degree of $q(x)$ is $m,$ while the constant term of $q(x)$ equals
$\alpha.$ Writing \[q(x) = x^{m} + \sum_{j=1}^{m-1} b_{j}x^{j} + \alpha,\]
and \[r(x) = x^{d-m} + \sum_{k=1}^{d-m-1}
c_{k}x^{k} + \beta/\alpha,\] we see that $\mathcal{P}_d^m(k, \beta,\alpha)$ is a polynomially
parametrized hypersurface of $\mathcal{P}_d(k, \beta).$
\end{proof}
\begin{corollary}
\label{allreds}
The set of all reducible monic polynomials with fixed nonzero constant term
$\beta$ is the union
\[
\bigcup_{m=1}^{d-1}\,\bigcup_{\alpha \in k^{*}} \mathcal{P}_d^m(k, \beta, \alpha)
\]
of affine hypersurfaces of $\mathcal{P}_d(k, \beta) \simeq k^{d-1}.$
\end{corollary}
\begin{theorem}
\label{constone2}
Let $P_1(d)(\ffp{p})$ be the set of monic polynomials of degree $d$ over
$\ffp{p}$ with constant coefficient $1,$ and let
$R_1(d)(\ffp{p})$ be the set of those polynomials which are reducible over $\ffp{p}$ with
some factor having constant term equal to $1.$ Then $R_1(d)(\ffp{p})$ lies
on an algebraic
hypersurface in $\ffp{p}^{d-1}$ (where the coordinates are the
coefficients), and consequently
\[
\dfrac{|R_1(d)(\ffp{p})|}{|P_1(d)(\ffp{p})|} = O\left(\dfrac{1}{p}\right).
\]
\end{theorem}
\begin{proof}
Immediate from Theorem \ref{langweil}.
\end{proof}
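For $d = 3$ the statement can be verified exhaustively. The sketch below (restricted, as in the proof of Theorem \ref{hyper1}, to monic factors) enumerates all monic cubics over $\ffp{p}$ with constant coefficient $1$ that admit a monic proper factor with constant term $1$; the count comes out to exactly $p$ of the $p^2$ candidates, a fraction of $1/p$, consistent with the bound.

```python
# Exhaustive check of the O(1/p) bound for d = 3: monic cubics over F_p
# with constant term 1 having a monic proper factor with constant term 1.

def polymul(f, g, p):
    """Multiply coefficient tuples (low degree first) over F_p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return tuple(h)

def count_bad(p):
    bad = set()
    # degree-1 factor with constant term 1: (1 + x) times a monic quadratic
    for c0 in range(p):
        for c1 in range(p):
            bad.add(polymul((1, 1), (c0, c1, 1), p))
    # degree-2 factor with constant term 1: (1 + c x + x^2) times a monic linear
    for c in range(p):
        for t in range(p):
            bad.add(polymul((1, c, 1), (t, 1), p))
    # keep only products whose constant coefficient is 1
    return len({f for f in bad if f[0] == 1})

counts = {p: count_bad(p) for p in (11, 31)}
```

The count equals $p$ for each prime tested, so the "bad" fraction is exactly $1/p$ in this degree.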
Finally, one definition:
\begin{definition}
We say that a polynomial $p(x) \in P_1(d)$ is \emph{reciprocal} if
$x^d p(1/x) = p(x)$ -- in other words, the list of coefficients of $p$
is the same read from left to right as from right to left. Reciprocal
polynomials can also be defined as follows:
A (monic) polynomial (of even degree $2n$) is reciprocal if it can be
written as
\[
\prod_{j=1}^n (x-r_j)(x-1/r_j) = \prod_{j=1}^n (x^2 - (r_j + r_j^{-1})
x + 1).
\]
\end{definition}
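The equivalence of the two descriptions can be spot-checked by expanding the product form for arbitrarily chosen values $s_j = r_j + r_j^{-1}$ (the particular values below are our own) and verifying that the coefficient list is palindromic.

```python
# Expanding prod_j (x^2 - s_j x + 1) yields a palindromic (reciprocal)
# coefficient list, since each quadratic factor is itself palindromic.

def expand(pairs):
    """Multiply the quadratics x^2 - s*x + 1 for s in pairs; low degree first."""
    poly = [1]
    for s in pairs:
        quad = [1, -s, 1]
        out = [0] * (len(poly) + 2)
        for i, a in enumerate(poly):
            for j, b in enumerate(quad):
                out[i + j] += a * b
        poly = out
    return poly

coeffs = expand([3, -2, 5])          # degree 6, with s_j = r_j + 1/r_j
is_reciprocal = coeffs == coeffs[::-1]
```

The degree-$6$ expansion is monic with constant term $1$ and reads the same in both directions.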
\section{Applications to matrices}
\label{matrices}
\subsection{The special linear group.}
\label{slnsec}
\begin{lemma}
\label{slnpprob}
The probability that the characteristic polynomial of a matrix $M$ in
$\SL(n, \ffp{p})$ has a factor with constant term $1$ is of order $O(1/p).$
\end{lemma}
\begin{proof}
This follows immediately from Example \ref{secondeg} and Theorems
\ref{constone2} and \ref{domcount}
\end{proof}
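For $n = 2$ the lemma can be checked exhaustively: the characteristic polynomial of $M \in \SL(2, \ffp{p})$ is $x^2 - \tr(M)\,x + 1$, and a monic proper factor with constant term $1$ must be $x + 1$, which forces $\tr(M) = -2$. The sketch below (with $p = 7$, our choice of test prime) finds exactly $p^2 = 49$ such matrices out of $|\SL(2, \ffp{7})| = 336$, a fraction of order $1/p$.

```python
# Exhaustive check for n = 2, p = 7: a monic factor of x^2 - t x + 1 with
# constant term 1 is (x + 1), i.e. 1 + t + 1 = 0 in F_p.

p = 7
total = bad = 0
for a in range(p):
    for b in range(p):
        for c in range(p):
            for d in range(p):
                if (a * d - b * c) % p != 1:
                    continue          # not in SL(2, F_p)
                total += 1
                t = (a + d) % p
                if (2 + t) % p == 0:  # char poly divisible by (x + 1)
                    bad += 1
fraction = bad / total
```

The resulting fraction $49/336 \approx 0.146$ is below $2/p$, in line with the $O(1/p)$ claim.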
Unfortunately, since the number of integral points on $\SL(n,
\mathbb{Z})$ of height (absolute value) bounded by $B$ grows much slower than
$B^{n^2-1}$ the above results \emph{do not} imply the following
\begin{theorem}
\label{slnzgro}
The probability that a matrix in $\SL(n, \mathbb{Z})$ with coefficients
bounded by $B$ has reducible characteristic polynomial goes to $0$ as
$B$ goes to infinity.
\end{theorem}
\begin{proof}
For the ``soft'' result as above we use results of Duke, Rudnick and
Sarnak (\cite{DRS}) or of Eskin and McMullen (\cite{EskinMac}). These
imply that the matrices in $\SL(n, \mathbb{Z})$ with coefficients
bounded by $B$ are asymptotically equidistributed among the cosets of
any subgroup of finite index, in particular the cosets of the principal
congruence subgroup $\Gamma_p(n, \mathbb{Z}).$ Since reducibility
modulo $p$ depends only on the coset modulo $\Gamma_p,$ Lemma
\ref{slnpprob} immediately implies the result.
\end{proof}
\begin{remark}
To get an effective result, we need to know that the error terms in
equidistribution modulo $\Gamma_p$ are uniform (do not depend on
$p$). Precisely such a result is shown in the preprint of Amos Nevo
and Peter Sarnak \cite{NevoSarnak}.
\end{remark}
\section{Random products of matrices in the symplectic and special
linear groups}
\label{matrices2}
In the preceding section we defined the size of a matrix by (in
essence) its $L^1$ norm (any other Banach norm will give the same
results). However, it is sometimes more natural to measure size
differently: in particular, if we have a generating set $\gamma_1,
\dots, \gamma_l$ of our lattice $\Gamma$ (which might be $\SL(n,
\mathbb{Z})$ or $\Sp(2 n, \mathbb{Z})$) we might want to measure the
size of an element by the length of the (shortest) word in $\gamma_i$
equal to that element -- this is the combinatorial measure of
size. The relationship between the size of elements and combinatorial
length is not at all clear, so the results in this section are proved
quite differently from the results in the preceding section.
We will be using Theorems \ref{mythm1} and \ref{mythm2}.
\begin{remark}
We will be applying Theorem \ref{mythm1} to groups
$\SL(n, \mathbb{Z}/p \mathbb{Z})$ and $\Sp(2n, \mathbb{Z}/p
\mathbb{Z}).$ Since those groups have no non-trivial one-dimensional
representations, the assumption on $\rho$ in the statement of the
theorem is vacuous.
\end{remark}
We will also need the following results of Nick Chavdarov and Armand Borel.
\begin{theorem}[N.~Chavdarov, A.~Borel \cite{chavdarov}]
\label{chavthm}
Let $q > 4$, and let $R_q(n)$ be the set of $2n \times 2n$ symplectic matrices over the field $F_q$ with
\emph{reducible} characteristic polynomials. Then
\[
\dfrac{|R_q(n)|}{|\Sp(2n, F_q)|} < 1- \frac{1}{3 n}.
\]
\]
\end{theorem}
\begin{theorem}[N.~Chavdarov, A.~Borel \cite{chavdarov}]
\label{chavthm2}
Let $q > 4$, and let $G_q(n)$ be the set of $n \times n$ matrices with
determinant $\gamma \neq 0$ over the field $F_q$ with
\emph{reducible} characteristic polynomials. Then
\[
\dfrac{|G_q(n)|}{|\SL(n, F_q)|} < 1- \frac{1}{2 n}.
\]
\end{theorem}
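For $n = 2$ (where $\Sp(2, F_q) = \SL(2, F_q)$) and $q = 7$, the bound of Theorem \ref{chavthm2} can be confirmed by brute force: the characteristic polynomial $x^2 - \tr(M)\,x + 1$ is reducible exactly when it has a root in $F_q$, and the reducible fraction comes out to $210/336 = 0.625$, safely below $1 - 1/(2n) = 3/4$.

```python
# Brute-force check for n = 2, q = 7: fraction of M in SL(2, F_7) whose
# characteristic polynomial x^2 - tr(M) x + 1 has a root in F_7.

q = 7
total = reducible = 0
for a in range(q):
    for b in range(q):
        for c in range(q):
            for d in range(q):
                if (a * d - b * c) % q != 1:
                    continue          # not in SL(2, F_q)
                total += 1
                t = (a + d) % q
                if any((x * x - t * x + 1) % q == 0 for x in range(q)):
                    reducible += 1
fraction = reducible / total
```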
Theorem \ref{chavthm2} follows easily from the following result of A.~Borel:
\begin{theorem}[A.~Borel]
\label{borthm2}
Let $F$ be a monic polynomial of degree $N$ over $\mathbb{Z}/p \mathbb{Z}$
with nonzero constant term. Then, the number $\#(F, p)$ of matrices in $\GL(N,
p)$ with characteristic polynomial equal to $F$ satisfies
\[
(p-3)^{N^2-N} \leq \#(F, p) \leq (p+3)^{N^2-N}.
\]
\end{theorem}
Theorem \ref{borthm2} will be used in Section \ref{strong}.
We now have our results:
\begin{theorem}
\label{randprodthm}
Let $G$ and $S_N$ be as in the statement of Theorem
\ref{mythm1}, but with $\Gamma = \Sp(2n, \mathbb{Z})$ or $\Gamma = \SL(n,
\mathbb{Z}).$
Then the
probability that a matrix in $S_N$ has a reducible characteristic
polynomial goes to $0$ as $N$ tends to infinity.
\end{theorem}
\begin{proof} Let $\Gamma_l$ be the set of matrices in $\Gamma$ reduced
modulo $l$ -- it is known (see \cite{newmanmats}) that $\Gamma_l$ is $\SL(n,
l)$ or $\Sp(2n, l)$ (depending on which $\Gamma$ we took). Let $p_1, \dotsc,
p_k$ be distinct primes, and let $K = p_1
\dots p_k.$
We know that:
\[\Gamma_K = \Gamma_{p_1} \times \dots
\times \Gamma_{p_k}\] (see \cite{newmanmats} for the proof of the last
equality). A generating set of $\Sp(2n, \mathbb{Z})$ projects via
reduction modulo $K$ to a generating set of $\Gamma_K$ (see, again,
Newman's book \cite{newmanmats}), and also, via reduction mod $p_i,$ to
generating sets of the $\Sp(2n, p_i).$ By Theorems
\ref{mythm1}, \ref{mythm2} and
\ref{chavthm}, the probability that the characteristic polynomial of a
random product of $N \gg 1$ generators is reducible modulo all of the
$p_i$ is at most equal to $(1-\frac{1}{3n})^k.$ Since this is an upper bound on
the probability of being reducible over $\mathbb{Z},$ the result
follows.
\end{proof}
\begin{remark}
Using Lemma \ref{slnpprob} instead of Theorem \ref{chavthm2} for
$\SL(n, \mathbb{Z})$ gives a sharper result, as well as a more elementary argument.
\end{remark}
\begin{remark}
The proof of Theorem \ref{slnzgro} translates \emph{mutatis mutandis}
into this setting -- instead of the principal congruence subgroup
$\Gamma_p$ for a single prime $p,$ we use it for $q = p_1 \times \dots
\times p_k.$
\end{remark}
An example of a graph $G$ is the graph where every vertex is connected
to every vertex (\emph{including itself}), and the set of labels is a
\emph{symmetric} generating set.
In this case, we are just taking
random products of generators or their inverses. Another is the graph
(studied in \cite{walks}) where a vertex labelled by a generator $a$
is connected to every vertex \emph{except} the one labelled by $a^{-1},$
so that only reduced words in the generators are allowed, and so on.
\section{Stronger irreducibility}
\label{strong}
We might ask if something stronger than irreducibility of the
characteristic polynomial can be shown. The answer is in the
affirmative. Indeed, the methods of the preceding sections combined with the
results of the Appendix give immediately:
\begin{theorem}
\label{galthm}
The probability that a random word of length $L$ in a generating set of $\SL(N,
\mathbb{Z})$ has characteristic polynomial with Galois group $S_N$
goes to $1$ as $L$ goes to infinity.
\end{theorem}
Aside from its intrinsic interest, Theorem \ref{galthm} implies the
following:
\begin{theorem}
\label{iwip}
The probability that a random word $w$ of length $L$ in a generating set
of $\SL(n, \mathbb{Z})$ and all proper powers $w^k$ have irreducible
characteristic polynomials goes to $1$ as $L$ goes to infinity.
\end{theorem}
Theorem \ref{iwip} will follow easily from Theorem \ref{galthm}
together with the following Lemma\footnote{Compare with
\cite[Lemma 5.3]{chavdarov}}:
\begin{lemma}
\label{iwip2}
Let $M \in \SL(n, \mathbb{Z})$ be such that the characteristic
polynomial of $M^k$ is \emph{reducible} for some $k.$ Then the Galois group
of the characteristic polynomial of $M$ is \emph{imprimitive}, or the characteristic polynomial of $M$
is cyclotomic.
\end{lemma}
\begin{remark} For the definition of \emph{imprimitive} see, for
example, \cite{wielandt,mhall}.
\end{remark}
\begin{proof}
Assume that the characteristic polynomial $\chi(M)$ is irreducible
(otherwise the conclusion of the Lemma obviously holds, since the
Galois group of $\chi(M)$ is not even transitive). Let the roots of
$\chi(M)$ (in the algebraic closure of $\mathbb{Q}$) be $\alpha_1, \dotsc, \alpha_n.$ The roots of
$\chi(M^k)$ are $\beta_1, \dotsc, \beta_n,$ where $\beta_j = \alpha_j^k.$ Suppose that
$\chi(M^k)$ is reducible, and so there is a factor of $\chi(M^k)$
whose roots are $\beta_1, \dots, \beta_l,$ for some $l < n.$ Since
$\Gal(\chi(M))$ acts transitively on $\alpha_1, \dotsc, \alpha_n,$ it
must be true that for every $i \in \{1, \dots, n\},$ $\alpha_i^k =
\beta_j,$ for some $j \in \{1, \dotsc, l\}.$ Let $B_j$ be those $i$
for which $\alpha_i^k = \beta_j.$ This defines a partition of $\{1,
\dotsc, n\}$ into blocks, which is stabilized by the Galois group of
$\chi(M),$ and so $\Gal(\chi(M))$ is an imprimitive subgroup of $S_n,$
\emph{unless} $l = 1.$ In that case, the characteristic polynomial of
$M^k$ equals $(x-\beta)^n,$ and since $M^k \in \SL(n, \mathbb{Z}),$ it
follows that $\beta = \pm 1,$ and all the eigenvalues of $M$ are
roots of unity, so that the characteristic polynomial of $M$ is cyclotomic.
\end{proof}
\section{The mapping class group}
\label{mcg}
Let $S_g$ be a closed surface of genus $g\geq 1,$
and let $\Gamma_g$ be the mapping class group of $S_g.$ The group $\Gamma_g$
admits a homomorphism $\mathfrak{s}$ onto $\Sp(2g, \mathbb{Z})$ (we associate to each
element its action on homology; the symplectic structure comes from the
intersection pairing). The following result can be found in \cite{CassonBleiler}:
\begin{theorem}
\label{cassonthm}
For $\gamma \in \Gamma_g$ to be pseudo-Anosov, it is sufficient that $g =
\mathfrak{s}(\gamma)$ satisfy all of the following conditions:
\begin{enumerate}
\item
The characteristic polynomial of $g$ is irreducible.
\item
The characteristic polynomial of $g$ is not cyclotomic.
\item
The characteristic polynomial of $g$ is not of the form $h(x^k),$ for some
$k>1.$
\end{enumerate}
\end{theorem}
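All three conditions are mechanically checkable. The sketch below runs them for the matrix $M = \left(\begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix}\right) \in \SL(2, \mathbb{Z}) = \Sp(2, \mathbb{Z})$ (a standard Anosov example; the choice is ours, not from the text), whose characteristic polynomial is $x^2 - 3x + 1$. Irreducibility over $\mathbb{Q}$ is certified by irreducibility modulo $2$.

```python
# Checking the Casson--Bleiler conditions for the example matrix
# [[2, 1], [1, 1]] in Sp(2, Z), with characteristic polynomial x^2 - 3x + 1.

charpoly = [1, -3, 1]        # coefficients, highest degree first

def roots_mod(coeffs, p):
    """Roots of the polynomial in F_p, by direct search."""
    deg = len(coeffs) - 1
    return [x for x in range(p)
            if sum(c * pow(x, deg - i, p) for i, c in enumerate(coeffs)) % p == 0]

# (1) irreducible over Q: for degree 2, no root mod 2 suffices.
irreducible = roots_mod(charpoly, 2) == []

# (2) not cyclotomic: compare with the cyclotomic polynomials of degree <= 2,
# namely Phi_1, Phi_2, Phi_3, Phi_4, Phi_6.
cyclotomics = [[1, -1], [1, 1], [1, 1, 1], [1, 0, 1], [1, -1, 1]]
not_cyclotomic = charpoly not in cyclotomics

# (3) not of the form h(x^k), k > 1: for degree 2 the middle coefficient
# would have to vanish.
not_power_form = charpoly[1] != 0
```

All three conditions hold, so this matrix is (induced by) a pseudo-Anosov element, here an Anosov map of the torus.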
The following is a corollary of our results on matrix groups:
\begin{theorem}
Let $g_1, \dots, g_k$ be a generating set of $\Sp(2n, \mathbb{Z}).$ The
probability that a random product of length $N$ of $g_1, \dots, g_k$
satisfies the conditions of Theorem \ref{cassonthm} goes to $1$ as $N$ goes to
infinity.
\end{theorem}
\begin{proof}
We prove that the probability that the random word $w_N$ does \emph{not} satisfy the
conditions goes to $0.$ By Theorem \ref{randprodthm}, the probability that
$w_N$ has reducible characteristic polynomial goes to $0.$ In order for the
characteristic polynomial to be of the form $h(x^k),$ it is necessary that
the coefficient of $x^{2g-1}$ (the trace, up to sign) vanish. The set of traceless matrices is a
proper subvariety of $\Sp(2g),$ so by Theorem \ref{langweil}, for a
large $p,$ the probability that a given matrix is traceless is $\ll
1/p,$ and in particular is bounded away from $1.$ The proof of Theorem
\ref{randprodthm} now goes through verbatim to show that the set of
traceless matrices is asymptotically negligible in $\Sp(2g,
\mathbb{Z}).$
Finally, the number of
cyclotomic polynomials of a given degree $2g$ is finite (bounded by some
function $h(g)$), so the matrices with cyclotomic characteristic
polynomial are likewise asymptotically negligible.
\end{proof}
\section{Free Group Automorphisms}
\label{free}
An automorphism $\phi$ of a free group $F_n$ is called
\emph{strongly irreducible}\footnote{This terminology, with strong
support from this author, has been introduced by L. Mosher and
M. Handel for what was previously known as \emph{irreducible with irreducible powers}} if no (positive) power of
$\phi$ sends a proper free factor $H$ of $F_n$ to a conjugate. This concept was
introduced by M.~Bestvina and M.~Handel \cite{bestvinahandel1}, and many of the results of the
theory of automorphisms of free groups are shown for such
automorphisms (for a survey, and the relationship between strongly
irreducible automorphisms and pseudo-Anosov automorphisms of surfaces,
the reader is urged to read the very clear survey\footnote{
to appear, but available at
\newline
http://www.math.cornell.edu/~vogtmann/papers/AutQuestions/Questions.html}\cite{bridsonvogtmann}). By passing to the action of $\phi$ on the
abelianization of $F_n$ (equivalently, on $H_1(F_n, \mathbb{Z})$), Section
\ref{strong}\footnote{We need to change $\SL(n, \mathbb{Z})$ to
$\GL(n, \mathbb{Z})$ throughout} shows the following:
\begin{theorem}
\label{irredpow}
Let $f_1, \dots, f_k$ be a generating set of the automorphism group of
$F_n.$ Consider all words of length $L$ in $f_1, \dots, f_k.$ Then,
for any $n,$ the probability that such a word is an irreducible
automorphism tends to $1$ as $L$ tends to infinity, and also the probability that
such a word is strongly irreducible tends to $1$ as $L$
tends to infinity.
\end{theorem}
\section{Galois groups of generic restricted polynomials}
\label{galoisrestricted}
Let $P_{N, d}(\mathbb{Z})$ be the set of monic polynomials of degree $d$ with
integral coefficients bounded by $N$ in absolute value. It is a
classical result of B.~L.~van~der~Waerden that the probability that
the Galois group of $f \in P_{N, d}(\mathbb{Z})$ is the full symmetric
group $S_d$ tends to $1$ as $N$ tends to infinity. The argument is
quite elegant: first, it is observed that a subgroup $H < S_d$ is the
full symmetric group if and only if $H$ intersects every conjugacy
class of $S_d.$ This means that $H$ has an element with every possible
cycle type. It is further noted that there is a cycle type $(n_1,
\dots, n_k)$ in the Galois group of $f$ over $\mathbb{Z}/p \mathbb{Z}$
if and only if $f$ factors over $\mathbb{Z}/p\mathbb{Z}$ into
irreducible polynomials of degrees $n_1, \dotsc, n_k.$ Using
Dedekind's generating function for the number of irreducible
polynomials over $\mathbb{Z}/p\mathbb{Z}$ of a given degree, it is
shown that the probability of a fixed partition is bounded below by
a constant (independent of the prime $p$), and the proof is finished
by an application of the Chinese Remainder Theorem.
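The count of monic irreducible polynomials that drives this estimate is $N_n(p) = \frac{1}{n}\sum_{d \mid n} \mu(d)\, p^{n/d}$; the sketch below checks this formula against brute force in a small case (a cubic over a field is irreducible iff it has no root), with the test parameters chosen by us.

```python
# Dedekind's count of monic irreducible polynomials of degree n over F_p:
#   N_n(p) = (1/n) * sum_{d | n} mu(d) * p^(n/d).

def mobius(n):
    """Moebius function, by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            result = -result
        d += 1
    return -result if n > 1 else result

def count_irreducible_formula(p, n):
    return sum(mobius(d) * p**(n // d) for d in range(1, n + 1) if n % d == 0) // n

def count_irreducible_brute(p):
    """Monic cubics over F_p with no root, i.e. the irreducible ones."""
    count = 0
    for a in range(p):
        for b in range(p):
            for c in range(p):
                if all((x**3 + a * x * x + b * x + c) % p != 0 for x in range(p)):
                    count += 1
    return count
```

For example, over $\ffp{3}$ both methods give $8$ irreducible cubics, and the formula gives $3$ irreducible quartics over $\ffp{2}$.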
In this note, we ask the following simple-sounding question: Let $P_{N, d, a,
k}(\mathbb{Z})$ be the set of all polynomials in $P_{N,
d}(\mathbb{Z})$ where the coefficient of $x^k$ equals $a.$ Is it still
true that the Galois group of a random such polynomial is the full
symmetric group? The result would obviously follow if the probability
that the Galois group of a random general polynomial is ``generic''
were to go to $1$ sufficiently fast with $N.$ In fact, the probability
that an element of $P_{N, d}$ is \emph{reducible} (which means that
its Galois group is not transitive, hence not $S_n$) is of the order
of $1/N,$ so that approach does not work.
Mimicking the proof of van der Waerden's result does not appear to work
(at least not easily): Dedekind's argument enumerates all irreducible
polynomials, and the result is not ``graded'' by specific
coefficients. It is certainly possible that the argument can be pushed
through, but this appears to be somewhat involved.
Given this sad state of affairs, we first use a simple trick and Dirichlet's
theorem on primes on arithmetic progressions to show first the following technical result:
\begin{theorem}
\label{premainthm}
The probability that a random element of $P_{N, d, a, k}(\mathbb{Z}/p
\mathbb{Z})$ has a prescribed splitting type $s$
approaches the probability that a random unrestricted polynomial of degree
$d$ has the splitting type $s,$ as long as $p-1$ is relatively prime
to $(d-k)!,$ and as $p$ becomes large.
\end{theorem}
which implies (by van der Waerden's argument):
\begin{theorem}
\label{mainthm}
The probability that a random element of $P_{N, d, a, k}(\mathbb{Z})$
has $S_d$ as the Galois group tends to $1$ as $N$ tends to infinity.
\end{theorem}
It should be noted that the (multivariate) Large Sieve (as used by P. X. Gallagher
in \cite{galgal}) can be used to give an effective estimate on the
probability in the statement of Theorem \ref{mainthm}: that
is, $p(N) \ll N^{-1/2} \log N.$
\subsection{Proof of Theorem \ref{mainthm}}
\label{mainproof}
We will need two ingredients other than van der Waerden's original
idea. The first is A. Weil's estimate (Theorem \ref{weilcourbesthm}),
the second is the following classical result:
\begin{theorem}[{\cite[Theorem VIII.9.1]{lang}}]
\label{langthm}
Let $k$ be a field, and $n\geq 2$ an integer. Let $a\in k,$ $a\neq 0.$
Assume that for all prime numbers $p$ such that $p|n$ we have $a
\notin k^p,$ and if $4|n,$ then $a\notin - 4 k^4.$ Then $X^n - a$ is
irreducible in $k[X].$
\end{theorem}
Theorem \ref{langthm} goes essentially back to N. H. Abel's
foundational memoir.
We will need an additional observation:
\begin{lemma}
\label{freelem}
Let $q = p^l,$ and
let $x_1, \dots, x_k \in \mathbb{F}_q.$ Let $a, b \in \mathbb{F}_p,$
with $(a, b) \neq (1, 0)$
and let $g(a, b)(x) = a x + b$ be a transformation of $\mathbb{F}_q$
to itself. Then, it is not possible for $g(a, b)$ to permute $x_1,
\dots, x_k,$ if $k!$ is coprime to $p-1.$
\end{lemma}
\begin{lemma}
\label{freecor}
Consider a polynomial $f$ of degree $d$ over $\mathbb{F}_p,$ such that
$d < p,$ and such that the coefficient of $x^{d-1}$ does not vanish. Then there is no pair $(a, b) \neq (1, 0),$ such that
$f(a x + b) = a^d f(x),$ for all $x \in \mathbb{F}_p.$
\end{lemma}
\begin{proof}
There are two distinct cases to analyze. The first is when $a=1.$ In
that case, $f(x+b) = f(x)$ for all $x\in \mathbb{F}_p,$ and since $p >
d,$ $f(x+b) = f(x)$ for all $x$ in the algebraic closure of $\mathbb{F}_p.$ Let
$r$ be a root of $f.$ Then, so are $r+b, r+2b, \dotsc, r+b(p-1),$
which are all distinct as long as $b \neq 0,$ but
since $p$ is greater than $d$ that means that $f$ is identically $0.$
The second case is when $a\neq 1.$ In that case, $x_0 = b/(1-a)$ is
fixed under the substitution $x \rightarrow a x + b,$ so setting
$z = x - x_0,$ we see that $f(ax+b) = f(az),$ and so $f(ax+b) = a^d
f(x)$ implies that $f(az) = a^d f(z),$ for all $z \in \mathbb{F}_p.$
Since $p > \deg f,$
the corresponding
coefficients of the right and the left hand polynomials must be equal. Since the coefficient of
$x^{d-1}$ does not vanish, it follows that $a=1,$ which contradicts our
assumption.
\end{proof}
The argument now proceeds as follows. First, we note that if the
polynomial $f(x)$ of degree $d$ has a certain splitting type (hence Galois group)
over $\mathbb{F}_p$ then so does $f(a x + b)/a^d,$ for any $a\neq 0, b \in
\mathbb{F}_p.$ The set of all linear substitutions forms a group $\mathbb{A}$,
which acts freely on the set of polynomials of degree $d,$ except for
the (small) exceptional set of polynomials with a vanishing
coefficient of $x^{d-1}$ as long as $d < p$ (by Lemma \ref{freecor}), so the distribution of splitting types among the $\mathbb{A}$
orbits is the same as among all of the polynomials of degree $d.$ Now,
consider polynomials with constant term $1.$ How many of them are
there in the $\mathbb{A}$ orbit of $f(x)?$ It is easy to see that the
number is equal to the number of solutions to
\[f(b) = a^d.\] If the curve $\mathcal{C}_f$ given by $f(x) - y^d = 0$ is absolutely irreducible,
that number is $p + O(\sqrt{p}),$ by Theorem \ref{weilcourbesthm}. By
Theorem \ref{langthm}, in order for $\mathcal{C}_f$ to not be
absolutely irreducible, we must either have that $f(x) = g^q(x),$ for
some $q|d,$ or $f(x) = - 4 h^4(x),$ in case $4|d.$ But the number of
such polynomials is bounded by $O(p^{d/2}),$ which is asymptotically
negligible. So, we see that the distribution of splitting types amongst
polynomials of degree $d$ with constant term $1$ is the same as for
all polynomials, as long as $d < p.$
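The orbit count $\#\{(a, b) : f(b) = a^d\}$ can be sampled directly. Taking $f(x) = x^3 + x + 1$ and $d = 3$ (our own choice of example), the count is exactly $p$ when $p \equiv 2 \pmod 3$ (cubing is then a bijection on $\mathbb{F}_p$), and is $p + O(\sqrt p)$ when $p \equiv 1 \pmod 3$, in line with Theorem \ref{weilcourbesthm}.

```python
# Count the solutions of f(b) = a^d over F_p for f(x) = x^3 + x + 1, d = 3;
# the associated curve y^3 = f(x) has degree 3 and genus at most 1, so the
# Weil bound |N - p| <= 2*sqrt(p) + 9 applies.

def orbit_count(p, d=3):
    f = lambda x: (x**3 + x + 1) % p
    # multiplicity of each d-th power in F_p
    powers = {}
    for a in range(p):
        powers[pow(a, d, p)] = powers.get(pow(a, d, p), 0) + 1
    return sum(powers.get(f(b), 0) for b in range(p))

p = 103                         # 103 = 1 mod 3, so cubing is 3-to-1 on nonzero elements
N = orbit_count(p)
within_weil = abs(N - p) <= 2 * p**0.5 + 9
```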
\section{Random walks on groups and graphs}
\label{intp2}
We consider the following situation:
$G$ is a finite \emph{undirected} (multi)graph with $n$ vertices, and
$\Gamma$ is a finite group. Each vertex $v_i$ of $G$ is decorated with
an element $t_i$ of $\Gamma;$ we assume that the set $\{t_1,
\dotsc, t_n\}$ generates $\Gamma.$ We make the following assumptions:
\begin{condition}
\label{standardcond}
\begin{enumerate}
\item The graph $G$ is
\emph{ergodic}: the adjacency matrix $A(G)$
of $G$ is a Perron-Frobenius matrix (meaning that there is a unique
eigenvalue of maximal modulus, and that some power $A^k(G)$ has all
entries positive).
\item We assume further that for every nontrivial
one-dimensional unitary representation $\rho$ of $\Gamma,$
there exists $1\leq i, j\leq n,$ such that $\rho(t_i) \neq \rho(t_j).$
\end{enumerate}
\end{condition}
To each walk $w=(v_{i_1}, v_{i_2}, \dotsc, v_{i_k})$ on $G$ we
associate the element $\gamma_w = t_{i_k} \dots t_{i_2} t_{i_1}.$ We
now denote the set of all walks of length $N$ by $W_N.$
Furthermore, we define a \emph{probability distribution} $P_N$ on
$\Gamma,$ as follows: the probability density $p_N: \Gamma \rightarrow
\reals_+$ is defined as $p_N(\gamma) = |\{w\in W_N \quad | \quad
\gamma_w = \gamma\}|/|W_N|.$
A slightly more abstract way to think of this is as follows: $P_N$ is
the function on the group ring $\reals[\Gamma],$ defined by:
\[
P_N = \dfrac{1}{|W_N|} \sum_{w \in W_N} \gamma_w.
\]
Our main result is the following:
\begin{theorem}
\label{mythm1}
The distributions $P_N$ converge to the uniform distribution on
$\Gamma$ as $N$ goes to infinity. Furthermore, there is a constant $c
= c(G, \Gamma, T) < 1,$ such that $p_N(\gamma) - 1/|\Gamma| < c^N,$
for all $\gamma \in \Gamma.$ Here, $T$ refers to the assignment of
generators of $\Gamma$ to the vertices of $G.$
\end{theorem}
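A toy instance of the theorem can be computed exactly: take $\Gamma = S_3$ and the complete graph on two vertices with self-loops, decorated with a transposition and a $3$-cycle (so the sign character separates the two labels, and Condition \ref{standardcond} holds). The walk distribution $P_N$, computed by exact dynamic programming rather than sampling, has total-variation distance to uniform which is non-increasing and decays geometrically. The setup is our own illustration, not taken from the text.

```python
# Exact computation of P_N for Gamma = S_3, complete 2-vertex graph with
# self-loops, labels t1 = transposition, t2 = 3-cycle; every walk of
# length N appears with equal weight.
from itertools import permutations

elems = list(permutations(range(3)))     # the six elements of S_3
index = {g: i for i, g in enumerate(elems)}

def compose(g, h):
    """(g h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in range(3))

t1 = (1, 0, 2)       # transposition (0 1)
t2 = (1, 2, 0)       # 3-cycle (0 1 2)

dist = [0.0] * 6
dist[index[(0, 1, 2)]] = 1.0             # start at the identity
tvs = []
for _ in range(30):
    new = [0.0] * 6
    for i, g in enumerate(elems):
        for t in (t1, t2):               # each step appends one label
            new[index[compose(t, g)]] += dist[i] / 2
    dist = new
    tvs.append(0.5 * sum(abs(x - 1 / 6) for x in dist))
```

After thirty steps the distribution is numerically indistinguishable from uniform on $S_3$.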
We will actually prove a slightly stronger result: we will pick
$1\leq i, j \leq n,$ and consider the set of all walks $W_{N, i, j}$
of length $N$ from the $i$th to the $j$th vertex of $G.$ We define
distributions $P_{N, i, j}$ in the obvious way. Then:
\begin{theorem}
\label{mythm2}
Theorem \ref{mythm1} holds with $P_N$ replaced by $P_{N, i, j}.$
\end{theorem}
The starting point for the proof of the theorems above is Fourier
Transform on finite groups, which is discussed in Section
\ref{fouriergroups}. In particular, we will be using Theorem
\ref{fourierest} and Corollary \ref{closetoconst} to reduce the
question of whether a probability distribution is close to uniform to
proving that the Fourier Transform is small at every
\emph{non-trivial} representation. The reader might well wonder how
moving the problem to Fourier transform space helps us -- the answer
is that it turns out that we can reduce the estimation of the
``Fourier coefficients'' to questions in linear algebra, through the
construction in Section \ref{fouriertomat}.
\section{Fourier Transform on finite groups}
\label{fouriergroups}
For a thorough introduction to the topic of this section the reader is
referred to \cite{serrereps,simonreps}.
Let $\Gamma$ be a finite group, and let $f:\Gamma\rightarrow \cx$ be a
function on $\Gamma.$ Furthermore, let $\widehat{\Gamma}$ be the
\emph{unitary dual} of $\Gamma:$ the set of all irreducible complex unitary
representations of $\Gamma.$ To $f$ we can associate its \emph{Fourier
Transform} $\hat{f}.$ This is a function which associates to
each $d$-dimensional unitary representation $\rho$ a $d\times d$ matrix
$\hat{f}(\rho)$ as follows:
\[
\hat{f}(\rho) = \sum_{\gamma \in \Gamma} f(\gamma) \rho(\gamma).
\]
There is an inverse transformation, as well. Given a function $g$
on $\widehat{\Gamma}$ which associates to each $d$-dimensional
representation $\rho$ a $d\times d$ matrix $g(\rho),$ we can write:
\[
g^\sharp (\gamma) = \dfrac{1}{\abs{\Gamma}} \sum_{\rho\in
\widehat{\Gamma}} d_\rho \tr \left(g(\rho) \rho(\gamma^{-1})\right),
\]
where $d_\rho$ is the dimension of $\rho.$ We mean ``inverse'' in the
most direct way possible:
\[
\hat{f}^\sharp = f.
\]
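For the cyclic group $\mathbb{Z}/n\mathbb{Z}$ every irreducible representation is a one-dimensional character $\chi_k(g) = e^{2\pi i k g/n}$, so the transform pair above can be checked numerically in a few lines (the sample values of $f$ below are arbitrary).

```python
# The transform pair on Z/nZ: fhat(k) = sum_g f(g) chi_k(g), and the
# inversion formula (with d_rho = 1 and rho(g^{-1}) = conj(chi_k(g)))
# recovers f exactly.
import cmath

n = 8
f = [complex(v) for v in [3, 1, 4, 1, 5, 9, 2, 6]]

def fhat(k):
    return sum(f[g] * cmath.exp(2j * cmath.pi * k * g / n) for g in range(n))

def inverse(g):
    # g^sharp(gamma) = (1/|Gamma|) sum_rho d_rho tr(ghat(rho) rho(gamma^{-1}))
    return sum(fhat(k) * cmath.exp(-2j * cmath.pi * k * g / n) for k in range(n)) / n

recovered = [inverse(g) for g in range(n)]
max_err = max(abs(recovered[g] - f[g]) for g in range(n))
```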
The following result is classical (see, e.g., \cite{simonreps}):
\begin{theorem}
\[
\sum_{\rho \in \widehat{\Gamma}} d_\rho^2 = \abs{\Gamma}.
\]
\end{theorem}
This, together with the Fourier inversion formula, implies
\begin{theorem}
\label{fourierest}
Let $g$ be a function on $\widehat{\Gamma},$ such that for every
\emph{nontrivial} $\rho \in \widehat{\Gamma},$
\[
\opnorm{g(\rho)} < \epsilon,
\]
where $\opnorm{\bullet}$ denotes the operator norm (see Section
\ref{matnorm}).
Then, for any $\gamma_1, \gamma_2 \in \Gamma,$
\[
\abs{g^\sharp(\gamma_1) - g^\sharp(\gamma_2)} < 2\epsilon.
\]
\end{theorem}
\begin{proof}
First, note that for the trivial representation $\rho_0,$ the quantity
\[
d_{\rho_0}g(\rho_0)\rho_0(\gamma) = g(\rho_0),
\]
so does not depend on $\gamma.$
By the Fourier inversion formula, then,
\[
\begin{split}
\abs{g^\sharp(\gamma_1) - g^\sharp(\gamma_2)} & = \\
\left\lvert \dfrac{1}{\abs{\Gamma}}\sum_{\substack{\rho \in \widehat{\Gamma}\\
\rho \neq \rho_0}} d_\rho \tr(g(\rho) (\rho(\gamma_1) -
\rho(\gamma_2)))\right\rvert & \leq \\
\sum_{i=1}^2 \left\lvert \dfrac{1}{\abs{\Gamma}}\sum_{\substack{\rho \in \widehat{\Gamma}\\
\rho \neq \rho_0}} d_\rho \tr\left(g(\rho) \rho(\gamma_i) \right)
\right\rvert &
\underset{\text{by
Eq. \eqref{traceineq}}}{\leq} \\
\dfrac{2}{\abs{\Gamma}} \sum_{\rho \in \widehat{\Gamma}} d_\rho^2
\opnorm{g(\rho)} & < 2\epsilon.
\end{split}
\]
\end{proof}
\begin{corollary}
\label{closetoconst}
Under the assumption of Theorem \ref{fourierest}, and assuming in
addition that $g$ is real valued, if
\[\sum_{\gamma \in \Gamma} g(\gamma) = 1,
\]
then \[g(\gamma) - 1/|\Gamma| < 2 \epsilon\quad\forall \gamma \in
\Gamma.\]
Furthermore, if $\Omega \subseteq \Gamma,$
\begin{equation}
\label{closetoconstset}
\left| \sum_{\gamma \in \Omega} g(\gamma) -
\dfrac{|\Omega|}{|\Gamma|}\right| < 2 \epsilon |\Omega|.
\end{equation}
\end{corollary}
\begin{proof}
Without loss of generality, suppose that $g(\gamma) > 1/|\Gamma|.$
Since $g$ sums to $1,$ there is a $\gamma_2$ such that $g(\gamma_2) \leq 1/|\Gamma|.$
Thus,
\[
g(\gamma) - 1/|\Gamma| < g(\gamma)-g(\gamma_2) < 2\epsilon.
\]
The estimate \eqref{closetoconstset} follows immediately by summing
over $\Omega.$
\end{proof}
\section{Fourier estimates via linear algebra}
\label{fouriertomat}
In order to prove Theorem \ref{mythm2}, we would like to use Theorem
\ref{fourierest}, and to show the equidistribution result, we would
need to show that for every \emph{nontrivial} irreducible
representation $\rho,$
\begin{equation}
\label{coeffdecay}
\lim_{N\rightarrow \infty}\dfrac{1}{|W_{N, i, j}|}\tr{\sum_{w\in W_{N, i,
j}} \rho(\gamma_w)} = 0.
\end{equation}
To demonstrate Eq. \eqref{coeffdecay}, suppose that $\rho$ is
$k$-dimensional, so acts on a $k$-dimensional Hilbert space $H_\rho=H.$
Let $Z = L^2(G)$ -- the space of complex-valued functions from $V(G)$
to $\cx,$ let $e_1, \dotsc, e_n$ be the standard basis of $Z,$ and let
$P_i$ be the orthogonal projection on the $i$-th coordinate space. We
introduce the matrix
\[
U_\rho = \sum_{i=1}^n P_i \otimes \rho(t_i) =
\begin{pmatrix}
\rho(t_1) & 0 & \dots & 0\\
0 & \rho(t_2) & \dots & 0\\
\hdotsfor[2]{4}\\
0 & 0 & \dots & \rho(t_n)
\end{pmatrix},
\]
and also the matrix $A_\rho = A(G) \otimes I_H,$ where $I_H$ is the
identity operator on $H.$ Both $U_\rho$ and $A_\rho$ act on $Z \otimes H.$
The following is immediate:
\begin{lemma}
\label{matprodlem}
Consider the matrix $(U_\rho A_\rho)^l,$ and think of it as an
$n\times n$ matrix of $k\times k$ blocks. Then the $ij$-th block
equals the sum over all walks $w$ of length $l$ beginning at $v_i$ and
ending at $v_j$ of $\rho(\gamma_w).$
\end{lemma}
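The bookkeeping of the lemma can be verified numerically on a tiny example of our own: the complete $2$-vertex graph with self-loops, $\Gamma = \mathbb{Z}/3\mathbb{Z}$, and a one-dimensional $\rho$ (so the blocks are scalars). One convention note, which is an assumption of this sketch rather than a statement of the text: with this matrix normalization a walk of length $l$ (meaning $l$ edges) contributes the product of $\rho$ over its first $l$ vertices, the endpoint being picked up by the next step.

```python
# Check that the (i, j) entry of (U A)^l equals the sum over length-l
# walks from v_i to v_j of rho applied to the walk labels, for the
# complete 2-vertex graph, Gamma = Z/3Z, rho(t) = omega^t.
import cmath
from itertools import product

omega = cmath.exp(2j * cmath.pi / 3)
labels = [1, 2]                      # t_1 = 1, t_2 = 2 in Z/3Z
A = [[1, 1], [1, 1]]                 # adjacency: complete with self-loops

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U = [[omega ** labels[0], 0], [0, omega ** labels[1]]]
M = [[1, 0], [0, 1]]
l = 4
for _ in range(l):
    M = matmul(M, matmul(U, A))      # M = (U A)^l

def walk_sum(i, j):
    total = 0
    for mids in product(range(2), repeat=l - 1):
        path = (i,) + mids + (j,)    # l + 1 vertices, l edges
        total += omega ** sum(labels[v] for v in path[:-1])
    return total

errs = [abs(M[i][j] - walk_sum(i, j)) for i in range(2) for j in range(2)]
```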
Now, let $T_{ji}$ be the operator on $Z$ which maps $e_k$ to
$\delta_{kj} e_i.$
\begin{lemma}
\label{projlemma}
\[
\tr{\left[\left((T_{ji}^t P_j)\otimes I_H\right) (U_\rho A_\rho)^N (P_i\otimes I_H)\right]} = \tr{\sum_{w\in W_{N, i, j}} \rho(\gamma_w)}
\]
\end{lemma}
\begin{proof}
The argument of the trace on the left hand side simply extracts the $ij$-th $k\times k$ block from $(U_{\rho} A_\rho)^N.$
\end{proof}
By submultiplicativity of the operator norm, we see that
\[
\opnorm{\left((T_{ji}^t P_j)\otimes I_H\right) (U_\rho A_\rho)^N (P_i\otimes
I_H)} \leq \opnorm{ (U_\rho A_\rho)^N},
\]
\]
and so proving Theorem \ref{mythm2} reduces (thanks to Theorem
\ref{fourierest}) to showing
\begin{theorem}
\label{fundcollapse}
\[
\lim_{N\rightarrow \infty} \dfrac{\opnorm{(U_\rho A_\rho)^N}}{|W_{N, i, j}|} = 0,
\]
for any non-trivial $\rho.$
\end{theorem}
\begin{notation}
We will denote the spectral radius of an operator $A$ by $\specrad{A}.$
\end{notation}
Since $|W_{N, i, j}| \asymp \specrad{A(G)}^N,$ and by Gelfand's Theorem
(Theorem \ref{gelfand}),
\[
\lim_{N\rightarrow \infty} \|B^N\|^{1/N} = \specrad{B},
\]
for any matrix $B$ and any matrix norm $\|\bullet\|,$ Theorem
\ref{fundcollapse} is equivalent to the statement that the spectral
radius of $U_\rho A_\rho$ is smaller than that of $A(G).$
Theorem \ref{fundcollapse} is proved in Section \ref{myproof}.
\subsection{Proof of Theorem \ref{fundcollapse}}
\label{myproof}
\begin{lemma}
\label{twistlemma}
Let $A$ be a bounded hermitian operator $A:H\rightarrow H,$ and $U:
H\rightarrow H$ a unitary operator on the same Hilbert space $H.$
Then the spectral radius of $U A$ is no larger than the spectral radius
of $A,$ and the inequality is strict unless an eigenvector of $A$ with
maximal eigenvalue is also an eigenvector of $U.$
\end{lemma}
\begin{proof}
The spectral radius of $UA$ does not exceed the operator norm of $UA,$
which is equal to the spectral radius of $A.$ Suppose that the two are
equal, so that there is a $v,$ such that $\norm{UA v} = \specrad{A}\norm{v},$
and $v$ is an eigenvector of $UA.$
Since $U$ is unitary, $v$ must be an eigenvector of $A,$ and since it
is also an eigenvector of $UA,$ it must also be an eigenvector of $U.$
\end{proof}
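A numerical illustration of Lemma \ref{twistlemma} (a toy check, not used anywhere in the argument): take $A$ real diagonal and $U$ a plane rotation, which is unitary. When $U$ moves the top eigenvector of $A$, the spectral radius of $UA$ drops strictly below that of $A$; when $U$ is the identity, the two agree.

```python
import cmath, math

def specrad_2x2(M):
    """Spectral radius of a 2x2 matrix via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

A = [[1.0, 0.0], [0.0, 0.5]]          # Hermitian, spectral radius 1

def UA(theta):
    c, s = math.cos(theta), math.sin(theta)
    U = [[c, -s], [s, c]]             # a real unitary (rotation by theta)
    return [[sum(U[i][k] * A[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert abs(specrad_2x2(UA(0.0)) - 1.0) < 1e-12   # U = I: equality
r = specrad_2x2(UA(math.pi / 4))                 # U rotates A's top eigenvector
assert r < 1.0 - 1e-3                            # strictly smaller, as the lemma predicts
print("specrad(UA) at theta = pi/4:", r)
```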
In the case of interest to us, $\rho$ is a $k$-dimensional irreducible
representation of $\Gamma,$ $U = \diag(\rho(t_1), \dots,
\rho(t_n)),$ while $A = A(G) \otimes I_k.$ We assume that $A(G)$ is an
irreducible matrix, so that there is a unique eigenvalue of modulus
$\specrad{A(G)},$ that eigenvalue $\lambda_{\max}$
(the \emph{Perron-Frobenius eigenvalue})
is positive, and it has a strictly positive eigenvector $v_{\max}.$ We
know that the spectral radius of $A$ equals the spectral radius of
$A(G),$ and the eigenspace of $\lambda_{\max}$ is the set of vectors of
the form $v_{\max} \otimes w,$ where $w$ is an arbitrary vector in
$\mathbb{C}^k.$ If $v_{\max} = (x_1, \dotsc, x_n),$ we can write
$v_{\max} \otimes w = (x_1 w, \dotsc, x_n w),$ and so $U(v_{\max} \otimes
w) = (x_1 \rho(t_1) w, \dotsc, x_n \rho(t_n) w).$ Since all of the
$x_i$ are nonzero, in order for the inequality in Lemma
\ref{twistlemma} to be nonstrict, we must have some $w$ for which
$\rho(t_i) w = c w$ (where the constant $c$ does \emph{not} depend on
$i$).
Since the elements $t_i$ generate $\Gamma,$ the existence of such a
$w$ contradicts the irreducibility of $\rho,$ \emph{unless} $\rho$ is
one-dimensional. This proves Theorem \ref{fundcollapse}.
\section{Some remarks on matrix norms}
\label{matnorm}
In this note we use a number of matrix norms, and it is useful to
summarize what they are, and some basic relationships and inequalities
satisfied by them. For an extensive discussion the reader is referred
to the classic \cite{hornjohnson}. All matrices are assumed square,
and $n\times n.$
A basic tool in the inequalities below is the \emph{singular value
decomposition} of a matrix $A.$
\begin{definition}
The singular values of $A$ are the non-negative
square roots of the eigenvalues of
$A A^*,$ where $A^*$ is the conjugate transpose of $A.$
\end{definition}
Since $AA^*$
is a positive semi-definite Hermitian matrix for any $A,$ the singular
values $\sigma_1 \overset{\text{def}}{=} \sigma_{\max} \geq \sigma_2
\geq \dots$ are non-negative real numbers. For a Hermitian $A,$ the
singular values are simply the absolute values of the eigenvalues of $A.$
The first matrix norm is the \emph{Frobenius norm}, denoted by
$\norm{\bullet}.$
This is defined as
\[
\norm{A} = \sqrt{\tr{A A^*}} = \sqrt{\sum_i \sigma_i^2}.
\]
This is also the square root of the sum of the squared moduli of the entries of $A.$
The next matrix norm is the \emph{operator norm}, $\opnorm{\bullet},$
defined as
\[
\opnorm{A} = \max_{\norm{v} = 1} \norm{Av} = \sigma_{\max}.
\]
Both the norms $\norm{\bullet}$ and $\opnorm{\bullet}$ are
\emph{submultiplicative} (submultiplicativity is part of the
definition of matrix norm: saying that the norm $\matnorm{\bullet}$ is
submultiplicative means that $\matnorm{AB} \leq \matnorm{A}
\matnorm{B}$.)
From the singular value interpretation\footnote{A celebrated result of
John von Neumann states that \emph{any} unitarily invariant matrix
norm is a symmetric gauge function of the singular values --
\cite{vnguage}.} of the two matrix norms and the Cauchy--Schwarz
inequality we see immediately that
\begin{equation}
\label{twonorms}
\norm{A}/\sqrt{n}\leq \opnorm{A}\leq \norm{A}.
\end{equation}
We will also need the following simple inequalities:
\begin{lemma}
\label{ulemma}
Let $U$ be a unitary matrix:
\begin{equation}
\label{traceineq}
\abs{\tr AU} \leq \norm{A}\sqrt{n} \leq n \opnorm{A}.
\end{equation}
\end{lemma}
\begin{proof}
Since $U$ is unitary, $\norm{U}= \norm{U^t} = \sqrt{n}.$
So, by the Cauchy--Schwarz inequality,
$\abs{\tr A U} \leq \norm{A} \norm{U} = \sqrt{n} \norm{A}.$
The second inequality follows from the inequality \eqref{twonorms}.
\end{proof}
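Both Eq. \eqref{twonorms} and Eq. \eqref{traceineq} are easy to spot-check numerically; the particular matrices below are arbitrary illustrative choices, with the operator norm of a $2\times 2$ matrix computed as the top singular value via the eigenvalues of $AA^*.$

```python
n = 2
A = [[1 + 2j, 0.5], [-1j, 3.0]]            # an arbitrary complex matrix
U = [[0, 1j], [1j, 0]]                     # a unitary matrix

def frob(M):
    return sum(abs(x) ** 2 for row in M for x in row) ** 0.5

def opnorm(M):
    """Largest singular value: sqrt of the top eigenvalue of M M^*."""
    G = [[sum(M[i][k] * M[j][k].conjugate() for k in range(n))
          for j in range(n)] for i in range(n)]       # G = M M^*, Hermitian PSD
    tr = (G[0][0] + G[1][1]).real
    dt = (G[0][0] * G[1][1] - G[0][1] * G[1][0]).real
    return ((tr + max(tr * tr - 4 * dt, 0.0) ** 0.5) / 2) ** 0.5

def tr_prod(X, Y):
    return sum(X[i][k] * Y[k][i] for i in range(n) for k in range(n))

# Eq. (twonorms): ||A|| / sqrt(n) <= |||A||| <= ||A||
assert frob(A) / n ** 0.5 <= opnorm(A) + 1e-12
assert opnorm(A) <= frob(A) + 1e-12
# Eq. (traceineq): |tr(AU)| <= ||A|| sqrt(n) <= n |||A|||
assert abs(tr_prod(A, U)) <= frob(A) * n ** 0.5 + 1e-12
assert frob(A) * n ** 0.5 <= n * opnorm(A) + 1e-12
```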
The final (and deepest) result we will have the opportunity to use is:
\begin{theorem}[Gelfand]
\label{gelfand}
For any operator $M$ and any matrix norm $\matnorm{\bullet},$ the
spectral radius $\specrad{M}$ satisfies
\[
\specrad{M} = \lim_{k\rightarrow \infty} \matnorm{M^k}^{1/k}.
\]
\end{theorem}
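As an illustration of Gelfand's formula (a toy check), take the upper-triangular, non-normal matrix with diagonal entries $2$ and $1$, whose spectral radius is $2$. The Frobenius norms of its powers recover the spectral radius, though only slowly.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(M):
    return sum(abs(x) ** 2 for row in M for x in row) ** 0.5

M = [[2.0, 1.0], [0.0, 1.0]]   # triangular, so the spectral radius is 2
P = [[1.0, 0.0], [0.0, 1.0]]
est = []
for k in range(1, 61):
    P = matmul(P, M)
    est.append(frob(P) ** (1.0 / k))

assert est[-1] >= 2.0               # Frobenius norm dominates the spectral radius
assert abs(est[-1] - 2.0) < 0.05    # ...and the k-th roots converge to it
print("first and last estimates:", est[0], est[-1])
```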
\appendix
\section{Symplectic matrices and David Kirby's Theorem}
\label{kirbysec}
Recall that the \emph{symplectic quadratic form} in $2n$ dimensions is
given by the
matrix
\[
J =
\begin{pmatrix}
0 & \mathbf{I}_n\\
-\mathbf{I}_n & 0
\end{pmatrix},
\]
where $\mathbf{I}_n$ is the $n\times n$ identity matrix. A $2n \times
2n$ matrix $M$ is called \emph{symplectic} if it preserves the
symplectic form $J,$ that is
\begin{equation}
\label{sympeq}
M^t J M = J.
\end{equation}
The invariance relation Eq. \eqref{sympeq} can be written slightly
more explicitly if we write $M$ in $n\times n$ block form, as
\begin{equation}
\label{genmat}
M =
\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}.
\end{equation}
In that case, Eq. \eqref{sympeq} is equivalent to the following
system:
\begin{gather}
\label{symp1}
A^t C = C^t A (= (A^t C)^t),\\
\label{symp2}
B^t D = D^t B (= (B^t D)^t),\\
\label{symp3}
A^t D - C^t B = \mathbf{I}_n.
\end{gather}
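Eq. \eqref{sympeq} and the block conditions \eqref{symp1}--\eqref{symp3} can be checked side by side on a concrete example; the ``shear'' below, with $A = D = \mathbf{I}$, $B = 0$ and $C$ symmetric, is a standard symplectic matrix (the particular entries of $C$ are arbitrary).

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(r) for r in zip(*X)]

I2 = [[1, 0], [0, 1]]
J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

# M = [[A, B], [C, D]] with A = D = I, B = 0, and C symmetric
A, B, C, D = I2, [[0, 0], [0, 0]], [[2, 3], [3, 5]], I2
M = [[A[0][0], A[0][1], B[0][0], B[0][1]],
     [A[1][0], A[1][1], B[1][0], B[1][1]],
     [C[0][0], C[0][1], D[0][0], D[0][1]],
     [C[1][0], C[1][1], D[1][0], D[1][1]]]

# Eq. (sympeq): M^t J M = J
assert matmul(transpose(M), matmul(J, M)) == J

# The equivalent block conditions (symp1)-(symp3):
AtC = matmul(transpose(A), C)
BtD = matmul(transpose(B), D)
assert AtC == transpose(AtC)                       # A^t C is symmetric
assert BtD == transpose(BtD)                       # B^t D is symmetric
AtD = matmul(transpose(A), D)
CtB = matmul(transpose(C), B)
assert [[AtD[i][j] - CtB[i][j] for j in range(2)] for i in range(2)] == I2
print("symplectic block conditions verified")
```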
This simple observation is all that is needed to prove the following
\begin{theorem}[David Kirby, \cite{Kirby_symplecticchar}]
\label{kirby}
Let $\mathcal{R}$ be a commutative ring with $1,$ and let $p(x) \in
\mathcal{R}[x]$ be a reciprocal polynomial.
Then, there is a symplectic matrix $M$ with coefficients in
$\mathcal{R},$ such that
\[
\det (x \mathbf{I}_{2n} - M) = p(x).
\]
\end{theorem}
\begin{proof}[Proof sketch]
In the notation of Eq. \eqref{genmat}, let $A = \mathbf{0},$ let $\det
B = 1,$ and let $C = -(B^t)^{-1}.$ Then, in order for \[
M
= \begin{pmatrix}
A & B \\
C & D
\end{pmatrix} =
\begin{pmatrix}
\mathbf{0} & B \\
-(B^t)^{-1} & D
\end{pmatrix}
\]
to be symplectic, it is necessary and sufficient that $B^t D$ be
symmetric (since Eq. \eqref{symp1} and Eq. \eqref{symp3} hold
automatically).
Now, let
\[
B_{ij} = \begin{cases}
0 & i + j < n\\
1 & i + j = n\\
b_{i+j+1-n} & i + j > n,
\end{cases}
\]
and let $D = E + F,$
where
\[
E_{ij} = \begin{cases}
0 & \mbox{$|i - j| > 1$ or $i = j > 1$,}\\
1 & | i - j| = 1\\
-1 & i=j=1,
\end{cases}
\]
and
\[
F_{ij} = \begin{cases}
0 & j < n\\
f_i & \mbox{otherwise}
\end{cases}
\]
for some $b_2, \dotsc, b_n, f_1, \dotsc, f_n \in \mathcal{R}.$
It is not hard to check that this is always so if $n = 1,$ and when
$n>1,$ it is necessary and sufficient that
\begin{gather}
b_2 = f_1 + 1,\\
b_i = f_{i-1} + \sum_{j=2}^{i-1} b_j f_{i-j}.
\end{gather}
Now, if $p(x) = (1+x^{2n}) + a_1 (x + x^{2n-1}) + \dotso + a_n(x^n +
x^{n+1}),$
a computation shows that a matrix $M$ as above with $f_i = a_{i-1} -
a_i$ does the trick. We won't go through all the details, but the main
idea in computing the determinant comes from the following:
\[
\begin{pmatrix}
x\mathbf{I} & -B\\
B^{-1} & x \mathbf{I} - D
\end{pmatrix}
\begin{pmatrix}
B & 0\\
x \mathbf{I} & B^{-1}
\end{pmatrix} =
\begin{pmatrix}
0 & -\mathbf{I}\\
(x^2+1)\mathbf{I} - x D & x B^{-1} - D B^{-1}
\end{pmatrix},
\]
so by taking determinants:
\[
\det(x \mathbf{I} - M) = \det((x^2+1) \mathbf{I} - x D).
\]
\end{proof}
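The determinant identity at the end of the sketch can be spot-checked exactly on a hypothetical small instance: take $n = 2$ and $B = \mathbf{I}$ (so that $-(B^t)^{-1} = -\mathbf{I}$ and $B^t D$ is symmetric whenever $D$ is), with an arbitrary symmetric $D$; the comparison uses exact rational arithmetic.

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Concrete (hypothetical) instance: B = I, C = -(B^t)^{-1} = -I, D symmetric.
D = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(-3)]]
M = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 2, 1],
     [0, -1, 1, -3]]
M = [[Fraction(v) for v in row] for row in M]

for x in map(Fraction, [-2, 0, 1, 3, 7]):
    lhs = det([[x * (i == j) - M[i][j] for j in range(4)] for i in range(4)])
    rhs = det([[(x * x + 1) * (i == j) - x * D[i][j] for j in range(2)]
               for i in range(2)])
    assert lhs == rhs
print("det(xI - M) = det((x^2+1)I - xD) at all sample points")
```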
\bibliographystyle{plain}
\bibliography{rivin}
\end{document}
TITLE: $f$ and $g$ are homotopic $\iff$ $g-f$ is null-homotopic?
QUESTION [1 upvotes]: In the projective model structure on the category of chain complexes,
is homotopy between $f:A\to B$ and $g:A\to B$ equivalent to null-homotopy of $g-f:A\to B$?
I think it's intuitively true but cannot find the proof.
REPLY [2 votes]: The claim is true for any model structure on any additive category.
Suppose we have a (weak) path object for $B$, i.e. an object $P$ and weak equivalences $j : B \to P$ and $q_0, q_1: P \to B$ such that $q_0 \circ j = q_1 \circ j = \textrm{id}_B$, and a homotopy from $f$ to $g$ w.r.t. this path object, i.e. a morphism $h : A \to P$ such that $q_0 \circ h = f$ and $q_1 \circ h = g$.
Then $h - j \circ f$ is a homotopy from $0$ to $g - f$: indeed, $q_0 \circ (h - j \circ f) = q_0 \circ h - q_0 \circ j \circ f = f - f = 0$ and similarly $q_1 \circ (h - j \circ f) = q_1 \circ h - q_1 \circ j \circ f = g - f$, as required.
Actually, more is true: even without a model structure, the localisation of an additive category is always additive and the localisation functor is additive.
\begin{document}
\title{Conditioning and covariance on caterpillars}
\author{Sarah R. Allen\thanks{
Department of Computer Science, Carnegie Mellon University.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 0946825.
\texttt{srallen@cs.cmu.edu}}
\and Ryan O'Donnell\thanks{
Department of Computer Science, Carnegie Mellon University.
Supported by NSF grants CCF-0747250 and CCF-1116594. Part of this work performed at the Bo\u{g}azi\c{c}i University Computer Engineering Department, supported by Marie Curie International Incoming Fellowship project number 626373.
\texttt{odonnell@cs.cmu.edu}}}
\maketitle
\begin{abstract}
Let $\bX_1, \dots, \bX_n$ be joint $\{ \pm 1\}$-valued random variables. It is known that conditioning on a random subset of~$O(1/\eps^2)$ of them reduces their average pairwise covariance to below~$\eps$ (in expectation). We conjecture that $O(1/\eps^2)$ can be improved to~$O(1/\eps)$. The motivation for the problem and our conjectured improvement comes from the theory of global correlation rounding for convex relaxation hierarchies. We suggest attempting the conjecture in the case that $\bX_1, \dots, \bX_n$ are the leaves of an information flow tree. We prove the conjecture in the case that the information flow tree is a caterpillar graph (similar to a two-state hidden Markov model).
\end{abstract}
\section{Introduction} \label{sec:intro}
Let $\bX = (\bX_1, \dots, \bX_n)$ be a list of jointly distributed Boolean random variables taking values in~$\{\pm 1\}$. We are interested in the quantity
\[
\avg_{\substack{\text{distinct pairs} \\ u, v \in [n]}} \SET{\bigl \lvert \Cov[\bX_u,\bX_v] \bigr \rvert} \in [0,1].
\]
For brevity we call this the \emph{average covariance} of the random variables (absolute-value sign notwithstanding). It is a quantification of the extent to which the random variables are (pairwise) independent.
If the average covariance of $\bX_1, \dots, \bX_n$ is not small, then in some sense a ``typical''~$\bX_j$ contains a considerable amount of information about a sizeable fraction of the other~$\bX_k$'s. Then if we condition on~$\bX_j$, we might expect the variance of these other $\bX_k$'s to decrease, thereby decreasing the overall average covariance. For $t \in \Z^+$, we introduce the following notation:
\[
\avgCovCond{t}(\bX) \coloneqq \avg_{\substack{J \subseteq [n] \\ \abs{J} = t}} \ \ \avg_{\substack{\text{distinct pairs} \\ u, v \in [n] \setminus J}} \SET{ \E\Brak{\Abs{\Cov[\bX_u,\bX_v]} \mid (\bX_j)_{j \in J}}}.
\]
The intuitions just described lead to the idea that choosing large~$t$ should cause $\avgCovCond{t}(\bX)$ to become small. Indeed, the following has recently been proven~\cite{Guruswami-Sinop,Barak-Raghavendra-Steurer,Raghavendra-Tan}:
\begin{theorem} \label{thm:r-t}
Let $\bX = (\bX_1, \dots, \bX_n)$ be $\{\pm 1\}$-valued random variables and let $0 < \eps \leq 1$. Then for some integer $0 \leq t \leq O(1/\eps^2)$ it holds that $\avgCovCond{t}(\bX) \leq \eps$.
\end{theorem}
We present the following conjecture, made jointly with Yuan Zhou.
\begin{named}{Conjecture A}
Theorem~\ref{thm:r-t} holds with $O(1/\eps)$ in place of $O(1/\eps^2)$.
\end{named}
\begin{remark} \label{rem:order-of-quantifiers}
In Theorem~\ref{thm:r-t}, by $t \leq O(1/\eps^2)$ we mean $t \leq C/\eps^2$ where~$C$ is a universal constant independent of $\bX_1, \dots, \bX_n$. (We also assume $n \geq C/\eps^2 + 2$.) However one \emph{cannot} simply fix $t = \lceil C/\eps^2 \rceil$ independently of $\bX_1, \dots, \bX_n$; this would make Theorem~\ref{thm:r-t} false (see Proposition~\ref{prop:annoying-counterexample}). These comments apply equally to Conjecture~A with $O(1/\eps)$ in place of $O(1/\eps^2)$.
\end{remark}
Motivation for Theorem~\ref{thm:r-t} and Conjecture~A comes from the theory of rounding algorithms for convex relaxations of optimization problems; specifically, the ``correlation rounding'' technique for the Sherali--Adams and SOS hierarchies. In Section~\ref{sec:motivation} we further discuss this motivation, as well as the importance of improving the bound $t \leq O(1/\eps^2)$ to $t \leq O(1/\eps)$.
We were led to make Conjecture~A based on algorithmic optimism as well as being unable to find any counterexample refuting it. The following example (which we call the ``homogeneous star'') is particularly instructive. Let $\bX_0 \sim \{\pm 1\}$ be uniformly random and suppose $\bX = (\bX_1, \dots, \bX_n)$ is a list of independent ``$\rho$-correlated'' copies of~$\bX_0$ (where $\rho \in [0,1]$). I.e., for each $j \in [n]$ we have $\bX_j = \bX_0 \bR_j$, where $\bR_1, \dots, \bR_n$ are independent $\{\pm 1\}$-valued random variables satisfying $\E[\bR_j] = \rho$. By symmetry, all sets~$J$ in the definition of $\avgCovCond{t}(\bX)$ contribute equally to the average, so suppose we condition on~$\bX_1, \dots, \bX_t$. It is not hard to check that the conditional average covariance of $\bX_{t+1}, \dots, \bX_n$ is then
\[
\rho^2 \Var[\bX_0 \mid \bX_1, \dots, \bX_t].
\]
If $\rho \leq \sqrt{\eps}$ then this quantity is automatically at most~$\eps$, even without conditioning. On the other hand, if $\rho \gg \sqrt{\eps}$ then we need to rely on the conditional variance above being small. It's not difficult to show via a Hoeffding bound that this conditional variance is very small if (and only if)~$t \rho^2 \gg 1$; i.e., $\rho \gg 1/\sqrt{t}$. Thus by taking~$t$ a little bigger than~$1/\eps$, the case of $\rho \gg \sqrt{\eps}$ is handled as well. In other words, these rough calculations confirm (perhaps up to a log factor) that Conjecture~A holds for the homogeneous star for every value of~$\rho$. On the other hand, this example also implies that one cannot hope for an improved bound of $t < o(1/\eps)$ in Conjecture~A.
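The rough calculation above can be made exact for the homogeneous star: averaging over the $2^t$ outcomes of the conditioning leaves gives a closed form for $\rho^2\, \E\bigl[\Var[\bX_0 \mid \bX_1, \dots, \bX_t]\bigr]$, which the sketch below evaluates by summing over the number $s$ of $+1$ observations. The parameter values are illustrative; the computation confirms that the conditional average covariance collapses once $t\rho^2 \gg 1$.

```python
from math import comb

def avg_cond_cov_star(rho, t):
    """Exact value of rho^2 * E[ Var[X0 | X_1..X_t] ] for the homogeneous star."""
    p, q = (1 + rho) / 2, (1 - rho) / 2
    total = 0.0
    for s in range(t + 1):                    # s = number of +1 observations
        w_plus = p ** s * q ** (t - s)        # P(outcome | X0 = +1)
        w_minus = q ** s * p ** (t - s)       # P(outcome | X0 = -1)
        prob = comb(t, s) * 0.5 * (w_plus + w_minus)   # P(outcome)
        e_x0 = (w_plus - w_minus) / (w_plus + w_minus) # E[X0 | outcome]
        total += prob * (1 - e_x0 ** 2)       # conditional Var[X0 | outcome]
    return rho ** 2 * total

rho = 0.5
vals = [avg_cond_cov_star(rho, t) for t in (1, 4, 16, 40)]
assert vals[0] > vals[1] > vals[2] > vals[3]  # conditioning on more leaves helps
assert vals[-1] < 1e-3                        # t*rho^2 >> 1: covariance collapses
print(vals)
```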
\subsection{Information flow trees}
Being unable to prove Conjecture~A, we turn to trying to prove it in a wide family of special cases. Specifically, we study the conjecture in the special case of \emph{information flow trees} (which includes the homogeneous star example discussed above). Information flow trees have been studied in an extremely wide variety of contexts, under various names: in the theory of noisy communication and computation; in statistical physics (as the \emph{Ising model} on trees); in biology (as \emph{phylogenetic trees}); and in learning theory (as Markov networks/graphical models). See Evans et al.~\cite{evans} for a number of results, and Mossel~\cite{mossel} for a survey.
\begin{definition}
An \emph{information flow tree} $\calT = (V, E, \rho)$ is an undirected tree graph~$(V,E)$ (with $|V| > 1$) together with a function $\rho \co E \to [-1,1]$ giving a \emph{correlation} parameter for each edge. We think of $\calT$ as generating a collection of $\{\pm 1\}$-valued random variables $(\bX_v)_{v \in V}$, $(\bR_e)_{e \in E}$ as follows: First, the random variables~$\bR_e \in \{\pm 1\}$ are chosen such that $\E[\bR_e] = \rho(e)$, independently for all $e \in E$. Next, the random variables $(\bX_v)_{v \in V}$ are collectively chosen so that $\bX_u \bX_v = \bR_{(u,v)}$ holds for all $(u,v) \in E$, uniformly at random from the two possibilities.
\end{definition}
\begin{remark} \label{rem:tree-def}
An equivalent way to think of the $(\bX_v)$ random variables being generated is as follows: First, a vertex $r \in V$ is chosen to be the ``root''. (This choice can be arbitrary, as it does not affect the final distribution.) Next, $\bX_r$ is chosen uniformly at random from~$\{\pm 1\}$. Finally, the remaining random variables $(\bX_v)_{v \neq r}$ are determined by ``noisily propagating'' $\bX_r$'s value along edges of the tree: if~$\bX_u$ has been chosen, and $(u,v) \in E$, then $\bX_v$ is set to $\bX_u$ with probability $\half + \half \rho(u,v)$ and is set to $-\bX_u$ otherwise. We add the remark that in the end, each $\bX_v$ is individually uniformly distributed on~$\{\pm 1\}$.
\end{remark}
\begin{remark}
When discussing information flow trees, we think of the vertex random variables~$\bX_v$ as the main objects of interest, and the edge random variables~$\bR_e$ merely as ancillary information used to construct the~$\bX_v$'s. Furthermore, if $V = L \sqcup M$ is the partition of~$V$ into \emph{leaf} vertices~$L$ and \emph{internal} vertices~$M$, we usually think of the \emph{leaf random variables} $(\bX_v)_{v \in L}$ as being ``observable'' and the internal random variables $(\bX_v)_{v \in M}$ as being ``hidden''.
\end{remark}
In this paper we study the special case of Conjecture~A in which $\bX_1, \dots, \bX_n$ are the leaf random variables of an information flow tree. Referring to Remark~\ref{rem:order-of-quantifiers}, in this case we conjecture it \emph{is} possible to fix $t = \text{const}/\eps$ independently of~$\bX_1, \dots, \bX_n$. Assuming we can fix~$t$ allows us to make a few more simplifications. Since
\[
\avgCovCond{t}(\bX) = \avg_{\substack{U \subseteq [n] \\ |U| = t+2}} \SET{\avgCovCond{t}\Paren{(\bX_k)_{k \in U}}},
\]
it follows that proving the conjecture in the $n = t+2$ case suffices to prove it for general $n \geq t+2$. And when $n = t+2$, the experiment reduces to the following: we choose a random pair of leaves $u$ and $v$, condition on \emph{all} other leaf random variables~$\bX_w$, and then measure the (conditional) covariance of $\bX_u, \bX_v$. Thus we are led to the following conjecture (in which we write~$t$ instead of~$t+2$ for notational simplicity):
\begin{named}{Conjecture B}
Let $\calT$ be an information flow tree with leaf random variables $\bX_1, \dots, \bX_t$ (where $t \geq 2$). Then
\begin{equation} \label{eqn:conj-b}
\avg_{\substack{\text{distinct pairs} \\ u, v \in [t]}} \E\Brak{\Abs{\Cov[\bX_u,\bX_v]} \mid (\bX_j)_{j \in [t] \setminus \{u,v\}}} \leq O(1/t).
\end{equation}
\end{named}
\noindent We emphasize that Conjecture~B implies Conjecture~A in the case that $\bX_1, \dots, \bX_n$ are the leaves of an information flow tree, and is in fact slightly stronger in that the bound is $O(1/t)$ for all~$t$, independently of $\bX_1, \dots, \bX_n$.
In Sections~\ref{sec:prelims}--\ref{sec:star} we will give some results in the direction of proving Conjecture~B; however, we are still unable to prove the conjecture. The main theorem that we \emph{do} prove in this work is that Conjecture~B holds for \emph{caterpillars}.
\begin{named}{Theorem C}
Conjecture~B holds when the underlying tree of~$\calT$ is a caterpillar.
\end{named}
Here we are using the following standard graph-theoretic definition:
\begin{definition}
A \emph{caterpillar graph} is a tree in which every vertex has distance at most~$1$ from a central \emph{spine} (path). Equivalently, a caterpillar is a graph of pathwidth~$1$. An example of a caterpillar tree is depicted in Figure~\ref{fig:exampleCat}.
\end{definition}
\begin{figure}
\begin{center}
\begin{tikzpicture}[shorten >=1pt, auto, node distance=0.5cm]
\tikzset{every state/.style={minimum size=0pt}}
\node[state] (S1) {};
\node[state] (S2) [right =1cm of S1] {};
\node[state] (S3) [right =1cm of S2] {};
\node[state] (S4) [right =1.5cm of S3] {};
\node[state] (S5) [right =1cm of S4] {};
\node[state] (S6) [right =1.2cm of S5] {};
\node[state] (S7) [right=1cm of S6] {};
\node[state] (L11) [below left =1.1cm and 0.15cm of S1] {};
\node[state] (L12) [below right = 1.1cm and 0.15cm of S1] {};
\node[state] (L21) [below =1cm of S2] {};
\node[state] (L31) [below left =1.1cm and 0.4cm of S3] {};
\node[state] (L32) [below =1cm of S3] {};
\node[state] (L33) [below right = 1.1cm and 0.4cm of S3] {};
\node[state] (L41) [below left =1.1cm and 0.15cm of S4] {};
\node[state] (L42) [below right = 1.1cm and 0.15cm of S4] {};
\node[state] (L51) [below =1cm of S5] {};
\node[state] (L61) [below left =1.1cm and 0.5cm of S6] {};
\node[state] (L63) [below right = 1.1cm and 0cm of S6] {};
\node[state] (L62) [below left =1.1cm and 0cm of S6] {};
\node[state] (L64) [below right = 1.1cm and 0.5cm of S6] {};
\node[state] (L71) [below = 1cm of S7] {};
\path[-] (S1) edge [above] (S2)
(S2) edge [above] (S3)
(S3) edge [above] (S4)
(S4) edge [above] (S5)
(S5) edge [above] (S6)
(S6) edge [above] (S7)
(S1) edge [left] (L11)
(S1) edge [right] (L12)
(S2) edge (L21)
(S3) edge [left] (L31)
(S3) edge [right] (L32)
(S3) edge [right] (L33)
(S4) edge [above] (L41)
(S4) edge [above] (L42)
(S5) edge [above] (L51)
(S6) edge [above] (L61)
(S6) edge [above] (L62)
(S6) edge [above] (L63)
(S6) edge [above] (L64)
(S7) edge [above] (L71)
;
\end{tikzpicture}
\end{center}
\caption{An example of a caterpillar tree}
\label{fig:exampleCat}
\end{figure}
We remark that caterpillar graphs arise quite naturally in some of the contexts where information flow trees are studied; for example, in Hidden Markov Models, where the leaf random variables are observed and the spine random variables are hidden.
\subsection{Motivation and previous work} \label{sec:motivation}
Besides being a natural problem in information theory, Conjecture~A is motivated by certain problems in the algorithmic theory of convex relaxation hierarchies. We give here a very high-level sketch of the connection, as developed in the following works: \cite{Guruswami-Sinop,Barak-Raghavendra-Steurer,Raghavendra-Tan,Barak-Brandao-Harrow-Kelner-Steurer-Zhou,Austrin-Benabbas-Georgiou,Yoshida-Zhou,Barak-Kelner-Steurer,Rothvoss}.
Consider a Boolean optimization problem such as Max-Cut on a graph $G = (V,E)$, where we write $V = [n]$; the task is to find a $\pm 1$ assignment $x_1, \dots, x_n$ to the vertices so as to minimize $\avg_{(u,v) \in E} x_u x_v$. This is a non-convex (and $\mathsf{NP}$-hard) optimization problem. A natural algorithmic approach is to relax it to an (efficiently-solvable) convex optimization problem and then argue that the relaxed solution can be ``rounded'' to a genuine $\pm 1$ assignment with approximately the same value. Two important families of such relaxations are the Sherali--Adams LP relaxation and the SOS (Lasserre--Parrilo) SDP relaxation. The families have a tunable ``degree'' parameter~$t \in \Z^+$; as~$t$ increases, the convex relaxations become tighter and tighter but the running time for solving them increases like $n^{O(t)}$.
Roughly speaking, solving these relaxations yields an optimal solution to the original Max-Cut problem, except that instead of getting a $\pm 1$ assignment $x_1, \dots, x_n$, one gets a collection of ``fake degree-$t$ $\pm 1$-valued random variables'' $\bX_1, \dots, \bX_n$. In fact, these are not random variables at all; they are merely a list of numbers $\rho_S$ for all $S \subseteq [n]$ with $|S| \leq t$. However, there is a promise that for each such~$S$ there exists a collection of true $\pm 1$-valued random variables $(\bY_v)_{v \in S}$ with $\E[\prod_{v \in S} \bY_v] = \rho_S$. Thus, being very imprecise, an algorithm can act as though it has true random variables $\bX_1, \dots, \bX_n$, as long as it only ever uses them in groups of at most~$t$.
The objective function minimized by the convex relaxation is $\alpha \coloneqq \avg_{(u,v) \in E} \rho_{\{u,v\}}$. An algorithm would now like to take the fake random variables and produce a genuine $\pm 1$ assignment $x_1, \dots, x_n$ which has, say, $\avg_{(u,v) \in E} x_ux_v \leq \alpha + \eps$. A simple idea for doing this is to draw~$x_j$ according to $\bX_j$, independently for each $j \in [n]$. (This counts as using the fake random variables in groups of size~$1$ and is thus legal since $t \geq 1$.) However in doing this we will get $\E[x_u x_v] = \E[\bX_u]\E[\bX_v] = \rho_{\{u\}} \rho_{\{v\}}$, which need not bear any relationship to the quantities $\rho_{\{u,v\}}$ entering into the definition of~$\alpha$. What would be desirable is if we had $|\rho_{\{u,v\}} - \rho_{\{u\}}\rho_{\{v\}}| \leq \eps$ for all pairs~$(u,v)$, or at least on average over all pairs. In other words, we wish for the ``average covariance'' (as defined at the beginning of Section~\ref{sec:intro}) of the fake random variables $\bX_1, \dots, \bX_n$ to be smaller than some~$\eps$. Of course it need not be, but Conjecture~A implies that it can be made so, provided we are allowed to condition on some $t \leq O(1/\eps)$ randomly chosen $\bX_j$'s. In the end, using the Sherali--Adams or SOS relaxations with degree parameter~$t$ would allow us to do this in time~$n^{O(1/\eps)}$.
Thus we see that the quantitative dependence in Conjecture~A directly relates to the running time of algorithms based on ``correlation rounding'' of Sherali--Adams/SOS hierarchies. An example consequence of Conjecture~A (see~\cite{Yoshida-Zhou}) would be that the Sherali--Adams LP hierarchy provides an arbitrarily good multiplicative approximation to Max-Cut on $n$-vertex, $\eps n^2$-edge graphs in time~$n^{O(1/\eps)}$. This gives a very nice tradeoff between density and running time, one that works almost all the way down to the ``sparse'' regime (i.e., $O(n)$ edges). On the other hand, using the weaker Theorem~\ref{thm:r-t}, the running time becomes $n^{O(1/\eps^2)}$. This is only nontrivial when $\eps \gg n^{-1/2}$; i.e., for graphs with $\omega(n^{3/2})$ edges.
We end this section by commenting on the Raghavendra--Tan proof~\cite{Raghavendra-Tan} of Theorem~\ref{thm:r-t}. They study the analog $\avgInfoCond{t}(\bX)$ of $\avgCovCond{t}(\bX)$, in which $\abs{\Cov(\bX_u,\bX_v)}$ is replaced by the \emph{mutual information}, $I(\bX_u; \bX_v) \geq 0$. They deduce very simply from the definitions that for any $0 < T < n-1$,
\[
\sum_{t = 0}^{T-1} \avgInfoCond{t}(\bX) \leq 1.
\]
This means that there exists a $t < T$ such that $\avgInfoCond{t}(\bX) \leq 1/T$. The basic relationship $\abs{\Cov[\bX_u,\bX_v]} \leq \sqrt{2} \sqrt{I(\bX_u; \bX_v)}$ lets them complete the proof of Theorem~\ref{thm:r-t} with a bound of $t \leq 2/\eps^2$. Thus we see that proving Conjecture~A requires surmounting a familiar difficulty: the quadratic relationship between $L_1$-distance and KL-distance.
Finally, while it's tempting to think that $\avgCovCond{t}(\bX)$ and $\avgInfoCond{t}(\bX)$ should be decreasing functions of~$t$ (thereby allowing us to fix~$t$ independently of $\bX_1, \dots, \bX_n$ in Theorem~\ref{thm:r-t} and Conjecture~A), this is not the case.
\begin{proposition} \label{prop:annoying-counterexample}
For any fixed integer $T \in \Z^+$, there exist random variables $\bX = (\bX_1, \dots, \bX_{n})$, $n = T+2$, such that $\avgCovCond{t}(\bX) = 0$ for $t < T$ but $\avgCovCond{T}(\bX) = 1$ (and similarly for $\avgInfoCond{t}$).
\end{proposition}
\begin{proof}
We simply define $\bX_1, \dots, \bX_{T+2}$ to be uniformly random conditioned on $\bX_1 \bX_2 \cdots \bX_{T+2} = 1$. Then consider any $J \subset [T+2]$ and any outcome of $(\bX_j)_{j \in J}$. If $\abs{J} < T$ then the remaining $\bX_k$'s are (conditionally) pairwise independent. On the other hand, if $\abs{J} = T$ then the remaining pair $(\bX_u,\bX_v)$ is either uniform on $\{(+1,+1), (-1,-1)\}$ or uniform on $\{(+1,-1), (-1,+1)\}$; in either case, the (conditional) covariance is~$1$.
\end{proof}
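Proposition \ref{prop:annoying-counterexample} can be verified exhaustively for small~$T$ by enumerating the parity-constrained distribution; the check below takes $T = 3$ and conditions on $T-1$ and on all $T$ of the remaining coordinates.

```python
from itertools import combinations, product
from math import prod
from statistics import mean

T = 3
n = T + 2
# Uniform distribution on +/-1 strings with product +1
support = [x for x in product((-1, 1), repeat=n) if prod(x) == 1]

def cond_cov(u, v, J, outcome):
    """Cov[X_u, X_v] given (X_j)_{j in J} = outcome, under the uniform
    distribution on `support`."""
    sub = [x for x in support if all(x[j] == o for j, o in zip(J, outcome))]
    eu = mean(x[u] for x in sub)
    ev = mean(x[v] for x in sub)
    euv = mean(x[u] * x[v] for x in sub)
    return euv - eu * ev

for u, v in combinations(range(n), 2):
    rest = [j for j in range(n) if j not in (u, v)]
    # Condition on fewer than T of the others: exact pairwise independence.
    for J in combinations(rest, T - 1):
        for outcome in product((-1, 1), repeat=T - 1):
            assert abs(cond_cov(u, v, list(J), outcome)) < 1e-12
    # Condition on all T others: the covariance has absolute value 1.
    for outcome in product((-1, 1), repeat=T):
        assert abs(abs(cond_cov(u, v, rest, outcome)) - 1.0) < 1e-12
print("parity counterexample verified for T =", T)
```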
\subsection{Organization of this paper}
In Section~\ref{sec:prelims}, we describe some basic transformations on information flow trees that preserve the joint distribution on the leaf random variables. These allow us to make certain convenient assumptions about the structure of our information flow trees in subsequent sections.
Section~\ref{sec:exp-cov-formula} contains an explicit formula for the covariance of two leaves in an information flow tree conditioned on some outcome of the other leaves.
In Section~\ref{sec:monotonicity} we demonstrate that this expression is nondecreasing as a function of edge correlations along the spine. This essentially lets us reduce a caterpillar tree to an inhomogeneous star; we analyze the latter in Section~\ref{sec:star}. Finally, the proof of Theorem~C is given in Section~\ref{sec:final}.
\section{Information flow tree equivalences} \label{sec:prelims}
Given an information flow tree, there are several ways it can be modified so that the joint distribution of its leaf random variables does not change. Since Conjecture~B and Theorem~C are only concerned with the leaf random variables, and not the ``internal'' random variables, we are free to make such modifications. We use the following definition:
\begin{definition}
Let $\calT$ and $\calT'$ be information flow trees, generating random variables $(\bX_v)_{v \in V}$ and $(\bX'_{v'})_{v' \in V'}$. Further, assume that~$V$ and~$V'$ have the same set of leaves,~$L$ (though $\calT$ and $\calT'$ may otherwise have different tree topologies and correlation functions). We say $\calT$ and $\calT'$ are \emph{equivalent} if $(\bX_\ell)_{\ell \in L}$ and $(\bX'_{\ell})_{\ell \in L}$ have the same joint distribution.
\end{definition}
\noindent In this section we describe some transformations on general information flow trees that put them into simpler, equivalent forms.
The first two transformations allow us to assume without loss of generality that $\rho(e) \geq 0$ for all edges~$e$, except possibly for edges incident to leaves. (In fact, allowing correlations in $[-1,0)$ is not really an essential aspect of our model; the reader will not lose much by simply assuming $\rho \geq 0$ always.)
\begin{lemma} \label{lem:push-negations1}
Let $\calT = (V,E,\rho)$ be an information flow tree and let $w \in V$ be an internal vertex. Let $\calT' = (V,E,\rho')$ be the information flow tree that is the same as~$\calT$ except with $\rho'(e) = -\rho(e)$ for all edges~$e$ incident on~$w$. Then $\calT$ and $\calT'$ are equivalent.
\end{lemma}
\begin{proof}
Let $(\bX_v)_{v \in V}$ and $(\bR_e)_{e \in E}$ be the random variables generated by~$\calT$. Define:
\[
\bR'_e = \begin{cases}
-\bR_e & \text{if $e$ is incident to~$w$,} \\
\phantom{-}\bR_e & \text{otherwise;}
\end{cases}
\quad
\bX'_v = \begin{cases}
-\bX_v & \text{if $v = w$,} \\
\phantom{-}\bX_v & \text{otherwise.}
\end{cases}
\]
It's easy to see that $(\bR'_{e})_{e \in E}$ has the correct joint distribution for~$\calT'$, and hence that $(\bX'_v)_{v \in V}$ does as well. Since~$w$ is not a leaf, we have $\bX_\ell = \bX'_\ell$ for all leaves~$\ell$. Thus $\calT$ and $\calT'$ are equivalent.
\end{proof}
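The equivalence asserted by Lemma~\ref{lem:push-negations1} can be checked numerically by exact enumeration. The following Python sketch (our own illustration, not part of the formal development; all names and parameter values are ours) computes the exact joint leaf distribution of a small tree before and after negating every edge incident to an internal vertex:

```python
from itertools import product

def leaf_distribution(edges, rho, leaves):
    """Exact joint distribution of the leaf variables of an information
    flow tree.  `edges` lists (parent, child) pairs in top-down order,
    with the root appearing as the first parent; `rho` maps each edge to
    its correlation.  The root variable is uniform on {+1, -1} and each
    child equals parent * R_e, where Pr[R_e = r] = (1 + r*rho)/2."""
    root = edges[0][0]
    dist = {}
    for x_root in (+1, -1):
        for signs in product((+1, -1), repeat=len(edges)):
            x = {root: x_root}
            p = 0.5  # uniform root
            for (u, v), r in zip(edges, signs):
                x[v] = x[u] * r
                p *= 0.5 * (1 + r * rho[(u, v)])
            key = tuple(x[l] for l in leaves)
            dist[key] = dist.get(key, 0.0) + p
    return dist

# Internal vertices 0 (root) and 1; leaves 2, 3, 4.
edges = [(0, 1), (0, 2), (1, 3), (1, 4)]
rho = {(0, 1): 0.6, (0, 2): -0.3, (1, 3): 0.8, (1, 4): 0.5}
# Negate every edge incident to internal vertex 1, as in the lemma.
rho_flip = {e: (-r if 1 in e else r) for e, r in rho.items()}

d1 = leaf_distribution(edges, rho, leaves=[2, 3, 4])
d2 = leaf_distribution(edges, rho_flip, leaves=[2, 3, 4])
assert all(abs(d1[k] - d2[k]) < 1e-12 for k in d1)
```

The two leaf distributions agree exactly, as the lemma predicts.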
\begin{lemma} \label{lem:push-negations2}
For every information flow tree~$\calT = (V,E,\rho)$, there is an equivalent one~$\calT' = (V,E,\rho')$ in which $\rho'(e) \geq 0$ for all ``internal'' edges~$e$; i.e., for edges~$e$ not touching a leaf.
\end{lemma}
\begin{proof}
Given~$\calT$, choose a root vertex $r \in V$ arbitrarily. The idea is that in a top-down fashion starting from~$r$, we ``fix'' all negative internal edges. Specifically, we apply the following procedure:\\
\hspace*{1in} for $j = 1, 2, 3, \dots$,
\hspace*{1in} \quad for each non-leaf vertex~$w$ at distance~$j$ from~$r$,
\hspace*{1in} \quad \quad if the parent edge~$e$ of~$w$ has $\rho(e) < 0$,
\hspace*{1in} \quad \quad \quad apply the transformation from Lemma~\ref{lem:push-negations1} to~$w$.\\
It is easy to see that this procedure terminates with an equivalent information flow tree~$\calT'$ in which all internal edges have a nonnegative correlation value.
\end{proof}
Correctness of the next two transformations follows from the fact that if $\bR_1$, $\bR_2$ are independent $\{\pm 1\}$-valued random variables with expectations $\rho_1, \rho_2$, then $\bR_1\bR_2$ is a $\{\pm 1\}$-valued random variable with expectation $\rho_1\rho_2$:
\begin{lemma} \label{lem:merge}
Suppose $\calT = (V,E,\rho)$ is an information flow tree, $v \in V$ has degree~$2$, and $(e_1,e_2)$ is the length-two path of edges through~$v$. Modify~$\calT$ by deleting $v$ and replacing $(e_1,e_2)$ with a single edge~$e$ satisfying $\rho(e) = \rho(e_1)\rho(e_2)$. Then the resulting information flow tree is equivalent to the original~$\calT$.
\end{lemma}
\begin{lemma} \label{lem:split}
Let $\calT = (V,E,\rho)$ be an information flow tree and let $e \in E$. Modify $\calT$ by splitting~$e$ into a length-two path $(e_1,e_2)$ with $\rho(e_1)\rho(e_2) = \rho(e)$. Then the resulting information flow tree is equivalent to the original~$\calT$.
\end{lemma}
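The fact underlying Lemmas~\ref{lem:merge} and~\ref{lem:split} --- that $\E[\bR_1\bR_2] = \rho_1\rho_2$ for independent $\{\pm 1\}$-valued $\bR_1, \bR_2$ --- can be confirmed by summing over the four outcomes. A minimal Python sketch (ours, purely illustrative):

```python
from itertools import product

def expectation_of_product(rho1, rho2):
    """E[R1 * R2] for independent {+1, -1}-valued random variables with
    means rho1, rho2, computed by summing over the four outcomes, each
    with probability (1 + r1*rho1)/2 * (1 + r2*rho2)/2."""
    return sum(
        r1 * r2 * 0.5 * (1 + r1 * rho1) * 0.5 * (1 + r2 * rho2)
        for r1, r2 in product((+1, -1), repeat=2)
    )

assert abs(expectation_of_product(0.6, -0.5) - (0.6 * -0.5)) < 1e-12
```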
The next two transformations are similar and use the fact that along a path in which all correlations are~$1$, the vertex random variables are always equal.
\begin{lemma} \label{lem:glue}
Let $\calT = (V,E,\rho)$ be an information flow tree and let $P = (V',E')$ be a connected subgraph of~$(V,E)$ in which $\rho(e) = 1$ for all $e \in E'$. (For us, $P$ will typically be a path.) Assume~$V'$ does not contain leaves of~$V$. Then the information flow tree obtained from~$\calT$ by contracting~$V'$ into a single vertex is equivalent to~$\calT$.
\end{lemma}
\begin{lemma} \label{lem:vertex-split}
Let $\calT = (V,E,\rho)$ be an information flow tree and let $v \in V$. For any $m \in \Z^+$, suppose we delete~$v$ and replace it with a path $(w_1, \dots, w_m)$ whose edges are assigned correlation~$1$ by~$\rho$. For each edge $e = (u,v)$ formerly attached to~$v$, we replace it with $e = (u,w_i)$ for an arbitrarily chosen $i \in [m]$. Then the resulting information flow tree is equivalent to the original~$\calT$. An example of such a transformation is depicted in Figure~\ref{fig:splitvertex}.
\end{lemma}
\newcommand{\firstcolor}{ForestGreen}
\newcommand{\secondcolor}{OrangeRed}
\newcommand{\thirdcolor}{Cerulean}
\begin{figure}
\begin{center}
\begin{tikzpicture}[ shorten >=1pt,auto,node distance=2cm]
{\color{\firstcolor}
\node[state] (S1) {$v_1$};
}
{\color{\secondcolor}
\node[state] (S2) [right =2.5cm of S1] {$v_2$};
}
{\color{\thirdcolor}
\node[state] (S3) [right =3cm of S2] {$v_3$};
}
{\color{\firstcolor}
\node[state] (L11) [below left =1.5cm and 0.3cm of S1] {$\ell_{1}$};
\node[state] (L12) [below right = 1.5cm and 0.3cm of S1] {$\ell_{2}$};
}
{\color{\secondcolor}
\node[state] (L21) [below =1.2cm of S2] {$\ell_{3}$};
}
{\color{\thirdcolor}
\node[state] (L31) [below left =1.5cm and 0.8cm of S3] {$\ell_{4}$};
\node[state] (L32) [below =1.2cm of S3] {$\ell_{5}$};
\node[state] (L33) [below right = 1.5cm and 1.5cm of S3] {$\ell_{6}$};
}
\path[-] (S1) edge [above] node{$\rho_1$} (S2)
(S2) edge [above] node{$\rho_2$} (S3)
(S1) edge [left] node{$\rho_3$} (L11)
(S1) edge [right] node{$\rho_4$} (L12)
(S2) edge node{$\rho_5$} (L21)
(S3) edge [left] node{$\rho_6$} (L31)
(S3) edge [right] node{$\rho_7$} (L32)
(S3) edge [right] node{$\rho_8$} (L33)
;
\end{tikzpicture}
{\Large$$\Downarrow$$}
\begin{tikzpicture}[shorten >=1pt,auto,node distance=2cm]
{\color{\firstcolor}
\node[state] (V1) {$w_{1_1}$};
\node[state] (V2) [right of =V1] {$w_{1_2}$};
}
{\color{\secondcolor}
\node[state] (V3) [right of =V2] {$w_{2_1}$};
}
{\color{\thirdcolor}
\node[state] (V4) [right of =V3] {$w_{3_1}$};
\node[state] (V5) [right of =V4] {$w_{3_2}$};
\node[state] (V6) [right of =V5] {$w_{3_3}$};
}
{\color{\firstcolor}
\node[state] (L1) [below of = V1] {$\ell_{1}$};
\node[state] (L2) [below of = V2] {$\ell_{2}$};
}
{\color{\secondcolor}
\node[state] (L3) [below of = V3] {$\ell_{3}$};
}
{\color{\thirdcolor}
\node[state] (L4) [below of = V4] {$\ell_{4}$};
\node[state] (L5) [below of = V5] {$\ell_{5}$};
\node[state] (L6) [below of = V6] {$\ell_{6}$};
}
\path[-] (V1) edge [above] node{$1$} (V2)
(V2) edge [above] node{$ \rho_1$} (V3)
(V3) edge [above] node{$\rho_2$} (V4)
(V4) edge [above] node{$1$} (V5)
(V5) edge [above] node{$1$} (V6)
(V1) edge node{$\rho_3$} (L1)
(V2) edge node{$\rho_4$ } (L2)
(V3) edge node{$\rho_5$ } (L3)
(V4) edge node{$\rho_6$ } (L4)
(V5) edge node{$\rho_7$ } (L5)
(V6) edge node{$\rho_8$ } (L6)
;
\end{tikzpicture}\\
\end{center}
\caption{Applying the transformation of Lemma~\ref{lem:vertex-split} to $v_1$ and $v_3$}
\label{fig:splitvertex}
\end{figure}
The next transformation allows us to convert to trees of maximum degree~$3$. Indeed, we can say slightly more.
\begin{lemma} \label{lem:degree-3-ize}
For each information flow tree $\calT$, there is an equivalent one $\calT'$ in which the underlying graph has maximum degree~$3$. Indeed, we can take $\calT'$ to be a rooted binary tree in which each internal node has exactly~$2$ children.
\end{lemma}
\begin{proof}
Suppose $v$ is a vertex in $\calT$ of degree $d > 3$. We apply Lemma~\ref{lem:vertex-split} to~$v$; by taking $m = d$ when doing so, we have room to attach each of $v$'s neighbors to a different~$w_i$. As a result, each $w_i$ will have degree at most~$3$. Repeating this for each vertex of degree exceeding~$3$, we get an equivalent tree~$\calT'$ of degree at most~$3$. By applying Lemma~\ref{lem:split} to an arbitrary edge~$e$, we can root the tree at the newly created vertex. Finally, any vertices of degree~$2$ (other than the root) that remain can be eliminated using Lemma~\ref{lem:merge}.
\end{proof}
Finally, we will use the last lemma to simplify general caterpillar trees.
\begin{definition} \label{def:simple-caterpillar}
We say a caterpillar is \emph{simple} if the spine has at least two vertices, and each vertex of the spine is attached to exactly one leaf.
\end{definition}
\begin{lemma} \label{lem:simple-caterpillar}
For each information flow caterpillar~$\calT$, there is an equivalent information flow \emph{simple} caterpillar~$\calT'$.
\end{lemma}
\begin{proof}
We apply Lemma~\ref{lem:degree-3-ize} to~$\calT$, taking care with one step: When replacing a high-degree spine vertex~$v$, we insert the new path $(w_1, \dots, w_m)$ as part of the spine, with $v$'s spine neighbor(s) attached appropriately at $w_1$ or $w_m$. Finally, the resulting binary tree will not quite be a simple caterpillar because the spine node furthest from the root will have two leaf children. To fix this we can simply take either of these two leaf edges and split it using Lemma~\ref{lem:split}, creating one more spine vertex.
\end{proof}
\begin{remark}
It is also possible to convert any information flow tree~$\calT$ into an ``essentially'' equivalent one $\calT' = (V',E',\rho')$ which is \emph{homogeneous} --- meaning $\rho'$ is a constant --- and which has maximum degree~$3$. Since we won't use this, we merely sketch the conversion. Given~$\calT$, we can assume it has maximum degree~$3$ using the first part of Lemma~\ref{lem:degree-3-ize}. Next, fix $\rho' = -(1-\delta)$ for some very small $\delta > 0$. Now for each edge $e \in E$, replace it with a path of length $k \in \Z^+$ so that $(\rho')^k$ is as close as possible to~$\rho(e)$. Since we can make $\delta$ arbitrarily small, we can get all approximations $(\rho')^k \approx \rho(e)$ simultaneously as close as desired, yielding an ``essentially'' equivalent tree~$\calT'$. Note that we can't quite ensure all vertices of $\calT'$ have exactly two children: merging the degree-$2$ vertices of~$\calT'$ would spoil its homogeneity property.
\end{remark}
\section{A formula for conditional covariance on general trees} \label{sec:exp-cov-formula}
Suppose $\bX_{v_1}, \bX_{v_m}$ are two vertex random variables in an information flow tree. Prior to any conditioning, it's easy to see that their covariance is equal to the product of $\rho(e)$ along the edges~$e$ joining~$v_1$ and~$v_m$. In this section we generalize this to a formula for their \emph{expected} covariance when conditioned on any event that is comprised of several conditionally independent events.
\newcommand{\Lsub}{L}
\newcommand{\Xbar}{\overline{\bX}}
\newcommand{\bits}{\{\pm 1\}}
\begin{theorem} \label{thm:covariance-induction}
Let $\calT = (V,E,\rho)$ be an information flow tree, with associated vertex random variables $(\bX_v)_{v \in V}$. Fix any path~$P = (v_1, \dots, v_m)$ of vertices in~$\calT$, where $m \in \Z^+$. We think of~$P$ as partitioning~$\calT$ into a sequence of subtrees~$\calT_i$, with $\calT_i$ rooted at~$v_i$. For notational simplicity we write
\[
\bX_{i} = \bX_{v_i}, \qquad \Xbar = (\bX_1, \dots, \bX_m), \qquad \rho_i = \rho(v_{i},v_{i+1}).
\]
Let~$\Lsub_i$ be any event depending only on the random variable outcomes in~$\calT_i$. Write $\Lsub = \Lsub_1 \wedge \Lsub_2 \wedge \dots \wedge \Lsub_m$, and assume~$\Lsub$ has nonzero probability. Then
\begin{align}
\Cov[\bX_{1}, \bX_{m} \mid \Lsub]
&= \frac{\prod_{i = 1}^{m-1}\rho_i \prod_{i = 1}^m\Pr[L_i \mid \bX_i = +1]\Pr[L_i \mid \bX_i = -1]}{\Pr[L]^2}. \label{eqn:cov-ind1}
\end{align}
\end{theorem}
\begin{proof}
Introducing the notation
\begin{equation} \label{eqn:lambda-def}
\lambda_i^{\pm} = \Pr[\Lsub_i \mid \bX_i = \pm 1],
\end{equation}
we want to show that
\begin{equation} \label{eqn:cov-ind}
\Pr[\Lsub]^2 \Cov[\bX_1, \bX_m \mid \Lsub] = \prod_{i = 1}^{m-1} \rho_i \prod_{i = 1}^m \lambda_i^+ \lambda_i^-.
\end{equation}
Recall that if $(\bY, \bZ)$ is a pair of random variables, $\Cov[\bY,\bZ] = \frac12 \E[(\bY - \bY')(\bZ - \bZ')]$, where $(\bY',\bZ')$ denotes an independent copy of $(\bY,\bZ)$. Substituting this into~\eqref{eqn:cov-ind}, the identity we want to prove becomes
\begin{align}
\prod_{i = 1}^{m-1} \rho_{i} \prod_{i = 1}^m \lambda_i^+ \lambda_i^-
& = \Pr[\Lsub]^2 \cdot \tfrac12 \E[(\bX_1 - \bX_1')(\bX_m- \bX_m') \mid \Lsub] \nonumber\\
& = \Pr[\Lsub]^2 \cdot \sum_{x, x' \in \bits^m} \Pr[\Xbar = x \mid \Lsub] \Pr[\Xbar' = x' \mid \Lsub] \cdot \tfrac12(x_1-x_1')(x_m-x_m')
\nonumber\\
& = \sum_{x, x' \in \bits^m} \Pr[\Xbar = x , \Lsub] \Pr[\Xbar' = x' , \Lsub] \cdot \tfrac12(x_1-x_1')(x_m-x_m').
\label{eqn:induct-me}
\end{align}
We will prove~\eqref{eqn:induct-me} by induction on~$m$. The base case, $m = 1$, is
\begin{equation} \label{eqn:cov-base}
\lambda_1^+ \lambda_1^- = \sum_{x,x' \in \bits} \Pr[\bX_1 = x , \Lsub_1] \Pr[\bX_1' = x' , \Lsub_1] \cdot \tfrac12(x - x')^2.
\end{equation}
To verify this, note that when $x = x'$ the summand in~\eqref{eqn:cov-base} is zero and when $x \neq x'$ the summand in~\eqref{eqn:cov-base} is ${2\Pr[\bX_1 = +1 , \Lsub_1] \Pr[\bX_1 = -1 , \Lsub_1]}$. Thus the whole sum in~\eqref{eqn:cov-base} is indeed
\begin{multline*}
4\Pr[\bX_1 = +1 , \Lsub_1] \Pr[\bX_1 = -1 , \Lsub_1] \\ = 4 (\Pr[\Lsub_1 \mid \bX_1 = +1] \Pr[\bX_1 = +1] )(\Pr[\Lsub_1 \mid \bX_1 = -1] \Pr[\bX_1 = -1]) = 4 \lambda_1^+ \cdot \tfrac12 \cdot \lambda_1^- \cdot \tfrac12 = \lambda_1^+ \lambda_1^-.
\end{multline*}
We now assume~\eqref{eqn:induct-me} holds for a given~$m \in \Z^+$ and prove it for $m+1$. Thus we need to show
\begin{equation} \label{eqn:induction-business}
\begin{aligned}
\prod_{i = 1}^{m} \rho_{i} \prod_{i = 1}^{m+1} \lambda_i^+ \lambda_i^-
= \sum_{\substack{x\phantom{'} \in \bits^m\\ x' \in \bits^m}} \sum_{\substack{x_{m+1} \in \bits \\ x'_{m+1} \in \bits}}
&\Pr[\Xbar = x,\phantom{'} \bX_{m+1} = x_{m+1}, \Lsub, \Lsub_{m+1}] \cdot {}\\
&\Pr[\Xbar' = x', \bX'_{m+1} = x'_{m+1}, \Lsub, \Lsub_{m+1}] \cdot \tfrac12(x_1-x_1')(x_{m+1}-x_{m+1}').
\end{aligned}
\end{equation}
Because of the information flow tree structure we have
\begin{align*}
\Pr[\Xbar = x, \bX_{m+1} = x_{m+1}, \Lsub, \Lsub_{m+1}] &= \Pr[\Xbar = x, \Lsub] \Pr[\bX_{m+1} = x_{m+1}, \Lsub_{m+1} \mid \bX_m = x_m] \\
&= \Pr[\Xbar = x, \Lsub] \cdot (\tfrac12 + \tfrac12 \rho_m x_m x_{m+1}) \cdot \lambda_{m+1}^{x_{m+1}},
\end{align*}
and similarly for $x', x'_{m+1}$. Thus the right-hand side of~\eqref{eqn:induction-business} is
\begin{equation} \label{eqn:inner-mess}
\begin{aligned}
\sum_{\substack{x\phantom{'} \in \bits^m \\ x' \in \bits^m}} \Bigl(& \Pr[\Xbar = x, \Lsub] \Pr[\Xbar' = x', \Lsub]
\cdot \tfrac12(x_1 - x_1') \\
& \quad {} \cdot \sum_{\substack{x_{m+1} \in \bits \\ x'_{m+1} \in \bits}}
(\tfrac12 + \tfrac12 \rho_m x_m x_{m+1}) \cdot \lambda_{m+1}^{x_{m+1}}
\cdot (\tfrac12 + \tfrac12 \rho_m x_m' x_{m+1}') \cdot \lambda_{m+1}^{x_{m+1}'}
\cdot(x_{m+1} - x_{m+1}')\Bigr).
\end{aligned}
\end{equation}
Regarding the inner sum here (i.e., the second line in~\eqref{eqn:inner-mess}), there is no contribution when $x_{m+1} = x_{m+1}'$; by a little algebra, the contribution from the two $x_{m+1} \neq x_{m+1}'$ summands is
\[
\tfrac12\lambda_{m+1}^+\lambda_{m+1}^-
\left( (1+\rho_m x_m)(1-\rho_m x_m') - (1-\rho_m x_m)(1+\rho_m x_m') \right) = \lambda_{m+1}^+\lambda_{m+1}^- \rho_m (x_m - x'_m).
\]
Thus~\eqref{eqn:inner-mess} (equivalently, the right-hand side of~\eqref{eqn:induction-business}) is
\[
\rho_{m} \lambda_{m+1}^+\lambda_{m+1}^-
\sum_{x,x' \in \bits^m} \Pr[\Xbar = x, \Lsub] \Pr[\Xbar' = x', \Lsub] \cdot \tfrac12(x_1 - x_1')(x_{m} - x_{m}')
= \prod_{i = 1}^{m} \rho_{i} \prod_{i = 1}^{m+1} \lambda_i^+ \lambda_i^-,
\]
by the induction hypothesis~\eqref{eqn:induct-me}.
\end{proof}
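Theorem~\ref{thm:covariance-induction} can be verified by brute-force enumeration on small instances. In the following Python sketch (our own sanity check; the tree shape and parameter values are arbitrary choices), the path consists of a spine $\bX_1, \dots, \bX_m$ with one leaf $\bY_i$ per spine vertex, and we condition on the leaf outcome $\overline{\bY} = y$, so that $\Lsub_i = \{\bY_i = y_i\}$ and $\lambda_i^+\lambda_i^- = \tfrac14(1-\sigma_i^2)$, where $\sigma_i$ is the leaf-edge correlation. We compare the exact conditional covariance against the formula~\eqref{eqn:cov-ind1}:

```python
from itertools import product

def conditional_cov_and_prob(rhos, sigmas, y):
    """Spine X_1..X_m with spine correlations `rhos` (length m-1); each
    X_i has one leaf Y_i with correlation sigmas[i].  Conditions on the
    leaf outcome Y = y and returns (Cov[X_1, X_m | Y = y], Pr[Y = y]),
    both computed by exact enumeration over the spine values."""
    m = len(sigmas)
    joint, total = {}, 0.0
    for x in product((+1, -1), repeat=m):
        p = 0.5  # X_1 is uniform
        for i in range(m - 1):
            p *= 0.5 * (1 + rhos[i] * x[i] * x[i + 1])  # Pr[X_{i+1} | X_i]
        for i in range(m):
            p *= 0.5 * (1 + sigmas[i] * x[i] * y[i])    # Pr[Y_i = y_i | X_i]
        key = (x[0], x[-1])
        joint[key] = joint.get(key, 0.0) + p
        total += p
    e1 = sum(k[0] * v for k, v in joint.items()) / total
    em = sum(k[1] * v for k, v in joint.items()) / total
    e1m = sum(k[0] * k[1] * v for k, v in joint.items()) / total
    return e1m - e1 * em, total

rhos, sigmas, y = [0.7, 0.4], [0.5, -0.3, 0.6], (+1, -1, +1)
cov, pL = conditional_cov_and_prob(rhos, sigmas, y)
# Right-hand side of the theorem: (prod rho_i)(prod lambda_i^+ lambda_i^-)
# divided by Pr[L]^2, with lambda_i^+ lambda_i^- = (1 - sigma_i^2)/4.
formula = rhos[0] * rhos[1]
for s in sigmas:
    formula *= 0.25 * (1 - s * s)
formula /= pL ** 2
assert abs(cov - formula) < 1e-12
```

The two sides agree to machine precision, for any choice of the parameters above.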
We present an equivalent way to write~\eqref{eqn:cov-ind1} in the following corollary.
\begin{corollary}
Suppose we are in the setting of Theorem~\ref{thm:covariance-induction}. Then for any $x \in \{\pm 1 \}^m$ and its bitwise negation $-x$,
\begin{align*}
\Cov[\bX_{1}, \bX_{m} \mid \Lsub]
&= \prod_{i = 1}^{m-1} \rho_i \cdot \frac{\Pr[\Xbar = x \mid \Lsub]}{\Pr[\Xbar = x]} \cdot \frac{\Pr[\Xbar = -x \mid \Lsub]}{\Pr[\Xbar = -x]}.
\end{align*}
\end{corollary}
\begin{proof}
Beginning with~\eqref{eqn:cov-ind1}, we have
\begin{align*}
\Cov[\bX_{1}, \bX_{m} \mid \Lsub]
&= \prod_{i = 1}^{m-1}\rho_i \cdot \frac{\prod_{i = 1}^m\Pr[L_i \mid \bX_i = +1]}{\Pr[L]}\cdot\frac{\prod_{i = 1}^m\Pr[L_i \mid \bX_i = -1]}{\Pr[L]}\\
&= \prod_{i = 1}^{m-1}\rho_i \cdot \frac{\prod_{i = 1}^m\Pr[L_i \mid \bX_i = x_i]}{\Pr[L]}\cdot\frac{\prod_{i = 1}^m\Pr[L_i \mid \bX_i = -x_i]}{\Pr[L]}.
\end{align*}
By virtue of the information flow tree structure we have $\Pr[L_i \mid \Xbar = x] = \Pr[L_i \mid \bX_i = x_i]$. Thus the above equals
\begin{align*}
\prod_{i = 1}^{m-1}\rho_i \cdot \frac{\prod_{i = 1}^m\Pr[L_i \mid \Xbar = x]}{\Pr[L]}\cdot\frac{\prod_{i = 1}^m\Pr[L_i \mid \Xbar = -x]}{\Pr[L]}
= \prod_{i = 1}^{m-1}\rho_i \cdot \frac{\Pr[L \mid \Xbar = x]}{\Pr[L]}\cdot \frac{\Pr[L \mid \Xbar = -x]}{\Pr[L]}.
\end{align*}
The proof is now completed by applying Bayes' theorem.
\end{proof}
\section{A monotonicity property} \label{sec:monotonicity}
Though the formula in Theorem~\ref{thm:covariance-induction} is quite precise, we will only use it in a rather ``soft'' way, to show a certain monotonicity property. Suppose we are in the setting of Theorem~\ref{thm:covariance-induction} and that~$\Lsub$ denotes a certain outcome for all the leaf random variables in the tree (these are the events of main interest for us). Assume for simplicity that the ``path correlations'' $\rho_1, \dots, \rho_{m-1}$ are all nonnegative. Then the formula from Theorem~\ref{thm:covariance-induction} implies that $\Cov[\bX_1, \bX_m \mid \Lsub]$ is also nonnegative, a fact that does not seem obvious a~priori. We'll in fact be interested in the \emph{expected value} of this conditional covariance, over all the outcomes~$\Lsub$.
The goal of this section is to show that this expected covariance can only increase if one of the ``path correlations''~$\rho_i$ is increased. Though this fact seems ``intuitive'', it's also not a~priori obvious; we've only been able to prove it with the aid of Theorem~\ref{thm:covariance-induction}. Note that this monotonicity property is not immediately obvious from the formula in Theorem~\ref{thm:covariance-induction}, since the expressions $\Pr[\Xbar = (\pm 1, \dots, \pm 1) \mid \Lsub]$ have an implicit dependence on the~$\rho_i$'s.
\begin{theorem} \label{thm:monotonic}
In the setting of Theorem~\ref{thm:covariance-induction}, assume that $\rho_1, \dots, \rho_{m-1} \in [0,1]$. Let $\overline{\bY} = (\bY_1, \dots, \bY_\ell)$ be the leaf random variables of~$\calT$. Then
\begin{equation} \label{eqn:should-increase}
\E\Brak{\Abs{\Cov[\bX_1,\bX_m]} \mid \overline{\bY}}
\end{equation}
is a nondecreasing function of each~$\rho_i$.
\end{theorem}
\begin{proof}
Let $y \in \bits^\ell$ be any potential outcome for the leaf random variables, and let $\Lsub_y$ denote the event that $\overline{\bY} = y$. Then it's easy to see that~$\Lsub_y$ has the factorizable form described in Theorem~\ref{thm:covariance-induction}. (A slight annoyance is that it's possible to have $\Pr[\Lsub_y] = 0$. However this can only happen if some pair $\bY_i, \bY_j$ is fully correlated; i.e., $\Cov[\bY_i, \bY_j] = \pm 1$. In this case, letting~$\bX_k$ denote one of the ancestors of $\bY_i, \bY_j$ on path~$P$, we have that~$\bX_k$ is fully determined by \emph{every} possible outcome~$y$. In turn, this means~$\bX_1$ and~$\bX_m$ are \emph{independent} conditioned on every possible outcome~$y$; i.e., the random variable $\Cov[\bX_1,\bX_m \mid \overline{\bY}]$ is identically~$0$. Then~\eqref{eqn:should-increase} is trivially a nondecreasing function of the~$\rho_i$'s. Thus we may henceforth assume that no pair~$\bY_i, \bY_j$ is fully correlated and hence that $\Pr[\Lsub_y] \neq 0$ for all $y \in \bits^\ell$, no matter what the~$\rho_i$'s are.)
Rewriting identity~\eqref{eqn:cov-ind}, Theorem~\ref{thm:covariance-induction} equivalently states that for any $y \in \bits^\ell$,
\begin{equation} \label{eqn:the-nonneg-summands}
\Pr[\Lsub_y] \cdot \Abs{\Cov[\bX_1, \bX_m \mid \Lsub_y]} = \frac{\prod_{i=1}^{m-1} \rho_i \prod_{i=1}^m \lambda_i^+ \lambda_i^-}{\Pr[\Lsub_y]}.
\end{equation}
(We are able to insert the absolute-value sign on the left because, as noted earlier, the right-hand side is evidently nonnegative.) By definition, our quantity of interest~\eqref{eqn:should-increase} is the sum of~\eqref{eqn:the-nonneg-summands} over all $y \in \bits^\ell$. We'll in fact show that for \emph{every} $y \in \bits^\ell$ and every $j \in [m-1]$, the quantity~\eqref{eqn:the-nonneg-summands} is a nondecreasing function of~$\rho_j$.
In the numerator of~\eqref{eqn:the-nonneg-summands} we have that $\prod_{i \neq j} \rho_i$ is a nonnegative constant independent of~$\rho_j$. The same is true of $\prod_{i=1}^m \lambda_i^+ \lambda_i^-$: by the definition~\eqref{eqn:lambda-def}, each~$\lambda_i^{\pm}$ represents a probability that depends on~$y$ but not on any of the~$\rho_i$'s. Thus it remains to show that
\begin{equation} \label{eqn:I-increase}
\frac{\rho_{j}}{\Pr[\Lsub_y]} = \frac{\rho_j}{\Pr[\overline{\bY} = y]}
\end{equation}
is a nondecreasing function of~$\rho_j$. Note that $\Pr[\overline{\bY} = y]$ implicitly depends on all of the~$\rho_i$'s; in fact, it's a \emph{linear} function of each of them. To see this, note that $\rho_j = \rho(v_j,v_{j+1})$ enters into the generation of $\calT$'s random variables only through the edge random variable~$\bR_{v_j,v_{j+1}}$; thus we can write
\[
\Pr[\overline{\bY} = y] = (\half + \half \rho_j) \Pr[\overline{\bY} = y \mid \bR_{v_j,v_{j+1}} = +1] + (\half - \half \rho_j) \Pr[\overline{\bY} = y \mid \bR_{v_j,v_{j+1}} = -1],
\]
where the two conditional probabilities on the right do not depend on~$\rho_j$. Thus we can express~\eqref{eqn:I-increase} as
\begin{equation} \label{eqn:finish-monotonicity}
\frac{\rho_j}{\Pr[\overline{\bY} = y]} = \frac{\rho_j}{b + c\rho_j}
\end{equation}
for some numbers $b, c$ not depending on~$\rho_j$. Now a function of this form, $\frac{\rho_j}{b+c\rho_j}$, is nondecreasing if and only if~$b \geq 0$; i.e., if and only if the denominator in~\eqref{eqn:finish-monotonicity} is nonnegative for $\rho_j = 0$. But indeed this quantity is nonnegative, being a probability.
\end{proof}
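As a numerical illustration of Theorem~\ref{thm:monotonic} (again our own check, not needed for the proof), the Python sketch below computes $\E\Brak{\Abs{\Cov[\bX_1,\bX_m]} \mid \overline{\bY}}$ exactly for a short spine with one leaf per spine vertex, and confirms that it is nondecreasing as one path correlation is raised from~$0$ to~$1$:

```python
from itertools import product

def expected_abs_cov(rhos, sigmas):
    """E over leaf outcomes Y of |Cov[X_1, X_m | Y]| for a spine
    X_1..X_m with spine correlations `rhos` (length m-1), each spine
    vertex carrying one leaf with correlation sigmas[i].  All quantities
    are computed by exact enumeration."""
    m = len(sigmas)
    total = 0.0
    for y in product((+1, -1), repeat=m):
        joint, pY = {}, 0.0
        for x in product((+1, -1), repeat=m):
            p = 0.5  # X_1 is uniform
            for i in range(m - 1):
                p *= 0.5 * (1 + rhos[i] * x[i] * x[i + 1])
            for i in range(m):
                p *= 0.5 * (1 + sigmas[i] * x[i] * y[i])
            key = (x[0], x[-1])
            joint[key] = joint.get(key, 0.0) + p
            pY += p
        e1 = sum(k[0] * v for k, v in joint.items()) / pY
        em = sum(k[1] * v for k, v in joint.items()) / pY
        e1m = sum(k[0] * k[1] * v for k, v in joint.items()) / pY
        total += pY * abs(e1m - e1 * em)
    return total

sigmas = [0.5, -0.3, 0.6]
vals = [expected_abs_cov([r, 0.4], sigmas) for r in (0.0, 0.25, 0.5, 0.75, 1.0)]
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(len(vals) - 1))
```

Note that the leaf correlations may be negative; the theorem only requires the path correlations to lie in $[0,1]$.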
We end this section by observing that although we have shown that~\eqref{eqn:should-increase} is a nondecreasing function of the ``path correlations''~$\rho_i$, we actually expect it to be a \emph{decreasing} function of $|\rho(e)|$ for all edges~$e$ \emph{not} on the path between~$\bX_{v_1}$ and~$\bX_{v_m}$. The intuition is that increasing one such~$|\rho(e)|$ gives more information about its ancestor random variable $\bX_{v_i}$ on the path~$P$. In turn, this should decrease the expected covariance between $\bX_{v_1}$ and $\bX_{v_m}$. As an example, if~$v_i$ had just a single edge $(v_i,\ell)$ hanging off it, and $\rho(v_i,\ell)$ were increased to~$1$, then observing the leaf random variable~$\bX_{\ell}$ would determine $\bX_{v_i}$ completely. Thus $\bX_{v_1}$ and $\bX_{v_m}$ would become independent (covariance-$0$) conditioned on any observed outcome for $\bX_\ell$.
\section{The inhomogeneous star} \label{sec:star}
To motivate the result in this section, let's recall Conjecture~B. Suppose we are given any information flow tree~$\calT$ and we would like to upper-bound the expected covariance of some \emph{particular} pair of leaves $\bY_u, \bY_v$. As we'll see, it's easy to reduce this to analyzing the expected covariance of the leaves' parents, call them $\bX_{v_1}, \bX_{v_m}$. Next, our monotonicity result Theorem~\ref{thm:monotonic} implies that this expected covariance can only increase if all edge-correlations along the path between $\bX_{v_1}, \bX_{v_m}$ were increased to~$1$. In this case, by Lemma~\ref{lem:glue} we could equivalently think of the entire path as being contracted into one internal random variable~$\bX_0$.
Suppose now that the original tree was a caterpillar---in fact, by Lemma~\ref{lem:simple-caterpillar} we can assume it was a simple caterpillar. After contracting the path, the collection~$\calL$ of leaves that were originally ``between''~$\bY_u$ and~$\bY_v$ now hang directly off of~$\bX_0$. The two parts of the caterpillar ``to the outside'' of $\bX_{v_1}$ and~$\bX_{v_m}$ also hang off of~$\bX_0$ as gangly caterpillar-subtrees --- but we plan on ignoring them. We only intend to analyze the ``inhomogeneous star'' formed by~$\bX_0$ and~$\calL$. The hope will be that if there is ``squared correlation'' along the edges to~$\calL$, then conditioning on them will typically leave~$\bX_0$ with very small variance; equivalently,~$\bX_{v_1}$ and~$\bX_{v_m}$ will have very small covariance.
The following lemma concerning the inhomogeneous star uses well-known ideas, but as we do not have a reference for the exact statement, we give a proof.
\begin{lemma} \label{lem:inhom-star}
Let $\calT$ be an information flow tree comprising a ``star center'' vertex with random variable~$\bX_0$, as well as~$m$ leaf vertices with random variables denoted $\bY_1, \dots, \bY_m$. We allow $\calT$ to contain additional vertices not mentioned. Write~$\rho_i \in [-1,1]$ for the correlation between~$\bX_0$ and~$\bY_i$, and write $\alpha = \sum_{i=1}^m \rho_i^2$. Then
\[
\E\BRAK{\Var[\bX_0] \mid \overline{\bY}} \leq 4\exp(-\alpha /2),
\]
where $\overline{\bY}$ denotes $(\bY_1, \dots, \bY_m)$.
\end{lemma}
\begin{proof}
Let us define the random variable
\[
\bS = \bS(\overline{\bY}) = \sgn(\rho_1 \bY_1 + \cdots + \rho_m \bY_m).
\]
(Take $\sgn(0) = +1$ for definiteness.) For each $y \in \bits^m$ let's write
\[
p(y) = \Pr[\bX_0 \neq \bS \mid \overline{\bY} = y].
\]
Once we condition on~$\overline{\bY} = y$, the random variable~$\bS$ becomes some fixed sign $s \in \bits$, and the random variable~$\bX_0$ takes on some conditioned distribution, call it~$\bZ$. Now since $\bZ$ is a $\{\pm 1\}$-valued random variable we have
\[
\Var[\bZ] = 4\Pr[\bZ = +1]\Pr[\bZ = -1] \leq 4 \Pr[\bZ \neq s],
\]
no matter what~$s$ is. Thus
\[
\Var[\bX_0 \mid \overline{\bY} = y] \leq 4p(y),
\]
and so
\[
\E\BRAK{\Var[\bX_0] \mid \overline{\bY}} \leq 4\E[p(\overline{\bY})] = 4\Pr[\bX_0 \neq \bS].
\]
It thus remains to show that
\[
\Pr[\bX_0 \neq \bS] \leq \Pr[\bX_0 (\rho_1 \bY_1 + \cdots + \rho_m \bY_m) \leq 0] \leq \exp(-\alpha/2).
\]
The first inequality holds because the event $\bX_0 \neq \bS$ entails $\bX_0 (\rho_1 \bY_1 + \cdots + \rho_m \bY_m) \leq 0$.
The two cases $\bX_0 = \pm 1$ are symmetric, so we may assume $\bX_0 = +1$. Then $\bY_1, \dots, \bY_m$ are independent $\{\pm 1\}$-valued random variables with $\E[\bY_i] = \rho_i$, and we wish to show that
\[
\Pr[\rho_1 \bY_1 + \cdots + \rho_m \bY_m \leq 0] \leq \exp(-\alpha/2).
\]
This follows from Hoeffding's inequality applied to the independent random variables $\rho_i \bY_i \in [-|\rho_i|, |\rho_i|]$: the sum $\rho_1 \bY_1 + \cdots + \rho_m \bY_m$ has mean $\sum_i \rho_i^2 = \alpha$, so the probability that it falls $\alpha$ below its mean is at most $\exp\bigl(-2\alpha^2 \big/ \textstyle\sum_i (2|\rho_i|)^2\bigr) = \exp(-\alpha/2)$.
\end{proof}
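Lemma~\ref{lem:inhom-star} can likewise be checked by exact enumeration. In the Python sketch below (an illustration with arbitrarily chosen correlations), we compute $\E\BRAK{\Var[\bX_0] \mid \overline{\bY}}$ for a small star, using the identity $\Var[\bZ] = 4\Pr[\bZ = +1]\Pr[\bZ = -1]$ for a $\{\pm 1\}$-valued random variable~$\bZ$, and compare it to the bound $4\exp(-\alpha/2)$:

```python
import math
from itertools import product

def star_expected_conditional_variance(rhos):
    """Exact E_Y[ Var[X_0 | Y] ] for a star: center X_0 uniform on
    {+1, -1}, leaves Y_i with Corr(X_0, Y_i) = rhos[i], conditionally
    independent given X_0."""
    m = len(rhos)
    total = 0.0
    for y in product((+1, -1), repeat=m):
        p_plus, p_minus = 0.5, 0.5  # Pr[X_0 = +/-1, Y = y]
        for i in range(m):
            p_plus *= 0.5 * (1 + rhos[i] * y[i])
            p_minus *= 0.5 * (1 - rhos[i] * y[i])
        pY = p_plus + p_minus
        q = p_plus / pY              # Pr[X_0 = +1 | Y = y]
        total += pY * 4 * q * (1 - q)  # conditional variance of X_0
    return total

rhos = [0.6, -0.4, 0.7, 0.5, 0.3]
alpha = sum(r * r for r in rhos)
assert star_expected_conditional_variance(rhos) <= 4 * math.exp(-alpha / 2) + 1e-12
```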
\section{Proof of Theorem C}\label{sec:final}
In this section we prove Theorem~C. Let $\calT = (V,E,\rho)$ be an information flow caterpillar with~$t \geq 2$ leaves. By Lemma~\ref{lem:simple-caterpillar} we may assume $\calT$ is a simple caterpillar. By Lemma~\ref{lem:push-negations2} we may assume that~$\rho$ has a nonnegative value on all spine edges of~$\calT$. We write $\bX_1, \dots, \bX_t$ for the vertex random variables along~$\calT$'s spine and $\bY_1, \dots, \bY_t$ for the leaf random variables, with $e_i$ denoting the edge between $\bX_i$ and $\bY_i$. We write $\bR_i = \bX_i\bY_i$ for the edge random variable that $\calT$ associates to $e_i$, and we write $\rho_i = \rho(e_i) = \E[\bR_i]$.
See Figure~\ref{fig:catTree} for a depiction of this.
\begin{figure}
\begin{center}
\begin{tikzpicture}[shorten >=1pt,auto,node distance=2cm]
\node[state] (X0) {$\bX_1$};
\node[state] (X1) [right of=X0] {$\bX_2$};
\node[state] (Y0) [below of=X0] {$\bY_1$};
\node[state] (Y1) [below of=X1] {$\bY_2$};
\node[state] (X2) [right of=X1] {$\bX_3$};
\node[state] (Y2) [below of=X2] {$\bY_3$};
\node[state] (Xn) [right =2cm of X2] {$\bX_t$};
\node[state] (Yn) [below of=Xn] {$\bY_t$};
\path[-] (X0) edge [right] node{$\bR_1$} (Y0)
(X1) edge node{$\bR_2$} (Y1)
(X1) edge [above] node{} (X0)
(Xn) edge [dashed, above] (X2)
(Xn) edge [right] node{$\bR_t$} (Yn)
(X1) edge node{} (X2)
(X2) edge node{$\bR_3$} (Y2)
;
\end{tikzpicture}\\
\end{center}
\caption{A simple information flow caterpillar}
\label{fig:catTree}
\end{figure}
Recall that we wish to show
\begin{equation} \label{eqn:conj-cat}
\avg_{\substack{\text{distinct pairs} \\ u, v \in [t]}} \E\Brak{\Abs{\Cov[\bY_u,\bY_v]} \mid (\bY_j)_{j \in [t] \setminus \{u,v\}}} \leq O(1/t).
\end{equation}
Let us suppose for some time that the pair $u, v \in [t]$ is fixed. For brevity we'll write $\bcalY = (\bY_j)_{j \in [t] \setminus \{u,v\}}$ for the leaf random variables other than $\bY_u, \bY_v$. Then
\begin{align}
\E\Brak{\Abs{\Cov[\bY_u,\bY_v]} \mid \bcalY} = \E\Brak{\Abs{\Cov[\bX_u\bR_u,\bX_v\bR_v]} \mid \bcalY} &= \E\Brak{\Abs{\rho_u\rho_v\Cov[\bX_u,\bX_v]} \mid \bcalY} \nonumber
\\ &= \ABS{\rho_u} \ABS{\rho_v} \E\Brak{\Abs{\Cov[\bX_u,\bX_v]} \mid \bcalY} \label{eqn:final-upper-me}
\end{align}
where the second equality uses that $\bR_u, \bR_v$ are independent of $(\bX_u, \bX_v, \bcalY)$.
Given $u, v$, let $\calT'$ denote $\calT$ with edges $e_u, e_v$ deleted. We may apply our monotonicity result Theorem~\ref{thm:monotonic} to~$\calT'$, with $P$ being the spine path between $\bX_u$ and $\bX_v$. (Note that we earlier arranged for all spine edges to have nonnegative correlation, as required for Theorem~\ref{thm:monotonic}.) We conclude that \emph{if} the edge correlations along~$P$ were raised to~$1$, this could only increase the quantity $\E\Brak{\Abs{\Cov[\bX_u,\bX_v]} \mid \bcalY}$ appearing in~\eqref{eqn:final-upper-me}. We could further upper-bound this quantity as follows: Write $\calT_{uv}$ for the modification of $\calT'$ in which~$P$ is contracted to a single vertex with random variable called~$\bX_0$ (as in Lemma~\ref{lem:glue}). Then by applying the inhomogeneous star result, Lemma~\ref{lem:inhom-star}, to~$\calT_{uv}$, we would get
\[
\E\Brak{\Abs{\Cov[\bX_u,\bX_v]} \mid \bcalY} = \E\Brak{\Var[\bX_0] \mid \bcalY} \leq 4\exp(-\alpha(u,v)/2),
\]
where
\[
\alpha(u,v) \coloneqq \sum \SET{\rho_i^2 : i \text{ is strictly between $u$ and $v$}}.
\]
Putting these observations together, we conclude that for a fixed pair $u, v \in [t]$,
\[
\E\Brak{\Abs{\Cov[\bY_u,\bY_v]} \mid (\bY_j)_{j \in [t] \setminus \{u,v\}}} \leq \ABS{\rho_u} \ABS{\rho_v} \cdot 4 \exp(-\alpha(u,v)/2).
\]
Thus to complete the proof of~\eqref{eqn:conj-cat} we need to show
\begin{equation} \label{eqn:final-ineq1}
(*) \coloneqq \avg_{\substack{\text{distinct pairs} \\ u, v \in [t]}} \SET{\ABS{\rho_u} \ABS{\rho_v} \cdot \exp(-\alpha(u,v)/2)} \leq O(1/t).
\end{equation}
This is now simply a combinatorial problem concerning the list of numbers $\rho_1, \dots, \rho_t$.
We solve the problem as follows. First, we'd like to switch $u$ and $v$ to being drawn \emph{without} replacement. Note that
\[
(*) = \E_{\substack{\bu, \bv \sim [t] \\ \text{uniformly, independently}}}\BRAK{
\SET{
\begin{array}{cl}
\ABS{\rho_{\bu}} \ABS{\rho_{\bv}} \cdot \exp(-\alpha(\bu,\bv)/2) & \text{if $\bu \neq \bv$,} \\
(*) & \text{if $\bu = \bv$.}
\end{array}
}}
\]
Since $\ABS{\rho_{\bu}} \ABS{\rho_{\bv}} \cdot \exp(-\alpha(\bu,\bv)/2) \in [0,1]$ always, and since $\Pr[\bu = \bv] = 1/t$, the above differs from
\begin{equation} \label{eqn:final-ineq2}
\E_{\substack{\bu, \bv \sim [t] \\ \text{uniformly, independently}}} \BRAK{\ABS{\rho_{\bu}} \ABS{\rho_{\bv}} \cdot \exp(-\alpha(\bu,\bv)/2)}
\end{equation}
by at most~$2/t$. Thus to show~\eqref{eqn:final-ineq1}, it suffices to upper-bound~\eqref{eqn:final-ineq2} by~$O(1/t)$. To do this, we first apply Cauchy--Schwarz, obtaining
\begin{align}
\E_{\bu, \bv \sim [t]} \BRAK{\ABS{\rho_{\bu}} \ABS{\rho_{\bv}} \cdot \exp(-\alpha(\bu,\bv)/2)} &\leq \sqrt{\E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \exp(-\alpha(\bu,\bv)/2)}} \sqrt{\E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bv}^2 \cdot \exp(-\alpha(\bu,\bv)/2)}} \nonumber \\
&= \E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \exp(-\alpha(\bu,\bv)/2)}, \label{eqn:final-ineq3}
\end{align}
the last equality because $\bu$ and $\bv$ are symmetrically distributed. Let's introduce the following events:
\[
A_0 = \text{``$\alpha(\bu,\bv) \in [0,1)$'',} \qquad A_k = \text{``$\alpha(\bu,\bv) \in [2^{k-1},2^k)$''}, \quad k \in \Z^+.
\]
Using the fact that $\sum_{k \geq 0} \bone_{A_k} \equiv 1$, we have that~\eqref{eqn:final-ineq3} equals
\[
\E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \sum_{k \geq 0} \bone_{A_k} \cdot \exp(-\alpha(\bu,\bv)/2)} \leq \E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \sum_{k \geq 0} \bone_{A_k} \cdot e^{1/4} \exp(-2^{k-2})}.
\]
Here we essentially lower-bounded $\alpha(\bu,\bv)/2$ by $2^{k-2}$ on the event that~$A_k$ occurs --- except that this is not quite correct when $k = 0$; this is why we included the factor $e^{1/4}$, to cover the $k = 0$ case. Thus it remains to show
\begin{equation} \label{eqn:final-ineq4}
\sum_{k \geq 0} \exp(-2^{k-2}) \cdot \E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \bone_{A_k}} \leq O(1/t).
\end{equation}
To show this, let's consider a fixed integer $k \geq 0$ and imagine that in the expectation, $\bv \sim [t]$ is chosen first. Once $\bv$ is chosen, we define the interval $U^-_k(\bv) \subseteq [t]$ to be the set of all possible $\bu < \bv$ such that the event $A_k$ occurs. We define $U^+_k(\bv)$ similarly, but for $\bu > \bv$. Figure~\ref{fig:depictU} shows a small example. We denote the union of $U^-_k(\bv)$ and $U^+_k(\bv)$ by $U_k(\bv)$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[shorten >=1pt, auto, node distance=1cm]
\tikzset{every state/.style={minimum size=0pt}}
\node[state] (S0) {};
\node[state] (S1) [right=0.6cm of S0] {};
\node[state] (S2) [right =0.6cm of S1] {};
\node[state] (S3) [right =0.6cm of S2] {};
\node[state] (S4) [right =0.6cm of S3] {};
\node[state] (S5) [right =0.6cm of S4] {};
\node[state] (S6) [right =0.6cm of S5] {};
\node[state] (S7) [right=0.6cm of S6] {};
\node[state] (S8) [right=0.6cm of S7] {};
\node[state] (S9) [right=0.6cm of S8] {};
\node[state] (S10) [right=0.6cm of S9] {};
\node[state] (S11) [right=0.6cm of S10] {};
\node[state] (S12) [right=0.6cm of S11] {};
\node[state] (L0) [below of = S0] {};
\node[state] (L1) [below of = S1] {};
\node[state] (L2) [below of = S2] {};
\node[state] (L3) [below of = S3] {};
\node[state] (L4) [below of = S4] {};
\node[state] (L5) [below of = S5] {};
\node[state, fill=black] (L6) [below of = S6] {};
\node[state] (L7) [below of = S7] {};
\node[state] (L8) [below of = S8] {};
\node[state] (L9) [below of = S9] {};
\node[state] (L10) [below of = S10] {};
\node[state] (L11) [below of = S11] {};
\node[state] (L12) [below of = S12] {};
\path[-]
(S0) edge (S1)
edge node [right] {\footnotesize{$.99$}} (L0)
(S1) edge (S2)
edge node [right] {\footnotesize{$.96$}} (L1)
(S2) edge (S3)
edge node [right] {\footnotesize{$.98$}} (L2)
(S3) edge (S4)
edge node [right] {\footnotesize{$.97$}} (L3)
(S4) edge (S5)
edge node [right] {\footnotesize{$.99$}} (L4)
(S5) edge (S6)
edge node [right] {\footnotesize{$.96$}} (L5)
(S6) edge (S7)
edge node [right] {} (L6)
(S7) edge (S8)
edge node [left] {\footnotesize{$.99$}} (L7)
(S8) edge (S9)
edge node [left] {\footnotesize{$.96$}} (L8)
(S9) edge (S10)
edge node [left] {\footnotesize{$.01$}} (L9)
(S10) edge (S11)
edge node [left] {\footnotesize{$.97$}} (L10)
(S11) edge (S12)
edge node [left] {\footnotesize{$.99$}} (L11)
(S12) edge node [left] {\footnotesize{$.96$}} (L12);
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L5.south west) + (-0.2, 0)$)--($(L6.south west) + (-0.4, 0)$)node [black,midway,yshift=-24pt] {\footnotesize
$U_0^-$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L4.south west)+ (-0.2, 0)$)--($(L5.south west)+ (-0.4, 0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_1^-$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L2.south west)+ (-0.2, 0)$)--($(L4.south west) + (-0.4, 0)$)node [black,midway,yshift=-24pt] {\footnotesize
$U_2^-$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L0.south west)+ (-0.2, 0)$)--($(L2.south west)+ (-0.4, 0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_3^-$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L6.south east) + (0.4,0)$)--($(L7.south east) + (0.2,0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_0^+$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L7.south east)+ (0.4,0)$)--($(L9.south east) + (0.2,0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_1^+$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L9.south east) + (0.4,0)$)--($(L11.south east) + (0.2,0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_2^+$};
\draw[decorate,decoration={brace,mirror,raise=6pt}, thick] ($(L11.south east) + (0.4,0)$) --($(L12.south east) + (0.2,0)$) node [black,midway,yshift=-24pt] {\footnotesize
$U_3^+$};
\draw node [below = 0.15cm of L6] {$\bv$};
\end{tikzpicture}
\end{center}
\caption{A small example illustrating the indices of $U_k(\bv)$. The label on edge $e$ is $\rho_e^2$.}
\label{fig:depictU}
\end{figure}
Furthermore, we must have
\[
\phantom{\quad \forall k \geq 0.} \sum_{\mathclap{u \in U_k(\bv)}} \rho_u^2 = \sum_{\mathclap{u \in U_k^+(\bv)}} \rho_u^2 + \sum_{\mathclap{u \in U_k^-(\bv)}} \rho_u^2 \leq 2^{k} + 2^k = 2^{k+1} \quad \forall k \geq 0.
\]
It follows that we have the upper bound
\[
\E_{\bu,\bv \sim [t]} \BRAK{\rho_{\bu}^2 \cdot \bone_{A_k}} = \E_{\bv \sim [t]} \BRAK{\sum_{u \in U_k(\bv)} \Pr[\bu = u] \rho_{u}^2} = (1/t) \E_{\bv \sim [t]} \BRAK{\sum_{u \in U_k(\bv)} \rho_{u}^2} \leq (1/t) \E_{\bv \sim [t]} \BRAK{2^{k+1}} = 2^{k+1}/t.
\]
Substituting this into~\eqref{eqn:final-ineq4}, it remains to observe that indeed
\[
\sum_{k \geq 0} \exp(-2^{k-2}) \cdot 2^{k+1} \leq O(1).
\]
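For concreteness (an aside; the exact constant is not needed for the proof), the left-hand side can be bounded numerically:
\[
\sum_{k \geq 0} \exp(-2^{k-2}) \cdot 2^{k+1} = 2e^{-1/4} + 4e^{-1/2} + 8e^{-1} + 16e^{-2} + 32e^{-4} + \cdots < 10,
\]
since the summands decay doubly exponentially once $k \geq 3$.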
The proof of Theorem~C is complete.
\section{Conclusions}
Lacking any directions for proving the main Conjecture~A, we believe that Conjecture~B (the case of general information flow trees) is a good place to start. Having proved Theorem~C (the case of caterpillars), a natural next case to consider is an information flow tree with the property that each leaf is at distance at most \emph{two} from a central spine. By the transformations in Section~\ref{sec:prelims}, it suffices to consider the case that each spine node has a single edge hanging off it, which in turn has an inhomogeneous star hanging off it. Perhaps some of the ``reconstruction'' results from~\cite{evans} in terms of effective electrical resistance could be of use here. Another interesting special case of Conjecture~B that one could try to resolve is that of a complete binary tree in which all edge correlations have the same value~$\rho$. (This is the most heavily-studied information flow tree.) We believe that this case satisfies Conjecture~B by a wide margin for all~$\rho$, even with a sub-inverse-polynomial bound in place of~$O(1/t)$. Perhaps the formula in our Theorem~\ref{thm:covariance-induction} could help prove this.
\subsection*{Acknowledgments}
We thank Yuan Zhou for several early discussions on the topic of this paper.
\bibliographystyle{plain}
\bibliography{cat}
\end{document}
| 12,449
|
TITLE: Proof of expression for Christoffel symbols of the first kind, $[i , j k] = {\bf e}_i \cdot \frac{\partial {\bf e}_j}{\partial x^k}$
QUESTION [2 upvotes]: On page 155 of Vector and Tensor Analysis with Applications, by A.I Borishenko and I.E. Tarapov, the authors assert that,
$$\frac{\partial {\bf e}_j}{\partial x^k} = \left\{ i \atop j \; k \right\} {\bf e}_i$$
and
$$ [i,jk] = \frac{\partial {\bf e}_j}{\partial x_k} {\bf e}^i $$
imply that
$$[i , j k] = {\bf e}_i \cdot \frac{\partial {\bf e}_j}{\partial x^k}$$
Unfortunately, I am unable to show how the final equation follows from the first two.
Update
Extract from the text
The derivative of ${\bf e}_j$ is first covariant, then contravariant, then covariant. Besides surely $\left\{ i \atop j \; k \right\}$ cannot be expansion coefficients of $\frac{\partial {\bf e}_j}{\partial x_k}$. Could this be a typo?
REPLY [0 votes]: An answer to this question has been posted on mathoverflow.
I have added this extra content to stop the system from converting my answer to a comment.
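For readers without access to the linked answer, here is a concrete numerical check of the target identity $[i,jk] = {\bf e}_i \cdot \partial {\bf e}_j/\partial x^k$ in a specific chart (polar coordinates on the plane). This sketch is my own illustration, not from the book; all derivatives are taken by central differences, and the Christoffel symbols of the first kind are computed from the metric formula $[i,jk] = \tfrac12(\partial_j g_{ik} + \partial_k g_{ij} - \partial_i g_{jk})$:

```python
import math

def position(r, th):
    # Embedding of polar coordinates (x^0, x^1) = (r, theta) into the plane
    return (r * math.cos(th), r * math.sin(th))

H = 1e-5  # step for central differences

def basis(i, r, th):
    # Covariant basis vector e_i = dP/dx^i, approximated numerically
    dr, dth = (H, 0.0) if i == 0 else (0.0, H)
    p_plus = position(r + dr, th + dth)
    p_minus = position(r - dr, th - dth)
    return tuple((a - b) / (2 * H) for a, b in zip(p_plus, p_minus))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def metric(i, j, r, th):
    # g_ij = e_i . e_j
    return dot(basis(i, r, th), basis(j, r, th))

def d_metric(i, j, k, r, th):
    # dg_ij / dx^k by central differences
    dr, dth = (H, 0.0) if k == 0 else (0.0, H)
    return (metric(i, j, r + dr, th + dth) - metric(i, j, r - dr, th - dth)) / (2 * H)

def christoffel_1st(i, j, k, r, th):
    # [i, jk] = (1/2)(d_j g_ik + d_k g_ij - d_i g_jk)
    return 0.5 * (d_metric(i, k, j, r, th) + d_metric(i, j, k, r, th)
                  - d_metric(j, k, i, r, th))

def d_basis(j, k, r, th):
    # d e_j / dx^k by central differences
    dr, dth = (H, 0.0) if k == 0 else (0.0, H)
    ep = basis(j, r + dr, th + dth)
    em = basis(j, r - dr, th - dth)
    return tuple((a - b) / (2 * H) for a, b in zip(ep, em))

# Verify [i, jk] = e_i . (d e_j / dx^k) at a sample point, for all index triples
r0, th0 = 1.3, 0.7
for i in range(2):
    for j in range(2):
        for k in range(2):
            lhs = christoffel_1st(i, j, k, r0, th0)
            rhs = dot(basis(i, r0, th0), d_basis(j, k, r0, th0))
            assert abs(lhs - rhs) < 1e-5

print(round(christoffel_1st(0, 1, 1, r0, th0), 6))  # about -1.3, i.e. -r
```

The identity holds because $g_{ij} = {\bf e}_i \cdot {\bf e}_j$ and $\partial_k {\bf e}_j = \partial_j {\bf e}_k$ (mixed partials of the position vector commute).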
| 148,319
|
These stunning fair trade hand carved wooden star decorations look lovely hanging on a wall or on the mantelpiece.
Made by skilled craftspeople in India from mango wood, they've been hand-painted to give a stylish finish.
Dimensions: 15cm (approx). Also available in large.
| 121,385
|
Here is an AMF Avenger Evel Knievel bike I just finished building. It started out as an Avenger 350. I repainted and decaled it into the Evel Knievel model. It has new everything on it: paint, decals, seat, grips, tires, chain, pedals.
This is the third one of these I have built. They are an easy sell and for good money.
| 119,791
|
TITLE: How to find the derivative of a fraction?
QUESTION [6 upvotes]: So I have this fancy problem I've been working on for two days:
I need to find two things:
1) $f'(t)$
2) $f'(2)$
I have tried plugging it into the definition of a derivative, but do not know how to solve due to its complexity.
Here is the equation I am presented:
If $f(t) = \sqrt{2}/t^7$, find $f'(t)$, then find $f'(2)$.
How do I convert this problem into a more readable format? (no fractions or division), otherwise, how do I complete it with the fractions?
Thanks
REPLY [11 votes]: I see some rewriting methods have been presented, and in this case, that is the simplest and fastest method. But it can also be solved as a fraction using the quotient rule, so for reference, here is a valid method for solving it as a fraction.
Let $f(t) = \frac{\sqrt 2}{t^7}$
Let the numerator and denominator be separate functions, so that $$g(t) = \sqrt2$$ $$h(t) = t^7$$
So $$f(t) = \frac{g(t)}{h(t)}$$
The quotient rules states that $$f'(t) = \frac{g'(t)h(t) - g(t)h'(t)}{h^2(t)}$$
Using $$g'(t) = \frac{d}{dt}\sqrt2 = 0$$ $$h'(t) = \frac{d}{dt}t^7 = 7t^6$$
we get, by plugging this into the quotient rule: $$f'(t) = \frac{0\cdot t^7 - \sqrt2\cdot7t^6}{t^{14}}$$
Simplifying this gives us $$\underline{\underline{f'(t) = -\frac{7\sqrt2}{t^8}}}$$
This is also the same as the result you should get by rewriting $$f(t) = \frac{\sqrt2}{t^7} = \sqrt2 \cdot t^{-7}$$ and using the power rule.
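As a quick numerical sanity check of this result (a sketch in Python, not part of either answer), one can compare the closed form $f'(t) = -7\sqrt2/t^8$ against a central finite difference:

```python
import math

def f(t):
    # f(t) = sqrt(2) / t^7
    return math.sqrt(2) / t**7

def fprime_exact(t):
    # Closed form from the quotient rule (or power rule): f'(t) = -7*sqrt(2)/t^8
    return -7 * math.sqrt(2) / t**8

def fprime_numeric(t, h=1e-6):
    # Central difference approximation of the derivative
    return (f(t + h) - f(t - h)) / (2 * h)

print(fprime_exact(2))    # -7*sqrt(2)/256, about -0.0386699
print(fprime_numeric(2))  # agrees to roughly 8 decimal places
```

Both values agree closely, confirming $f'(2) = -\frac{7\sqrt2}{256}$.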
REPLY [2 votes]: Hint: $$\rm\dfrac{d}{dx}ax^{b}=ab\,x^{\,b-1}.\tag{for all $\rm b\in\mathbb Z$}$$
| 72,103
|
TITLE: Finite completely reducible groups-reference request
QUESTION [2 upvotes]: A group G is called completely reducible if it is a direct product of simple groups. It is known that a Krull-Schmidt-Remak type uniqueness result holds for the decomposition into a direct product of simple groups. The proofs of these facts can be found, for example, in Chapter 3 of D. Robinson's book A Course in the Theory of Groups.
My questions are the following:
Are any characterizations of finite completely reducible groups known in the literature? Given a finite group, how can one decide whether it is completely reducible or not (without searching for direct product decompositions, if possible)?
If one replaces the direct product decomposition with an iterated crossed product decomposition, is anything known about uniqueness in this case?
REPLY [2 votes]: Well, it depends what you are prepared to accept as a reasonable answer. A finite group is
completely reducible if and only if the intersection of all its maximal normal subgroups
is trivial. This is presumably a lot of work to check for any reasonably sized group.
On the other hand, you can regard the intersection of the maximal normal subgroups as
a kind of "radical" of a finite group, and the above characterization seems reasonable.
One (fairly obvious) fact to use in proving that a finite group is NOT completely reducible
is that if G is completely reducible, then so are all its non-trivial normal subgroups.
Checking minimal normal subgroups is no help here, of course.
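To illustrate the criterion, here is a small sketch of my own, restricted to cyclic groups (where subgroup enumeration is trivial and every subgroup is normal): the intersection of the maximal subgroups of Z/nZ is trivial exactly when n is squarefree, which matches the fact that Z/nZ is a direct product of simple (prime-order) groups exactly for squarefree n.

```python
def subgroups_of_cyclic(n):
    # Subgroups of Z/nZ are exactly the sets of multiples of d, one per divisor d of n.
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def radical_is_trivial(n):
    # Intersect the maximal subgroups: proper subgroups not contained in a larger proper one.
    proper = [s for s in subgroups_of_cyclic(n) if len(s) < n]
    maximal = [s for s in proper if not any(s < t for t in proper)]
    return frozenset.intersection(*maximal) == {0}

# Completely reducible: 6 = 2*3 and 30 = 2*3*5; not: 4 and 12 (Z/4 is not simple).
print([radical_is_trivial(n) for n in [4, 6, 12, 30]])  # [False, True, False, True]
```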
| 60,944
|
written by Clinch, December 29, 2008
written by Gerben, December 29, 2008
Running a modern OHC engine for extended periods at low RPM is a surefire way to wear the lobes on the camshaft. This is because they are not supplied oil at full pressure at low RPM. This is OK for normal street operation where there is always a large fluctuation in RPMs.
written by paul, December 29, 2008
On the other hand, I'm not sure of my girlfriend's Prius, but she's reluctant to let me hack into her car... Yet. :-)
| 163,768
|
TITLE: If a line and its points are removed from a projective geometry, is the resulting is affine geometry?
QUESTION [0 upvotes]: I saw that it's written in a couple of places, but couldn't find proper proof for this.
I saw it for example in:
here and here
I would like to find a reference to a proof of this (as I'm sure some book or a thing like that addresses the problem) or the proof itself.
I hope it's legit to ask for this.
REPLY [1 votes]: Let $E$ be a vector space over the field $\Bbb K$, and let
$$ \mathcal P(E) = \big\{F\subseteq E\;\; : \;\; F \text{ subspace of } E, \;\dim(F)=1\big\} \;=\;
\big\{\text{vector (straight) lines of } E\big\} $$
be the projective space induced by $E$.
If $H$ is a hyperplane of $E$, i.e. $\mathcal P(H)\:$ is, by definition, a hyperplane of $\mathcal P(E)$, we want to prove that
$$ \mathcal A := \mathcal P(E)\setminus\mathcal P(H) \tag{1}$$
is an affine space contained in $\mathcal P(E)$, with associated translation space $H$, also denoted $\;\mathcal A(H)$.
Since $H$ is a hyperplane of $E$, we have $\;\text{codim}(H)=1$, hence there exists $\;a\in E, \;a\neq0,$ such that
$$ E=H\oplus \langle a\rangle. \tag{2}$$
For each $v\in E$, we have $\;v=h+\lambda a\;$ for suitable and uniquely determined $\;h\in H\;$ and $\;\lambda\in\Bbb K$; as the vectors $\;h\in H\;$ are obtained for $\;\lambda= 0$, we can write
$$ \mathcal P(E) = \mathcal P(H)\,\dot\cup\,\big\{\big\langle h+\lambda a\big\rangle\;\; :\;\;h\in H, \;\lambda\in\Bbb K, \;\lambda\neq 0\big\}, \tag{3}$$
where $\;\dot\cup\;$ denote the disjoint union.
Now, we observe that the set $\,\mathcal A\,$ after the disjoint union sign in $(3)$ is equal to
$$ \mathcal A = \big\{\big\langle h+a\big\rangle\;\;:\;\;h\in H\big\}. $$
Indeed, being $\;\lambda\neq 0,\;$ we have
$$ \big\langle h+\lambda a\big\rangle = \big\langle \lambda\lambda^{-1}h+\lambda a\big\rangle = \big\langle \lambda\big(\lambda^{-1}h+a\big)\big\rangle = \big\langle\lambda^{-1}h+a\big\rangle, $$
and obviously $\;\lambda^{-1}h\;$ describes $\;H\;$ as $\;h\;$ describes $\;H$.
The points of $\;\mathcal A(H)\;$ are therefore by definition the projective points lying in $\; \mathcal P(E)\setminus \mathcal P(H)\;$, i.e. $(1)$ holds.
Finally, $\;\mathcal A(H)\;$ is an affine space through the structure defined by the function
$$ \phi\; : \; \mathcal A(H) \times \mathcal A(H) \;\;\longrightarrow\;\; H $$
defined by
$$ \phi\big(\big\langle h_1+a\big\rangle, \big\langle h_2+a\big\rangle\big) := h_2-h_1, $$
as it is easy to verify.
As a last observation, it can also be easily verified that the preceding construction of $\;\mathcal A(H)\;$ does not depend on the choice of $\;a\;$ in $(2)$. In fact, if $\,a'\neq0\,$ is another vector satisfying $(2)$, and denoted $\,\mathcal A'(H)\,$ the affine space so obtained, the function
$$ f\;:\; \langle h+a\rangle \longmapsto \langle h+a'\rangle\; $$
from $\;\mathcal A(H)\;$ to $\;\mathcal A'(H)\;$ is an affine isomorphism.
| 13,049
|
London Men's '14-'15: let's have a look, shall we? These are a few of my favs from the first set of shows. I'll do another post later this afternoon after all of our fashion friends in the UK are in dreamland or drunk on champagne...
Palmer - Harding
Lou Dalton
Astrid Anderson
| 87,188
|
TITLE: Finding the derivative at a particular point
QUESTION [4 upvotes]: So I've got this question and I'm not 100% sure if I've actually answered it correctly. I'd appreciate it if you could check it and tell me whether it's correct; if not, can you tell me where I went wrong and where I can improve? Thanks! :)
Question
Find the derivative of the function
$$f(x)=\begin{cases}
x^2\sin(\frac{1}{x}), &x \neq 0 \\
0, & x=0
\end{cases}$$
at $x=0$
Working
$\lim_{h\to 0}\frac{f(h)-f(0)}{h}=\lim_{h\to 0} h^2\sin(\frac{1}{h})$
*Apply squeeze theorem.
$-1\leq\lim_{h\to 0}h^2\sin(\frac{1}{h})\leq1$
$\therefore \lim_{h\to 0}h^2(\frac{1}{-1})\leq\lim_{h\to 0}h^2\sin(\frac{1}{h})\leq\lim_{h\to 0}h^2(\frac{1}{1})$
$0\leq\lim_{h\to 0}h^2\sin(\frac{1}{h})\leq 0$
$\implies \lim_{h\to 0}h^2\sin(\frac{1}{h})=0$
REPLY [1 votes]: The work shown in the original posting goes in the right direction. However, there are some mistakes in it that are partially already covered in the comments, but a very important one is not (although it is inconsequential here):
It is correct that what needs to be evaluated is
$$
\lim_{h\rightarrow 0}\frac{f(h)-f(0)}{h}
$$
but this is
$$
\lim_{h\rightarrow 0}\frac{f(h)-f(0)}{h} = \lim_{h\rightarrow 0}\frac{h^2\sin(\frac1h)-0}{h} = \lim_{h\rightarrow 0} h\sin(\frac1h)
$$
In the work of the original posting this was incorrectly evaluated as $\lim_{h\rightarrow 0} h^2\sin(\frac1h)$.
In the original posting when applying the squeeze theorem, the term $\lim_{h\rightarrow 0} h^2\sin(\frac1h)$ was always in the middle of the 'squeeze', even in the starting line
$$
-1 \le \lim_{h\rightarrow 0} h^2\sin(\frac1h) \le 1
$$
But this makes no sense, because you are already making statements about the limit, which you are just about to show (but haven't yet) that it is 0. In other words, how do you know that that limit is between -1 and 1?
The squeeze theorem wants you to put the elements of the sequence under consideration between elements of other sequences that are hopefully easier to evaluate and hopefully converging against the same limit. So you have to ask yourself: How can I squeeze $h\sin(\frac1h)$ between sequences that converge to $0$?
What you do know is
$$
-1 \le \sin(\frac1h) \le 1
$$
or alternatively written
$$
|\sin(\frac1h)| \le 1.
$$
If you multiply both sides by the positive $|h|$ ($h$ can't be zero, obviously), you get
$$
|h\sin(\frac1h)| = |\sin(\frac1h)||h| \le |h|,
$$
or alternatively
$$
-|h| \le h\sin(\frac1h) \le |h|
$$
Since $\lim_{h\rightarrow 0} |h| = \lim_{h\rightarrow 0} -|h| = 0$, this successfully squeezes $h\sin(\frac1h)$ between two sequences converging to 0, so you finally get what you wanted to prove:
$$
\lim_{h\rightarrow 0} h\sin(\frac1h) = 0.
$$
Of course, this is a very detailed explanation. Most people are satisfied with something that is already stated in the comments, like "$|h\sin(\frac1h)| \le |h|$ and $\lim_{h \rightarrow 0} |h|=0$, so $\lim_{h\rightarrow 0} h\sin(\frac1h) = 0$ follows".
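As a numeric illustration of the squeeze (my own aside, not part of the proof): the difference quotient $(f(h)-f(0))/h = h\sin(\frac1h)$ is trapped between $-|h|$ and $|h|$, both of which go to $0$.

```python
import math

def diff_quotient(h):
    # (f(h) - f(0)) / h = h * sin(1/h) for h != 0
    return h * math.sin(1.0 / h)

for h in [0.1, -0.01, 0.001, -1e-6]:
    q = diff_quotient(h)
    assert -abs(h) <= q <= abs(h)  # the squeeze bound
    print(h, q)
```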
| 73,807
|
TITLE: Find the subgroups of order 4?
QUESTION [0 upvotes]: I have the following abelian group $\mathbb{Z}_2\times \mathbb{Z}_2\times\mathbb{Z}_2$. I know every element, except $(0,0,0)$ generates a cyclic subgroup of order 2.
Is there a systematic way I can find all subgroups of order 4 without having to list them all out?
REPLY [2 votes]: Every such subgroup $H$ is isomorphic to Klein's $4$-group, namely it is of the form $H=\{e,a,b,a+b\}$, where $a,b$ are chosen among the $7$ elements of order $2$ of $G:=\mathbb{Z/2Z}\times\mathbb{Z/2Z}\times\mathbb{Z/2Z}$. So, the number of distinct subgroups of order $4$ of $G$ is $1/6\cdot7\cdot(7-1)=7$.
Edit. In general, let (additive notation) $I_G:=\{a_i\in G\mid 2a_i=e_G\}$, where $G$ is any abelian group. Then, the subgroups of $G$ isomorphic to Klein's $4$-group are of the form $K_{ij}=\{e_G,a_i,a_j,a_i+a_j\mid a_i,a_j\in I_G, \space1\le i<j\le |I_G|\}$. Note that, once set $a_k:=a_i+a_j\in I_G$, we may have either:
$k<i<j$, then $K_{ij}=K_{ki}=K_{kj}$, or
$i<k<j$, then $K_{ij}=K_{ik}=K_{kj}$, or
$i<j<k$, then $K_{ij}=K_{ik}=K_{jk}$.
Therefore, the number of distinct subgroups of $G$ isomorphic to Klein's $4$-group is given by:$n_K:=\frac{1}{3}\cdot \frac{|I_G|^2-|I_G|}{2}=\frac{1}{6}\cdot |I_G|\cdot(|I_G|-1)$, whence the special case above for $|I_G|=7$.
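The count of $7$ can be confirmed by brute force (a short sketch of mine: elements of $\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2$ as bit-tuples with componentwise addition mod 2):

```python
from itertools import product

G = list(product([0, 1], repeat=3))  # elements of Z_2 x Z_2 x Z_2
zero = (0, 0, 0)

def add(a, b):
    # Componentwise addition mod 2
    return tuple((x + y) % 2 for x, y in zip(a, b))

# Every order-4 subgroup has the form {0, a, b, a+b} for distinct nonzero a, b.
subgroups = set()
nonzero = [g for g in G if g != zero]
for a in nonzero:
    for b in nonzero:
        if a != b:
            subgroups.add(frozenset([zero, a, b, add(a, b)]))

print(len(subgroups))  # 7
```

Each subgroup arises from $3\cdot 2 = 6$ ordered pairs $(a,b)$, so the $7\cdot 6$ pairs yield exactly $7$ distinct subgroups, matching the formula.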
| 215,127
|
| 269,884
|
Keep your eye on the tennis balls! This handmade dog bone bracelet features a silver dog bone focal hand stamped with FETCH. On the sides are hand crafted three dimensional porcelain tennis balls. Then I added some wild yellow and red lampwork beads and fat rich red coral gemstone beads. All hand wire wrapped with silver jeweler's wire. For some added sparkle and bling, I added lots of dangles of more red coral gemstone beads and yellow glass lantern beads.
Check out that sweet heart charm at the clasp. Double sided with a paw print on one side and "I heart My Dog" on the other. Super cute silver dog paw lobster claw clasp. Another perfect gift for any dog lover from For Love of a Dog, especially a flyball dog lover!
| 388,758
|
Sandwich Tower
Sandwich Tower is one of the best online sandwich-making games combined with stacking games, and the main reason it is one of the best such skill games you can find and enjoy for free right now is that it belongs to the Be Cool Scooby-Doo series, with Scooby being the chef who needs your help in building sandwiches from the filling ingredients. Don't worry: we will guide you on how tower-building games like this one work, making your experience a great one, so there will be no mystery about what to do. Each member of the Mystery Gang is going to have an order for you, meaning they show you the ingredients they want added to their sandwich, as well as the number of times each one should appear in it. Everything will, of course, be put between two buns of bread. You can grab extra ingredients as well, but make sure to avoid those that the characters tell you they don't need, since they will make you lose points. Depending on how well you build the sandwiches at each level, you get from one to three stars, and we really hope it is always going to be three. To move Scooby-Doo to the left and right to grab the ingredients on the plate, use the corresponding arrow keys. Skill and concentration are required, but we're sure you have them, and you will do just great. You need to improve your skills the more you progress, because with each level and new character the orders get bigger and more complicated, so you need to step up your game. It is always a fun time, so start right now and enjoy it!
How to play?
Use the arrows.
Cool Information & Statistics
This game was added in 11.10.2019 10:00 and it was played 1249 times since then. Sandwich Tower is an online free to play game, part of the Be Cool Scooby Doo Games, that raised a score of 77.8/100 from 9 votes, and 0 comments.
Read other 0 opinions below:
| 79,662
|
TITLE: Classification of functorial smooth vector fiber bundles
QUESTION [4 upvotes]: Let $\mathrm{Bundle}$ be the category whose objects are smooth vector fiber bundles over $\mathbb{R}$, and morphisms are fiberwise smooth linear map (that is, the base is not assumed to be fixed).
Let $Base \colon \mathrm{Bundle} \to \mathrm{Diff}$ be the functor returning the base of a given bundle (and the morphism between bases of a given bundle morphism, respectively). Let's define $ \mathrm{FunctorialBundle}$ as the full subcategory of $ \mathrm{Func}(\mathrm{Diff}, \mathrm {Bundle})$ on those functors that are section to $Base$ (that is, $ F \in \mathrm{Ob}~\mathrm{FunctorialBundle} \ \Leftarrow:\Rightarrow\ F \circ Base = id $).
Question 1: Classify objects of $ \mathrm{FunctorialBundle} $ up to isomorphism.
Of course, my question is rather "is such a classification possible?". Or is the class of such functors immense and the classification problem does not make sense (just as the problem of classifying all finite magmas, semigroups, groups does not make sense)? My next question is an example of an answer:
Starting with tangent bundle $TM$ and one-dimensional trivial bundle $\mathbb{R} \times M$ and applying operations direct sum, tensor product, dual we obtain all tensor bundle functors (whose sections are tensor fields). If there are some standard operations that extend the functor class, then add them to this list.
Question 2: are there functorial vector bundles (that is, objects of $ \mathrm{FunctorialBundle} $ ) that are not in this class (up to isomorphism, of course)?
REPLY [2 votes]: This is called a natural bundle. Apparently, all known information is in Kolář, Michor, Slovák: Natural Operations in Differential Geometry (recommended by Stefan Waldmann).
From the description, it is also the best categorical textbook on differential geometry and topology.
Third, in the beginning of this book we try to give an introduction to the fundamentals of differential geometry (manifolds, flows, Lie groups, differential forms, bundles and connections) which stresses naturality and functoriality from the beginning and is as coordinate free as possible. Here we present the Frölicher–Nijenhuis bracket (a natural extension of the Lie bracket from vector fields to vector valued differential forms) as one of the basic structures of differential geometry, and we base nearly all treatment of curvature and Bianchi identities on it. This allows us to present the concept of a connection first on general fiber bundles (without structure group), with curvature, parallel transport and Bianchi identity, and only then add G-equivariance as a further property for principal fiber bundles. We think, that in this way the underlying geometric ideas are more easily understood by the novice than in the traditional approach, where too much structure at the same time is rather confusing.
| 98,927
|
TITLE: Modular equations for quasimodular forms
QUESTION [13 upvotes]: This problem is motivated by this question and by teaching
modular polynomials for the classical modular invariant $j(\tau)$.
The latter implies
that if we consider the fields of modular functions $\mathcal K_n=\mathbb C[j(\tau),j(n\tau)]$,
then $\mathcal K_n$ is an algebraic extension of $\mathcal K=\mathcal K_1=\mathbb C[j(\tau)]$. A standard application
of this fact is in study of isogenies of elliptic curves but this has several links
to transcendence theory (for example, Schneider's theorem on the transcendence of the values
of $j(\tau)$ at non-CM points and a more recent result about transcendence of
the values of $J(q)=j(e^{2\pi i\tau})$).
The field $\mathcal K$ (of transcendence degree 1 over $\mathbb C$) is a subfield of
a larger field $\mathcal F=\mathbb C(E_2(\tau),E_4(\tau),E_6(\tau))$ (which has
transcendence degree 3 and is in fact the differential closure of $\mathcal K$), where
$$
E_2(\tau)=1-24\sum_{n=1}^\infty\sigma_1(n)q^n,
\quad
E_4(\tau)=1+240\sum_{n=1}^\infty\sigma_3(n)q^n,
\qquad
E_6(\tau)=1-504\sum_{n=1}^\infty\sigma_5(n)q^n
$$
are the Eisenstein series of weight 2, 4 and 6, respectively, $\sigma_k(n)=\sum_{d\mid n}d^k$ and $q=e^{2\pi i\tau}$.
The field $\mathcal F$ is differentially closed due to Ramanujan's system
of algebraic differential equations
$$
DE_2=\frac1{12}(E_2^2-E_4), \quad
DE_4=\frac13(E_2E_4-E_6), \quad
DE_6=\frac12(E_2E_6-E_4^2),
\qquad D=\frac1{2\pi i}\frac{d}{d\tau}=q\frac d{dq}.
$$
Because of the modular origin and as the field $\mathcal F$ can be defined
as $\mathbb C(j(\tau),Dj(\tau),D^2j(\tau))$, it is more or less clear that
$E_2(n\tau)$, $E_4(n\tau)$ and $E_6(n\tau)$ are algebraic over $\mathcal F$
for every positive $n$. (In fact, $E_4(n\tau)$ and $E_6(n\tau)$ are algebraic
over the smaller field $\mathcal F'=\mathbb C(E_4(\tau),E_6(\tau))$.) In what
follows, I denote $\mathcal R=\mathbb Q[E_2(\tau),E_4(\tau),E_6(\tau)]$ and
$\mathcal R'=\mathbb Q[E_4(\tau),E_6(\tau)]$ the corresponding
rational rings.
Problem.
For each $n>1$, construct polynomials $A_n\in\mathcal R[x]$ and
$B_n,C_n\in\mathcal R'[x]$ such that
$$
A_n\bigl(E_2(n\tau)\bigr)=0, \quad
B_n\bigl(E_4(n\tau)\bigr)=0, \quad
C_n\bigl(E_6(n\tau)\bigr)=0,
$$
and derive the arithmetic bounds for them (for degree and size of
their coefficients).
Is this problem ever studied?
Some years ago I published relations expressing $E_2(\tau/2)$, $E_4(\tau/2)$ and
$E_6(\tau/2)$ through the logarithmic derivatives of the thetanulls
(Ramanujan Journal 7:4 (2003) 435--447, Section 4).
The expressions for $E_2(\tau)$, $E_4(\tau)$ and $E_6(\tau)$ through the same set of (three) functions
are classical (Sbornik: Mathematics 191:12 (2000) 1827--1871, Eqs. (0.11) and (0.12)),
so I am aware of solution in the case $n=2$.
But the approach is hardly generalizable.
Martin Rubey computed explicitly
the polynomials for $n\le7$. Here are $n=5$ instances:
$$\begin{align*}
A_5(x)&= 5^{11}x^6 - 2\cdot 3\cdot 5^{10}E_2x^5 + (3\cdot 5^{10}E_2^2-2^4\cdot 3\cdot 5^8E_4)x^4 \cr &\;
+ (-2^2\cdot 5^9E_2^3+2^6\cdot 3\cdot 5^7E_4E_2-2^9\cdot 5^6E_6)x^3 \cr &\;
+ (3\cdot 5^8E_2^4-2^5\cdot 3^2\cdot 5^6E_4E_2^2+2^9\cdot 3\cdot 5^5E_6E_2-2^8\cdot 3^2\cdot 5^4E_4^2)x^2 \cr &\;
+ (-2\cdot 3\cdot 5^6E_2^5+2^6\cdot 3\cdot 5^5E_4E_2^3-2^9\cdot 3\cdot 5^4E_6E_2^2+2^9\cdot 3^2\cdot 5^3E_4^2E_2-2^{13}\cdot 3\cdot 5E_6E_4)x \cr &\;
+ (5^5E_2^6-2^4\cdot 3\cdot 5^4E_4E_2^4+2^9\cdot 5^3E_6E_2^3-2^8\cdot 3^2\cdot 5^2E_4^2E_2^2+2^{13}\cdot 3E_6E_4E_2-2^{12}E_6^2),
\cr
B_5(x)&= 5^{20}x^6 - 2\cdot 3^2\cdot 5^{17}\cdot 7E_4x^5 + 3\cdot 5^{13}\cdot 11\cdot 19E_4^2x^4 \cr &\;
+ (2^2\cdot 5^9\cdot 7\cdot 210241E_4^3-2^{11}\cdot 5^{12}\cdot 23E_6^2)x^3 \cr &\;
+ (3^3\cdot 5^5\cdot 18858713E_4^4-2^{11}\cdot 3^2\cdot 5^8\cdot 13\cdot 17E_6^2E_4)x^2 \cr &\;
+ (2\cdot 3\cdot 7\cdot 11\cdot 59\cdot 71\cdot 24943E_4^5-2^{11}\cdot 3\cdot 5^4\cdot 13\cdot 967E_6^2E_4^2)x \cr &\;
+ (11^2\cdot 59^2\cdot 71^2E_4^6-2^{11}\cdot 5\cdot 389\cdot 971E_6^2E_4^3+2^{18}\cdot 5\cdot 11^3E_6^4),
\cr
C_5(x)&= 5^{30}x^6 - 2\cdot 3\cdot 5^{25}\cdot 521E_6x^5 + (-2^9\cdot 3^3\cdot 5^{19}\cdot 7^2\cdot 23E_4^3+3\cdot 5^{21}\cdot 269\cdot 773E_6^2)x^4 \cr &\;
+ (2^9\cdot 3^3\cdot 5^{13}\cdot 7^2\cdot 31123E_6E_4^3-2^2\cdot 5^{16}\cdot 521\cdot 80929E_6^3)x^3 \cr &\;
+ (-2^8\cdot 3^6\cdot 5^7\cdot 7^4\cdot 11^2\cdot 19^2E_4^6+2^8\cdot 3^5\cdot 5^8\cdot 7^2\cdot 11\cdot 17\cdot 13171E_6^2E_4^3 \cr &\;\quad
-3\cdot 5^{11}\cdot 11\cdot 59\cdot 71\cdot 269\cdot 773E_6^4)x^2 \cr &\;
+ (-2^9\cdot 3^6\cdot 7^4\cdot 11^2\cdot 17\cdot 19^2\cdot 31E_6E_4^6+2^{10}\cdot 3^3\cdot 5^3\cdot 7^2\cdot 157\cdot 191\cdot 8147E_6^3E_4^3 \cr &\;\quad
-2\cdot 3\cdot 5^5\cdot 11^2\cdot 59^2\cdot 71^2\cdot 521E_6^5)x \cr &\;
+ (-2^8\cdot 3^6\cdot 5\cdot 7^4\cdot 11^2\cdot 19^2E_6^2E_4^6+2^8\cdot 3^3\cdot 5\cdot 7^2\cdot 5237\cdot 22067E_6^4E_4^3-11^3\cdot 59^3\cdot 71^3E_6^6).
\end{align*}
$$
The conjecture about degree is $\psi(n)=n\prod_{p\mid n}(1+1/p)$, the same as for the modular polynomials.
In fact, if we assign weight 2, 4 and 6 to the variable $x$, then $A_n$, $B_n$ and $C_n$ happen to be homogeneous
polynomials of degree $2\psi(n)$, $4\psi(n)$ and $6\psi(n)$, respectively.
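For reference, the conjectured degree $\psi(n)=n\prod_{p\mid n}(1+1/p)$ is easy to tabulate (a small sketch of mine, using trial-division factorization):

```python
def psi(n):
    # psi(n) = n * prod over primes p | n of (1 + 1/p)
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p + 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result = result // m * (m + 1)
    return result

print([psi(n) for n in range(2, 8)])  # [3, 4, 6, 6, 12, 8]
```

In particular $\psi(5)=6$, matching the degree of the $n=5$ polynomials displayed above.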
REPLY [1 votes]: I just stumbled across this old question. I don't know if you are still interested in it, but here are some thoughts.
If $F(z)$ is any modular form for $\Gamma_0(N)$ of weight $k$, then $F(z)$ has a modular polynomial given by
$$\Phi_F(X)= \prod_{\gamma \in \Gamma_0(N)\backslash \textrm{SL}_2(\mathbb {Z})} (X- F(z)|_k \gamma ),$$
where $|_k$ is the usual weight $k$ Petersson slash operator.
The action of $\textrm{SL}_2(\mathbb {Z})$ permutes the roots of this polynomial transitively, and so its coefficients are all modular forms for $\textrm{SL}_2(\mathbb {Z})$, and it is either irreducible or a perfect power of an irreducible polynomial over the ring of modular forms for $\textrm{SL}_2(\mathbb {Z})$. The degree of the polynomial is the index of $\Gamma_0(N)$ in $\textrm{SL}_2(\mathbb {Z})$, which is $N\prod_{p|N} (1+\frac1p)$. If you assign $X$ the weight $k$, the polynomial is clearly homogeneous, since it is a product of homogeneous terms. If the expansion of $F$ at every cusp has integer coefficients, then the coefficient forms of the polynomial will have integer coefficients.
In your case, we're starting with $F$ a modular form for $\textrm{SL}_2(\mathbb {Z})$, and considering the polynomial for $F(Nz)$. In this case, the polynomial must be irreducible rather than a perfect power, since the individual roots can be clearly seen to be distinct. If $N$ is squarefree, the second coefficient will be a power of $N$ times $-F(z)|T_N$, where $T_N$ is the $N$-th Hecke operator (more generally, linear combinations of $T_{N/d^2}$ where $d|N$). Using Newton's identities for symmetric functions, you can write every coefficient in terms of the Hecke operators acting on powers of $F(z)$, so bounds for the coefficients come down to bounds on Hecke eigenvalues. The expansion of $F(Nz)$ at other cusps will generally have rational coefficients, not integer ones, so you may need to multiply by powers of $N$ to clear denominators in the polynomial.
We can do the same construction for $E_2$, by considering instead $E_2^*(z):= E_2(z)-\frac{3}{\pi y}$, which is modular for $\textrm{SL}_2(\mathbb {Z})$, not just quasi-modular. Here $y$ is the imaginary part of $z$. The polynomial for $E_2^*(Nz)$ is in $\mathbb{Q}[X,E_2, E_4,E_6, \frac{1}{\pi y}]$. If we simply ignore every component with a positive power of $\frac{1}{\pi y}$, we obtain the quasi-modular polynomial for $E_2(Nz).$
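The index $[\textrm{SL}_2(\mathbb{Z}) : \Gamma_0(N)] = N\prod_{p|N}(1+\frac1p)$ that determines the degree of the modular polynomial is easy to sanity-check numerically. A minimal stdlib-only Python sketch (the function name `index_gamma0` is mine, not from any library):

```python
def prime_factors(n):
    """Distinct prime divisors of n by trial division."""
    primes, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            primes.add(d)
            n //= d
        d += 1
    if n > 1:
        primes.add(n)
    return primes

def index_gamma0(n):
    """Index of Gamma_0(n) in SL_2(Z): n * prod_{p | n} (1 + 1/p),
    computed in exact integer arithmetic."""
    result = n
    for p in sorted(prime_factors(n)):
        # result stays divisible by p here, so // is exact
        result = result * (p + 1) // p
    return result

print(index_gamma0(2), index_gamma0(6), index_gamma0(11))
```

For instance $\Gamma_0(2)$ has index 3 and $\Gamma_0(11)$ has index 12, matching the classical values.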
TITLE: How many hands to reduce the probability of no royal flush in n hands to less than 1/e?
QUESTION [1 upvotes]: I get close but can't figure out how they get the answer to this question -- Introduction to Probability, Grinstead, Snell, Chapter 5.1, exercise 17:
The probability of a royal flush in a poker hand is p = 1/649,740. How large
must n be to render the probability of having no royal flush in n hands smaller
than 1/e?
My answer is: since this is for one success, use a geometric distribution instead of a negative binomial.
success p(flush) = 1/649740 = .0000016
failure p(no flush) = 1 - 1/649740 = .9999984
1/e = 1 / 2.71828 = .36788
For geometric distribution, I set them equal to find the minimum n:
$$q^n = 1/e$$
$$(0.9999984)^n = 0.36788$$
$$n = \log_{0.9999984}(0.36788) = \frac{\log 0.36788}{\log 0.9999984} \approx 624998.55$$
(I took the log base $0.9999984$, the failure probability, then used the change-of-base formula so both logs are base 10.)
Therefore, my answer is 624999 (just over the equal n by rounding up). However, the answer in the text book says 649741. Using my above process, I get that too -- if I change 1/e from .36788 to .3536. Is my 1/e incorrect? Or have I used the wrong distribution and formula? Thanks in advance for any help.
REPLY [2 votes]: This is just an answer due to rounding. You correctly solve the equation
$$
q^n = 1/e
$$
for $n$ to get:
$$\begin{align}
n = \frac{\log(1/e)}{\log(q)} = \frac{-1}{\log(q)} \approx 649739.5715,
\end{align}$$
according to my calculator. Notice that there is no need to explicitly solve for $e$ since $\log(1/e) = -1$.
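To make the rounding effect concrete, here is a short Python sketch (variable names are mine) that computes both the exact solution and the asker's figure obtained by rounding $q$ to seven decimal places first:

```python
import math

p = 1 / 649740          # probability of a royal flush
q = 1 - p               # probability of no royal flush in one hand

# Exact solution of q**n = 1/e:  n = -1 / ln(q)
n_exact = -1 / math.log(q)
print(n_exact)          # close to 649739.57, so n = 649740 hands suffice

# Reproducing the asker's number: round q and 1/e before taking logs
n_rounded = math.log(0.36788) / math.log(0.9999984)
print(n_rounded)        # close to 624999 -- the discrepancy is pure rounding
```

The two answers differ by roughly 25,000 hands, entirely because $\ln q \approx -1.6\times 10^{-6}$ is so small that a seventh-decimal rounding of $q$ perturbs it significantly.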
REPLY [1 votes]: It's a rounding error. You can get closer results as follows:
$$
\begin{align}
q^n &= e^{-1}
\\ n \ln q &= -1
\\ n &= 1 / (-\ln q)
\end{align}
$$
Note that $-\ln(1 - x)$ is approximately $x$ for small $x$. Just plugging in $x = 1/649740$ gives a result of 649740. The actual series is $x + x^2 / 2 + x^3/3 + \cdots$, which means that $-\ln q > 1/649740$, but only very slightly, which means that $-1/\ln q < 649740$; so 649740 is sufficient, rather than the 649741 that the book claims.
REPLY [0 votes]: When $p$ is small, the probability of at least one success in $n$ trials is approximately $1-1/\mathrm e$ when $n = 1/p$. Your $p = 1/649740$ should be plenty small enough for this approximation to be very good.
To be exact, the probability of no success in $n$ independent trials, with probability $p$ of success in each, is $(1-p)^n = \exp(n \log(1-p))$. When $n$ is large and/or $p$ is small, this is quite well approximated by $\exp(-np)$. Equivalently, writing $k = np$ for the expected number of successes in the $n$ trials, the probability of no success is approximately $\exp(-k)$ regardless of $n$ (as long as $n$ isn't very small).
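A quick numerical check of this approximation (a sketch in Python; names are mine), comparing the exact no-success probability $(1-p)^n$ against $\exp(-np)$ at $n = 1/p$:

```python
import math

p = 1 / 649740
n = 649740                      # n = 1/p, the expected-one-success point

exact = (1 - p) ** n            # exact probability of no royal flush in n hands
approx = math.exp(-n * p)       # Poisson-style approximation, here exactly 1/e

print(exact, approx)            # the two agree to about six decimal places
```

This confirms that for $p$ this small, treating the no-success probability as $e^{-np}$ is accurate far beyond what the problem requires.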