“Had my second session with Keith, Great guy – great info – great vibe – super helpful.” “I feel like it will help me assess the trade recommendations that come through and filter other possible trades with the criterion Peter’s course teaches as well as Keith’s practical application to it. I do feel more confident and pray that this guidance will help me consistently win with your trades and others that I consider.” “It was a tough pill to swallow to drop another 2K into training – but I felt like it was a necessary move and so thank you for encouraging it.” “Heck Tiger Woods still has a coach…right?! And I am farrrrrr from Tiger Woods as a trader…:)….for now…:)” —Joe “Let me be clear on something: I have for 20 years tried this or the other, studied techniques, books and then in my only session…and with Peter’s material, I have learned hands on more than in all those years together.” “I want your sessions and I appreciate them SO much. I wait for them like water in the desert.” –Gustavo “I think having someone to provide coaching is very important. Someone available to ask a question to and just hold my hand at various times. Not many of these types of programs have that available. Yes, I feel it helped me significantly. My questions were answered and new ones created, we are still working on them.” —Dale L “Good session. She was most understanding and helpful.” “She also made a couple of good suggestions which I have already implemented.” “I had a wonderful call. She was informative, patient and helpful. All the best,” –Tom H “The session…went very well. My questions and concerns were readily answered. Yes, I get more confidence every time I speak with her. After our session concluded, I contacted Ameritrade and acquired an account with them, because I feel ready to go LIVE.” –Andy E “The meeting went very well. Most impressed. She helped me to set stops and to set my Think or Swim platform to show me more pertinent information. 
And she explained how to get more premium from my trades by selling the put spreads and call spreads separately. Thanks much” –Charles S “The coaching was very helpful. I have been using the ThinkorSwim platform for several years but she showed me a few good tips that I have never tried before. I implemented her suggestions immediately after our coaching session. I do feel it is a necessary step for new members and highly recommend using the coaching session.” –John B The session made me aware of some basics of the TOS platform settings to focus on key data and to avoid errors in placing trades. While I’m in somewhat familiar territory, I have been away from trading for a time. I think the orientation session…and the coaching session…speak highly for … and are key customer service assets of CashFlowHeaven. –Al P The opportunity to discuss some of my concerns and platform issues, together with possible workarounds, was a great help. It answered several questions I had and I’m sure will improve my level of confidence as I go forward (particularly as I grapple with the complexities of the IB platform). This was certainly helped by the coach’s friendly approach and her knowledge of the topic. It was altogether a very informative and enjoyable experience. –Steve The one on one session was a great experience for me. I started with you about a month ago and did some paper trades for a few days, and shortly after I started trading I got stopped out on two spreads the same day. On April 25, I believe, so I have been a little discouraged. My coach helped me a lot to regain confidence and to be able to follow what the spreads are doing. She was so patient with me and I am so grateful to her for that. She went way above what I was expecting her to do. Thank You All Very Much. –Max H The coaching session was very good. We talked technical analysis, credit spread strategies and how to interpret market conditions. I thought the coaching session helped. 
I got my most important questions answered. I have more confidence in my trading than before the coaching session. Thanks for all you and the entire organization do. –Elven N “I just had my coaching period this morning and it was excellent. She covered many items about TOS, the preferred way of entering the basic spread, stops and general discussions of several other topics. I highly recommend such a session for the new guy on the block.” –Dale “The session really did help me. She even taught me some new techniques, that I was very grateful to learn. And, I got my main questions answered, but to be honest, a few hours ago, I thought of one more question. I’m definitely more confident about trading after our session. And, I’m really glad you have someone…to answer the hands on questions on trading.” –Sam R “The coaching session was very good. She helped me configure ThinkorSwim and shared a few tips on executing option spread purchases and stop orders as well as cancelling an order.” –Bryan A “I found the coaching to be very helpful, it’s great to have some direct one on one contact with someone from your staff, it shows your commitment to your subscribers. There were some specific setups for TOS and also additional details about the service that I learned. I have been trading part time on and off for a number of years with mixed results and have been skeptical of trading credit spreads because of the stories of 1 loss taking away the profits of 9 winners. Mentoring really reinforced the importance of always entering your stops, and properly sizing my positions based on my account size.” –Ron P “The coaching session was great. I learned a lot of new things that I can begin putting into action right away. My mentor was amazing and a great resource for you and your company. As a new Trader, I am trying to take in as much information as possible and she made things really easy to understand.” — Peter F. 
“My mentor talked me through [everything] step by step, calmly and methodically. It sure helps having a tutor with a great personality!” “[Mentoring] has been critical in helping me get on my trading feet and feeling confident. Thank you.” — Judith A “Coaching was fantastic.” — Jeff S “Coaching was very helpful to me. I am a new trader trying to learn as much as I can. I learned several things during our time today that I can begin using right away. It was a great session and I am very grateful for her time. She did a really good job!” — Peter “The coaching session went really well. I’ve been trading successfully for about 7 years and wanted to integrate credit spreads into my existing strategies. All my questions were answered including the ones about the ThinkorSwim platform functionality regarding credit spreads.” — Vasken “The coaching session was very helpful. She answered all of our questions. She is pretty knowledgeable and patient. No doubt, we are a lot more confident than before. We must practice more paper trades before we put our feet down to the live market.” —Sanjoy & Christine
Kindly make sure that you have gone through our “Author Guidelines” before submitting your article(s)/manuscript(s). It is important to note that you must name the file of your manuscript with the last name of the first author. All manuscripts sent to IRHSR should be attached in e-mails. After receiving your manuscript, we will send you a manuscript number within two working days, which should be used for future correspondence with IRHSR. IRHSR only accepts manuscripts that are submitted via e-mail attachments. Please mention “New Manuscript Submission” in the subject line. All correspondence and submissions are managed via info@irhsr.org Paper Template: Doc Format | Pdf Format
This is where I get much of my Victorian inspiration and love for everything old and shabby! I am lucky to call this beautiful Victorian reproduction *HOME* and simply adore my Pink Painted Lady!!! The inside is decorated as vintage and old as the outside looks! It's simply a doll house and I'm blessed to call it my own. You can see where I got much of the inspiration for my new DESIGN & STAMP ROOM ... directly from the style of my home. I've been a collector for all 23 years of my marriage ... collecting vintage items, antiques, dolls, teddy bears, dishes and more. I just adore everything old and shabby -- the more antique looking the better. Sometimes my husband cringes at my choices, but mostly he's on board with it all! He loves our home too ... it's definitely different and not your ordinary cookie cutter house! I've gotten some questions about where my design style comes from ... so now you know!!!
\begin{document} \title[Quantum tomography and nonlocality]{Quantum tomography and nonlocality} \author{Evgeny V. Shchukin} \email{evgeny.shchukin@gmail.com} \address{Institute of Physics, Johannes-Gutenberg University of Mainz, Staudingerweg 7, 55128 Mainz, Germany} \author{Stefano Mancini} \email{stefano.mancini@unicam.it} \address{School of Science and Technology, University of Camerino, 62032 Camerino, Italy\\ \& INFN Sezione di Perugia, I-06123 Perugia, Italy} \begin{abstract} We present a tomographic approach to the study of quantum nonlocality in multipartite systems. Bell inequalities for tomograms belonging to a generic tomographic scheme are derived by exploiting tools from convex geometry. Then, possible violations of these inequalities are discussed in specific tomographic realizations providing some explicit examples. \end{abstract} \pacs{03.65.Wj, 03.65.Ud} \maketitle \section{Introduction} The Bell inequalities \cite{ph-1-195} demonstrate a paradigmatic difference between the quantum and classical worlds. They were originally written for dichotomic (spin$-\frac{1}{2}$) variables \cite{BO}. Spin$-\frac{1}{2}$ operators realize the Lie algebra of the $\mathrm{SU}(2)$ group. For several spin particles, their spin operators form the Lie algebra given by the tensor product of the individual Lie algebras. Due to the algebraic equivalence between the operators satisfying the commutation relations of the Lie algebra constructed from particle spin operators and those constructed from creation and annihilation operators of a field, one can obtain Bell inequalities for continuous variables as well as for discrete ones \cite{CV}. Beyond the specific operators involved in the Bell inequalities, their possible violations obviously depend on the state under consideration. For a (multipartite) classical system with fluctuations, the system state is described by means of a joint probability distribution function of random variables corresponding to the subsystems. 
In contrast, for a (multipartite) quantum system the state is described by the density matrix. In view of this difference, the calculations of the system's statistical properties (including correlations) are accomplished differently in the classical and quantum domains. Recently, a probability representation of quantum mechanics has been suggested \cite{FOUND}. This representation, equivalent to all other well known formulations of quantum mechanics (see, e.g., \cite{STYER}), goes back to \emph{quantum tomography}, a technique used for quantum state reconstruction \cite{REVIEW}. The approach makes use of a set of fair probabilities, \emph{tomograms}, to ``replace'' the notion of quantum state. It has also been understood \cite{JRLR} that for classical statistical mechanics the states with fluctuations can be described as well by tomograms related to standard probability distributions in classical phase-space. A comparison of classical and quantum tomograms can be found in Refs.~\cite{JRLR, PHYSD}. Thus, in the probability representation, tomograms turn out to be a unique tool to describe both classical and quantum states. As a consequence, they represent a natural setting in which to place inequalities marking the border line between the quantum and classical worlds. Tomograms can be either continuous or discrete variable functions depending on the tomographic scheme (realization). In both cases they might be directly used to test nonlocality. This possibility was described for symplectic tomography \cite{SYMTOM} in a bipartite system \cite{job-5-S333}, and for spin tomography \cite{SPINTOM}, again in a bipartite system \cite{ninni}. Here we shall derive Bell inequalities for \emph{multipartite} systems in terms of tomograms belonging to a \emph{generic} tomographic scheme. Then, we shall discuss the possibility of violating such inequalities depending on the tomographic realization. The layout of the paper is the following. 
In Section \ref{qtom} we formalize quantum tomography in a multipartite setting. Then, in Section \ref{belllike} we derive the Bell inequalities in terms of tomograms. In Section \ref{qviol} we provide some evidence of violations of such inequalities for spin$-\frac{1}{2}$ systems as well as for field modes, and finally draw the conclusions in Section \ref{conclu}. \section{Quantum tomography} \label{qtom} Here we briefly review the general quantum tomography approach for a single system, by detailing three relevant cases (optical \cite{OPTTOM}, spin \cite{SPINTOM} and photon-number tomography \cite{PNT}), and then extend the formalism to multipartite systems. The basic ingredients of any tomographic scheme are a Hilbert space $\mathcal{H}$ associated with the system under consideration and a pair of measurable sets $(X, \Lambda)$ with measures $\mu(x)$ and $\nu(\lambda)$ respectively. More precisely, the set of system states is the set $\mathcal{S}(\mathcal{H})$ of Hermitian non-negative trace-class operators on $\mathcal{H}$ with trace $1$. Usually the set $X$ is the spectrum of an observable of the system and the set $\Lambda$ plays the role of transformations. We use the notation $\mathcal{P}(X)$ for the set of probability distributions on $X$, i.e. the set of nonnegative measurable functions $p: X \to \mathbb{R}$ normalized to one in the following sense: $\int p(x)\,d\mu(x) = 1$. Both sets $\mathcal{S}(\mathcal{H})$ and $\mathcal{P}(X)$ are closed with respect to convex combinations: if $\hat{\varrho}, \hat{\sigma} \in \mathcal{S}(\mathcal{H})$ (resp. $p(x), q(x) \in \mathcal{P}(X)$) and $a \in [0, 1]$ then \begin{equation*} a \hat{\varrho} + (1-a) \hat{\sigma} \in \mathcal{S}(\mathcal{H})\quad ({\rm resp.}\; a p(x) + (1-a) q(x) \in \mathcal{P}(X)). 
\end{equation*} \begin{defn}\label{tomomap} A map $\mathcal{T}: \mathcal{S}(\mathcal{H}) \to \mathbb{R}^{X \times \Lambda}$ is called a tomographic map if the following three conditions are satisfied: \begin{enumerate} \item for any $\hat{\varrho} \in \mathcal{S}(\mathcal{H})$ the image $\mathcal{T}(\hat{\varrho}): X \times \Lambda \to \mathbb{R}$ restricted to the set $X \times \{\lambda\}$ is a probability density on $X$ \begin{equation*} \mathcal{T}_\lambda(\hat{\varrho}) \in \mathcal{P}(X) \quad \forall \lambda \in \Lambda, \quad{\rm where}\quad \mathcal{T}_\lambda(\hat{\varrho}) = \mathcal{T}(\hat{\varrho})|_{X \times \{\lambda\}}: X \to \mathbb{R}. \end{equation*} \item the map $\mathcal{T}$ preserves convex combinations \begin{equation*} \mathcal{T}(a \hat{\varrho} + (1-a) \hat{\sigma}) = a \mathcal{T}(\hat{\varrho}) + (1-a) \mathcal{T}(\hat{\sigma}), \quad \forall\hat{\varrho}, \hat{\sigma} \in \mathcal{S}(\mathcal{H}), a \in [0, 1]. \end{equation*} \item the map $\mathcal{T}$ is one-to-one \begin{equation*} \mathcal{T}(\hat{\varrho}) = \mathcal{T}(\hat{\sigma}) \Leftrightarrow \hat{\varrho} = \hat{\sigma}. \end{equation*} \end{enumerate} \end{defn} These conditions have a simple meaning: (i) means that the tomogram $\mathcal{T}(\hat{\varrho})$ of any state $\hat{\varrho}$ is a probability distribution on $X$ parameterized by the points of $\Lambda$, (ii) is the linearity condition, and (iii) requires that the tomogram of each state be unique, or, in other words, that any state can be unambiguously reconstructed from its tomogram. In the present work we deal with tomographic maps of the following form \begin{equation}\label{calT} \mathcal{T}(\hat{\varrho})(x, \lambda) \equiv p_{\hat{\varrho}}(x, \lambda) = \tr\Bigl(\hat{\varrho}\hat{U}(x, \lambda)\Bigr), \end{equation} where $\hat{U}(x, \lambda)$ is a family of operators on $\mathcal{H}$ parameterized by points $(x, \lambda)$ of the set $X \times \Lambda$. 
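As a minimal numerical sketch of the trace formula (\ref{calT}), one can check properties (i) and (ii) of Definition \ref{tomomap} on a toy qubit scheme; the state and the one-parameter family of rotated basis projectors below are illustrative assumptions, not one of the schemes discussed in the paper.

```python
import math

# Toy single-qubit scheme (illustrative assumption):
# X = {0, 1}, Lambda = [0, 2*pi), U(x, theta) = R(theta)|x><x|R(theta)^T,
# with R(theta) a real rotation standing in for a one-parameter unitary family.

def rotation(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def tomogram(rho, theta):
    """p(x, theta) = tr(rho U(x, theta)) = <x|R^T rho R|x> for x = 0, 1."""
    R = rotation(theta)
    probs = []
    for x in (0, 1):
        col = [R[0][x], R[1][x]]           # the rotated basis vector R|x>
        p = sum(col[i] * rho[i][j] * col[j] for i in range(2) for j in range(2))
        probs.append(p)
    return probs

rho0 = [[1.0, 0.0], [0.0, 0.0]]            # |0><0|
rho_mix = [[0.5, 0.0], [0.0, 0.5]]         # maximally mixed state

# Property (i): each tomogram is a probability distribution on X.
assert all(abs(sum(tomogram(r, th)) - 1) < 1e-12
           for r in (rho0, rho_mix) for th in (0.0, 0.4, 1.7))

# Property (ii): the map preserves convex combinations.
a = 0.3
mix = [[a * rho0[i][j] + (1 - a) * rho_mix[i][j] for j in range(2)] for i in range(2)]
for th in (0.0, 0.4, 1.7):
    p_mix = tomogram(mix, th)
    p_lin = [a * u + (1 - a) * v
             for u, v in zip(tomogram(rho0, th), tomogram(rho_mix, th))]
    assert all(abs(u - v) < 1e-12 for u, v in zip(p_mix, p_lin))
```

For the pure state $|0\rangle\langle 0|$ this reproduces $p(0,\theta) = \cos^2\theta$, $p(1,\theta) = \sin^2\theta$, normalized for every $\theta$.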
In the examples considered below the state $\hat{\varrho}$ can be reconstructed from its tomogram $p_{\hat{\varrho}}(x, \lambda)$ according to the formula \begin{equation}\label{eq:D} \hat{\varrho} = \iint_{X \times \Lambda} p(x, \lambda) \hat{\cal D}(x, \lambda)\,d\mu(x)\,d\nu(\lambda), \end{equation} for the appropriate $(x, \lambda)$-parameterized family of operators $\hat{\cal D}(x, \lambda)$ on $\mathcal{H}$. The set $X$ is the spectrum of an observable $\hat{O}$ and the set $\Lambda$ is a group equipped with a representation (in general projective) $\pi$ of $\Lambda$ on $\mathcal{H}$. The operators $\hat{U}(x, \lambda)$ have the following form \begin{equation}\label{eq:Ugen} \hat{U}(x, \lambda) = \pi(\lambda)|x\rangle\langle x|\pi^\dagger(\lambda), \end{equation} where $|x\rangle$ is an eigenstate of the observable $\hat{O}$. For a group theoretical approach to quantum tomography see \cite{jmp-41-7940}. See also \cite{Ibort} for a relation to groupoids. \subsection{Spin tomography} \label{spintom} Let us consider a system with spin $j$. In this case we have: $\mathcal{H} = \mathbb{C}^{2j+1}$, $X = \{-j, -j+1, \ldots, j-1, j\}$ and $\Lambda = {\rm SO}(3, \mathbb{R})$. We denote the elements of the sets $X$ and $\Lambda$ as $s$ and $\Omega$ respectively. The measure on $X$ is equal to one on each element, so the corresponding integral is simply a finite sum of $2j+1$ terms. The measure on ${\rm SO}(3, \mathbb{R})$ is the Haar measure. For the group ${\rm SO}(3, \mathbb{R})$, parameterized with Euler angles $\Omega \equiv (\varphi, \psi, \theta)$, the measure $\nu(\Omega)$ reads $\nu(\Omega) \equiv \nu(\varphi, \psi, \theta) = \sin\psi\,d\varphi\,d\psi\,d\theta$ and the operator $\hat{U}$ of (\ref{eq:Ugen}) takes the form \begin{equation} \hat{U}(s, \Omega) = \hat{K}(\Omega)|j, s\rangle\langle j, s|\hat{K}^\dagger(\Omega). 
\end{equation} Here the vectors $|j, s\rangle$, $s = -j, -j+1, \ldots, j-1, j$, are the basis of the space $\mathbb{C}^{2j+1}$ (eigenvectors of the spin projection $\hat{s}_z$) and the operators $\hat{K}(\Omega)$ are the operators of the irreducible representation of ${\rm SO}(3, \mathbb{R})$ in $\mathbb{C}^{2j+1}$. Their matrix elements are given by \begin{eqnarray} \langle j, s|\hat{K}(\Omega)|j, s^\prime\rangle& = & e^{i(s\theta+s^\prime\varphi)} \sqrt{\frac{(j+s^\prime)!(j-s^\prime)!}{(j+s)!(j-s)!}} \nonumber \\ &\times&\cos^{s+s^\prime}(\psi/2)\sin^{s^\prime-s}(\psi/2) P^{(s^\prime-s, s^\prime+s)}_{j-s^\prime}(\cos\psi), \end{eqnarray} with $P^{(\alpha, \beta)}_n(x)$ the Jacobi polynomials. Then the tomogram $p(s, \Omega) \equiv p(s, \varphi, \psi, \theta)$ of (\ref{calT}) is \begin{equation}\label{eq:ps} p(s, \Omega) = \langle j, s|\hat{K}(\Omega)\hat{\varrho}\hat{K}^\dagger(\Omega)|j, s\rangle. \end{equation} Due to the property $\langle j, s|\hat{K}(\Omega)|j, s^\prime\rangle = (-1)^{s^\prime-s}\langle j, -s|\hat{K}(\Omega)|j, -s^\prime\rangle$, the tomogram does not depend on the angle $\theta$, i.e. $p(s, \varphi, \psi, \theta) \equiv p(s, \varphi, \psi)$. Finally, the operator $\hat{\cal D}$ of (\ref{eq:D}) reads \begin{equation*} \hat{\cal D}(s, \Omega) = \sum^j_{n, m = -j}\langle j, n| \hat{\cal D}(s, \Omega)|j, m\rangle |j, n\rangle\langle j, m|, \end{equation*} where the matrix elements $\langle j, n|\hat{\cal D}(s, \Omega)|j, m\rangle$ are given by the following expression \begin{eqnarray}\label{eq:Dm} \langle j, n|\hat{\cal D}(s, \Omega)|j, m\rangle &=& \frac{(-1)^{s+m}}{8\pi^2}\sum^{2j}_{j_3 = 0} (2j_3+1)^2 \nonumber\\ &\times& \sum^{j_3}_{k = -j_3}\langle j, k|\hat{K}(\Omega)|j, 0\rangle \left(\begin{array}{ccc} j & j & j_3 \\ n & -m & k \end{array}\right) \left(\begin{array}{ccc} j & j & j_3 \\ s & -s & k \end{array}\right), \end{eqnarray} in terms of Wigner $3j$-symbols. 
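For $j = \frac{1}{2}$ the matrix elements above reduce to a $2 \times 2$ unitary, and the tomogram (\ref{eq:ps}) can be evaluated directly. The sketch below specializes the Jacobi-polynomial formula to $j = 1/2$ by hand (our own specialization; the state amplitudes are illustrative choices) and checks normalization and the stated $\theta$-independence.

```python
import cmath
import math

def K(phi, psi, theta):
    """<j,s|K(Omega)|j,s'> for j = 1/2, obtained by specializing the
    Jacobi-polynomial matrix elements; rows/columns ordered s = +1/2, -1/2."""
    c, s = math.cos(psi / 2), math.sin(psi / 2)
    return [[cmath.exp(1j * (theta + phi) / 2) * c,
             -cmath.exp(1j * (theta - phi) / 2) * s],
            [cmath.exp(-1j * (theta - phi) / 2) * s,
             cmath.exp(-1j * (theta + phi) / 2) * c]]

def spin_tomogram(rho, phi, psi, theta=0.0):
    """p(s, Omega) = <j,s| K rho K^dagger |j,s> for s = +1/2, -1/2."""
    U = K(phi, psi, theta)
    probs = []
    for row in U:  # row s of K gives <j,s|K
        p = sum(row[m] * rho[m][n] * row[n].conjugate()
                for m in range(2) for n in range(2))
        probs.append(p.real)
    return probs

# A generic pure qubit state (amplitudes chosen arbitrarily for the check).
a, b = 0.6, 0.8j
rho = [[abs(a) ** 2, a * b.conjugate()], [b * a.conjugate(), abs(b) ** 2]]

p = spin_tomogram(rho, phi=0.3, psi=1.1, theta=0.7)
assert abs(sum(p) - 1) < 1e-12            # normalization over s
q = spin_tomogram(rho, phi=0.3, psi=1.1, theta=2.5)
assert all(abs(u - v) < 1e-12 for u, v in zip(p, q))  # no theta dependence
```

The $\theta$-independence is visible in the code: each row of $\hat{K}$ carries an overall phase $e^{\pm i\theta/2}$, which cancels in the diagonal matrix element.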
\subsection{Optical tomography} \label{opttom} Here we have: $\mathcal{H} = L_2(\mathbb{R})$, $X = \mathbb{R}$ and $\Lambda = \{e^{i\theta}|\theta \in [0, 2\pi]\}$. The measures on $X$ and $\Lambda$ are the Lebesgue measures. The operator corresponding to Eq.(\ref{eq:Ugen}) reads \begin{equation}\label{eq:Uo} \hat{U}(X, \theta) = \hat{R}(\theta)|X\rangle\langle X|\hat{R}^\dagger(\theta), \end{equation} where $\hat{R}(\theta)$ is the rotation operator \begin{equation*} \hat{R}(\theta) = \exp\left(i\frac{\theta}{2}(\hat{x}^2+\hat{p}^2)\right), \end{equation*} acting on the canonical position $\hat{x}$ and momentum $\hat{p}$ operators as \begin{equation*} \hat{R}(\theta) \left(\begin{array}{c} \hat{x} \\ \hat{p} \end{array}\right) \hat{R}^\dagger(\theta) = \left( \begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array}\right) \left(\begin{array}{c} \hat{x} \\ \hat{p} \end{array}\right). \end{equation*} In other words, $\hat{U}(X, \theta)$ of (\ref{eq:Uo}) is the projector on the rotated eigenvector $|X\rangle$ of the position operator $\hat{x}$. The tomogram $p(X, \theta)$ of (\ref{calT}) is the diagonal matrix element \begin{equation} \label{eq:opttom} p(X, \theta) = \langle X|\hat{R}(\theta)\hat{\varrho}\hat{R}^\dagger(\theta)|X\rangle. \end{equation} Furthermore, the operator $\hat{\cal D}$ of (\ref{eq:D}) reads \begin{eqnarray*} \hat{\cal D}(X, \theta) =\frac{1}{4\pi}\int |r| \exp\Bigl(-ir(X-\cos\theta\hat{x}-\sin\theta\hat{p})\Bigr)\,dr. \end{eqnarray*} \subsection{Photon-Number tomography} \label{pntom} Here we have: $\mathcal{H} = L_2(\mathbb{R})$, $X = \mathbb{Z}_+ = \{0, 1, \ldots\}$ and $\Lambda = \mathbb{C}$. We denote the elements of the sets $X$ and $\mathbb{C}$ as $n$ and $\alpha$ respectively. The measure on $X$ is equal to one on each element and the measure on $\mathbb{C}$ is $(1/\pi)d^2\alpha$, where $d^2\alpha = d\re\alpha\,d\im\alpha$ is the Lebesgue measure on the real plane. 
Here, the operator $\hat{U}$ is the projector onto the displaced Fock state \begin{equation} \hat{U}(n, \alpha) = \hat{D}(\alpha)|n\rangle\langle n|\hat{D}^\dagger(\alpha), \end{equation} with \begin{equation*} \hat{D}(\alpha)\equiv\exp\left[\frac{\alpha-\alpha^*}{\sqrt{2}}\hat{x} -i\frac{\alpha+\alpha^*}{\sqrt{2}}\hat{p}\right]. \end{equation*} From (\ref{calT}) the tomogram $p(n, \alpha)$ reads \begin{equation} \label{eq:pntom} p(n, \alpha) = \langle n|\hat{D}(\alpha)\hat{\varrho}\hat{D}^\dagger(\alpha)|n\rangle. \end{equation} Furthermore, the operator $\hat{\cal D}$ of (\ref{eq:D}) becomes in this case \begin{eqnarray*} \hat{\cal D}(n, \alpha) =4(-1)^n\sum^{+\infty}_{m = 0} (-1)^m\hat{D}(\alpha)|m\rangle\langle m|\hat{D}^\dagger(\alpha). \end{eqnarray*} \subsection{Tomography for multi-partite systems} \label{tommulti} The generalization to multi-partite systems is straightforward. \begin{defn} Consider an $n$-partite system with the state space $\mathcal{H}^{\otimes n}$ and $n$ tomographic schemes, one for each part, with sets $(X_k, \Lambda_k)$ and operators $\hat{U}_k(x_k, \lambda_k)$ and $\hat{\cal D}_k(x_k, \lambda_k)$, $k = 1, \ldots, n$. The tomographic scheme for the whole system is then constructed as the direct product of these schemes, by using \begin{eqnarray} && X \equiv \prod^n_{k=1}X_k, \quad \Lambda \equiv \prod^n_{k=1}\Lambda_k, \nonumber\\ && \hat{U}({\gr{x}}, {\gr{\lambda}}) \equiv \bigotimes^n_{k=1}\hat{U}_k(x_k, \lambda_k), \quad \hat{\cal D}({\gr{x}}, {\gr{\lambda}}) \equiv \bigotimes^n_{k=1}\hat{\cal D}_k(x_k, \lambda_k),\label{eq:U2} \end{eqnarray} where ${\gr{x}} \equiv (x_1, \ldots, x_n)$, ${\gr{\lambda}} \equiv (\lambda_1, \ldots, \lambda_n)$ and the measures $\mu(\gr{x})$, $\nu(\gr{\lambda})$ on $X$, $\Lambda$ are direct products of $\mu_1(x_1), \ldots, \mu_n(x_n)$ and $\nu_1(\lambda_1), \ldots, \nu_n(\lambda_n)$ respectively. 
The tomogram $p(\gr{x}, \gr{\lambda})$ of a state $\hat{\varrho}$ (generalizing (\ref{calT})) is \begin{equation} p(\gr{x}, \gr{\lambda}) = \tr\Bigl(\hat{\varrho} \hat{U}(\gr{x}, \gr{\lambda})\Bigr). \end{equation} For any $\gr{\lambda} \in \Lambda$ it is a probability distribution on $X$, thus $ \int_X p(\gr{x}, \gr{\lambda})\, d\mu(\gr{x}) = 1$. \end{defn} \bigskip \noindent \textbf{Remark.} From the definition (\ref{eq:U2}) of the operator $\hat{U}(\gr{x}, \gr{\lambda})$ it immediately follows that the tomogram $p(\gr{x}, \gr{\lambda})$ of a factorized state \begin{equation}\label{eq:rhof} \hat{\varrho} = \hat{\varrho}_1 \otimes \ldots \otimes \hat{\varrho}_n \end{equation} is also factorized, i.e. \begin{equation}\label{eq:pf} p(\gr{x}, \gr{\lambda}) = p_1(x_1, \lambda_1) \ldots p_n(x_n, \lambda_n), \end{equation} where $p_k(x_k, \lambda_k)$ is the tomogram of the state $\hat{\varrho}_k$. More generally, the tomogram of a separable state \begin{equation}\label{eq:rhos} \hat{\varrho} = \sum^{+\infty}_{i=0} a_i \hat{\varrho}^{(i)}_1 \otimes \ldots \otimes \hat{\varrho}^{(i)}_n, \quad\quad a_i \geqslant 0, \quad \sum^{+\infty}_{i=0} a_i = 1 \end{equation} is also separable in the following sense: \begin{equation} p(\gr{x}, \gr{\lambda}) = \sum^{+\infty}_{i=0} a_i p^{(i)}_1(x_1, \lambda_1) \ldots p^{(i)}_n(x_n, \lambda_n), \end{equation} where $p^{(i)}_k(x_k, \lambda_k)$ is the tomogram of the state $\hat{\varrho}^{(i)}_k$. \section{Bell inequalities for tomograms} \label{belllike} Let us consider an $n$-partite system in the tomographic representation, with each subsystem supplied with a tomographic map ${\cal T}_k$, $k=1,\ldots,n$. The tomogram $p(\gr{x}, \gr{\lambda})$ of a state $\hat{\varrho}$ is a function of $2n$ arguments, and with respect to half of them it is a probability distribution. We will show that in general it cannot be considered as a classical joint probability. 
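The factorization property (\ref{eq:pf}) can be checked numerically for $n = 2$. The sketch below assumes a toy scheme of rotated basis projectors on qubits (an illustrative choice, not one of the schemes above) and verifies $\tr\bigl((\hat{\varrho}_1 \otimes \hat{\varrho}_2)(\hat{U}_1 \otimes \hat{U}_2)\bigr) = \tr(\hat{\varrho}_1\hat{U}_1)\,\tr(\hat{\varrho}_2\hat{U}_2)$.

```python
import math

def kron(A, B):
    """Kronecker (tensor) product of two square matrices as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def U(x, theta):
    """Rotated basis projector R(theta)|x><x|R(theta)^T on one qubit."""
    c, s = math.cos(theta), math.sin(theta)
    v = [c, s] if x == 0 else [-s, c]
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

rho1 = [[0.7, 0.1], [0.1, 0.3]]
rho2 = [[0.5, -0.2], [-0.2, 0.5]]
rho = kron(rho1, rho2)                     # factorized two-qubit state

for x1 in (0, 1):
    for x2 in (0, 1):
        joint = trace(matmul(rho, kron(U(x1, 0.4), U(x2, 1.3))))
        p1 = trace(matmul(rho1, U(x1, 0.4)))
        p2 = trace(matmul(rho2, U(x2, 1.3)))
        assert abs(joint - p1 * p2) < 1e-12   # p(x1, x2) = p1(x1) * p2(x2)
```

The check is a direct instance of $\tr(A \otimes B) = \tr A \cdot \tr B$ applied to $A = \hat{\varrho}_k\hat{U}_k$; by linearity the same run extended over a convex mixture would reproduce the separable form above.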
\begin{defn}\label{def:setsYZ} For any $k = 1, \ldots, n$ let $Y_k$ and $Z_k$ be two measurable sets such that \begin{equation*} X_k = Y_k \cup Z_k, \quad Y_k \cap Z_k = \emptyset, \end{equation*} and for any $\lambda_k \in \Lambda_k$ let $A_k(\lambda_k)$ be a dichotomic random variable on $X = \prod_k X_k$ such that \begin{eqnarray}\label{eq:rv} \mathbf{P}(A_k(\lambda_k) = 1) &=& \int_{Y_k} \tr \Bigl(\hat{\varrho}\hat{U}_k(x_k, \lambda_k)\Bigr)\,d\mu_k(x_k), \nonumber\\ \mathbf{P}(A_k(\lambda_k) = -1) &=& \int_{Z_k} \tr \Bigl(\hat{\varrho}\hat{U}_k(x_k, \lambda_k)\Bigr)\,d\mu_k(x_k). \end{eqnarray} \end{defn} Symbolically, the variables $A_k(\lambda_k)$ can be written as \begin{equation}\label{eq:rv2} A_k(\lambda_k) = \left\{\begin{array}{cc} 1 & {\rm if}\ x_k \in Y_k, \\ -1 & {\rm if}\ x_k \in Z_k \end{array}\right. \end{equation} in the coordinate system deformed by the operator $\hat{U}_k(x_k, \lambda_k)$. The joint probability distribution of the random variables $A_1(\lambda_1), \ldots, A_n(\lambda_n)$, namely \begin{equation*} p_{\varepsilon_1, \ldots, \varepsilon_n}(\lambda_1, \ldots, \lambda_n) = \mathbf{P}(A_1(\lambda_1) = \varepsilon_1, \ldots, A_n(\lambda_n) = \varepsilon_n), \end{equation*} where $\varepsilon_k = \pm 1$, is given by \begin{equation}\label{eq:jpd} p_{\varepsilon_1, \ldots, \varepsilon_n}(\lambda_1, \ldots, \lambda_n) = \int_{W_1} \ldots \int_{W_n} p(\gr{x}, \gr{\lambda})\,d\gr{x}, \end{equation} with \begin{equation*} W_k = \left\{\begin{array}{cc} Y_k & {\rm if}\ \varepsilon_k = 1, \\ Z_k & {\rm if}\ \varepsilon_k = -1. \end{array}\right. 
\end{equation*} The correlation function of $A_1(\lambda_1), \ldots, A_n(\lambda_n)$ reads \begin{eqnarray}\label{eq:Edef} E(\lambda_1, \ldots, \lambda_n) &\equiv& \bigl\langle A_1(\lambda_1) \ldots A_n(\lambda_n) \bigr\rangle\nonumber\\ &=& \sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} p_{\varepsilon_1, \ldots, \varepsilon_n}(\lambda_1, \ldots, \lambda_n) \varepsilon_1 \ldots \varepsilon_n. \end{eqnarray} \begin{defn} Let us fix two parameters $\lambda^{(1)}_k$ and $\lambda^{(2)}_k$ for $k = 1, \ldots, n$ and denote \begin{equation}\label{eq:tomcor} E(j_1, \ldots, j_n) \equiv E(\lambda^{(j_1)}_1, \ldots, \lambda^{(j_n)}_n), \quad j_k = 1, 2. \end{equation} Since each index $j_k$ can take $2$ values independently of all the other indices, there are $2^n$ correlation functions (\ref{eq:tomcor}). Then we denote by \begin{equation}\label{eq:e} \gr{e} \equiv \Bigl(E(j_1, \ldots, j_n)\Bigr) \in \mathbb{R}^{2^n} \end{equation} the vector of these correlation functions, with some order of the multi-indices $(j_1, \ldots, j_n)$. \end{defn} It is convenient to enumerate the functions $E(j_1, \ldots, j_n)$. For this purpose we use the binary base with ``digits'' $1$ and $2$ instead of $0$ and $1$. This means that we use the following one-to-one correspondence \begin{equation*} \{1, \ldots, 2^n\} \ni j \leftrightarrow (j_1, \ldots, j_n),\quad j_k = 1, 2, \end{equation*} where $j$ and $(j_1, \ldots, j_n)$ are related to each other according to \begin{equation}\label{eq:j} j = (j_1-1)2^{n-1} + \ldots + (j_{n-1}-1)2 + j_n. \end{equation} By virtue of such an ordering, the vector $\gr{e}$ (\ref{eq:e}) can be written as \begin{eqnarray}\label{eq:eo} \gr{e} = \Bigl(E(1), \ldots, E(2^n)\Bigr) =\Bigl(E(1, \ldots, 1), \ldots, E(2, \ldots, 2)\Bigr) \in \mathbb{R}^{2^n}. \end{eqnarray} What region $\Omega_n \subset \mathbb{R}^{2^n}$ does the vector $\gr{e}$ of (\ref{eq:eo}) fill? 
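The enumeration (\ref{eq:j}) amounts to writing $j - 1$ in ordinary binary and shifting each digit by one. A small sketch of the correspondence (the helper names are ours, introduced only for illustration):

```python
def multi_to_j(js):
    """Map (j_1, ..., j_n), j_k in {1, 2}, to j in {1, ..., 2^n},
    i.e. j = (j_1 - 1) 2^(n-1) + ... + (j_{n-1} - 1) 2 + j_n."""
    n = len(js)
    return 1 + sum((jk - 1) * 2 ** (n - k) for k, jk in enumerate(js, start=1))

def j_to_multi(j, n):
    """Inverse map: write j - 1 in binary and shift each digit by one."""
    bits = format(j - 1, "0{}b".format(n))
    return tuple(int(b) + 1 for b in bits)

n = 3
for j in range(1, 2 ** n + 1):
    assert multi_to_j(j_to_multi(j, n)) == j      # round trip
assert multi_to_j((1, 1, 1)) == 1
assert multi_to_j((2, 2, 2)) == 8
```

The same correspondence, applied to the $2n$ digits $i_k(j_k)$, yields the enumeration of the joint probabilities used below.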
Due to the fact that each observable has only two outcomes $\pm 1$, each correlation function (\ref{eq:tomcor}) is bounded by one in absolute value, so the set $\Omega_n$ is a subset of the $2^n$-dimensional cube ${[-1, 1]}^{2^n}$. Suppose that it is possible to model the result of the measurement by a random variable, $A_k(j_k)$, which can take two values $\pm 1$. We assume that these random variables can be arbitrarily correlated. \begin{defn}\label{defn:joint} Let us define by \begin{eqnarray}\label{eq:p} p(i_1(1), \ldots, i_n(2)) &\equiv& \mathbf{P}\Bigl(A_1(1) = i_1(1), \ldots, A_n(2) = i_n(2)\Bigr), \end{eqnarray} the joint probability distribution for the random variables $A_k(j_k)$, with $i_k(j_k) = \pm 1$. Since each index $i_k(j_k)$ can independently take $2$ values, we have $2^{2n}$ numbers (\ref{eq:p}) which completely describe the statistical characteristics of the random variables under consideration. We enumerate them with a single number $i = 1, \ldots, 2^{2n}$ using the same rule as for the correlation functions $E(j)$, namely \begin{equation*} \{1, \ldots, 2^{2n}\} \ni i \leftrightarrow (i_1(1), \ldots, i_n(2)), \end{equation*} where $i$ and $(i_1(1), \ldots, i_n(2))$ are related to each other according to \begin{equation}\label{eq:i} i = (i_1(1)-1)2^{2n-1} + \ldots + (i_n(1)-1)2 + i_n(2). \end{equation} Enumerated in such a way, the probabilities (\ref{eq:p}) form a $2^{2n}$-dimensional vector \begin{equation}\label{eq:po} \gr{p} \equiv (p_1, \ldots, p_{2^{2n}}) \in \mathbb{R}^{2^{2n}}. \end{equation} \end{defn} The point (\ref{eq:po}) lies in the standard simplex \begin{equation}\label{eq:S} S_{2^{2n}-1} = \left\{(p_1, \ldots, p_{2^{2n}})\biggm|\sum^{2^{2n}}_{i=1}p_i = 1, p_i \geqslant 0 \right\} \subset \mathbb{R}^{2^{2n}}. \end{equation} What region $\Omega_n \subset \mathbb{R}^{2^n}$ does the vector $\gr{e}$ (\ref{eq:eo}) fill when the point $\gr{p}$ (\ref{eq:po}) runs over the simplex $S_{2^{2n}-1}$ (\ref{eq:S})? 
To answer this question we explicitly relate $\gr{e}$ and $\gr{p}$, assuming the former can be expressed through classical joint probabilities. The correlation function $E(j)$ is then a linear combination of the $p_i$ with appropriate coefficients. Looking at (\ref{eq:Edef}), we take these coefficients, $\mathcal{E}(j, i)$, to be given by the product \begin{equation}\label{eq:Eji} \mathcal{E}(j, i) = i_1(j_1) \ldots i_n(j_n), \end{equation} where $j_k$ and $i_k(j_k)$ are ``digits'' of the numbers $j$ and $i$ in the binary representations (\ref{eq:j}) and (\ref{eq:i}). The numbers $\mathcal{E}(j, i)$, $j = 1, \ldots, 2^n$, $i = 1, \ldots, 2^{2n}$, form a $2^n \times 2^{2n}$ matrix $\mathcal{E}_n$, and the relation between $\gr{e}$ and $\gr{p}$ can then be written as \begin{equation}\label{eq:ep} \gr{e} = \mathcal{E}_n\gr{p}. \end{equation} We see that the region $\Omega_n$ is the image of the standard simplex $S_{2^{2n}-1}$, \begin{equation}\label{eq:Omega} \Omega_n = \mathcal{E}_n(S_{2^{2n}-1}), \end{equation} where we do not distinguish between the linear map $\mathcal{E}_n: \mathbb{R}^{2^{2n}} \to \mathbb{R}^{2^n}$ and its matrix $\mathcal{E}_n$ in the standard bases of $\mathbb{R}^{2^{2n}}$ and $\mathbb{R}^{2^n}$. Thus, we have reduced the problem of finding Bell inequalities to that of finding the set $\Omega_n$, which boils down to a standard problem of convex geometry, referred to as the convex hull problem: given points $\gr{c}_i$, find their convex hull, or the facets of maximal dimension of the corresponding polytope (for notions of convex geometry see, e.g., \cite{Gruber07}). Now we will obtain the Bell inequalities explicitly. 
Note that permutations of the columns of the matrix $\mathcal{E}_n$ do not change their convex hull; they correspond to permutations of the components of the vector $\gr{p}$, i.e. to different orderings of the probabilities (\ref{eq:po}), so one can safely permute the columns of $\mathcal{E}_n$ without altering (\ref{eq:ep}). \begin{theorem}\label{theoBell} The set $\Omega_n$ is specified by the (Bell) inequalities for the vector of the correlation functions \begin{equation}\label{eq:Bell} (\gr{e}, H_{2^n}\gr{c}) \leqslant 2^n, \quad \forall \gr{c} = (\pm 1, \ldots, \pm 1). \end{equation} The matrix $H_{2^n}$ is the Hadamard matrix defined recursively as \begin{eqnarray*} H_{2^n} = \underbrace{H_2 \otimes \ldots \otimes H_2}_{n}\,,\quad H_2 = \left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right). \end{eqnarray*} \end{theorem} \begin{proof} The key fact in deriving the Bell inequality (\ref{eq:Bell}) is that the matrix $\mathcal{E}_n$ can be written in the following block form \begin{equation}\label{eq:EH} \mathcal{E}_n =\Big( \underbrace{ \begin{array}{ccccc} H_{2^n} & -H_{2^n} & \ldots & H_{2^n} & -H_{2^n} \end{array} }_{2^n}\Big) \end{equation} after an appropriate arrangement of its columns. One can rewrite the r.h.s. of (\ref{eq:EH}) as the product of two matrices \begin{equation*} \mathcal{E}_n = H_{2^n} \left(\begin{array}{ccccc} E_{2^n} & -E_{2^n} & \ldots & E_{2^n} & -E_{2^n} \end{array}\right) = H_{2^n}A_n, \end{equation*} where $E_{2^n}$ denotes the $2^n \times 2^n$ identity matrix, which means that the linear map $\mathcal{E}_n: \mathbb{R}^{2^{2n}} \to \mathbb{R}^{2^n}$ can be decomposed into two maps \begin{equation*} \mathcal{E}_n = H_{2^n} \circ A_n, \quad A_n: \mathbb{R}^{2^{2n}} \to \mathbb{R}^{2^n}, \quad H_{2^n}: \mathbb{R}^{2^n} \to \mathbb{R}^{2^n}. 
\end{equation*} According to this decomposition, (\ref{eq:ep}) reads \begin{equation}\label{eq:eq} \gr{e} = H_{2^n}\gr{q}, \end{equation} where the vector $\gr{q} = A_n \gr{p} \in \mathbb{R}^{2^n}$ is explicitly given by \begin{equation}\label{eq:q} \gr{q} = \left(\begin{array}{c} p_1-p_{2^n+1}+\ldots-p_{(2^n-1)2^n+1} \\ \vdots \\ p_{2^n}-p_{2\cdot2^n}+\ldots-p_{2^{2n}} \end{array}\right). \end{equation} Define the following convex polytope $\mathcal{O}_N \subset \mathbb{R}^N$ \begin{equation}\label{eq:O} \mathcal{O}_N = \{\gr{x} \in \mathbb{R}^N | (\gr{x}, \gr{c}) \leqslant 1, \ \forall \gr{c} = (\pm 1, \ldots, \pm 1) \}. \end{equation} As one can easily see, the image of the standard simplex $S_{2^{2n}-1}$ under $A_n$ is exactly the polytope $\mathcal{O}_{2^n}$, that is, $A_n(S_{2^{2n}-1}) = \mathcal{O}_{2^n}$. From this fact we have \begin{equation}\label{eq:OH} \Omega_n = H_{2^n}(\mathcal{O}_{2^n}). \end{equation} Now the Bell inequalities can be straightforwardly obtained from this relation. Just notice that a non-degenerate linear map $f: \mathbb{R}^N \to \mathbb{R}^N$ with the matrix $F$ maps a half-space $\mathfrak{h} = \{\gr{x} \in \mathbb{R}^N | (\gr{x}, \gr{a}) \leqslant b \}$ to the half-space $f(\mathfrak{h}) = \{\gr{y} \in \mathbb{R}^N | (\gr{y}, (F^T)^{-1}\gr{a}) \leqslant b \}$. Taking into account (\ref{eq:OH}), the following representation of the polytope $\mathcal{O}_{2^n}$ \begin{equation}\label{eq:Oc} \mathcal{O}_{2^n} = \bigcap_{\gr{c} = (\pm 1, \ldots, \pm 1)} \{\gr{q}|(\gr{q}, \gr{c}) \leqslant 1\}, \end{equation} the symmetry of the Hadamard matrix $H_{2^n}$ and the formula for its inverse $H^{-1}_{2^n} = \frac{1}{2^n}H_{2^n}$, we get the explicit form of the set $\Omega_n$, i.e. \begin{equation} \Omega_n = \bigcap_{\gr{c} = (\pm 1, \ldots, \pm 1)} \{\gr{e}|(\gr{e}, H_{2^n}\gr{c}) \leqslant 2^n\}. \end{equation} Hence the Bell inequalities (\ref{eq:Bell}) follow. 
\end{proof} \bigskip \noindent \textbf{Remark.} Explicitly, (\ref{eq:Bell}) can be written as \begin{equation}\label{eq:B} \left|\sum^2_{j_1, \ldots, j_n = 1} a_{j_1, \ldots, j_n} E(j_1, \ldots, j_n) \right| \leqslant 2^n, \end{equation} where the coefficients $a_{j_1, \ldots, j_n}$ are connected with the vector $\gr{c}$ by the relation \begin{equation}\label{eq:a} a_{j_1, \ldots, j_n} = \sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \varepsilon^{j_1-1}_1 \ldots \varepsilon^{j_n-1}_n. \end{equation} The number $c(\varepsilon_1, \ldots, \varepsilon_n)$ here is the $i$-th component of the vector $\gr{c}$, where the binary representation of $i$ is $i = (\varepsilon_1 \ldots \varepsilon_n)_2$ with digits $+1$ and $-1$ instead of $0$ and $1$. One can easily see that there are $2^{n+1}$ inequalities of the form $\pm E(j_1, \ldots, j_n) \leqslant 1$. They correspond to the functions $c(\varepsilon_1, \ldots, \varepsilon_n)$ that are columns of either $H_{2^n}$ or $-H_{2^n}$, and are referred to as trivial inequalities. Finally, notice that the well-known CHSH inequality \cite{CHSH} is a particular instance of (\ref{eq:B}) corresponding to $n=2$ and \begin{equation*} c(-1,-1)=-1,\quad c(-1,+1)=c(+1,-1)=c(+1,+1)=+1. \end{equation*} \begin{theorem} Any separable state satisfies (\ref{eq:Bell}) with the correlation functions (\ref{eq:tomcor}). \end{theorem} \begin{proof} Let us start with a factorized state (\ref{eq:rhof}), whose tomogram (\ref{eq:pf}) is also factorized. 
Due to this, the random variables $A_1(\lambda_1), \ldots, A_n(\lambda_n)$ are independent and the correlation function $E(\lambda_1, \ldots, \lambda_n)$ reads \begin{equation}\label{eq:Eq} E(\lambda_1, \ldots, \lambda_n) = q_1(\lambda_1) \ldots q_n(\lambda_n) \end{equation} with \begin{equation*} q_k(\lambda_k) = p^{(k)}_1(\lambda_k) - p^{(k)}_{-1}(\lambda_k), \end{equation*} where \begin{equation*} p^{(k)}_{\varepsilon_k}(\lambda_k) = \mathbf{P}(A_k(\lambda_k) = \varepsilon_k), \quad \varepsilon_k = \pm 1. \end{equation*} Since \begin{equation*} p^{(k)}_1(\lambda_k) + p^{(k)}_{-1}(\lambda_k) = 1, \quad \forall k = 1, \ldots, n \quad \forall \lambda_k \in \Lambda_k, \end{equation*} it is clear that $-1 \leqslant q_k(\lambda_k) \leqslant 1$. The left hand side of the inequality (\ref{eq:Bell}) is a linear function of each $q_k(\lambda^{(j_k)}_k)$ when all the $q_1(\lambda^{(j_1)}_1), \ldots, q_n(\lambda^{(j_n)}_n)$, $j_k = 1, 2$, are considered as independent variables. A linear function defined on the convex set $[-1, 1]$ attains its maximum at a boundary point, $\pm 1$ in this case, and so the left hand side of (\ref{eq:Bell}) is maximal if $q_k(\lambda^{(j_k)}_k) = \pm 1$, $j_k = 1, 2$, $k = 1, \ldots, n$. In such a case the vector $\gr{e}$ of correlation functions is a column of either $H_{2^n}$ or $-H_{2^n}$. Indeed, due to (\ref{eq:Eq}) the vector $\gr{e}$ reads \begin{equation*} \gr{e} = \left(\begin{array}{c} q_1(1) \\ q_1(2) \end{array}\right) \otimes \ldots \otimes \left(\begin{array}{c} q_n(1) \\ q_n(2) \end{array}\right), \end{equation*} where $q_k(j_k) = q_k(\lambda^{(j_k)}_k)$. That is, $\gr{e} = \pm\gr{c}_i$ with $\gr{c}_i$ the $i$-th column of $H_{2^n}$, and then \begin{eqnarray}\label{eHc} (\gr{e}, H_{2^n}\gr{c}) &= \pm (\gr{c}_i, H_{2^n}\gr{c}) = \pm (H_{2^n}\gr{c}_i, \gr{c})= \pm (2^n \gr{e}_i, \gr{c}) = \pm 2^n \leqslant 2^n. 
\end{eqnarray} Here we used the orthogonality of the columns of $H_{2^n}$: $H_{2^n}\gr{c}_i = 2^n \gr{e}_i$, where all the coordinates of $\gr{e}_i$ are zero except the $i$-th, which is one. Hence, we have proved that all factorized states satisfy (\ref{eq:Bell}). Let us now consider a general separable state (\ref{eq:rhos}). Since the correlation function $E(\lambda_1, \ldots, \lambda_n)$ is a linear function of the state, the vector $\gr{e}$ is a convex combination of the vectors $\gr{e}^{(i)}$ corresponding to the states $\hat{\varrho}^{(i)} = \hat{\varrho}^{(i)}_1 \otimes \ldots \otimes \hat{\varrho}^{(i)}_n$, i.e. \begin{equation*} \gr{e} = \sum_{i} a_i \gr{e}^{(i)}. \end{equation*} As we have already shown, each vector $\gr{e}^{(i)}$ satisfies all the Bell inequalities (\ref{eq:Bell}), i.e. it lies in the convex set $\Omega_n$. Since all the vectors $\gr{e}^{(i)}$ are in $\Omega_n$, so is their convex combination $\gr{e}$. This means that any separable state satisfies all the inequalities (\ref{eq:Bell}). \end{proof} \section{Quantum violations} \label{qviol} The Bell inequalities are of interest not because they are always valid but because they can be violated. One may ask where the proof of Theorem \ref{theoBell} fails for quantum systems. The problem lies in the underlying hypothesis of locality used when relating $\gr{e}$ with $\gr{p}$ in (\ref{eq:ep}). In doing so we have implicitly treated (\ref{eq:Edef}) as a classical joint probability, which is not generally true at the quantum level. We follow Mermin \cite{prl-65-1838} to derive the Bell inequality whose maximal quantum violation is the largest among all. For an odd number $n$ of systems let us consider the following random variable \begin{equation}\label{eq:Mn} M_n = \im \left[ \prod^n_{k=1}(A_k(1) + i A_k(2)) \right]. \end{equation} Since each $A_k$ can take only the values $\pm 1$, each factor in this product is equal to $\sqrt{2}$ in absolute value. 
Furthermore, since $n$ is odd, the whole product has a phase that is an integer multiple of $\pi/4$. As a consequence we have \begin{equation}\label{eq:Mer} |\langle M_n \rangle| \leqslant 2^{(n-1)/2}. \end{equation} Explicitly this inequality reads \begin{equation}\label{eq:Mo} \left| \sum_{(j_1, \ldots, j_n) \in J} (-1)^{\delta(j_1, \ldots, j_n)} E(j_1, \ldots, j_n) \right| \leqslant 2^{(n-1)/2}, \end{equation} where the sum runs over the set of multi-indices $(j_1, \ldots, j_n)$ which contain an odd number of $2$'s, \begin{equation*} J = \Bigl \{(j_1, \ldots, j_n) \Bigm| |\{k|j_k=2\}| = 2l+1 \Bigr\}, \end{equation*} and \begin{equation*} \delta(j_1, \ldots, j_n) = l, \quad |\{k|j_k=2\}| = 2l+1. \end{equation*} Multiplied by $2^{(n+1)/2}$, the inequality (\ref{eq:Mo}) takes the form (\ref{eq:B}), and it is easy to show that it is a Bell inequality, i.e. there is a vector $\gr{c}$ that gives the coefficients of (\ref{eq:Mo}) (multiplied by $2^{(n+1)/2}$) according to (\ref{eq:a}). We now consider an even number $n$. Let us denote the expression (\ref{eq:Mn}) by $M_n(1, 2)$ and the analogous expression with the variables $A_k(1)$ and $A_k(2)$ swapped by $M_n(2, 1)$. Consider the following combination \begin{eqnarray}\label{eq:Meo} \widetilde{M}_n = M_{n-1}(1, 2) (A_n(1)+A_n(2)) + M_{n-1}(2, 1) (A_n(1)-A_n(2)). \end{eqnarray} Since $M_{n-1}(1, 2)$ is equal to $\pm 2^{n/2-1}$ and $A_n(j) = \pm 1$, we have \begin{equation}\label{eq:M2} |\langle \widetilde{M}_n \rangle| \leqslant 2^{n/2}. 
\end{equation} Using the explicit form (\ref{eq:Mo}) for the odd number $n-1$, one can write (\ref{eq:M2}) as \begin{equation}\label{eq:Me} \left|\sum^2_{j_1, \ldots, j_n = 1} (-1)^{\tilde{\delta}(j_1, \ldots, j_n)} E(j_1, \ldots, j_n) \right| \leqslant 2^{n/2}, \end{equation} where \begin{eqnarray*} \tilde{\delta}(j_1, \ldots, j_n) = \left\{\begin{array}{cc} 1 & {\rm if}\; j_n = 2 \; {\rm and} \; |\{k|j_k = 2\}|\ {\rm is \; nonzero \; and \; even}\\ 0 & {\rm otherwise } \end{array}\right. .\nonumber\\ \end{eqnarray*} One can check that this is a Bell inequality and that, multiplied by $2^{n/2}$, it takes the form (\ref{eq:B}). Furthermore, for $n=2$ Eq.(\ref{eq:Me}) exactly reduces to the CHSH inequality \cite{CHSH}. Let us now see how the inequalities (\ref{eq:Mo}) and (\ref{eq:Me}) can be violated in different tomographic realizations, starting from the entangled state \begin{eqnarray}\label{eq:psi1} |\Psi\rangle &= \frac{1}{\sqrt{2}} \Bigl( |{\bf 0}\rangle + |{\bf 1}\rangle \Bigr), \end{eqnarray} where ${\bf 0} = (0, \ldots, 0)$ and ${\bf 1} = (1, \ldots, 1)$. Below, for the sake of simplicity, the focus will mainly be on $n=2,3$. 
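The classical bound in (\ref{eq:Mo}) for $n=3$ can be verified by brute force over all deterministic local assignments $A_k(j) = \pm 1$ (a small added check, not part of the original text):

```python
import itertools

def mermin_lhs(a1, a2, b1, b2, c1, c2):
    # l.h.s. of the n = 3 Mermin combination: multi-indices with an odd
    # number of 2's, with sign (-1)^l when there are 2l+1 of them.
    return a2*b1*c1 + a1*b2*c1 + a1*b1*c2 - a2*b2*c2

# Scan all 2^6 deterministic strategies (A_k(1), A_k(2)) in {+1,-1}.
best = max(abs(mermin_lhs(*v)) for v in itertools.product([1, -1], repeat=6))
assert best == 2  # classical bound 2^{(n-1)/2} = 2 for n = 3
```

The quantum value reaches 4 for the state (\ref{eq:psi1}), as shown in the spin-tomography example below.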
\subsection{Spin tomography} Using the notation $|0\rangle \equiv |\halfm\rangle$, $|1\rangle \equiv |\halfp\rangle$ for the spin projection along $z$, the state (\ref{eq:psi1}) becomes \begin{equation*} |\Psi\rangle = \frac{1}{\sqrt{2}}\left(|\halfm, \ldots, \halfm\rangle + |\halfp, \ldots, \halfp\rangle\right), \end{equation*} whose tomogram, referring to Eq.(\ref{eq:Dm}), reads as \begin{eqnarray}\label{eq:spintomex} p(s_1, \ldots, s_n, \Omega_1, \ldots, \Omega_n) = \frac{1}{2} \left| \prod^n_{j=1} \langle s_j|\hat{K}(\Omega_j)|\halfm\rangle + \prod^n_{j=1} \langle s_j|\hat{K}(\Omega_j)|\halfp\rangle \right|^2.\nonumber\\ \end{eqnarray} For $n=2$ we immediately get \begin{eqnarray*} p(\halfp, \halfp, \Omega_1, \Omega_2) &=& p(\halfm, \halfm, \Omega_1, \Omega_2) \nonumber \\ &=& \frac{1}{4} (1 + \cos\psi_1 \cos\psi_2 + \sin\psi_1 \sin\psi_2 \cos(\varphi_1+\varphi_2)), \nonumber \\ p(\halfp, \halfm, \Omega_1, \Omega_2) &=& p(\halfm, \halfp, \Omega_1, \Omega_2) \nonumber \\ &=& \frac{1}{4} (1 - \cos\psi_1 \cos\psi_2 - \sin\psi_1 \sin\psi_2 \cos(\varphi_1+\varphi_2)), \end{eqnarray*} and the correlation function (\ref{eq:Edef}) becomes \begin{eqnarray}\label{eq:E2spin} E(\Omega_1, \Omega_2) = \cos\psi_1 \cos\psi_2 + \sin\psi_1 \sin\psi_2 \cos(\varphi_1+\varphi_2). \end{eqnarray} The Bell inequality (\ref{eq:Me}) reads in this case \begin{eqnarray}\label{eq:Bellspin} \Bigl|E(\Omega^{(1)}_1, \Omega^{(1)}_2) &+ E(\Omega^{(1)}_1, \Omega^{(2)}_2) + E(\Omega^{(2)}_1, \Omega^{(1)}_2) - E(\Omega^{(2)}_1, \Omega^{(2)}_2)\Bigr| \leqslant 2, \end{eqnarray} for all $\Omega^{(j)}_k = (\varphi^{(j)}_k, \psi^{(j)}_k, \theta^{(j)}_k)$, $j,k = 1, 2$. The maximum of the l.h.s. of (\ref{eq:Bellspin}) with (\ref{eq:E2spin}) is $2\sqrt{2}$ and is attained by taking e.g. 
(the angles $\theta$ do not matter here) \begin{eqnarray*} \Omega^{(1)}_1 &= (\varphi_1, -\pi/8, 0), & \quad \Omega^{(1)}_2 = (-\varphi_1, \pi/8, 0), \\ \Omega^{(2)}_1 &= (\varphi_1, 3\pi/8, 0), & \quad \Omega^{(2)}_2 = (-\varphi_1, -3\pi/8, 0). \end{eqnarray*} In the case of $n=3$, from (\ref{eq:spintomex}), we have (not to overload the notation we omit the $\Omega$'s) \begin{eqnarray*} p(\halfp, \halfp, \halfp) &= \frac{1}{8}[1 + \cos\psi_1 \cos\psi_2 + \cos\psi_1 \cos\psi_3 + \cos\psi_2 \cos\psi_3 \nonumber \\ &- \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfp, \halfp, \halfm) &= \frac{1}{8}[1 + \cos\psi_1 \cos\psi_2 - \cos\psi_1 \cos\psi_3 - \cos\psi_2 \cos\psi_3 \nonumber \\ &+ \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfp, \halfm, \halfp) &= \frac{1}{8}[1 - \cos\psi_1 \cos\psi_2 + \cos\psi_1 \cos\psi_3 - \cos\psi_2 \cos\psi_3 \nonumber \\ &+ \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfp, \halfm, \halfm) &= \frac{1}{8}[1 - \cos\psi_1 \cos\psi_2 - \cos\psi_1 \cos\psi_3 + \cos\psi_2 \cos\psi_3 \nonumber \\ &- \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfm, \halfp, \halfp) &= \frac{1}{8}[1 - \cos\psi_1 \cos\psi_2 - \cos\psi_1 \cos\psi_3 + \cos\psi_2 \cos\psi_3 \nonumber \\ &+ \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfm, \halfp, \halfm) &= \frac{1}{8}[1 - \cos\psi_1 \cos\psi_2 + \cos\psi_1 \cos\psi_3 - \cos\psi_2 \cos\psi_3 \nonumber \\ &- \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfm, \halfm, \halfp) &= \frac{1}{8}[1 + \cos\psi_1 \cos\psi_2 - \cos\psi_1 \cos\psi_3 - \cos\psi_2 \cos\psi_3 \nonumber \\ &- \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)], \nonumber \\ p(\halfm, \halfm, \halfm) &= \frac{1}{8}[1 + \cos\psi_1 \cos\psi_2 + 
\cos\psi_1 \cos\psi_3 + \cos\psi_2 \cos\psi_3 \nonumber \\ &+ \sin\psi_1 \sin\psi_2 \sin\psi_3 \cos(\varphi_1 + \varphi_2 + \varphi_3)]. \end{eqnarray*} From these tomograms the correlation function (\ref{eq:Edef}) becomes \begin{equation}\label{eq:E3spin} E(\Omega_1,\Omega_2,\Omega_3)=-\sin\psi_1\sin\psi_2\sin\psi_3\cos(\varphi_1 + \varphi_2 + \varphi_3). \end{equation} Finally, the Bell inequality (\ref{eq:Mo}) in this case reads \begin{eqnarray} \left| E(\Omega^{(2)}_1,\Omega^{(1)}_2,\Omega^{(1)}_3)+ E(\Omega^{(1)}_1,\Omega^{(2)}_2,\Omega^{(1)}_3) +E(\Omega^{(1)}_1,\Omega^{(1)}_2,\Omega^{(2)}_3)-E(\Omega^{(2)}_1,\Omega^{(2)}_2,\Omega^{(2)}_3)\right|\le 2. \end{eqnarray} Using (\ref{eq:E3spin}), the maximum violation occurs when the l.h.s. equals 4. This value can be attained by taking e.g. (again the angles $\theta$ do not matter here) \begin{eqnarray*} \psi^{(1)}_1=\psi^{(1)}_2=\psi^{(1)}_3=\pi/2, \qquad \varphi^{(1)}_1=\varphi^{(1)}_2=\varphi^{(1)}_3=5\pi/6,\nonumber\\ \psi^{(2)}_1=\psi^{(2)}_2=\psi^{(2)}_3=\pi/2, \qquad \varphi^{(2)}_1=\varphi^{(2)}_2=\varphi^{(2)}_3=\pi/3. \end{eqnarray*} \subsection{Optical tomography}\label{vioot} The tomogram of the state (\ref{eq:psi1}), according to (\ref{eq:opttom}), is given by \begin{eqnarray}\label{eq:opttomexp} p(\gr{X}, \gr{\theta}) = \frac{1}{2\sqrt{\pi^n}} \left[ 1+ 2^n\prod_{i=1}^n (X_i^2) +2^{(n+2)/2}\prod_{i=1}^n (X_i) \cos(\theta_1+\ldots+\theta_n) \right] \exp\left[-\sum_{i=1}^n X_i^2\right], \end{eqnarray} where $\gr{X} = (X_1, \ldots, X_n)$ and $\gr{\theta}=(\theta_1, \ldots, \theta_n)$. We take the sets $Y_k$ and $Z_k$ of Definition \ref{def:setsYZ} to be \begin{equation*} Y_k = [x, +\infty), \quad Z_k = (-\infty, x). 
\end{equation*} For such sets and the tomogram (\ref{eq:opttomexp}), the correlation function (\ref{eq:Edef}) results in \begin{eqnarray}\label{eq:Et} E(\gr{\theta}) &= 2^{n-1}\left[ {\sf a}_{0}^n(x)+{\sf a}_{1}^n(x)\right] +2^n {\sf b}_{0}^n(x)\cos(\theta_1+\ldots+\theta_n), \end{eqnarray} where \begin{eqnarray*} {\sf a}_0(x) = -\frac{1}{2}\erf(x),\;\;\; {\sf a}_1(x) = -\frac{1}{2}\erf(x)+\frac{1}{\sqrt{\pi}} xe^{-x^2},\;\;\; {\sf b}_0(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2}. \end{eqnarray*} We now insert (\ref{eq:Et}) into (\ref{eq:Mo}) or (\ref{eq:Me}) to get an explicit version of the Bell inequality. In doing so we use a lemma, reported in Appendix~\ref{maxval}, showing that the maximal value of \begin{equation*} \sum\limits^2_{j_1, \ldots, j_n = 1} a_{j_1, \ldots, j_n} \cos(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n) \end{equation*} does not exceed $2^{n+(n-1)/2}$, and that this value is attained with the coefficients of (\ref{eq:Mo}) or (\ref{eq:Me}). It then follows that the maximal value $f_{n}(x)$ of the l.h.s. of (\ref{eq:Mo}) and of (\ref{eq:Me}) is \begin{equation}\label{fnk} f_{n}(x) = \left\{\begin{array}{ccc} 2^{n}|{\sf a}^n_{0}(x)+{\sf a}^n_{1}(x)| + 2^{n+(n+1)/2}| {\sf b}^n_{0}(x)|, & & n \; {\rm odd}\\ 2^{n}|{\sf a}^n_{0}(x)+{\sf a}^n_{1}(x)| + 2^{n+n/2} \quad\;\; | {\sf b}^n_{0}(x)|, & & n \; {\rm even} \end{array}\right. . \end{equation} Figure \ref{fig:v} illustrates the function $f_{n}(x)$ for $n=2,3$. A tiny violation of the Bell inequality occurs only for $n=3$. \begin{figure} \begin{center} \includegraphics[scale=1]{fig-opt-tom.pdf} \end{center} \caption{Function $f_{n}$ of Eq.~(\ref{fnk}) versus $x$ for $n=2$ (dashed line) and $n=3$ (solid line). 
} \label{fig:v} \end{figure} \subsection{Photon-Number tomography}\label{viopnt} For the state (\ref{eq:psi1}), the number tomogram (\ref{eq:pntom}) can be computed as \begin{eqnarray}\label{eq:pntomex} p(m_1,\ldots m_n, \alpha_1, \ldots \alpha_n) &=\prod_{i=1}^n \frac{|\alpha_i|^{2m_i-2}}{m_i!} e^{-|\alpha_i|^2} \left| \prod_{i=1}^n \alpha_i +\prod_{i=1}^n (m_i - |\alpha_i|^2)\right|^2. \end{eqnarray} We further choose the sets of Definition \ref{def:setsYZ} as $Z_1 = \ldots = Z_n = \{0\}$, $Y_1 =\ldots = Y_n = \{1,2,3, \ldots\}$. The corresponding correlation function (\ref{eq:Edef}) for $n=2$ is \begin{eqnarray}\label{eq:E2pn} E(\alpha_1, \alpha_2)=e^{-|\alpha_1|^2-|\alpha_2|^2}&& \left[2+4 \Re(\alpha_1 \alpha_2)+2|\alpha_1|^2|\alpha_2|^2 \right.\nonumber\\ &&\left.-\left(1+|\alpha_2|^2\right) e^{|\alpha_1|^2} -\left(1+|\alpha_1|^2\right) e^{|\alpha_2|^2} +e^{|\alpha_1|^2+|\alpha_2|^2}\right]. \end{eqnarray} Furthermore, the Bell inequality for the number tomogram with $n=2$ is, from (\ref{eq:Me}), \begin{eqnarray}\label{eq:Bellpn2} \Bigl|E(\alpha^{(1)}_1, &\alpha^{(1)}_2) + E(\alpha^{(1)}_1, \alpha^{(2)}_2) + E(\alpha^{(2)}_1, \alpha^{(1)}_2) - E(\alpha^{(2)}_1, \alpha^{(2)}_2)\Bigr| \leqslant 2, \end{eqnarray} for all $\alpha^{(j)}_1, \alpha^{(j)}_2 \in \mathbb{C}$, $j = 1, 2$. Figure \ref{fig:pnt} illustrates that this inequality can be violated using (\ref{eq:E2pn}). 
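As a numerical sanity check of (\ref{eq:E2pn}) (an illustration added here, not part of the original text; the $\alpha$'s are taken real so that $\Re(\alpha_1\alpha_2)=\alpha_1\alpha_2$), note that for $\alpha_1=\alpha_2=0$ the binning distinguishes the vacuum from the rest of the Fock space, and for the state $(|00\rangle+|11\rangle)/\sqrt{2}$ the two outcomes are perfectly correlated:

```python
import math

def E2(a1, a2):
    # correlation function (eq:E2pn), restricted to real alpha_1, alpha_2
    x, y = a1 * a1, a2 * a2
    bracket = (2 + 4 * a1 * a2 + 2 * x * y
               - (1 + y) * math.exp(x) - (1 + x) * math.exp(y)
               + math.exp(x + y))
    return math.exp(-x - y) * bracket

# Perfect correlation at alpha_1 = alpha_2 = 0.
assert abs(E2(0.0, 0.0) - 1.0) < 1e-12

# E is a difference of probabilities, so it must stay within [-1, 1].
for a1 in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for a2 in (-1.0, -0.5, 0.0, 0.5, 1.0):
        assert abs(E2(a1, a2)) <= 1.0 + 1e-12
```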
Analogously, from (\ref{eq:pntomex}) it follows that the correlation function (\ref{eq:Edef}) for $n=3$ is \begin{eqnarray}\label{eq:E3pn} E(\alpha_1, \alpha_2,\alpha_3)=e^{-|\alpha_1|^2-|\alpha_2|^2-|\alpha_3|^2}&& \left[-4+8 \Re(\alpha_1 \alpha_2 \alpha_3)-4|\alpha_1|^2 |\alpha_2|^2 |\alpha_3|^2 \right.\nonumber\\ &&\left.+2\left(e^{|\alpha_1|^2}+e^{|\alpha_2|^2}+e^{|\alpha_3|^2}\right) +2 |\alpha_2|^2 |\alpha_3|^2 e^{|\alpha_1|^2} \right.\nonumber\\ &&\left.+2 |\alpha_1|^2 |\alpha_2|^2 e^{|\alpha_3|^2} +2 |\alpha_1|^2 |\alpha_3|^2 e^{|\alpha_2|^2} \right.\nonumber\\ &&\left.-\left(1+|\alpha_1|^2 \right) e^{|\alpha_2|^2+|\alpha_3|^2} -\left(1+|\alpha_2|^2\right) e^{|\alpha_1|^2+|\alpha_3|^2} \right.\nonumber\\ &&\left. -\left(1+|\alpha_3|^2\right) e^{|\alpha_1|^2+|\alpha_2|^2} +e^{|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2}\right]. \end{eqnarray} This time the Bell inequality for the number tomogram reads, from (\ref{eq:Mo}), \begin{eqnarray}\label{eq:Bellpn3} \Bigl|E(\alpha^{(1)}_1, \alpha^{(1)}_2, \alpha^{(2)}_3) + E(\alpha^{(1)}_1, \alpha^{(2)}_2, \alpha^{(1)}_3) + E(\alpha^{(2)}_1, \alpha^{(1)}_2, \alpha^{(1)}_3) - E(\alpha^{(2)}_1, \alpha^{(2)}_2, \alpha^{(2)}_3)\Bigr| \leqslant 2. \end{eqnarray} Numerical checks show that this inequality is never violated with (\ref{eq:E3pn}); an example of the behavior of the l.h.s. is shown in figure \ref{fig:pnt}. Choosing instead $Z_1 = \ldots = Z_n = \{0,\ldots,m\}$, $Y_1 =\ldots = Y_n = \{m+1,m+2, \ldots\}$, with $m>0$, numerical checks show that neither (\ref{eq:Bellpn2}) nor (\ref{eq:Bellpn3}) is ever violated using (\ref{eq:E2pn}) and (\ref{eq:E3pn}), respectively. \begin{figure} \begin{center} \includegraphics[scale=1]{fig-pn-tom.pdf} \end{center} \caption{The left hand side of (\ref{eq:Bellpn2}) as a function of $\alpha^{(2)}_2$ (dashed line); the other parameters are given by $\alpha^{(1)}_1 = 0.165$, $\alpha^{(1)}_2 = -0.165$, $\alpha^{(2)}_1 = -0.559$. 
The left hand side of (\ref{eq:Bellpn3}) as a function of $\alpha^{(2)}_2$ (solid line); the other parameters are given by $\alpha^{(1)}_1 = \alpha^{(1)}_2 = 0$, $\alpha^{(1)}_3 = 5.936$, $\alpha^{(2)}_1 = 4.767$, $\alpha^{(2)}_3 = 4$.} \label{fig:pnt} \end{figure} \section{Concluding remarks} \label{conclu} As we have seen from the previous examples, the use of a finite number (namely $2^n$) of tomograms within a tomographic realization may provide evidence of nonlocality. It turns out that finite-dimensional systems, by means of spin tomograms, allow for the best evidence of nonlocality. In contrast, violations of Bell inequalities seem much harder to uncover in infinite-dimensional systems where ${\cal H}=L_2(\mathbb{R})$. Given that we have considered in both cases the same (entangled) state (\ref{eq:psi1}), this difference, according to Ref. \cite{SAM}, must be ascribed to the diversity of the observables employed (from which the tomograms stem). However, we argue that the way the spectrum of an observable is binned could also play a role. As a matter of fact, the choices made in Sections \ref{vioot} and \ref{viopnt} for $Y_k$ and $Z_k$ do not exhaust all possibilities for these measurable sets. Unfortunately, searching for violations of Bell inequalities using optical tomograms (resp. photon-number tomograms) by scanning the possible sets $Y_k$ and $Z_k$ appears to be a daunting task. All in all, the advantage of the tomographic approach is that it allows one to find the large violations of Bell inequalities typical of spin systems also in infinite-dimensional systems. 
In fact, introducing in $L_2(\mathbb{R})^{\otimes n}$ the following local pseudo-spin operators \cite{MISTA} \begin{eqnarray*} \hat{S}^{(k)}_x &=& \sum\limits^{+\infty}_{n_k=0} \Bigl( |2n_k\rangle\langle2n_k+1| + |2n_k+1\rangle\langle 2n_k| \Bigr), \nonumber\\ \hat{S}^{(k)}_y &=& -i\sum\limits^{+\infty}_{n_k=0} \Bigl( |2n_k\rangle\langle2n_k+1| - |2n_k+1\rangle\langle 2n_k| \Bigr), \nonumber\\ \hat{S}^{(k)}_z &=& \sum\limits^{+\infty}_{n_k=0} (-1)^{n_k}|n_k\rangle\langle n_k|, \end{eqnarray*} where $|n_k\rangle$ are Fock states of the $k$th subsystem, we can derive the tomograms of the spin tomography realized with the above operators from those of any other tomographic scheme (see e.g. \cite{QSO97}). The price one has to pay in such a case is the \emph{completeness} of the set of starting tomograms (i.e. a number of tomograms much greater than $2^n$). \section*{Acknowledgments} This work was planned some years ago after an interesting discussion with V. I. Man'ko. We affectionately dedicate its completion to him on the occasion of his 75th birthday. \appendix \section{} \label{maxval} \begin{lemma} For any coefficients $a_{\gr{j}} = a_{j_1, \ldots, j_n}$ ($\gr{j} = (j_1, \ldots, j_n)$) of (\ref{eq:a}) and for any angles $\theta^{(1)}_k$, $\theta^{(2)}_k$ ($k = 1, \ldots, n$) we have \begin{equation}\label{eq:sin} \frac{1}{2^n}\left|\sum^2_{\gr{j}=1} a_{\gr{j}} \cos\left(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n\right)\right| \leqslant 2^{(n-1)/2}. \end{equation} The equality is attained with the coefficients of (\ref{eq:Mo}) and (\ref{eq:Me}). \end{lemma} \begin{proof} To estimate the l.h.s. of (\ref{eq:sin}) note that \begin{eqnarray}\label{eq:exp} \left|\sum^2_{\gr{j}=1} a_{\gr{j}} \cos\left(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n\right)\right| \leqslant \left|\sum^2_{\gr{j}=1} a_{\gr{j}} e^{i\left(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n\right)}\right|, \end{eqnarray} so we need to estimate the last sum. 
To this end we use (\ref{eq:a}), obtaining \begin{eqnarray}\label{eq:prod} \left|\sum^2_{\gr{j}=1} a_{\gr{j}} e^{i\left(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n\right)}\right| =\left|\sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \prod^n_{k=1}\Bigl(e^{i\theta^{(1)}_k}+\varepsilon_k e^{i\theta^{(2)}_k}\Bigr)\right|. \end{eqnarray} Next we define \begin{equation}\label{eq:tp} \theta_k = \frac{\theta^{(1)}_k - \theta^{(2)}_k}{2}, \quad \varphi_k = \frac{\theta^{(1)}_k + \theta^{(2)}_k}{2}, \end{equation} so that the sum inside the modulus on the r.h.s. of (\ref{eq:prod}) simplifies to \begin{eqnarray*} 2^n e^{i(\varphi_1+\ldots+\varphi_n)} \sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \prod^n_{k=1} a_k(\varepsilon_k), \end{eqnarray*} where $a_k(+1) = \cos\theta_k$ and $a_k(-1) = i\sin\theta_k$. Taking into account that we take the absolute value in (\ref{eq:exp}) and divide by $2^n$ in (\ref{eq:sin}), we have to prove the inequality \begin{equation}\label{eq:ca} \left|\sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \prod^n_{k=1} a_k(\varepsilon_k) \right| \leqslant 2^{(n-1)/2}, \end{equation} for any $\pm 1$-valued function $c(\varepsilon_1, \ldots, \varepsilon_n)$. We proceed by induction. For $n=1$ we simply have \begin{equation*} \Bigl|c(+1)\cos\theta_1 + c(-1)i\sin\theta_1\Bigr| = 1 = 2^{(1-1)/2}. 
\end{equation*} For general $n$, we can write the sum in (\ref{eq:ca}) as \begin{eqnarray*} \sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \prod^n_{k=1} a_k(\varepsilon_k) &\equiv& A_{n-1}\cos\theta_n + iB_{n-1}\sin\theta_n, \nonumber\\ &=&\sum\limits_{\varepsilon_1, \ldots, \varepsilon_{n-1} = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_{n-1}, +1) \prod^{n-1}_{k=1} a_k(\varepsilon_k)\cos\theta_n \nonumber \\ && +i\sum\limits_{\varepsilon_1, \ldots, \varepsilon_{n-1} = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_{n-1}, -1) \prod^{n-1}_{k=1} a_k(\varepsilon_k) \sin\theta_n, \end{eqnarray*} where, according to the induction hypothesis, \begin{equation} |A_{n-1}|, \; |B_{n-1}| \leqslant 2^{(n-2)/2}. \end{equation} The sum in (\ref{eq:ca}) can then be estimated as \begin{eqnarray*} \left|\sum_{\varepsilon_1, \ldots, \varepsilon_n = \pm 1} c(\varepsilon_1, \ldots, \varepsilon_n) \prod^n_{k=1} a_k(\varepsilon_k) \right| &=& \Bigl| A_{n-1}\cos\theta_n + iB_{n-1}\sin\theta_n \Bigr| \nonumber\\ & \leqslant & \sqrt{|A_{n-1}|^2+|B_{n-1}|^2} \leqslant 2^{(n-1)/2}. \end{eqnarray*} Now we show that with the coefficients of (\ref{eq:Mo}) or (\ref{eq:Me}) the maximal value $2^{(n-1)/2}$ is attained. Due to (\ref{eq:exp}) we need to estimate the sum \begin{equation}\label{eq:Ms} S_n = \frac{1}{2^{n+(n-1)/2}}\sum^2_{j_1, \ldots, j_n=1} a_{j_1, \ldots, j_n} e^{i\left(\theta^{(j_1)}_1 + \ldots + \theta^{(j_n)}_n\right)} \end{equation} and show that it can be equal to one in absolute value. First, let us consider the case of odd $n$. From (\ref{eq:Mo}) we have \begin{equation*} a_{j_1, \ldots, j_n} = 2^{(n+1)/2} (-1)^{\delta(j_1, \ldots, j_n)}, \quad\quad (j_1, \ldots, j_n) \in J, \end{equation*} and $a_{j_1, \ldots, j_n} = 0$ otherwise. Furthermore, from (\ref{eq:Mn}) it follows that \begin{equation*} S_n = \frac{1}{2^n i} \left[\prod^n_{k=1}\left(e^{i \theta^{(1)}_k} + i e^{i \theta^{(2)}_k}\right) - \prod^n_{k=1}\left(e^{i \theta^{(1)}_k} - i e^{i \theta^{(2)}_k}\right)\right]. 
\end{equation*} Taking into account that each factor in these products can be written as \begin{equation*} e^{i \theta^{(1)}_k} \pm e^{i \tilde{\theta}^{(2)}_k}, \quad \tilde{\theta}^{(2)}_k = \theta^{(2)}_k + \pi/2, \end{equation*} and using the relations (\ref{eq:tp}), $S_n$ can be simplified to \begin{equation*} S_n = \frac{1}{i}\left(\prod^n_{k=1}\cos\theta^\prime_k \pm i \prod^n_{k=1}\sin\theta^\prime_k\right)\,e^{i(\varphi^\prime_1 + \ldots + \varphi^\prime_n)}, \end{equation*} where $\theta^\prime_k = \theta_k - \pi/4$, $\varphi^\prime_k = \varphi_k + \pi/4$. It is clear that the imaginary part of the sum $S_n$ takes its maximal absolute value $1$ when, for example, $\theta^\prime_k = \varphi^\prime_k = 0$, i.e. $\theta_k = \pi/4$ and $\varphi_k = -\pi/4$, for $k = 1, \ldots, n$. Now we consider the case of even $n$. The coefficients $a_{j_1, \ldots, j_n}$ in this case come from (\ref{eq:Me}), \begin{equation*} a_{j_1, \ldots, j_n} = 2^{n/2} (-1)^{\tilde{\delta}(j_1, \ldots, j_n)}, \end{equation*} and the sum $S_n$ (\ref{eq:Ms}) becomes \begin{eqnarray*} S_n &= \frac{1}{i2^n\sqrt{2}} (e^{i \theta^{(1)}_n} + e^{i \theta^{(2)}_n}) \left[\prod^{n-1}_{k=1}\left(e^{i \theta^{(1)}_k} + e^{i \tilde{\theta}^{(2)}_k}\right) - \prod^{n-1}_{k=1}\left(e^{i \theta^{(1)}_k} - e^{i \tilde{\theta}^{(2)}_k}\right)\right]\\ &+ \frac{1}{i2^n\sqrt{2}} (e^{i \theta^{(1)}_n} - e^{i \theta^{(2)}_n}) \left[\prod^{n-1}_{k=1}\left(e^{i \tilde{\theta}^{(1)}_k} + e^{i \theta^{(2)}_k}\right) - \prod^{n-1}_{k=1}\left(e^{i \tilde{\theta}^{(1)}_k} - e^{i \theta^{(2)}_k}\right) \right]. 
\end{eqnarray*} According to (\ref{eq:tp}), $S_n$ can be simplified to \begin{eqnarray*} S_n &= \frac{e^{i\varphi}}{\sqrt{2}} \left[\left(i\prod^{n-1}_{k=1}\cos\theta^\prime_k \mp \prod^{n-1}_{k=1}\sin\theta^\prime_k\right)\cos\theta_n +\left(\prod^{n-1}_{k=1}\cos\theta^{\prime\prime}_k \pm i\prod^{n-1}_{k=1}\sin\theta^{\prime\prime}_k\right)\sin\theta_n\right], \end{eqnarray*} where $\theta^\prime_k = \theta_k - \pi/4$, $\theta^{\prime\prime}_k = \theta_k + \pi/4$ and $\varphi = \varphi_1 + \ldots + \varphi_n + (n-1) \pi/4$. The imaginary part of $S_n$ is (for $\varphi = 0$) \begin{eqnarray*} |\im(S_n)| &= \frac{1}{\sqrt{2}} \left| \cos\theta_n \prod^{n-1}_{k=1}\cos(\theta_k-\pi/4) \pm \sin\theta_n \prod^{n-1}_{k=1}\sin(\theta_k+\pi/4) \right|. \end{eqnarray*} It is clear that this expression takes its maximal value $1$ when $\theta_k = \pi/4$, $k = 1, \ldots, n-1$, and $\theta_n = \pm \pi/4$. This completes the proof. \end{proof} \section*{References}
298,495
\begin{document} \title[Information-Disturbance Tradeoff]{Universality and Optimality in the \vspace*{5pt} \\ Information-Disturbance Tradeoff} \author[Hashagen]{Anna-Lena K. Hashagen$^1$} \author[Wolf]{Michael M. Wolf$^{1,2}$} \address{$^1$ Department of Mathematics, Technical University of Munich} \address{$^2$ Kavli Institute for Theoretical Physics, University of California, Santa Barbara (Aug - Dec, 2017)} \begin{abstract}We investigate the tradeoff between the quality of an approximate version of a given measurement and the disturbance it induces in the measured quantum system. We prove that if the target measurement is a non-degenerate von Neumann measurement, then the optimal tradeoff can always be achieved within a two-parameter family of quantum devices that is independent of the chosen distance measures. This form of almost universal optimality holds under mild assumptions on the distance measures such as convexity and basis-independence, which are satisfied for all the usual cases that are based on norms, transport cost functions, relative entropies, fidelities, etc. for both worst-case and average-case analysis. We analyze the case of the cb-norm (or diamond norm) more generally, for which we show dimension-independence of the derived optimal tradeoff for general von Neumann measurements. An SDP solution is provided for general POVMs and shown to exist for arbitrary convex semialgebraic distance measures. \end{abstract} \maketitle \tableofcontents \newpage \section{Introduction}\label{sec:intro} The idea that measurements inevitably disturb a quantum system is so much folklore and so deeply rooted in the foundations of quantum mechanics that it is difficult to trace back historically. It is certainly present in Heisenberg's original exposition of the uncertainty relation.
However, it only became amenable to mathematical analysis after the `projection postulate' was replaced by a more refined theory of the quantum measurement process ~\cite{Davies_Lewis_1970, Lueders_1950}. With the emergence of the field of quantum information theory, the interest in a quantitative analysis of the information-disturbance tradeoff has intensified. At the same time, it became an issue of practical significance for many quantum information processing tasks, most notably for quantum cryptography \cite{BB_1984, Ekert_1991, Fusch_Peres_1996, Fuchs_2005}. In the last two decades, numerous papers have derived quantitative bounds on the disturbance induced by a quantum measurement. A coarse way to categorize the existing approaches is according to whether or not there are reference measurements w.r.t. which information gain on one side and disturbance on the other side are quantified. In \cite{Martens_1992, Ozawa_2003, Ozawa_2004, HeinosaariWolf_2010, Watanabe_2011, Ipsen_2013, Busch_Lahti_Werner_2013, Busch_Lahti_Werner_2014, Branciard_2013, Buscemi_2014, Coles_2015, Schwonnek_Reeb_Werner_2016, Renes_2017} disturbance and information gain are both considered w.r.t. reference measurements. In \cite{Banaszek_2001, Barnum_2001, Maccone_2007, Kretschmann_2008, Buscemi_Hayashi_Horodecki_2008, Buscemi_Horodecki_2009, Bisio_Chiribella_DAriano_Perinotti_2010, Shitara_Kuramochi_Ueda_2016}, in contrast, no reference observable is used on either side. In the present paper, we follow an intermediate route: we consider the performed measurement as an approximation of a given reference measurement, but we quantify the disturbance without specifying a second observable.
Another way of classifying previous works is in terms of the measures that are used to mathematically formalize and quantify disturbance and information gain: for instance, \cite{Martens_1992, Barnum_2001, Buscemi_Horodecki_2009, Buscemi_2014, Maccone_2007, Coles_2015} use various entropic measures, \cite{Kretschmann_2008, Ipsen_2013, Renes_2017} use norm-based measures, \cite{Banaszek_2001, Barnum_2001, Buscemi_Horodecki_2009, Bisio_Chiribella_DAriano_Perinotti_2010} use fidelities, \cite{Watanabe_2011, Shitara_Kuramochi_Ueda_2016} use Fisher information, and \cite{Busch_Lahti_Werner_2013, Schwonnek_Reeb_Werner_2016} use transport-cost functions. Many other measures are conceivable and most of them come in two flavors: a worst-case and an average-case variant, where the latter again calls for the choice of an underlying distribution. A central point of the present work is to show that the information-disturbance problem has a core that is largely independent of the measures chosen. More specifically, we prove the existence of a small set of devices that are (almost) universally optimal independent of the chosen measures, as long as these exhibit a set of elementary properties that are shared by the vast majority of distance measures found in the literature. Based on this universality result, we then derive optimal tradeoff bounds for specific choices of measures. These include the diamond norm and its classical counterpart the total variation distance. In this case, the reachability of the optimal tradeoff has been demonstrated experimentally in a parallel work \cite{Knips_2018}. \newpage \paragraph{\bf Organization of the paper.} Sec.~\ref{sec:sum} starts off with introducing the setup and summarizes the paper's main results. In Sec.~\ref{sec:dist}, we discuss distance measures that quantify the measurement error and the disturbance caused to the system. 
We give a brief overview of common measures found in the literature that fulfill the assumptions we make, which are necessary to derive the universality theorem. In Sec.~\ref{sec:vN1}, for the case of a non-degenerate von Neumann target measurement, we derive a universal two-parameter family of optimal devices that yield the best information-disturbance tradeoff. In Sec.~\ref{sec:vN2}, still for the case of a non-degenerate von Neumann target measurement, we use the universal optimal devices derived in the previous section to compute the optimal tradeoff for a variety of distance measures. In the special case where we consider the diamond norm for quantifying disturbance, we derive the optimal tradeoff also for the case of degenerate von Neumann target measurements. In the last section, Sec.~\ref{sec:SDP}, we show that the optimal tradeoff can always be represented as an SDP if the distance measures under consideration are convex semialgebraic. We give the explicit SDP that represents the tradeoff between the diamond norm and the worst-case $l_\infty$-distance and apply it to the special case of qubit as well as qutrit SIC POVMs. \section{Summary}\label{sec:sum} This section will briefly introduce some notation, specify the considered setup, and summarize the main results. More details and proofs will then be given in the following sections.\vspace*{5pt} \paragraph{\bf Notation.} Throughout we will consider finite dimensional Hilbert spaces $\C^d$, write $\cM_d$ for the set of complex $d\times d$ matrices and $\cS_d\subseteq \cM_d$ for the subset of density operators, usually denoted by $\rho$. An $m$-outcome measurement on this space will be described by a \emph{positive operator valued measure} (POVM) $E=(E_1,\ldots,E_m)$ whose elements $E_i\in\cM_d$ are positive semidefinite and sum up to the identity operator $\sum_{i=1}^m E_i=\1$. The set of all such POVMs will be denoted by $\cE_{d,m}$ and we will set $\cE_d:=\cE_{d,d}$.
We will call $E$ a \emph{von Neumann measurement} if the $E_i$'s are mutually orthogonal projections and further call it \emph{non-degenerate} if those are one-dimensional, i.e., characterized by an orthonormal basis. A completely positive, trace-preserving linear map will be called a \emph{quantum channel} and the set of quantum channels from $\cM_d$ into $\cM_d$ will be denoted by $\cT_d$. \vspace*{5pt} \paragraph{\bf Setup.} We will fix a \emph{target measurement} $E\in\cE_{d,m}$ and investigate the tradeoff between the quality of an approximate measurement of $E$, say by $E'\in\cE_{d,m}$, and the disturbance the measurement process induces in the system. The evolution of the latter will be described by some channel $T_1\in\cT_d$. To this end, we will have to choose two suitable functionals $E'\mapsto\delta(E')$ and $T_1\mapsto \Delta(T_1)$ that quantify the deviation of $E'$ and $T_1$ from the target measurement $E$ and the ideal channel $\id$, respectively. For a given triple $(E,\delta,\Delta)$ the question will then be: what is the accessible region in the $\delta-\Delta$-plane when running over all possible measurement devices and, in particular, what is the optimal tradeoff curve and how can it be achieved? Clearly, $E'$ and $T_1$ are not independent. The framework of \emph{instruments} allows one to describe all pairs ($E'$, $T_1$) that are compatible within the rules of quantum theory. An \emph{instrument} assigns to each possible outcome $i$ of a measurement a completely positive map $I_i:\cM_d\rightarrow\cM_d$ so that the corresponding POVM element is $E_i':=I_i^*(\1)$ and the evolution of the remaining quantum system is governed by $T_1:=\sum_{i=1}^m I_i$. Normalization requires that this sum is trace-preserving.\vspace*{5pt} \paragraph{\bf Main results.} There are zillions of possible choices for the measures $\Delta$ and $\delta$.
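Before turning to concrete choices of measures, the instrument formalism above can be sanity-checked numerically. The following sketch (an illustration of ours, not part of the paper's argument) implements the Lüders instrument $I_i(\rho)=P_i\rho P_i$ of the standard-basis measurement, for which $E_i'=I_i^*(\1)=P_i$ and $T_1=\sum_i I_i$ is trace-preserving:

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

# Non-degenerate von Neumann target measurement: rank-one projectors onto the standard basis.
P = [np.zeros((d, d), dtype=complex) for _ in range(d)]
for i in range(d):
    P[i][i, i] = 1.0

def instrument(rho):
    """Lueders instrument: one completely positive map I_i(rho) = P_i rho P_i per outcome."""
    return [Pi @ rho @ Pi for Pi in P]

# Random density matrix as input state.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

out = instrument(rho)
T1_rho = sum(out)                        # post-measurement state T_1(rho) = sum_i I_i(rho)
probs = [np.trace(o).real for o in out]  # outcome probabilities tr(rho E_i')

assert np.isclose(np.trace(T1_rho).real, 1.0)   # T_1 is trace-preserving
assert np.isclose(sum(probs), 1.0)              # the induced E' is a valid POVM
# Here E_i' = I_i^*(1) = P_i, so the induced measurement reproduces E exactly:
assert all(np.isclose(p, rho[i, i].real) for i, p in enumerate(probs))
```

For this particular instrument the measurement error vanishes, while the induced channel (full decoherence in the measured basis) is maximally disturbing in the sense discussed below.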
If one had to choose one pair that stands out for operational significance, this would probably be the \emph{diamond norm} and its classical counterpart, the \emph{total variation distance} (defined and discussed in Sec.~\ref{sec:dist} and Sec.~\ref{sec:vN2}). One of our results is the derivation of the optimal tradeoff curve for this pair (Thm.~\ref{thm:TVdiamond} in Sec.~\ref{sec:gvN}): \begin{theorem*}[Total variation - diamond norm tradeoff] If an instrument approximates a (possibly degenerate) von Neumann measurement with $m$ outcomes, then the worst-case total variation distance $\delta_{TV}$ and the diamond norm distance $\Delta_\diamond$ satisfy \be \delta_{TV}\geq \left\{\begin{array}{ll}\frac{1}{2m}\left(\sqrt{(2-\Delta_\diamond)(m-1)}-\sqrt{\Delta_\diamond} \right)^2&\ \text{if }\ \Delta_\diamond\leq 2-\frac{2}{m},\\ 0 &\ \text{if }\ \Delta_\diamond > 2-\frac{2}{m}. \end{array}\right.\label{eq:optdiatv}\ee The inequality is tight in the sense that for every choice of the von Neumann measurement there is an instrument achieving equality. \end{theorem*} Note that the tradeoff depends solely on the number $m$ of outcomes and is independent of the dimension of the underlying Hilbert space (apart from $d\geq m$). Also note that the accessible region shrinks with increasing $m$ and in the limit $m\rightarrow\infty$ becomes a triangle, determined by $\delta_{TV}\geq 1-\Delta_{\diamond}/2$. In Sec.~\ref{sec:vN2} we derive similar results for the worst-case as well as average-case fidelity and trace-norm. In all cases, the bounds are tight and we show how the optimal tradeoff can be achieved. Instead of going through these and more examples one-by-one we follow a different approach. We provide a general tool for obtaining optimal tradeoffs for \emph{all pairs} $(\delta,\Delta)$ that exhibit a set of elementary properties that are shared by the vast majority of distance measures that can be found in the literature.
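The right-hand side of the tradeoff inequality above is a closed-form expression and can be evaluated directly. The sketch below (the helper name `tv_lower_bound` is ours) checks two endpoint values implied by the formula: at $\Delta_\diamond=0$ the bound equals $(m-1)/m$, and it vanishes at $\Delta_\diamond=2-2/m$:

```python
from math import sqrt

def tv_lower_bound(delta_diamond, m):
    """Lower bound on the worst-case total variation distance, as a function of
    the diamond norm distance delta_diamond and the number of outcomes m."""
    if delta_diamond > 2 - 2 / m:
        return 0.0
    return ((sqrt((2 - delta_diamond) * (m - 1)) - sqrt(delta_diamond)) ** 2) / (2 * m)

m = 4
# No disturbance: the measurement error is at least (m-1)/m.
assert abs(tv_lower_bound(0.0, m) - (m - 1) / m) < 1e-12
# The bound vanishes once the disturbance reaches 2 - 2/m.
assert abs(tv_lower_bound(2 - 2 / m, m)) < 1e-12
# The bound is monotonically decreasing in the disturbance.
assert tv_lower_bound(0.5, m) < tv_lower_bound(0.0, m)
```

The first endpoint matches the intuition that a device causing no disturbance cannot beat the trivial strategy of guessing the outcome distribution.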
These properties, which are discussed in Sec.~\ref{sec:dist}, are essentially convexity and suitable forms of basis-(in)dependence. For the case of a non-degenerate von Neumann target measurement Thm.~\ref{thm:universality} in Sec.~\ref{sec:vN1} shows that optimal devices can always be found within a universal two-parameter family, independent of the specific choice of $\delta$ and $\Delta$: \begin{theorem*}[(Almost universal) optimal instruments] Let $\Delta$ and $\delta$ be distance measures for quantifying disturbance and measurement error that satisfy Assumptions~\ref{assum:1} and \ref{assum:2} (cf. Sec.~\ref{sec:dist}), respectively. Then the optimal $\Delta-\delta$-tradeoff w.r.t. a target measurement that is given by an orthonormal basis $\{|i\rangle\in\C^d\}_{i=1}^d$ is attained within the two-parameter family of instruments defined by \be\label{eq:optinst} I_i(\rho):= z\langle i|\rho|i\rangle\frac{\1_d-\ket{i}\bra{i}}{d-1}+(1-z)K_i\rho K_i,\quad K_i:=\mu\1_d+\nu\ket{i}\bra{i}, \ee where $z\in[0,1]$ and $\mu,\nu\in\R$ satisfy $d\mu^2+\nu^2+2\mu\nu=1$ (which makes $\sum_i I_i$ trace preserving). \end{theorem*} While the parameter $z$ can be eliminated for instance in all cases mentioned above, we show in Cor.~\ref{cor:z} that this is not possible in general. If the target measurement itself is not a von Neumann measurement but a general POVM, then closed-form expressions like the ones above should not be expected. For the important case of the diamond norm, we show in Sec.~\ref{sec:SDP} how the optimal tradeoff curve can still be obtained via a semidefinite program (SDP). This is an instance of the following more general fact (Thm.~\ref{thm:SDPalg}): \begin{theorem*}[SDP solution for arbitrary target measurements] If $\Delta$ and $\delta$ are both convex and semialgebraic, then the accessible region in the $\Delta-\delta$-plane is the feasible set of an SDP.
\end{theorem*} Note that no assumptions on the chosen measures are made other than being convex and semialgebraic. \section{Distance measures}\label{sec:dist} In this section we have a closer look at the functionals $\Delta:\cT_d\rightarrow [0,\infty]$ and $\delta:\cE_{d,m}\rightarrow [0,\infty]$ that quantify how much $E'$ and $T_1$ differ from $E$ and $\id$, respectively. We will not assume that they arise from metrics and use the notion of a `distance' merely in the colloquial sense. We will state the assumptions that we will use in Sec.~\ref{sec:vN1} and discuss some of the most common measures that appear in the literature.\vspace*{5pt} \paragraph{\bf Quantifying disturbance} For the universality theorem (Thm.~\ref{thm:universality}) we will need the following assumption on $\Delta$:\footnote{In fact, slightly less is required since Eq.~(\ref{eq:assumpDbi}) will only be used for unitaries that are products of diagonal and permutation matrices.} \begin{assumption}[on the distance measure to the identity channel]\label{assum:1}\ \\ For $ \Delta:\cT_d\rightarrow[0,\infty]$ we assume that (a) $\Delta(\id)=0$, (b) $\Delta$ is convex, and (c) $\Delta$ is basis-independent in the sense that for every unitary $U\in\cM_d$ and every channel $\Phi\in\cT_d$: \be\Delta\Big(U\Phi(U^*\cdot U)U^*\Big)=\Delta(\Phi).\label{eq:assumpDbi}\ee \end{assumption} In the usually considered cases, $\Delta$ arises from a distance measure on the set of density operators $\cS_d\subseteq\cM_d$. In fact, if $\tilde{\Delta}:\cS_d\times\cS_d\rightarrow[0,\infty]$ is convex in its first argument, unitarily invariant and satisfies $\tilde{\Delta}(\rho,\rho)=0$, then considering the worst case as well as the average case w.r.t. the input state both lead to functionals that satisfy Assumption~\ref{assum:1}. 
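For instance (an illustrative computation of ours, not taken from the text), choose $\tilde{\Delta}(\rho,\sigma)=\|\rho-\sigma\|_1$ and let $\Phi$ be a depolarizing channel. Over pure states the distance $\|\Phi(\rho)-\rho\|_1$ is the same for every input, namely $2p(d-1)/d$, so the worst case and the average over pure states coincide here:

```python
import numpy as np

d, p = 2, 0.3

def depolarize(rho):
    """Depolarizing channel: keep rho with weight 1-p, replace by the maximally mixed state with weight p."""
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

def trace_norm(X):
    # X is Hermitian here, so the trace norm is the sum of absolute eigenvalues.
    return np.abs(np.linalg.eigvalsh(X)).sum()

rng = np.random.default_rng(2)
dists = []
for _ in range(200):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    v /= np.linalg.norm(v)
    rho = np.outer(v, v.conj())          # random pure state
    dists.append(trace_norm(depolarize(rho) - rho))

# The distance is constant over pure states: ||Phi(rho) - rho||_1 = 2 p (d-1)/d.
assert np.allclose(dists, 2 * p * (d - 1) / d)
```

In general the worst-case and average-case functionals differ; the depolarizing channel is a special case in which its covariance makes them agree on pure states.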
More precisely, if $\mu$ is a unitarily invariant measure on $\cS_d$ and $S\subseteq\cS_d$ a unitarily closed subset (e.g., the set of all pure states), then the following two definitions can easily be seen to satisfy Assumption~\ref{assum:1}, see the appendix: \begin{eqnarray*} \Delta_\infty(\Phi)&:=& \sup_{\rho\in S} \tilde{\Delta}\big(\Phi(\rho),\rho\big),\\ \Delta_\mu(\Phi)&:=& \int_{\cS_d} \tilde{\Delta}\big(\Phi(\rho),\rho\big)\; \mathrm{d}\mu(\rho). \end{eqnarray*} While $\Delta_\infty$ quantifies the distance between $\Phi$ and $\id$ in the worst case in terms of $\tilde{\Delta}$, $\Delta_\mu$ does the same for the average case. Concrete examples for $\tilde{\Delta}$ are (i) $\tilde{\Delta}(\rho,\sigma)=1-F(\rho,\sigma)$, where $F(\rho,\sigma):=||\sqrt{\rho}\sqrt{\sigma}||_1$ is the fidelity, (ii) the relative entropy and many other quantum $f$-divergences ~\cite{Hiai_Mosonyi_Petz_Beny_2011} including the Chernoff- and Hoeffding-distance and (iii) $\tilde{\Delta}(\rho,\sigma)=|||\rho-\sigma|||$, where $|||\cdot|||$ is any unitarily invariant norm such as the Schatten $p$-norms. The latter can, in a similar vein, be used to define Schatten $p$-to-$q$ norm-distances to the identity channel $$\Phi\ \mapsto ||\Phi-\id||_{p\rightarrow q,n}:=\sup_{\rho\in\cS_{dn}}\frac{||(\Phi-\id)\otimes\id_n(\rho)||_q}{||\rho||_p},\quad q,p\in[1,\infty], n\in\mathbbm{N},$$ which also fulfill Assumption~\ref{assum:1}. Special cases are given by the \emph{diamond norm} $||\cdot||_\diamond:=||\cdot||_{1\rightarrow1,d}$, which we discuss in more detail in Sec.~\ref{sec:gvN}, and its dual, the \emph{cb-norm} (with $p=q=\infty, n=d$).\vspace*{5pt} \paragraph{\bf Quantifying measurement error} The following assumptions that we need for the universality theorem on the functional $\delta$ refer to the case of a non-degenerate von Neumann target measurement that is given by an orthonormal basis $(|i\rangle\langle i|)_{i=1}^d$. 
\begin{assumption}[on the distance measure to the target measurement]\label{assum:2}\ \\ For $ \delta:\cE_d\rightarrow[0,\infty]$ we assume that (a) $\delta\big((|i\rangle\langle i|)_{i=1}^d\big)=0$, (b) $\delta$ is convex, (c) $\delta$ is permutation-invariant in the sense that for every permutation $\pi\in S_d$ and any $M\in\cE_d$ \be M_i'=U_\pi^* M_{\pi(i)} U_\pi\ \forall i\ \Rightarrow\ \delta(M')=\delta(M),\label{eq:assumpdperm}\ee where $U_\pi$ is the permutation matrix that acts as $U_\pi |i\rangle=|\pi(i)\rangle$, and (d) that for every diagonal unitary $D\in\cM_d$ and any $M\in\cE_d$ \be M_i'= D^* M_i D \ \forall i\ \Rightarrow\ \delta(M')=\delta(M).\label{eq:assumpdessdiag}\ee \end{assumption} Here, the most common cases arise from distance measures $\tilde{\delta}:\cP_d\times\cP_d\rightarrow[0,\infty]$ on the space of probability distributions $\cP_d:=\big\{q\in\R^d|\sum_{i=1}^d q_i=1\wedge \forall i: q_i\geq 0\big\}$ applied to the target distribution $p_i:=\langle i|\rho|i\rangle$ and the actually measured distribution $p_i':=\tr{\rho E_i'}$. Suppose $\tilde{\delta}$ is convex in its second argument, invariant under joint permutations and satisfies $\tilde{\delta}(q,q)=0$. Then the worst-case as well as the average-case construction \bea \delta_{\infty}(E')&:=&\sup_{\rho\in S} \tilde{\delta}(p,p'),\nonumber\\ \delta_{\mu}(E')&:=&\int_{\cS_d} \tilde{\delta}(p,p') \; \mathrm{d}\mu(\rho),\nonumber \eea both satisfy Assumption~\ref{assum:2}, see appendix. Concrete examples for $\tilde{\delta}$ are all $l_p$-norms for $p\in[1,\infty]$ and the Kullback-Leibler divergence as well as other $f$-divergences. Other examples for $\delta$ that satisfy Assumption~\ref{assum:2} are transport cost functions like the ones used in ~\cite{Schwonnek_Reeb_Werner_2016}. Note that convexity of the two measures $\Delta$ and $\delta$ implies that the region in the $\Delta-\delta$-plane that is accessible by quantum instruments is a convex set. 
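As a toy illustration of such a measurement-error measure (the smeared POVM and the helper `tv` below are our own illustrative choices, not from the paper), consider $E_i'=(1-\epsilon)|i\rangle\langle i|+\epsilon\1/d$ and the total variation distance between the target distribution $p$ and the measured distribution $p'$:

```python
import numpy as np

d, eps = 3, 0.25

# A smeared version of the basis measurement: E_i' = (1-eps)|i><i| + eps * 1/d.
E_prime = [(1 - eps) * np.diag(np.eye(d)[i]) + eps * np.eye(d) / d for i in range(d)]
assert np.allclose(sum(E_prime), np.eye(d))   # valid POVM

def tv(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(np.array(p) - np.array(q)).sum()

# Target vs measured distribution for the basis state rho = |0><0|.
rho = np.diag(np.eye(d)[0])
p = [rho[i, i] for i in range(d)]
p_prime = [np.trace(rho @ Ei) for Ei in E_prime]

# For this family, tv(p, p') = eps*(d-1)/d on a basis state.
assert np.isclose(tv(p, p_prime), eps * (d - 1) / d)
```

Basis states are the worst case for this particular family, so the same value is obtained under the worst-case construction $\delta_\infty$ with $\tilde{\delta}$ the total variation distance.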
The boundary of this set is given by two lines that are parallel to the axes (and correspond to the maximal values of $\Delta$ and $\delta$) and what we call the \emph{optimal tradeoff curve}. \section{Universal optimal devices}\label{sec:vN1} There are three major steps towards proving the claimed universality theorem: the exploitation of symmetry, the construction of a von Neumann algebra isomorphism to obtain a manageable representation, and the final reduction to the envelope of a unit cone. Throughout this section, the target measurement will be given by an orthonormal basis $E=(|i\rangle\langle i|)_{i=1}^d$. In this case, instead of working with instruments it turns out to be slightly more convenient to work with channels. More specifically, we will describe the entire process by a channel $T:\cM_d\rightarrow\cM_d\otimes\cM_d$ with marginals $T_1,T_2\in\cT_d$. $T_1$ will then reflect the evolution of the `disturbed' quantum system, whereas the output of $T_2$ is measured by $E$ leading to $E_i'=T_2^*(E_i)$. This is clearly describable by an instrument and conversely, for every instrument $I$ we can simply construct $$ T(\rho):=\sum_{i=1}^d I_i(\rho)\otimes |i\rangle\langle i|,$$ which shows that the two viewpoints are equivalent. \begin{proposition}[Reduction to symmetric channels]\label{prop:sym} Let $G$ be the group generated by all diagonal unitaries and permutation matrices in $\cM_d$. 
If $\Delta$ and $\delta$ satisfy Assumptions~\ref{assum:1} and \ref{assum:2}, respectively, the optimal tradeoff between them can be attained within the set of channels $T:\cM_d\rightarrow\cM_d\otimes\cM_d$ for which \be (U\otimes U)T\big(U^*\rho U\big)(U\otimes U)^*\ =\ T(\rho)\quad \forall U\in G,\rho\in\cS_d.\label{eq:sym0}\ee \end{proposition} \begin{proof} We will show that for an arbitrary channel $T$, which does not necessarily satisfy Eq.~(\ref{eq:sym0}), the symmetrization \be \bar{T}:=\int_G (U\otimes U)T\big(U^*\cdot U\big)(U\otimes U)^*\; \mathrm{d}U\nonumber\ee w.r.t. the Haar measure of $G$ performs at least as well as $T$. Let $\bar{T}_1$ and $\bar{T}_2$ be the marginals of $\bar{T}$. Then \begin{eqnarray*} \Delta\big(\bar{T}_1\big)&=& \Delta\left(\int_G U T_1\big(U^*\cdot U\big) U^*\; \mathrm{d}U\right)\\ &\stackrel{(1b)}{\leq}& \int_G \Delta\left( U T_1\big(U^*\cdot U\big) U^*\right)\; \mathrm{d}U\ \stackrel{(1c)}{=}\ \Delta(T_1), \end{eqnarray*} where the used assumption is indicated above the (in-)equality sign. Similarly, we obtain \begin{eqnarray*} \delta\left[\Big(\bar{T}_2^*\big(|i\rangle\langle i|\big)\Big)_{i=1}^d\right]&\stackrel{(2b)}{\leq}& \int_G \delta\left[\Big(U^* T_2^*\big(U|i\rangle\langle i| U^*\big)U\Big)_{i=1}^d\right]\; \mathrm{d}U\\ &\stackrel{(2d)}{=}&\int_G \delta\left[\Big(U_\pi^* T_2^*\big(|\pi(i)\rangle\langle\pi(i)|\big)U_\pi\Big)_{i=1}^d\right]\; \mathrm{d}U\\ &\stackrel{(2c)}{=}& \delta\left[\Big(T_2^*\big(|i\rangle\langle i|\big)\Big)_{i=1}^d\right], \end{eqnarray*} where we have used that every $U\in G$ can be written as $U=U_\pi D$, where $U_\pi$ is a permutation and $D$ a diagonal unitary, both depending on $U$. Consequently, when replacing $T$ by its symmetrization $\bar{T}$, which satisfies Eq.~(\ref{eq:sym0}) by construction, neither $\Delta$ nor $\delta$ is increasing. 
\end{proof} \begin{lemma}[Structure of marginals of symmetric channels]\label{lem:commutant} Let $G$ be the group generated by all diagonal unitaries and permutation matrices in $\cM_d$ and $\Phi:\cM_d\rightarrow\cM_d$ a quantum channel. Then the following are equivalent: \begin{enumerate} \item $ \Phi(\rho)=U\Phi\big(U^*\rho U\big) U^*\quad \forall U\in G,\rho\in\cS_d$. \item There are $\alpha,\beta,\gamma\in\R$ with $\alpha+\beta+\gamma=1$ so that \be\Phi=\alpha\;\tr{\cdot}\frac{\1}{d}+\beta\;\id+\gamma\sum_{i=1}^d |i\rangle\langle i|\langle i|\cdot|i\rangle.\label{eq:commutant}\ee \end{enumerate} \end{lemma} \begin{proof} (2) $\Rightarrow$ (1) can be seen by direct inspection. In order to prove the converse, we consider the Jamiolkowski-state (= normalized Choi-matrix) $J_\Phi:=\frac1d \sum_{i,j=1}^d \Phi\big(|i\rangle\langle j|\big)\otimes |i\rangle\langle j|$. Then (1) is equivalent to the statement that $J_\Phi$ commutes with all unitaries of the form $U\otimes\bar{U}$, $U\in G$. Considering for the moment only the subgroup of diagonal unitaries, this requires that $$ \langle ij|J_\Phi|kl\rangle =(2\pi)^{-d}\int_0^{2\pi} \ldots \int_0^{2\pi} e^{i(\varphi_i-\varphi_j-\varphi_k+\varphi_l)}\langle ij|J_\Phi|kl\rangle\; \mathrm{d}\varphi_1\ldots d\varphi_d, $$ which vanishes unless $(i=j\wedge k=l)\vee(i=k\wedge j=l)$. Hence, there are $A,B\in\cM_d$ such that $$J_\Phi=\sum_{i,j=1}^d A_{ij}|i \rangle \langle i|\otimes |j\rangle \langle j| + B_{ij}|i\rangle\langle j| \otimes |i\rangle\langle j|. $$ Next, we will exploit that $J_\Phi$ commutes in addition with permutations of the form $U_\pi\otimes U_\pi$ for all $\pi\in S_d$. For $i\neq j$ this implies that $A_{i,j}=A_{\pi(i),\pi(j)}$ and $B_{i,j}=B_{\pi(i),\pi(j)}$ so that there is only one independent off-diagonal element for each $A$ and $B$. The case $i=j$ leads to a third parameter that is a coefficient in front of $\sum_i|ii\rangle\langle ii|$. 
Translating this back to the level of quantum channels then yields Eq.~(\ref{eq:commutant}). The coefficients are real and sum up to one since $\Phi$ preserves hermiticity as well as the trace. \end{proof} If $T$ is symmetric as in Prop.~\ref{prop:sym}, then both marginal channels $T_1$ and $T_2$ are of the form derived in the previous Lemma. That is, each $T_i$, $i\in\{1,2\}$, is specified by three parameters $\alpha_i,\beta_i,\gamma_i$ only two of which are independent. The following Lemma shows that under Assumption~\ref{assum:2} the error measure $\delta$ depends only on $\alpha_2$ and does so in a non-decreasing way. \begin{lemma}\label{lem:delta=a2} Let $\delta$ satisfy Assumption~\ref{assum:2}. There is a non-decreasing function $\hat{\delta}:[0,1]\rightarrow[0,\infty]$ s.t. for all $T_2:\cM_d\rightarrow\cM_d$ of the form in Eq.~(\ref{eq:commutant}) with coefficients $\alpha_2,\beta_2,\gamma_2$ we have $\delta \big[ \big(T_2^*(|i\rangle\langle i|)\big)_{i=1}^d\big] =\hat{\delta}(\alpha_2)$. \end{lemma} \begin{proof} The statement follows from convexity of $\delta$ together with the observation that $\beta$ and $\gamma$ only contribute jointly to $\delta$ and not individually. This is seen by composing $T_2$ with the projection onto the diagonal. This leads to a channel of the same form, but possibly different parameters. On the level of the latter the composition corresponds to $(\alpha_2,\beta_2,\gamma_2)\mapsto(\alpha_2,0,\beta_2+\gamma_2)$. The distance measure $\delta$, however, does not change in this process and thus depends only on the sum $\beta_2+\gamma_2$ and not on those two parameters individually. As this sum equals $1-\alpha_2$ we see that $\delta$ can be regarded as a function of $\alpha_2$ only. We formally denote this function by $\hat{\delta}$. Assumption (2b) then implies that $\hat{\delta}$ is convex. As it is in addition positive and satisfies $\hat{\delta}(0)=0$ by Assumption (2a), we get that $\hat{\delta}$ is non-decreasing. 
\end{proof} For later investigation, it is useful to decompose the $J_\Phi$ that corresponds to Eq.~(\ref{eq:commutant}) into its spectral projections: \begin{eqnarray} J_\Phi &=& aP_a+bP_b+cP_c,\quad\text{where}\quad P_a:=\1-\sum_{i=1}^d |ii\rangle\langle ii| ,\nonumber\\ & &P_b:= \frac1d\sum_{i,j=1}^d |ii\rangle\langle jj|,\label{eq:specproj}\quad P_c:= \sum_{i=1}^d |ii\rangle\langle ii| -P_b. \end{eqnarray} The coefficients $a,b,c$ are the eigenvalues of $J_\Phi$ (and thus non-negative) and related to $\alpha,\beta,\gamma$ via $\alpha=d^2 a,\ \beta=b-c,\ \gamma=d(c-a)$. When considering symmetric $T$, we will label the eigenvalues of $J_{T_i}$ with a subscript $i\in\{1,2\}$ to distinguish the two marginals. Since the $P$'s are mutually orthogonal projectors, we can obtain the eigenvalues from their expectation values. That is, \be x_1=\frac{\tr{(P_x\otimes\1) J_T}}{\tr{P_x}}\quad\text{and}\quad x_2=\frac{\tr{(\1\otimes P_x) J_T}}{\tr{P_x}},\quad x\in\{a,b,c\}.\label{eq:eigexp}\ee If we are aiming at identifying a subset of optimal channels, we can, according to Lemma~\ref{lem:delta=a2}, w.l.o.g. use $a_2$ as $\delta$. Due to the monotonic relation between the two, optimality for one implies optimality for the other. The question we are going to address in the next step of the argumentation is then: which values of $a_1, b_1$ and $c_1$ are consistent with a given value of $a_2$? After all, due to Prop.~\ref{prop:sym}, $\Delta$ and $\delta$ will be functions of those parameters only. Thus, we would like to know which is the accessible region in the space of these parameters, when we vary $J_T$ over the set of all density matrices. We tackle this question using an operator algebraic point of view: the operators $\1\otimes P_a,P_x\otimes\1$ together with the identity operator generate a von Neumann algebra $\cA$ on which $J_T$ acts as a state, i.e., as a normalized positive linear functional. 
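The linear relations between $(\alpha,\beta,\gamma)$ and the eigenvalues $(a,b,c)$ can be verified numerically. The following sketch (with arbitrarily chosen illustrative parameters) builds the Jamiolkowski state of the channel family of Eq.~(\ref{eq:commutant}) and checks $\alpha=d^2a$, $\beta=b-c$, $\gamma=d(c-a)$:

```python
import numpy as np

d = 3
alpha, beta = 0.2, 0.5
gamma = 1 - alpha - beta

# Spectral projections P_a, P_b, P_c on C^d (x) C^d, with |Omega> = sum_i |ii>/sqrt(d).
omega = np.zeros(d * d)
omega[::d + 1] = 1 / np.sqrt(d)
P_b = np.outer(omega, omega)
P_diag = np.zeros((d * d, d * d))
for i in range(d):
    P_diag[i * (d + 1), i * (d + 1)] = 1.0      # sum_i |ii><ii|
P_a = np.eye(d * d) - P_diag
P_c = P_diag - P_b

# Jamiolkowski state of Phi = alpha*tr(.)1/d + beta*id + gamma*(diagonal projection):
J = alpha / d**2 * np.eye(d * d) + beta * P_b + gamma / d * P_diag

# Eigenvalues recovered as expectation values of the mutually orthogonal projections.
a = np.trace(P_a @ J) / np.trace(P_a)
b = np.trace(P_b @ J) / np.trace(P_b)
c = np.trace(P_c @ J) / np.trace(P_c)

assert np.isclose(alpha, d**2 * a)
assert np.isclose(beta, b - c)
assert np.isclose(gamma, d * (c - a))
```

The same covariance that forces this three-parameter form can also be spot-checked directly on the channel, e.g. by conjugating a test state with a random diagonal unitary or a permutation matrix.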
This suggests the use of a von Neumann algebra isomorphism that simplifies the representation. To this end, we observe that $\cA$ is generated by the following operators: \begin{align*} \1_{d^3} &=:\ \Ga & \1_d\otimes\sum_{i=1}^d |ii\rangle\langle ii| &=:\ \Gb\\ \sum_{i,j=1}^d |ii\rangle\langle jj|\otimes\1_d &=:\ \Gc & \sum_{i=1}^d |ii\rangle\langle ii|\otimes\1_d &=:\ \Gd \end{align*} The introduced diagrammatic notation turns out to be useful as it reflects that these operators are what one may call \emph{contraction tensors}.\footnote{Please note that these diagrams are not braid diagrams, but rather diagrammatically represent contraction tensors.} If we view an element in $\cM_d\otimes\cM_d\otimes\cM_d$ as a tensor with three left and three right indices, then the diagrammatic notation indicates which of these indices get contracted together---by connecting them. Taking products of pairs of these four operators generates (up to scalar multiples, which arise from closed loops) three new contraction tensors: \begin{equation*} \Ge := \Gb\Gc,\quad \Gf := \Gc\Gb,\quad \Gg :=\Gb\Gd. \end{equation*} The set of these seven tensors is, however, closed under multiplication (again ignoring scalar multiples). This is easily verified by using the diagrammatic notation and going through all cases. This observation is the core for constructing a simplifying isomorphism: \begin{lemma}[Isomorphic representation]\label{lem:iso} Let $\cA$ be the von Neumann algebra that is generated by the set $\{\1_{d^3},\1_d\otimes P_a,P_a\otimes\1_d ,P_b\otimes\1_d, P_c\otimes\1_d\}$.
A unital map $\iota:\cA\rightarrow\cM_2\oplus\C^3$ defined by \begin{align} \iota: \Gc &\mapsto d|e_1\rangle\langle e_1|& \iota: \Gg &\mapsto |e_2\rangle\langle e_2|\label{eq:iso1}\\ \iota: \Gd &\mapsto \1_2 \oplus f_2& \iota: \Gb &\mapsto |e_2\rangle\langle e_2| \oplus f_1\label{eq:iso2} \end{align} is an isomorphism if $|e_1\rangle, |e_2\rangle$ constitute unit vectors with $|\langle e_1|e_2\rangle|^2=1/{d}$ in the space of the non-abelian part (i.e., the corresponding projections as well as $\1_2$ are in $\cM_2$) and $f_1:=(1,0,0),f_2:=(0,1,0)$ are elements of the abelian part.\footnote{Here we regard $\C^3$ as space $\cM_1\oplus\cM_1\oplus\cM_1$ of diagonal matrices in $\cM_3$.} \end{lemma} \begin{proof} $\cA$ is generated by the above set of seven contraction tensors. Since this set is closed under multiplication and the $^*$-operation, and contains linearly independent elements, we have ${\rm dim}(\cA)=7$. Moreover, $\cA$ is non-commutative since $[\Gc,\Gg]\neq 0$. From the representation theory of finite-dimensional von Neumann algebras we know that every $7$-dimensional non-commutative von Neumann algebra is isomorphic to $\cM_2\oplus\C^3$ \cite[Thm. 5.6]{Farenick_2001}. Hence, we can establish an isomorphism $\iota$ by representing a generating set of $\cA$ in $\cM_2\oplus\C^3$. Due to unitality $\iota(\1_{d^3})=\1_2\oplus(1,1,1)$ has to hold. Moreover, since $\Gc,\Gg$ are (proportional to) non-commuting minimal projectors in $\cA$, they need to be the same in $\cM_2\oplus\C^3$. Taking proportionality factors into account, this determines Eq.~(\ref{eq:iso1}) and requires $|\langle e_1|e_2\rangle|^2=1/{d}$ in order to be consistent with the value of the trace $\tr{\Gc\Gg}$. From $\Gd\Gc=\Gc$ and $\Gd\Gg=\Gg$ we see that $\iota(\Gd)$ acts as identity on $\cM_2$. Similarly, $\iota(\Gb)$, when restricted to $\cM_2$, has to be a projector that is not the identity and has $|e_2\rangle$ as eigenvector (due to $\Gb\Gg=\Gg$).
This determines Eq.~(\ref{eq:iso2}) when restricted to $\cM_2$. Moreover, since $\cM_2\oplus\C^3$ has to be generated, both $\iota(\Gd)$ and $\iota(\Gb)$ have to have non-zero parts on the abelian side. Since they are projectors, these parts need to be projectors as well. Finally, they have to be one-dimensional since otherwise the identity operator would become linearly dependent. \end{proof} Using this Lemma we can now express the accessible region within the space of parameters $\alpha_1,\beta_1,\gamma_1,\alpha_2$ by varying over all states on $\cM_2\oplus\C^3$, instead of over all states $J_T$ on $\cM_{d^3}$. To this end, we just have to unravel the linear maps from the parameters to the eigenvalues $a_1,b_1,c_1,a_2$, to the $P$'s, to the contraction tensors, and finally to their representation in $\cM_2\oplus\C^3$. In this way, we obtain: \begin{corollary} \label{cor:reduc} There exists a channel $T:\cM_d\rightarrow\cM_d\otimes\cM_d$ with corresponding Jamiolkowski state $J_T$ whose marginals give rise to the parameters $\alpha_1,\beta_1$ and $a_2$ iff there exists a state $\varrho$ on $\cM_2\oplus\C^3$ such that \bea \alpha_1 &=& \frac{d}{d-1}\Big(1-\tr{\1_2\varrho}-\tr{f_2\varrho}\Big),\label{eq:paramred1}\\ \beta_1 &=& \bra{e_1}\varrho\ket{e_1}-\frac{1}{d-1}\Big(\tr{\1_2\varrho}+\tr{f_2\varrho}-\bra{e_1}\varrho\ket{e_1}\Big),\label{eq:paramred2}\\ a_2 &=& \big(1- \bra{e_2}\varrho\ket{e_2}-\tr{f_1\varrho}\big)/(d^2-d),\label{eq:paramred3} \eea where $\C^3$ is regarded as space of diagonal $3\times 3$ matrices and $e_1,e_2,f_1,f_2$ are as in Lemma~\ref{lem:iso}. \end{corollary} The proof of this corollary can be found in the appendix. There is still unitary freedom in the choice of the vectors $e_1, e_2$. We utilize this and set \be \langle e_1|\sigma_y|e_1\rangle=\langle e_2|\sigma_y|e_2\rangle=0\quad\text{and}\quad |e_2\rangle\langle e_2|=\frac12(\1_2+\sigma_x)\label{eq:fixinplane},\ee where the $\sigma_i$'s are the usual Pauli matrices. 
So in particular, we choose the vectors such that the corresponding projectors lie in an equatorial plane of the Bloch sphere that is characterized by density matrices with real entries. In order to simplify the problem further, we now focus more explicitly on minimizing $a_2$: \begin{proposition}[Reduction to the unit cone]\label{prop:cone} Under the constraints given by Eqs.~(\ref{eq:paramred1} -- \ref{eq:fixinplane}), the minimum value for $a_2$ for arbitrary fixed values of $\alpha_1,\beta_1$ that is achievable by varying over all states $\varrho$ is attained for a state of the form \be\varrho=\frac{1}{2}\Big((1-z)\1_2+x\sigma_x+y\sigma_z\Big)\oplus (z,0,0),\label{eq:cone}\ee where $(x,y,z)\in\R^3$ is an element of the envelope of the unit cone, i.e., $z\in[0,1], x^2+y^2=(1-z)^2$. \end{proposition} \begin{proof} We simplify the structure of $\varrho$ in four steps, each of which eliminates one parameter. First, note that we can assume $\tr{f_3\varrho}=0$, where $f_3$ is the diagonal matrix $(0,0,1)$. This is seen by considering the map $\varrho\mapsto\varrho+\tr{f_3\varrho}(f_1-f_3)$, which decreases $a_2$, sets the $f_3$-component to zero, but leaves $\alpha_1$ and $\beta_1$ unchanged. Second, we claim that the $f_2$-component can be set to zero, as well. To this end, consider the map $\varrho\mapsto\varrho+\tr{f_2\varrho}(|e_1^\perp\rangle\langle e_1^\perp|-f_2)$ where $e_1^\perp$ is a unit vector in $\C^2$ that is orthogonal to $e_1$. By construction, this sets the $f_2$-component to zero, decreases $a_2$ and leaves $\alpha_1$ and $\beta_1$ invariant. Taken together with the first step, this already shows that the abelian part of $\varrho$ can be assumed to be of the form $(z,0,0)$ for some $z\in[0,1]$. 
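The two trace-shifting maps used in the first two steps above can be checked numerically. The following sketch is our own illustration (the test state, dimension $d=3$ and all helper names are arbitrary assumptions); it verifies that both maps leave $\alpha_1$ and $\beta_1$ untouched while decreasing $a_2$ and emptying the $f_2$- and $f_3$-components.

```python
import math

d = 3  # illustrative dimension

# Vectors as fixed in Eq. (fixinplane); e1p is a unit vector orthogonal to e1.
s = math.sqrt(2.0 / d)
q = math.sqrt(2.0 - s * s)
e1 = ((s + q) / 2.0, (s - q) / 2.0)
e1p = (-e1[1], e1[0])
e2 = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))

def proj(v):
    return [[v[0] * v[0], v[0] * v[1]], [v[1] * v[0], v[1] * v[1]]]

def params(rho2, g):
    """(alpha_1, beta_1, a_2) of a state (rho2, g) via Eqs. (paramred1-3)."""
    tr2 = rho2[0][0] + rho2[1][1]
    ev = lambda v: sum(v[i] * rho2[i][j] * v[j]
                       for i in range(2) for j in range(2))
    return (d / (d - 1) * (1.0 - tr2 - g[1]),
            ev(e1) - (tr2 + g[1] - ev(e1)) / (d - 1),
            (1.0 - ev(e2) - g[0]) / (d * d - d))

def shift(rho2, g, m2, dg, t):
    """rho -> rho + t * (m2 (+) dg), blockwise."""
    return ([[rho2[i][j] + t * m2[i][j] for j in range(2)] for i in range(2)],
            tuple(g[i] + t * dg[i] for i in range(3)))

zero2 = [[0.0, 0.0], [0.0, 0.0]]
rho2, g = [[0.20, 0.05], [0.05, 0.30]], (0.2, 0.2, 0.1)  # generic test state
before = params(rho2, g)
# step 1: rho -> rho + tr(f3 rho)(f1 - f3)
rho2, g = shift(rho2, g, zero2, (1.0, 0.0, -1.0), g[2])
# step 2: rho -> rho + tr(f2 rho)(|e1p><e1p| - f2)
rho2, g = shift(rho2, g, proj(e1p), (0.0, -1.0, 0.0), g[1])
after = params(rho2, g)
```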
Third, observe that the $\sigma_y$-component of the non-abelian part of $\varrho$ does not enter any of the equations so that we can as well set it to zero and thus assume that, restricted to $\cM_2$, $\varrho$ lies in the 'real' equatorial plane of the Bloch sphere. Taking positivity and normalization into account, Eq.~(\ref{eq:cone}) summarizes these findings, so far with $x^2+y^2\leq (1-z)^2$. It remains to show that equality can be assumed here. Let $v_1,v_2,w\in{\R^3}$ be the Bloch vectors of $e_1$, $e_2$ and $\varrho$, respectively. Suppose $||w||_2<1$, which corresponds to a point that does not lie on the envelope of the cone and let $v_1^\perp\in\R^3$ be a unit vector in the equatorial plane that is orthogonal to $v_1$. Then the map $w\mapsto w+\epsilon v_1^\perp$, for sufficiently small $\epsilon$ of the right sign, leaves $\alpha_1$ and $\beta_1$ unchanged, but decreases $a_2$. Hence, we can choose $\epsilon$ so that the Bloch vector reaches unit norm, which completes the proof of the proposition. \end{proof} \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=0.6] \draw[->] (0,0) -- (7,0)node[right]{$x$}; \draw[<-, name path=y axis] (0,7)node[above]{$z$} -- (0,0); \draw (6,1pt) -- (6,-1pt) node[below right]{$1$}; \draw (1pt,6) -- (-1pt,6) node[above left]{$1$}; \draw[name path=hat, thick] (-6,0) -- (0,6) -- (6,0); \draw[dashed, name path=upper arc, thick] (-6,0) arc (180:0:6cm and 1cm); \draw[name path=lower arc, thick] (-6,0) arc (180:360:6cm and 1cm); \draw[tumblue, thick,->] (0,0) -- node[below left]{$v_2$} (6,0); \path[name path=P1] (0,0) -- (-6,-1); \path[name intersections={of=P1 and lower arc, by=A1}]; \draw[tumblue, thick, ->] (0,0) -- node[above]{$v_1$} (A1); \path[name path=P2] (4,0) -- (-2,6); \draw[name intersections={of=P2 and hat, by=A2}]; \path[name path=P3] (4,0) -- (3.75,-1); \draw[name intersections={of=P3 and lower arc, by=A3}]; \path[name path=P4] (5,4) -- (4,0); \draw[name intersections={of=P4 and upper arc, by=A4}]; \filldraw[fill=tumorange!20!white, draw= tumorange, opacity=0.9, dashed, name path=para, thick] (A3) .. controls (-2.7,6.4) and (-2.6,7.0) .. (A4) -- (A3); \path[name intersections={of=para and y axis}]; \coordinate (I1) at (intersection-1); \coordinate (I2) at (intersection-2); \node (I3) at ($(I1)!0.4!(I2)$) {}; \draw (I3) -- (I2); \end{tikzpicture} \caption{Sketch of the unit cone used in the construction of the proof in Prop.~\ref{prop:cone}. The orange parabola corresponds to a fixed value of $\delta$ and the optimal device is contained within its boundary; its location depends on the chosen disturbance distance measure $\Delta$.} \label{fig:plot_cone} \end{figure} This completes the list of ingredients that are needed for the main theorem of this section: \begin{theorem}[(Almost universal) optimal devices]\label{thm:universality} Let $\Delta$ and $\delta$ be distance-measures for quantifying disturbance and measurement-error that satisfy Assumptions~\ref{assum:1} and \ref{assum:2}, respectively. 
Then the optimal $\Delta-\delta$-tradeoff is attained within the following two-parameter family of quantum channels: \bea\label{eq:optchannels} T(\rho)&:=& \sum_{i=1}^d \left[z\langle i|\rho|i\rangle\frac{\1_d-\ket{i}\bra{i}}{d-1}+(1-z)K_i\rho K_i\right]\otimes\ket{i}\bra{i}, \\ && K_i:=\mu\1_d+\nu\ket{i}\bra{i}, \nonumber \eea where $z\in[0,1]$ and $\mu,\nu\in\R$ are constrained by imposing $T$ to be trace preserving. \end{theorem} \begin{proof} It remains to translate the two-parameter family of Eq.~(\ref{eq:cone}) into the world of channels. It suffices to consider the cases in which either $z=0$ or $z=1$ since these generate the general case by convex combination. In both cases the relevant von Neumann algebra is a factor on which the dual of $\iota$ becomes its inverse, up to a multiplicity factor. This means we have to compute $\iota^{-1}(\varrho)$ and show that it equals $J_T$ when normalized. If $z=1$, this is readily verified since in this case $\varrho=f_1$ for which Eqs.~(\ref{eq:iso1},\ref{eq:iso2}) give $$\iota^{-1}(\varrho)=\Gb-\Gg =\sum_{i=1}^d \big(\1_d-|i\rangle\langle i|\big)\otimes |ii\rangle\langle ii|.$$ If $z=0$ then $\varrho$ is a rank-one projection within the real algebra generated by the projections onto $\ket{e_1}$ and $\ket{e_2}$. That is, \be\varrho = \mu^2 \ket{e_1}\bra{e_1}+\frac{\nu^2}{d} \ket{e_2}\bra{e_2} +\tau \big(\ket{e_1}\bra{e_2}+\ket{e_2}\bra{e_1}\big),\nonumber \ee for some $\tau,\mu, \nu\in\R$. Having rank one requires vanishing determinant, which fixes $\tau^2=\mu^2\nu^2/d$ while the remaining two parameters are constrained by the normalization $\tr{\varrho}=1$. Note that we may choose $\tau=\mu\nu/\sqrt{d}$: since the sign of $\mu\in\R$ is not fixed, this choice also covers the case $\tau=-\mu\nu/\sqrt{d}$.
Exploiting that $\iota^{-1}$ is again an isomorphism and that for instance $\ket{e_1}\bra{e_2}=\sqrt{d}\ket{e_1}\bra{e_1}\cdot\ket{e_2}\bra{e_2}$, we obtain \bea \iota^{-1}(\varrho)&=&\frac{1}{d}\left[\mu^2\;\Gc+\nu^2\;\Gg+\mu\nu\big(\Ge+\Gf\big)\right]\nonumber\\ &=& \frac{1}{d}\sum_{i,k,l=1}^d K_i\ket{k}\bra{l}K_i\otimes \ket{k}\bra{l}\otimes\ket{i}\bra{i},\nonumber \eea which is, up to normalization, indeed the Choi matrix of the claimed channel. \end{proof} In the following section we will see that for many common disturbance measures $\Delta$, in fact, one more parameter can be eliminated: $z=0$ turns out to be optimal if $\Delta$ is for instance constructed from the average-case or worst-case fidelity, the worst-case Schatten $1-1$-norm or the diamond norm. This may not come as a surprise since a look at Eq.~(\ref{eq:paramred1}) reveals that for channels that correspond to elements of the unit cone we have \be\label{eq:alpha1z} \alpha_1=\frac{d}{d-1} z. \ee In other words, the contribution of the completely depolarizing channel to $T_1$ vanishes iff $z=0$. This raises the question whether $z=0$ is generally optimal under Assumptions~\ref{assum:1} and \ref{assum:2}. The following construction, whose only purpose is to enable the argument, shows that this is not true. Hence, without adding further assumptions about the distance measures (in particular about $\Delta$) no further reduction is possible. On the set of quantum channels on $\cM_d$ we define $$\DD(\Phi):=\sup_{||\psi||=1}\bra{\psi}\Phi\big(\ket{\psi}\bra{\psi}\big)\ket{\psi}-\inf_{||\varphi||=1}\bra{\varphi}\Phi\big(\ket{\varphi}\bra{\varphi}\big)\ket{\varphi}.$$ This particular example yields zero disturbance for the depolarizing channel and thus allows us to show that optimality of $z=0$ does not hold in general. \begin{lemma} $\DD$ satisfies Assumption~\ref{assum:1}. \end{lemma} \begin{proof} Evidently, $\DD(\id)=0$ and $\DD$ is basis-independent.
Convexity follows from the fact that $\DD$ is a supremum over linear functionals. \end{proof} \begin{corollary}[Necessity of the second parameter]\label{cor:z} Let $\delta $ be any error-measure that satisfies Assumption~\ref{assum:2} and that is faithful in the sense that $\delta=0$ implies a perfect measurement. Then the optimal $\DD-\delta$-tradeoff cannot be attained within the family of channels in Eq.~(\ref{eq:optchannels}) with $z=0$. \end{corollary} \begin{proof} Consider $\delta=0$ in the $\DD-\delta$-plane. Within the full set of channels in Eq.~(\ref{eq:optchannels}) there is one that attains $\delta=0$ while $T_1(\cdot)=\tr{\cdot}\1/d$, by choosing $\mu =0$, $\nu = 1$ and $z= (d-1)/d$. The latter implies $\DD(T_1)=0$. However, if we restrict ourselves to channels with $z=0$, then the unique channel in Eq.~(\ref{eq:optchannels}) that achieves $\delta=0$ has $T_1(\cdot)=\sum_i \bra{i}\cdot\ket{i}\ket{i}\bra{i}$ for which clearly $\DD(T_1)>0$. \end{proof} Clearly, $\DD$ is not a 'natural' disturbance measure. For instance, it has the somewhat odd property that it vanishes for the ideal channel as well as for the projection onto the maximally mixed state. In particular, it is not faithful. Note, however, that adding the latter as an additional requirement to Assumption~\ref{assum:1} would still not allow us to eliminate the parameter $z$. In order to construct a new counterexample, we could just consider $\Phi\mapsto\DD(\Phi)+\epsilon||\Phi-\id||_\diamond$. This would be faithful and satisfy Assumption~\ref{assum:1} for any $\epsilon > 0$, but for sufficiently small $\epsilon$, the minimum $\Delta$-value for $\delta=0$ would, by continuity, again not be attainable for $z=0$. \section{Optimal tradeoffs}\label{sec:vN2} In this section we will continue considering non-degenerate von Neumann measurements and exploit the universality theorem of the previous section in order to explicitly compute the optimal tradeoff for a variety of worst-case distance measures.
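Before turning to the explicit tradeoffs, the channel family of Eq.~(\ref{eq:optchannels}) can be sanity-checked numerically: for the choice $\mu=0$, $\nu=1$, $z=(d-1)/d$ used in the proof of Corollary~\ref{cor:z}, the marginal $T_1$ should be completely depolarizing while the induced measurement reproduces the ideal outcome statistics. The following sketch is our own cross-check (the dimension $d=3$, the test state and all names are illustrative assumptions).

```python
d = 3
mu, nu, z = 0.0, 1.0, (d - 1) / d   # the choice from Corollary [cor:z]

def branch(rho, i):
    """Unnormalized output I_i(rho) of branch i of the instrument in
    Eq. (optchannels), with K_i = mu*1 + nu*|i><i| (system part only)."""
    # entries of K_i rho K_i
    KrK = [[mu * mu * rho[k][l]
            + mu * nu * ((1.0 if l == i else 0.0) * rho[k][i]
                         + (1.0 if k == i else 0.0) * rho[i][l])
            + nu * nu * (1.0 if (k == i and l == i) else 0.0) * rho[i][i]
            for l in range(d)] for k in range(d)]
    # depolarizing part z <i|rho|i> (1 - |i><i|)/(d-1) plus (1-z) K_i rho K_i
    return [[z * rho[i][i] * ((1.0 if k == l else 0.0)
                              - (1.0 if (k == i and l == i) else 0.0)) / (d - 1)
             + (1 - z) * KrK[k][l]
             for l in range(d)] for k in range(d)]

rho = [[0.5, 0.2, 0.1], [0.2, 0.3, 0.0], [0.1, 0.0, 0.2]]  # a test state

# marginal channel T_1 and the outcome probabilities of the induced POVM
T1 = [[sum(branch(rho, i)[k][l] for i in range(d)) for l in range(d)]
      for k in range(d)]
probs = [sum(branch(rho, i)[k][k] for k in range(d)) for i in range(d)]
```

For this parameter choice the check confirms $T_1(\rho)=\1/d$ and outcome probabilities $\langle i|\rho|i\rangle$, i.e., $\delta=0$ together with a completely depolarizing marginal.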
We first discuss the total variational distance as a paradigm for the measurement error $\delta$ and then the fidelity and trace-norm as means for quantifying disturbance. \subsection{Total variation} We saw in Lemma~\ref{lem:delta=a2} that all functionals quantifying the measurement error consistent with Assumption~\ref{assum:2} are non-decreasing functions of the parameter $\alpha_2$. In the following, we want to make this dependence explicit for one case that we regard as the most important one from an operational point of view --- the worst-case total variational distance. Given two finite probability distributions $p$ and $p'$, their total variational distance is given by \be||p-p'||_{TV}:=\frac12||p-p'||_1=\frac12\sum_{i}|p_i-p_i'|.\ee The significance of this distance stems from the fact that it displays the largest possible difference in probabilities that the two distributions assign to the same event. In our context the two probability distributions arise from an ideal and an approximate measurement on a quantum state. As $||p-p'||_{TV}$ has itself a 'worst-case interpretation' it is natural to also consider the worst case w.r.t. all quantum states and use the resulting functional as $\delta$. That is, \be \delta_{TV}\left(E'\right)=\sup_{\rho}\frac12\sum_i\big|\tr{E_i'\rho}-\langle i|\rho|i\rangle\big|.\label{eq:Eprimea2}\ee If $E_i'=T_2^*(|i\rangle\langle i|)$ with $T_2$ of the form in Eq.~(\ref{eq:commutant}) so that we can regard $\delta_{TV}$ as a function of $\alpha_2$, we will write $\hat{\delta}_{TV}(\alpha_2)$. \begin{lemma}[Total variational distance]\label{lem:TV} In the symmetric setting discussed above, the worst-case total variational distance, regarded as a function of $\alpha_2$, is given by $\hat{\delta}_{TV}(\alpha_2)=\alpha_2(1-1/d)$. Furthermore, if an instrument is parametrized by the unit cone coordinates of Eq.~(\ref{eq:cone}), then it leads to a worst-case total variational distance of $(1-z-x)/2$. 
\end{lemma} \begin{proof} Inserting $E_i'=T_2^*(|i\rangle\langle i|)=\alpha_2\1/d+(1-\alpha_2)|i\rangle\langle i|$ into Eq.~(\ref{eq:Eprimea2}) we obtain \bea \hat{\delta}_{TV}(\alpha_2) &=& \alpha_2\;\sup_\rho \frac12 \sum_i\left|\tr{\rho\big(\1/d-|i\rangle\langle i|\big)}\right| \nonumber\\ &=& \alpha_2\left(1-\frac{1}{d}\right),\nonumber \eea where the supremum is computed by first realizing that diagonal $\rho$'s (i.e., classical probability distributions) suffice and then noting that convexity of the $l_1$-norm allows us to restrict to the extreme points of the simplex of classical distributions, which all lead to the same, stated value. The $\delta_{TV}$-value of an instrument parametrized by the coordinates of the unit cone can then be obtained from Eq.~(\ref{eq:paramred3}) when using that $\alpha_2=d^2 a_2$. \end{proof} An alternative way of quantifying the measurement error would be the worst-case $l_\infty$-distance between the two probability distributions $p$ and $p'$. In the present context, this measure turns out to have exactly the same value since \bea \sup_{\rho}\max_i \Big|\tr{E_i'\rho}- \langle i|\rho|i\rangle\Big| &=&\nonumber \max_i \big|\big|E_i'-|i\rangle\langle i|\big|\big|_\infty\\ &=&\alpha_2\big|\big|\1/d-|i\rangle\langle i|\big|\big|_\infty\;=\;\alpha_2\left(1-\frac1d\right).\nonumber \eea \subsection{Worst-case fidelity} We consider the worst-case fidelity of a channel $T_1:\cM_d\rightarrow\cM_d$ \be f:=\inf_{||\psi||=1}\langle\psi |T_1\big(|\psi\rangle\langle\psi|\big)|\psi\rangle,\label{eq:worstfT1}\ee which is equal to $\inf_\rho F\big(T_1(\rho),\rho\big)^2$ due to joint concavity of the fidelity. The following states the optimal 'information-disturbance tradeoff' between $f$ and the total variational distance: \begin{theorem}[Total variation - fidelity tradeoff]\label{thm:TVfidelity} Consider a non-degenerate von Neumann measurement, given by an orthonormal basis in $\C^d$, and an instrument with $d$ corresponding outcomes.
Then the worst-case total variational distance $\delta_{TV}$ and the worst-case fidelity $f$ satisfy \be \delta_{TV}\geq \left\{\begin{array}{ll}\frac{1}{d}\left|\sqrt{f(d-1)}-\sqrt{1-f} \right|^2&\ \text{if }\ f\geq\frac{1}{d},\\ 0 &\ \text{if }\ f\leq \frac{1}{d}. \end{array}\right.\label{eq:optfidtv}\ee The inequality is tight and equality is attainable within the one-parameter family of instruments in Eq.~(\ref{eq:optinst}) with $z=0$. \end{theorem} \begin{proof} We exploit that the optimal tradeoff is attainable for symmetric channels (Prop.~\ref{prop:sym}) whose marginal is given in Eq.~(\ref{eq:commutant}). Inserting this into the worst-case fidelity in Eq.~(\ref{eq:worstfT1}) we obtain \bea f &=&\min_{||\psi||=1}\left(\frac{\alpha_1}{d}+\beta_1+\gamma_1\sum_{i=1}^d |\langle\psi|i\rangle|^4\right) \nonumber\\ &=&\frac{\alpha_1}{d}+\beta_1+\left\{\begin{array}{ll} \frac{\gamma_1}{d}&\ \text{if }\ \gamma_1\geq 0,\\ \gamma_1&\ \text{if }\ \gamma_1 <0. \end{array}\right.\label{eq:fgamma} \eea Using Eqs.~(\ref{eq:paramred1},\ref{eq:paramred2}) together with $\gamma_1=1-\alpha_1-\beta_1$ we can express this in terms of the state $\varrho$. From the proof of Prop.~\ref{prop:cone} we know in addition that we can w.l.o.g. assume that $\tr{\varrho f_2}=0$ and $\tr{\1_2\varrho}=1-\tr{\varrho f_1}$. In this way, we obtain \be f=\min\{1-\tr{\varrho f_1},\langle e_1|\varrho| e_1\rangle+\tr{\varrho f_1}/d\}.\label{eq:fminvarrho} \ee We aim at maximizing Eq.~(\ref{eq:fminvarrho}) for each value of the total variational distance, which by Lemma~\ref{lem:TV} and Eq.~(\ref{eq:paramred3}) can be expressed as \be \delta_{TV}=1-\langle e_2|\varrho| e_2\rangle-\tr{\varrho f_1}.\nonumber\ee Considering the map $\varrho\mapsto\varrho+\epsilon|e_2\rangle\langle e_2|-\epsilon f_1$, $\epsilon\geq 0$, under which $\delta_{TV}$ is constant and $f$ non-decreasing, we see that $\tr{\varrho f_1}=0$ can be assumed. That is, $z=0$ is indeed sufficient for the optimal tradeoff. 
The remaining optimization problem can be solved in the equatorial plane of the Bloch sphere, where $\varrho,|e_2\rangle\langle e_2|$ and $|e_1\rangle\langle e_1|$ are represented by Bloch vectors $(x,y)=:w,(1,0)$ and $(2/d-1,2\sqrt{d-1}/d)=:v$, respectively. Minimizing $\delta_{TV}=(1-x)/2$ under the constraints $$ f\leq \frac12\big(1+\langle w,v\rangle\big),\quad \langle w, w\rangle=1,$$ then amounts to a quadratic problem whose solution is stated in Eq.~(\ref{eq:optfidtv}). \end{proof} \subsection{Average-case fidelity} One prominent example of an average-case measure is the average-case fidelity of a quantum channel $T_1: \cM_d \to \cM_d$ \begin{equation} \bar{f}:= \int_{\norm{\psi}=1} \bra{\psi} T_1(\kb{\psi}{\psi}) \ket{\psi} \; \mathrm{d}\psi. \label{eq:Avf} \end{equation} The following theorem gives the optimal 'information-disturbance tradeoff' between the average-case fidelity and the worst-case total variational distance: \begin{theorem}[Total variation - average fidelity tradeoff]\label{thm:TVAvfidelity} Consider a non-degenerate von Neumann measurement, given by an orthonormal basis in $\C^d$, and an instrument with $d$ corresponding outcomes. Then the worst-case total variational distance $\delta_{TV}$ and the average-case fidelity $\bar{f}$ satisfy \be \delta_{TV}\geq \left\{\begin{array}{ll}\frac{1}{d}\left|\sqrt{\left( \bar{f} - \frac{1}{d+1}\right)\frac{d^2-1}{d}}-\sqrt{\left(1- \bar{f}\right)\frac{d+1}{d}} \right|^2&\ \text{if }\ \bar{f}\geq\frac{2}{d+1},\\ 0 &\ \text{if }\ \bar{f}\leq \frac{2}{d+1}. \end{array}\right.\label{eq:optavfidtv}\ee The inequality is tight and equality is attainable within the one-parameter family of instruments in Eq.~(\ref{eq:optinst}) with $z=0$. \end{theorem} \begin{proof} We again use the fact that the optimal tradeoff is attainable for symmetric channels by Prop.~\ref{prop:sym} and its marginal is given in Eq.~(\ref{eq:commutant}). 
The average-case fidelity given in Eq.~(\ref{eq:Avf}) therefore yields \bea \bar{f} &=& \int_{\norm{\psi}=1} \bra{\psi} \left(\alpha_1 \frac{\1}{d} + \beta_1 \kb{\psi}{\psi} + \gamma_1 \sum_{i=1}^d \kb{i}{i} \bk{i}{\psi} \bk{\psi}{i} \right) \ket{\psi} \; \mathrm{d} \psi \nonumber \\ &=& \frac{\alpha_1}{d} + \beta_1 + \gamma_1 \sum_{i=1}^d \int_{\norm{\psi}=1} \bk{\psi}{i}\bk{i}{\psi} \bk{i}{\psi}\bk{\psi}{i}\; \mathrm{d}\psi. \nonumber \eea The integral can be rewritten to give \bea && \int_{\norm{\psi}=1} \bra{\psi \otimes \psi} \left( \kb{i}{i} \otimes \kb{i}{i} \right) \ket{\psi\otimes \psi} \; \mathrm{d}\psi \nonumber \\ &=& \int_{U(d)} \bra{00} \left( U \otimes U \right) \left( \kb{i}{i} \otimes \kb{i}{i} \right) \left( U \otimes U \right)^\ast \ket{00} \; \mathrm{d}U \nonumber \\ &=& \bra{00} \frac{\1+\F}{d(d+1)}\ket{00} \nonumber \\ &=& \frac{2}{d(d+1)}, \nonumber \eea where $\F$ is the flip operator defined as $\F \ket{ij}= \ket{ji}$ and $dU$ denotes the normalized Haar measure on the unitary group $U(d)$ acting on $\C^d$. Together with $\gamma_1 = 1- \alpha_1 - \beta_1$, this gives an average fidelity \begin{equation} \bar{f} = \frac{2}{d+1} - \alpha_1 \frac{d-1}{d(d+1)} + \beta_1 \frac{d-1}{d+1}. \nonumber \end{equation} Using Eqs.~(\ref{eq:paramred1},\ref{eq:paramred2}) we can express this in terms of the state $\varrho$. We can again w.l.o.g. assume that $\tr{\varrho f_2}=0$ and $\tr{\1_2\varrho}=1-\tr{\varrho f_1}$ from the proof of Prop.~\ref{prop:cone}. Therefore, we obtain \begin{equation} \bar{f} = \frac{1}{d+1}\left( 1 + d\bra{e_1} \varrho \ket{e_1} \right). 
\label{eq:Avfminvarrho} \end{equation} We would like to maximize Eq.~(\ref{eq:Avfminvarrho}) for each value of the worst-case total variational distance, which by Lemma~\ref{lem:TV} and Eq.~(\ref{eq:paramred3}) is \be \delta_{TV}=1-\langle e_2|\varrho| e_2\rangle-\tr{\varrho f_1}.\nonumber \ee Similarly to the worst-case fidelity, we can again consider the map $\varrho\mapsto\varrho+\epsilon|e_2\rangle\langle e_2|-\epsilon f_1$, $\epsilon\geq 0$, under which $\delta_{TV}$ is constant and $\bar{f}$ non-decreasing, such that $\tr{\varrho f_1}=0$ can be assumed. That is, $z=0$ is sufficient for the optimal tradeoff. The remaining optimization problem can be solved by realizing that $(\bar{f}(d+1)-1)/d = \bra{e_1} \varrho \ket{e_1}$ and using the solution to the quadratic problem stated and solved in the worst-case fidelity tradeoff. This yields the solution stated in Eq.~(\ref{eq:optavfidtv}). \end{proof} \subsection{Trace norm} The analogue of the total variational distance for density operators is (up to a factor of $2$) the trace norm distance. The corresponding distance between a channel $T_1$ and the identity map is then given by half of the $1$-to-$1$-norm distance \be \Delta_{TV}(T_1):=\frac12\sup_{\rho}||T_1(\rho)-\rho||_1,\label{eq:Delta11def}\ee where the supremum is taken over all density operators. $\Delta_{TV}$ quantifies how well $T_1$ can be distinguished from $\id$ in a statistical experiment, if no ancillary system is allowed. For the two-parameter family of channels in Eq.~(\ref{eq:commutant}) $ \Delta_{TV}$ turns out to be a function of the worst-case fidelity $f$, which was defined in Eq.~(\ref{eq:worstfT1}). 
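The quadratic problem solved at the end of the proof of Thm.~\ref{thm:TVfidelity} can also be checked by brute force over the Bloch circle. The following sketch is our own verification (grid size, sampled values and function names are arbitrary assumptions, not part of the paper).

```python
import math

def bound(f, d):
    """Right-hand side of Eq. (optfidtv)."""
    if f <= 1.0 / d:
        return 0.0
    return (math.sqrt(f * (d - 1)) - math.sqrt(1.0 - f)) ** 2 / d

def min_dtv(f_target, d, n=20000):
    """Brute-force the reduced Bloch-plane problem from the proof of
    Thm. [thm:TVfidelity]: minimize delta_TV = (1-x)/2 over unit vectors
    w = (cos t, sin t) subject to f <= (1 + <w,v>)/2, where v is the
    Bloch vector of |e1><e1|."""
    v = (2.0 / d - 1.0, 2.0 * math.sqrt(d - 1.0) / d)
    best = 1.0
    for k in range(n + 1):
        t = math.pi * k / n          # the upper half-circle suffices
        w = (math.cos(t), math.sin(t))
        if (1.0 + w[0] * v[0] + w[1] * v[1]) / 2.0 >= f_target:
            best = min(best, (1.0 - w[0]) / 2.0)
    return best
```

For sampled dimensions and fidelities the grid minimum agrees with Eq.~(\ref{eq:optfidtv}) up to discretization error. Returning to the trace-norm distance: as noted above, for channels of the form in Eq.~(\ref{eq:commutant}), $\Delta_{TV}$ is a function of the worst-case fidelity $f$ alone.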
This is in contrast to the case of general channels, which merely satisfy the Fuchs-van de Graaf inequalities \be\label{eq:FuchsGraaf}1-f\leq \Delta_{TV}\leq\sqrt{1-f}.\ee \begin{lemma} For every channel of the form in Eq.~(\ref{eq:commutant}), we have $\Delta_{TV}=1-f$.\label{lem:TV1f} \end{lemma} \begin{proof} Due to convexity of the norm we can restrict the supremum in Eq.~(\ref{eq:Delta11def}) to pure state density operators. The resulting operator $T_1\big(|\psi\rangle\langle\psi |\big)-|\psi\rangle\langle\psi |$ then has a single negative eigenvalue and vanishing trace. Hence, the trace-norm is twice the operator norm and we can write \bea \Delta_{TV}(T_1) &=& \max_{||\psi||=||\phi||=1} \langle\phi| \big[|\psi\rangle\langle\psi |-T_1\big(|\psi\rangle\langle\psi |\big)\big]|\phi\rangle\label{eq:tnfid1}\\ &=& \max_{||\psi||=||\phi||=1}\left[(1-\beta_1)|\langle\psi|\phi\rangle|^2-\frac{\alpha_1}{d}-\gamma_1\sum_{i=1}^d|\langle\phi|i\rangle|^2|\langle\psi|i\rangle|^2\right]\nonumber\\ &=& \max_{||\psi||=||\phi||=1} \langle\psi\otimes\phi|R|\psi\otimes\phi\rangle-\frac{\alpha_1}{d},\nonumber \\ && R:=(1-\beta_1)\mathbbm{F}-\gamma_1\sum_{i=1}^d|ii\rangle\langle ii|. \nonumber \eea Our aim is to prove that the maximum in Eq.~(\ref{eq:tnfid1}) is attained for $\psi=\phi$ since then the Lemma follows from the definition of the worst-case fidelity $f$. In order to achieve this, we exploit the symmetry properties of $R$, which is block-diagonal w.r.t. the decomposition of $\C^d\otimes\C^d$ into symmetric and anti-symmetric subspace. Moreover, if we denote by $P_+:=(\1+\mathbbm{F})/2$ the projector onto the symmetric subspace, then $R\leq P_+ R P_+$. 
Defining $\cS$ as the set of separable density operators and utilizing its convexity, we obtain \bea \max_{||\psi||=||\phi||=1}\langle\psi\otimes\phi|R|\psi\otimes\phi\rangle & = &\nonumber\max_{\rho\in\cS}\tr{R\rho} \leq \max_{\rho\in\cS}\tr{RP_+\rho P_+}\\ &=&\max_{\rho\in P_+ \cS P_+}\tr{R\rho}\nonumber\\ &=&\max_{||\psi||=1}\langle\psi\otimes\psi|R|\psi\otimes\psi\rangle,\nonumber \eea where the last step follows from the fact that the extreme points of the convex set $P_+\cS P_+$ are pure, symmetric product states. \end{proof} Due to Prop.~\ref{prop:sym} we can now plug the previous Lemma into Thm.~\ref{thm:TVfidelity} and obtain: \begin{corollary}[Total variation - trace norm tradeoff]\label{cor:TVTV} Consider a non-degenerate von Neumann measurement, given by an orthonormal basis in $\C^d$, and an instrument with $d$ corresponding outcomes. Then the worst-case total variational distance $\delta_{TV}$ and its trace-norm analogue $\Delta_{TV}$ satisfy \be \delta_{TV}\geq \left\{\begin{array}{ll}\frac{1}{d}\left|\sqrt{(1-\Delta_{TV})(d-1)}-\sqrt{\Delta_{TV}} \right|^2&\ \text{if }\ \Delta_{TV}\leq1-\frac{1}{d},\\ 0 &\ \text{if }\ \Delta_{TV}\geq 1-\frac{1}{d}. \end{array}\right.\label{eq:opttracetv}\ee The inequality is tight and equality is attainable within the one-parameter family of instruments in Eq.~(\ref{eq:optinst}) with $z=0$. \end{corollary} \subsection{Diamond norm}\label{sec:gvN} We treat the diamond norm separately, not only because it might be the operationally most relevant measure, but also because the corresponding tradeoff result will be proven in a more general setting: we will allow the target measurement to be a von Neumann measurement that may be degenerate. We will see that degeneracy, even if it varies among the measurement outcomes, does not affect the optimal tradeoff curve if the diamond norm is considered. 
For general distance measures $\Delta$ that satisfy Assumption~\ref{assum:1} we do not expect this result to be true since, loosely speaking, they typically behave less benignly w.r.t. extending the system than the diamond norm does. Hence, assigning different dimensions to different measurement outcomes may, in general, affect the optimal information-disturbance relation. Before we prove that this is not the case for the tradeoff between the diamond norm and its classical counterpart, the total variational distance, let us recall its definition and basic properties. For a hermiticity-preserving map $\Phi:\cM_d\rightarrow\cM_{d'}$ we define \be ||\Phi||_\diamond:=\sup_{\rho}||(\Phi\otimes\id_d)(\rho)||_1,\ee where the supremum is taken over all density operators in $\cM_{d^2}$, which by convexity may be assumed to be pure. For a quantum channel $T_1:\cM_d\rightarrow\cM_d$ we then define \be \Delta_\diamond(T_1):=||T_1-\id_d||_\diamond.\ee $\Delta_\diamond(T_1)$ quantifies how well $T_1$ can be distinguished from the identity channel $\id$ in a statistical experiment, when arbitrary preparations, measurements and ancillary systems are allowed. There are two crucial properties of the diamond norm that we will exploit: 1) Monotonicity: for any quantum channel $\Psi$, neither $||\Psi\circ\Phi||_\diamond$ nor $||\Phi\circ\Psi||_\diamond$ can be larger than $||\Phi||_\diamond$. 2) Tensor stability: in particular, $||\Phi\otimes\id||_\diamond=||\Phi||_\diamond$. \begin{lemma}[Dimension-independence of optimal tradeoff curve]\label{lem:dimind} Consider a von Neumann measurement with $m$ outcomes, corresponding to $m$ mutually orthogonal, non-zero projections of possibly different dimensions, as target. Then the optimal $\Delta_\diamond - \delta_{TV}$-tradeoff depends only on $m$ and is independent of the dimensions of the projections.
\end{lemma} \begin{proof} Let $(d_1,\ldots,d_m)\in\mathbbm{N}^m$ be the dimensions of the projections (i.e., the dimensions of their ranges) and assume w.l.o.g. that $d_m$ is the largest among them. We will consider three changes of those dimensions, namely \be (1,\ldots, 1)\rightarrow(d_m\ldots,d_m)\rightarrow (d_1,\ldots, d_m)\rightarrow (1,\ldots, 1),\label{eq:dimchanges} \ee and show that in each of those three steps the accessible region in the $\Delta_{\diamond}-\delta_{TV}$-plane can only grow or stay the same. Since Eq.~(\ref{eq:dimchanges}) describes a full circle, this means that the region, indeed, stays the same, which proves the claim of the Lemma. For the starting point in Eq.~(\ref{eq:dimchanges}) we consider an arbitrary instrument $\big(I_i:\cM_m\rightarrow\cM_m\big)_{i=1}^m$ that is supposed to approximate a von Neumann measurement given by $(|i\rangle\langle i|)_{i=1}^m$. From here, we construct an instrument that approximates $(|i\rangle\langle i|\otimes\1_{d_m})_{i=1}^m$ simply by taking $ I_i\otimes{\id_{d_m}}=: \tilde{I}_i $. Then $\Delta_\diamond\big(\sum_i\tilde{I}_i\big)=\Delta_\diamond\big(\sum_i I_i\big)$ holds due to the tensor stability of the diamond norm and \bea && \sup_\rho \sum_{i=1}^m \Big|\tr{\rho\big(\tilde{I}^*_i(\1)- |i\rangle\langle i|\otimes\1_{d_m}\big)}\Big| \nonumber \\ &=& \sup_\rho \sum_{i=1}^m \Big|\tr{\rho\left( \big(I^*_i(\1)- |i\rangle\langle i|\big)\otimes\1_{d_m}\right)}\Big| \nonumber \\ &=& \sup_\rho\sum_{i=1}^m \Big|\tr{\rho\big(I^*_i(\1)- |i\rangle\langle i|\big)}\Big| \nonumber \eea shows that the value of $\delta_{TV}$ is preserved, as well. Second and third step in Eq.~(\ref{eq:dimchanges}) can be treated at once by realizing that in both cases the dimensions are pointwise non-increasing. So let us consider this scenario in general. 
Denote the projections corresponding to two von Neumann measurements by $Q_i\in\cM_D$ and $\tilde{Q}_i\in\cM_{\tilde{D}}$ and assume that $\tr{Q_i}=:d_i\geq \tilde{d}_i:=\tr{\tilde{Q}_i}$. Let $I_i:\cM_D\rightarrow\cM_D$ be the elements of an instrument that approximates the measurement in the larger space. In order to construct an instrument in the smaller space that is at least as good w.r.t. $\Delta_\diamond$ and $\delta$, we introduce two isometries $V$ and $W$ as \bea V:\C^{\tilde{D}}\rightarrow\C^D &\text{ s.t. }& V^*Q_i V=\tilde{Q}_i \nonumber \\ W:\C^D\rightarrow\C^k\otimes\C^{\tilde{D}} &\text{ s.t. }& \forall i\in\{1,\ldots,\tilde{D}\}:\ WV|i\rangle=|1\rangle\otimes |i\rangle, \nonumber \eea where $\{|i\rangle\}_{i}$ is an orthonormal basis in $\C^{\tilde{D}}$ and $k\in\mathbbm{N}$ is sufficiently large so that $W$ can be an isometry. The sought instrument in the smaller space can then be defined as \be \tilde{I}_i(\rho):={\rm tr}_{\C^k} \left[W I_i\big(V\rho V^*\big)W^*\right], \nonumber \ee where ${\rm tr}_{\C^k}$ means the partial trace w.r.t. the first tensor factor. For the value of $\Delta_\diamond$ we obtain \bea \Big|\!\Big| \id-\sum_i \tilde{I}_i \Big|\!\Big|_\diamond &=& \Big|\!\Big|{\rm tr}_{\C^k}\big[WV\cdot V^* W^*\big]-{\rm tr}_{\C^k}\Big[W\Big(\sum_i I_i\big(V\cdot V^*\big)\Big)W^*\Big] \Big|\!\Big|_\diamond\nonumber\\ &\leq &\Big|\!\Big| V\cdot V^* -\sum_i I_i\big(V\cdot V^*\big)\Big|\!\Big|_\diamond\ \leq \ \Big|\!\Big|\id-\sum_i I_i\Big|\!\Big|_\diamond\nonumber , \eea where we have used the monotonicity property of the diamond norm twice. 
Finally, using that $\tilde{I}_i^*(\1)=V^* I_i^*(\1)V$ we can show that also $\delta_{TV}$ is non-increasing when moving to the smaller space since \bea \sup_{\rho} \sum_i\left|\tr{\rho\big(\tilde{I}_i^*(\1)-\tilde{Q}_i\big)}\right| &=& \sup_{\rho} \sum_i\left|\tr{V \rho V^*\big(I_i^*(\1)-Q_i\big)}\right| \nonumber\\ &\leq& \sup_{\rho} \sum_i\left|\tr{ \rho \big(I_i^*(\1)-Q_i\big)}\right| ,\nonumber \eea where the supremum in the first (second) line is taken over all density operators in the smaller (larger) space. \end{proof} \begin{theorem}[Total variation - diamond norm tradeoff]\label{thm:TVdiamond} If an instrument is considered approximating a (possibly degenerate) von Neumann measurement with $m$ outcomes, then the worst-case total variational distance $\delta_{TV}$ and the diamond norm distance $\Delta_\diamond$ satisfy \be \delta_{TV}\geq \left\{\begin{array}{ll}\frac{1}{2m}\left(\sqrt{(2-\Delta_\diamond)(m-1)}-\sqrt{\Delta_\diamond} \right)^2&\ \text{if }\ \Delta_\diamond\leq 2-\frac{2}{m},\\ 0 &\ \text{if }\ \Delta_\diamond > 2-\frac{2}{m}. \end{array}\right.\label{eq:optdiamondtv}\ee The inequality is tight in the sense that for every choice of the von Neumann measurement there is an instrument achieving equality. \end{theorem} Note: if the von Neumann measurement is non-degenerate, then equality is again attainable within the one-parameter family of instruments in Eq.~(\ref{eq:optinst}) with $z=0$. In the degenerate case, equality is attainable by such instruments when suitably embedded, as it is done in the proof of Lemma~\ref{lem:dimind}. 
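The optimal curves derived in this section are consistent with one another: substituting $\Delta_{TV}=1-f$ (Lemma~\ref{lem:TV1f}), $\bar f=(1+df)/(d+1)$ (which relates Eqs.~(\ref{eq:fminvarrho}) and (\ref{eq:Avfminvarrho}) for cone states with $z=0$) and $\Delta_\diamond=2\Delta_{TV}$ with $m=d$ maps each bound onto the worst-case-fidelity bound. The following sketch is our own cross-check of this algebra (function names and grid are arbitrary).

```python
import math

def bound_f(f, d):
    """Eq. (optfidtv): optimal delta_TV as a function of the worst-case fidelity."""
    if f <= 1.0 / d:
        return 0.0
    return (math.sqrt(f * (d - 1)) - math.sqrt(1.0 - f)) ** 2 / d

def bound_tn(D, d):
    """Eq. (opttracetv): optimal delta_TV as a function of Delta_TV."""
    if D >= 1.0 - 1.0 / d:
        return 0.0
    return (math.sqrt((1.0 - D) * (d - 1)) - math.sqrt(D)) ** 2 / d

def bound_avg(fb, d):
    """Eq. (optavfidtv): optimal delta_TV as a function of the average fidelity."""
    if fb <= 2.0 / (d + 1):
        return 0.0
    return (math.sqrt((fb - 1.0 / (d + 1)) * (d * d - 1) / d)
            - math.sqrt((1.0 - fb) * (d + 1) / d)) ** 2 / d

def bound_dia(Dd, m):
    """Eq. (optdiamondtv): optimal delta_TV as a function of Delta_diamond."""
    if Dd > 2.0 - 2.0 / m:
        return 0.0
    return (math.sqrt((2.0 - Dd) * (m - 1)) - math.sqrt(Dd)) ** 2 / (2 * m)

def max_gap(d, n=2000):
    """Largest deviation between the four curves after the substitutions
    Delta_TV = 1 - f, fbar = (1 + d f)/(d + 1), Delta_diamond = 2(1 - f)."""
    gap = 0.0
    for k in range(n + 1):
        f = k / n
        ref = bound_f(f, d)
        gap = max(gap,
                  abs(bound_tn(1.0 - f, d) - ref),
                  abs(bound_avg((1.0 + d * f) / (d + 1), d) - ref),
                  abs(bound_dia(2.0 * (1.0 - f), d) - ref))
    return gap
```

For each sampled dimension the four curves coincide up to floating-point error, as the substitutions predict.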
\begin{figure}[ht] \centering \includegraphics[clip, trim=4cm 9cm 4cm 9cm, width=0.900\textwidth]{Plot_TV_Diamond_Tradeoff.pdf} \caption{The optimal total variation - diamond norm tradeoff for different numbers of measurement outcomes.} \label{fig:Plot_TV_Diamond_Tradeoff} \end{figure} \begin{proof} Due to Lemma~\ref{lem:dimind} we can assume that the von Neumann measurement is non-degenerate and acts on a $d=m$ dimensional Hilbert space. We will prove that the accessible region stays the same when replacing $\Delta_\diamond$ with $2\Delta_{TV}$ so that the theorem follows from Cor.~\ref{cor:TVTV}. Since $\Delta_\diamond \geq 2\Delta_{TV}$ it suffices to show that this holds with equality for instruments that achieve the optimal $\Delta_{TV} - \delta_{TV}$ curve. Due to Eq.~(\ref{eq:alpha1z}) and Cor.~\ref{cor:TVTV} we can restrict ourselves to symmetric channels $T_1$ of the form in Eq.~(\ref{eq:commutant}) with $\alpha_1=0$. With $\cC(\cdot):=\sum_{i=1}^d |i\rangle\langle i|\langle i|\cdot|i\rangle$ and using that $(1-\beta_1)=\gamma_1$ we have \bea \Delta_\diamond(T_1) &=& \sup_{||\psi||=1} \big|\!\big| \big(T_1\otimes\id_d-\id_{d^2}\big)\big(|\psi\rangle\langle\psi|\big)\big|\!\big|_1\nonumber\\ &=& \sup_{||\psi||=1} \gamma_1 \big|\!\big| |\psi\rangle\langle\psi| -\big(\cC\otimes\id_d\big)\big(|\psi\rangle\langle\psi|\big)\big|\!\big|_1\nonumber\\ &=& 2 \gamma_1 \sup_{||\psi||=||\phi||=1} |\langle\psi|\phi\rangle|^2-\langle\phi|\big(\cC\otimes\id_d\big)\big(|\psi\rangle\langle\psi|\big)|\phi\rangle\nonumber\\ &=& 2\gamma_1 \sup_{||\psi||=1} 1-\langle\psi|\big(\cC\otimes\id_d\big)\big(|\psi\rangle\langle\psi|\big)|\psi\rangle,\nonumber \eea where the last two steps follow exactly the argumentation below Eq.~(\ref{eq:tnfid1}). For the remaining optimization problem we write $|\psi\rangle=(\1_d\otimes X)\sum_{i=1}^d |ii\rangle$ where $X\in\cM_d$ is s.t. $\sum_{i=1}^d \langle i|X^*X|i\rangle=||\psi||^2=1$.
Then $$ \langle\psi|\big(\cC\otimes\id_d\big)\big(|\psi\rangle\langle\psi|\big)|\psi\rangle = \sum_{i=1}^d \big|\langle i|X^*X|i\rangle\big|^2 \geq \frac1d\Big(\sum_{i=1}^d \langle i|X^*X|i\rangle\Big)^2=\frac1d ,$$ where the inequality is an application of Cauchy-Schwarz. Consequently, \be\label{eq:cbfinal} \Delta_\diamond (T_1)\leq 2\gamma_1\Big(1-\frac1d\Big)=2\Delta_{TV}(T_1),\ee where the last equality uses that $\Delta_{TV}=1-f$ by Lemma~\ref{lem:TV1f} and $f=1-\gamma(1-1/d)$ by Eq.~(\ref{eq:fgamma}). As $\Delta_\diamond$ is also lower bounded by $2 \Delta_{TV}$, equality has to hold in Eq.~(\ref{eq:cbfinal}), which completes the proof. \end{proof} Note that equality in Eq.~(\ref{eq:cbfinal}) means that entanglement assistance does not increase the distinguishability of the identity channel $\id$ and the channel $T_1$. \section{SDPs for general POVMs}\label{sec:SDP} In this section, we consider the most general case, when the target measurement $E$ is given by an arbitrary POVM. It is then still possible to characterize the achievable region in the $\Delta-\delta$-plane as the set of solutions to some SDP if $\Delta$ and $\delta$ are convex and semialgebraic. To this end, let us start with the definition of semialgebraicity. Informally, a semialgebraic set is a set $S\subseteq\R^n$ defined by finitely many polynomial equations and inequalities, or a finite union of such sets. We mainly follow \cite{Bochnak_RealAlgGeo, Karow_2003}. \begin{definition}[Semialgebraic set {\cite[Definition 3.1.1]{Karow_2003}}] A semialgebraic subset of $\R^n$ is an element of the Boolean algebra of subsets of $\R^n$ which is generated by the sets \begin{equation} \left\{ \left( x_1, \ldots, x_n \right) \in \R^n \middle\vert p\left(x_1, \ldots, x_n \right) > 0 \right\}, \ \ p\in \R[X_1, \ldots, X_n], \label{eq:semiset} \end{equation} where $\R[X_1, \ldots, X_n]$ denotes the ring of real polynomials in the variables $X_1$, $\ldots$, $X_n$.
\end{definition} From this definition, it is immediately clear that sets of the form \[\left\{ \left( x_1, \ldots, x_n \right) \in \R^n \middle\vert p\left(x_1, \ldots, x_n \right) \bullet 0 \right\},\] where $\bullet \in \{ <,>, \leq, \geq, =, \neq \}$, $p\in \R[X_1, \ldots, X_n]$, are semialgebraic and that the family of semialgebraic sets is closed under taking complements, finite unions and finite intersections. Moreover, by the Tarski-Seidenberg principle, quantification over the reals preserves the semialgebraic property \cite[Appendix 1]{Marshall_2008}: \begin{theorem}[Tarski-Seidenberg, quantifier elimination {\cite[Thm. 1]{Wolf_2011}}] \label{thm:TS2} Let $\{ p_i (x,z)\}_{i=1}^k$ be a finite set of polynomial equalities and inequalities with variables $(x,z)\in \R^n \times \R^m$ and coefficients in $\Q$, let $\phi(x,z)$ be a Boolean combination of the $p_i$'s (using $\vee$, $\wedge$ and $\neg$), and set \begin{equation} \Psi(z) := \big( Q_1 x_1 \ldots Q_n x_n : \phi(x,z) \big), \ \ Q_j \in \left\{ \exists, \forall \right\}. \label{eq:TS1} \end{equation} Then there exists a formula $\psi(z)$ which is (i) a quantifier-free Boolean combination of finitely many polynomial (in-)equalities with rational coefficients, and (ii) equivalent in the sense \begin{equation} \forall z: \ \ \big( \psi(z) \Leftrightarrow \Psi(z) \big). \label{eq:TS2} \end{equation} Moreover, there exists an effective algorithm which constructs the quantifier-free equivalent $\psi$ of any such formula $\Psi$. \end{theorem} \begin{definition}[Semialgebraic function] Let $S_k \subseteq \R^{n_k}$ be non-empty semialgebraic sets, $k=1,2$. A function $f:S_1 \to S_2$ is said to be semialgebraic if its graph \begin{equation} \left\{(x,z) \in S_1\times S_2 \middle\vert z=f(x) \right\} \label{eq:graph} \end{equation} is a semialgebraic subset of $\R^{n_1+n_2}$.
\end{definition} Using the Tarski-Seidenberg principle, Thm.~\ref{thm:TS2}, it is also possible to prove that the following functions, which are likely to appear in optimization problems, are semialgebraic \cite[Sec. 3.1]{Karow_2003}: \begin{itemize} \item Real polynomial functions are semialgebraic. \item Compositions of semialgebraic functions are semialgebraic. Let $S_k \subseteq \R^{n_k}$, $k=1,2,3$, be semialgebraic sets and let $f:S_1 \to S_2$ and $g:S_2 \to S_3$ be semialgebraic functions. Then their composition $g \circ f:S_1 \to S_3$ is semialgebraic. \item Let $f: S_1 \to S_2$ be a semialgebraic function, and let $A \subseteq S_1$ (resp. $B \subseteq S_2$) be a semialgebraic set. Then $f(A)$ (resp. $f^{-1}(B)$) is semialgebraic. \item Finite sums and products of semialgebraic functions are semialgebraic. Let $f_1, f_2: S_1 \to \R$ be semialgebraic functions. Then $f_1+f_2, f_1f_2:S_1 \to \R$ are semialgebraic. \item Let $f_1, f_2: S_1 \to \R$ be semialgebraic functions. If $f_2^{-1}(\{0\}) \neq S_1$, then $f_1/f_2:S_1\backslash f_2^{-1}(\{0 \}) \to \R$ is semialgebraic. \item Let $\cM^{\text{Herm}}_n$ denote the set of all Hermitian $n\times n$-matrices, and for $H \in \cM^{\text{Herm}}_n$ let $\lambda_k(H)$, $k \in \{1,\ldots, n\}$, denote the eigenvalues of $H$ in decreasing order. The functions $\lambda_k(\cdot):\cM^{\text{Herm}}_n \to \R$ are semialgebraic. \item The singular value functions $\sigma_k: \C^{m\times n} \to [0, \infty)$, $1\leq k \leq \min \{m,n\}$ are semialgebraic. \end{itemize} For the last point, we identify a subset of $\C^{n}$ with a subset of $\R^{2n}$ by separating the real and imaginary parts. Therefore, the notion of a semialgebraic subset of $\C^{m\times n}$ is well defined. Furthermore, one can show the following regarding the supremum or infimum of a function: \begin{lemma}[{\cite[Cor.
3.1.15]{Karow_2003}}] \label{lem:supinf} Let $S_k \subseteq \R^{n_k}$ be non-empty semialgebraic sets, $k=1,2$, and $f:S_1 \times S_2 \to \R$ a semialgebraic function. Then $\hat{f}, \check{f}:S_1 \to \R \cup \{-\infty, \infty\}$, \begin{eqnarray} \hat{f}(x) &:=& \sup_{y\in S_2} f(x,y) \ \ \text{ and} \\ \check{f}(x) &:=& \inf_{y \in S_2} f(x,y) \end{eqnarray} are both semialgebraic. \end{lemma} Using the fact that singular value functions are semialgebraic, it is immediately possible to show the following corollary: \begin{corollary}[{\cite[Cor. 3.1.24]{Karow_2003}}] \label{cor:Schatten} The Schatten $p$-norms $\norm{\cdot}_{p}:\C^{n\times m} \to [0,\infty)$ are semialgebraic for all $p \in [1,\infty) \cap \Q$ and $p=\infty$. \end{corollary} \begin{proof} Please see \cite[Cor. 3.1.23 and 3.1.19]{Karow_2003} for a full proof. The main idea is to establish that the function $x \mapsto x^{p/q}$, with $x > 0$ and $p,q$ positive integers, is semialgebraic. Its graph is \begin{eqnarray*} && \left\{ \left(x,z \right) \in \R^2_+ \middle\vert z = x^{\frac{p}{q}} \right\} \\ &=& \left\{ \left(x,z \right) \in \R^2 \middle\vert z^q - x^{p} =0 \right\} \cap \R^2_+, \end{eqnarray*} which is semialgebraic. \end{proof} \begin{corollary} \label{cor:Schattenpq} The Schatten $p$-to-$q$ norm-distances of a quantum channel $\Phi \in \cT_d$ to the identity channel $$\Phi\ \mapsto ||\Phi-\id||_{p\rightarrow q,n}:=\sup_{\rho\in\cS_{dn}}\frac{||(\Phi-\id)\otimes\id_n(\rho)||_q}{||\rho||_p},\quad n\in\mathbbm{N},$$ are semialgebraic for all $p,q \in [1,\infty) \cap \Q$ and $p,q=\infty$. The worst-case fidelity distance of a quantum channel $\Phi \in \cT_d$ to the identity channel $$\Phi\ \mapsto \inf_{\rho\in\cS_{d}} F\left( \Phi(\rho), \rho\right)^2$$ is semialgebraic. 
The worst-case $l_p$-distances of a POVM $E' \in \cE_{d,m}$ to the target POVM $E \in \cE_{d,m}$ $$E'\ \mapsto \sup_{\rho\in\cS_{d}}||\left( \Tr[\rho E_i] - \Tr[\rho E_i']\right)_{i=1}^m ||_{p},$$ are semialgebraic for all $p \in [1,\infty) \cap \Q$ and $p=\infty$. \end{corollary} \begin{proof} Given that the set of all quantum states is semialgebraic \cite[Lemma 1]{Wolf_2011}, Cor.~\ref{cor:Schatten} together with Lemma~\ref{lem:supinf} immediately yields the statements. \end{proof} In particular, the special case of the \emph{diamond norm} $||\cdot||_\diamond:=||\cdot||_{1\rightarrow1,d}$, which we discuss in more detail below, and its dual, the \emph{cb-norm} (with $p=q=\infty, n=d$) are semialgebraic. \begin{theorem}[Helton-Nie conjecture in dimension two {\cite[Thm.~6.8.]{Scheiderer_2012}}] \label{thm:HeltonNie} Every convex semialgebraic subset $S$ of $\R^2$ is the feasible set of a SDP. That is, it can be written as \begin{equation} S = \left\{ \xi \in \R^2\middle\vert \exists \eta \in \R^m : A + \sum_{i=1}^2 \xi_i B_i + \sum_{j=1}^m \eta_j C_j \geq 0 \right\}, \label{eq:SDPFeasible} \end{equation} where $m \geq 0$ and $A$, $B_i$ as well as $C_j$ are real symmetric matrices of the same size. \end{theorem} The proof of the Helton-Nie conjecture in dimension two can be found in \cite{Scheiderer_2012}.\footnote{The conjecture for larger dimensions was shown to be false in general in \cite{Scheiderer_2017}.} The main observation of this section is a consequence of the previous theorem and the following simple Lemma: \begin{lemma} \label{lem:SemiPlane} If $\Delta$ and $\delta$ are both semialgebraic, then the accessible region in the $\Delta-\delta$-plane is a semialgebraic set. 
\end{lemma} \begin{proof} Let us denote the accessible region in the $\Delta-\delta$-plane by $S$, i.e., \begin{equation*} S = \left\{ x\in \R^2 \middle\vert \exists I = \{ I_i \}_{i=1}^m : x_1 = \Delta\left(\sum_{i=1}^{m} I_i \right) \wedge x_2 = \delta \left( \left( I_i^\ast(\1)\right)_{i=1}^m\right) \right\}. \end{equation*} First note that the set of instruments is semialgebraic. The maps $I\mapsto \sum_{i=1}^m I_i$ as well as $I \mapsto (I_i^\ast(\1))_{i=1}^m$ are linear, hence polynomial, and therefore semialgebraic \cite{Bochnak_RealAlgGeo}. Given that the composition of two semialgebraic maps is semialgebraic \cite[Prop. 2.2.6 (i)]{Bochnak_RealAlgGeo} and that the image of a semialgebraic set under a semialgebraic map is semialgebraic \cite[Prop. 2.2.7.]{Bochnak_RealAlgGeo}, $\Delta\left(\sum_{i=1}^m I_i\right)$ as well as $\delta\left((I_i^\ast(\1))_{i=1}^m\right)$ are semialgebraic. Using the Tarski-Seidenberg principle, Thm.~\ref{thm:TS2}, we arrive at the claim. \end{proof} \begin{theorem}[SDP solution for arbitrary target measurements] \label{thm:SDPalg} If $\Delta$ and $\delta$ are both convex and semialgebraic, then the accessible region in the $\Delta-\delta$-plane is the feasible set of a SDP. \end{theorem} \begin{proof} If $\Delta$ and $\delta$ are convex and semialgebraic, then the whole region in the $\Delta-\delta$-plane that is accessible by quantum instruments is a convex semialgebraic subset of $\R^2$ by Lemma~\ref{lem:SemiPlane}. By Thm.~\ref{thm:HeltonNie}, it must thus be the feasible set of a SDP. \end{proof} In particular, if we consider a Schatten $p$-to-$q$-norm distance, with $p$ and $q$ rational, to describe the disturbance caused to the quantum system and a worst-case $l_p$-norm distance, with rational $p$, to quantify the measurement error, the accessible region in the $\Delta-\delta$-plane is the feasible set of a SDP. Unfortunately, we do not know how to make the results of \cite{Scheiderer_2012} constructive.
That is, while Thm.~\ref{thm:SDPalg} proves the existence of a SDP, we do not have a way of making the SDP explicit. \vspace*{5pt} \paragraph{\bf SDP for the diamond norm tradeoff} We now explicitly state the SDP yielding the optimal tradeoff curve in the case of a general POVM for the worst-case $l_\infty$-distance and the diamond norm. This particular example does not rely on the general result of Thm.~\ref{thm:SDPalg}, since the $l_\infty$-norm as well as the diamond norm are already well-suited to SDP formulation. Please note that on the measurement error side, we use the worst-case $l_\infty$-norm to quantify the distance between the two probability distributions, \begin{equation} \delta_{l_\infty} := \sup_{\rho}\max_i \Big|\tr{E_i'\rho}- \tr{E_i\rho}\Big|. \end{equation} In this setting, the optimization problem quantifying the information-disturbance tradeoff is given as: \\ Compute for a given target POVM $E = \left\{ E_i \right\}_{i=1}^m$ and $\lambda \in \left[ 0, 1 \right]$ \bea \label{eq:OptProb} \nu (E,\lambda) := & & \min_{\left\{ I_i \right\}_{i=1}^m} \norm{\sum^m_{i=1} I_i - \id}_\diamond \\ & \text{such that} & \norm{I^\ast_i(\1)-E_i}_\infty \leq \lambda \ \ \forall i, \nonumber \\ & & I_i \text{ is completely positive} \ \ \forall i \text{ and } \nonumber \\ & & \sum^m_{i=1} I^\ast_i(\1) = \1. \nonumber \eea In the following, let us define the Choi matrix for any linear map $T: \cM_d \to \cM_{d'}$ as \begin{equation} J(T):= \left( T \otimes \id_d \right) \left( \sum_{i,j=1}^d \kb{ii}{jj} \right).
\label{eq:J} \end{equation} \begin{theorem} \label{thm:SDP} For a given target POVM $E = \left\{ E_i \in \cM_d \right\}_{i=1}^m$ and $\lambda \in \left[0, 1\right]$, the optimization problem $\nu (E, \lambda)$ given in Eq.~(\ref{eq:OptProb}), can be formulated as a SDP $(\phi, C, D)$, where $\phi:\cM_{\hat{d}} \to \cM_{\check{d}}$ is a hermiticity preserving map, $C=C^\ast \in \cM_{\hat{d}}$ and $D=D^\ast \in \cM_{\check{d}}$, with dimensions $\hat{d} = (m+4)d^2+2(m+2)d$ and $\check{d} = 2+(m+2)d^2$. The primal and the dual SDP problem are given as follows: \\ \begin{equation*} \begin{split} \text{\emph{Primal SDP problem}} & \\ & \\ \text{maximize } \ \ & \tr{CX} \\ \text{subject to } \ \ & \begin{aligned}[t] &\phi(X) = D \\ &X \geq 0 \end{aligned} \end{split} \qquad \qquad \begin{split} \text{\emph{Dual SDP problem}} & \\ & \\ \text{minimize } \ \ & \tr{DY} \\ \text{subject to } \ \ & \begin{aligned}[t] &\phi^\ast(Y) \geq C \\ &Y = Y^\dagger \end{aligned} \end{split} \end{equation*} where the hermiticity preserving map $\phi:\cM_{\hat{d}} \to \cM_{\check{d}}$ is \bea \phi(X) & = & \tr{w_0} \oplus \tr{w_1} \oplus \left(A+Z_0-\1 \otimes w_0\right) \oplus \left(B+Z_1-\1 \otimes w_1\right) \oplus \nonumber \\ && \bigoplus_{i=1}^m \left( M+M^\ast + \1 \otimes \left(F_i-\widetilde{F}_i \right) +G_i+\1 \otimes \left(H-\widetilde{H}\right)\right), \eea with \bea X & := & \begin{pmatrix} A & M \\ M^\ast & B \end{pmatrix} \oplus w_0 \oplus w_1 \oplus Z_0 \oplus Z_1 \oplus \nonumber \\ & & \bigoplus_{i=1}^m F_i \oplus \bigoplus_{i=1}^m \widetilde{F}_i \oplus \bigoplus_{i=1}^m G_i \oplus H \oplus \widetilde{H}. 
\eea The adjoint of the map $\phi$ is \bea \phi^\ast(Y) &:=& \begin{pmatrix} Y_0 & \sum_{i=1}^m J(I_i) \\ \sum_{i=1}^m J(I_i) & Y_1 \end{pmatrix} \oplus \left( \lambda_0 \1 - \Tr_{1} \left[Y_0\right] \right) \oplus \left( \lambda_1 \1 - \Tr_{1}\left[ Y_1\right] \right) \oplus \nonumber \\ & & Y_0 \oplus Y_1 \oplus \bigoplus_{i=1}^m \Tr_{1} \left[J(I_i)\right] \oplus \bigoplus_{i=1}^m -\Tr_{1} \left[J(I_i)\right] \oplus \bigoplus_{i=1}^m J(I_i) \oplus \nonumber\\ & & \sum^m_{i=1} \Tr_{1}\left[J(I_i)\right] \oplus -\sum^m_{i=1} \Tr_{1}\left[J(I_i)\right], \eea with \begin{equation} Y := \lambda_0 \oplus \lambda_1 \oplus Y_0 \oplus Y_1 \oplus \bigoplus_{i=1}^m J(I_i). \end{equation} Furthermore, \begin{equation} D:= \frac{1}{2} \oplus \frac{1}{2} \oplus 0 \oplus 0 \oplus \bigoplus_{i=1}^m 0 \end{equation} and \bea C & := & \begin{pmatrix} 0 & J(\id) \\ J(\id) & 0 \end{pmatrix} \oplus 0 \oplus 0 \oplus 0 \oplus 0 \oplus \bigoplus_{i=1}^m \left(-\lambda \1 +E_i^T \right) \oplus \nonumber \\ & & \bigoplus_{i=1}^m \left(-\lambda \1 -E_i^T \right) \oplus \bigoplus_{i=1}^m 0 \oplus \1 \oplus -\1. \eea \end{theorem} \begin{proof} The diamond norm can be expressed as a SDP itself \cite{Watrous_2009, Watrous_2012}, \begin{align*} \norm{\id - \sum_{i=1}^m I_i}_\diamond = & && \min_{Y_0, Y_1 \in \cM_d \otimes \cM_d} \frac{1}{2} \left[ \norm{\Tr_{1}\left[ Y_0 \right]}_\infty + \norm{\Tr_1\left[ Y_1 \right]}_\infty \right] \\ & \text{ such that } && \begin{pmatrix} Y_0 & J\left( \id - \sum_{i=1}^m I_i \right) \\ J\left( \id - \sum^m_{i=1} I_i \right) & Y_1 \end{pmatrix} \geq 0 \ \ \text{ and} \nonumber \\ &&& Y_0,Y_1 \geq 0, \nonumber \end{align*} where $\Tr_1$ denotes the partial trace over the first system. Using Watrous' SDP for the diamond norm in the form of \cite[p.
11]{Watrous_2012} gives \begin{align*} \nu \left( E, \lambda \right) = & \text{ minimize } && \frac{1}{2}\left[ \lambda_0 +\lambda_1 \right] \\ & \text{ such that } && \begin{pmatrix} Y_0 & \sum_{i=1}^m J(I_i) \\ \sum_{i=1}^m J(I_i) & Y_1 \end{pmatrix} \geq \begin{pmatrix} 0 & J(\id) \\ J(\id) & 0 \end{pmatrix} \nonumber \\ &&& \lambda_0 \1 - \Tr_{1} \left[Y_0 \right] \geq 0 \nonumber \\ &&& \lambda_1 \1 - \Tr_{1} \left[Y_1 \right] \geq 0 \nonumber \\ &&& Y_0, Y_1 \geq 0 \nonumber \\ &&& \Tr_{1}\left[J(I_i)\right] \geq -\lambda \1 + E_i^T \ \ \forall i \nonumber \\ &&& -\Tr_{1}\left[J(I_i)\right] \geq -\lambda \1 - E_i^T \ \ \forall i \nonumber \\ &&& J(I_i) \geq 0 \ \ \forall i \nonumber \\ &&& \sum_{i=1}^m \Tr_{1}\left[J(I_i)\right] \geq \1 \nonumber \\ &&& -\sum_{i=1}^m \Tr_{1}\left[J(I_i)\right] \geq -\1. \nonumber \end{align*} We would like to write this as a SDP in the form \begin{align*} \text{minimize } \ \ & \tr{DY} \\ \text{subject to } \ \ &\phi^\ast(Y) \geq C, \\ &Y = Y^\dagger. \end{align*} Collecting all variables that we optimize over yields $Y \in \C \oplus \C \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_{d^2}$ as \begin{equation*} Y := \lambda_0 \oplus \lambda_1 \oplus Y_0 \oplus Y_1 \oplus \bigoplus_{i=1}^m J(I_i). \end{equation*} Furthermore, we set $D \in \C \oplus \C \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_{d^2}$ as \begin{equation*} D:= \frac{1}{2} \oplus \frac{1}{2} \oplus 0_{d^2} \oplus 0_{d^2} \oplus \bigoplus_{i=1}^m 0_{d^2}. 
\end{equation*} Similarly, set $\phi^\ast(Y) \in \cM_{2d^2} \oplus \cM_d \oplus \cM_d \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d$ to be \bea \phi^\ast(Y) &:=& \begin{pmatrix} Y_0 & \sum_{i=1}^m J(I_i) \\ \sum_{i=1}^m J(I_i) & Y_1 \end{pmatrix} \oplus \left( \lambda_0 \1_d - \Tr_{1} \left[Y_0\right] \right) \oplus \left( \lambda_1 \1_d - \Tr_{1}\left[ Y_1\right] \right) \oplus \nonumber \\ & & Y_0 \oplus Y_1 \oplus \bigoplus_{i=1}^m \Tr_{1} \left[J(I_i)\right] \oplus \bigoplus_{i=1}^m -\Tr_{1} \left[J(I_i)\right] \oplus \bigoplus_{i=1}^m J(I_i) \oplus \nonumber\\ & & \sum^m_{i=1} \Tr_{1}\left[J(I_i)\right] \oplus -\sum^m_{i=1} \Tr_{1}\left[J(I_i)\right],\nonumber \eea and we define $C \in \cM_{2d^2} \oplus \cM_d \oplus \cM_d \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d$ as \bea C & := & \begin{pmatrix} 0 & J(\id) \\ J(\id) & 0 \end{pmatrix} \oplus 0_d \oplus 0_d \oplus 0_{d^2} \oplus 0_{d^2} \oplus \bigoplus_{i=1}^m \left(-\lambda \1 +E_i^T \right) \oplus \nonumber \\ & & \bigoplus_{i=1}^m \left(-\lambda \1 -E_i^T \right) \oplus \bigoplus_{i=1}^m 0_{d^2} \oplus \1_d \oplus -\1_d. \nonumber \eea Therefore, the optimization problem $\nu(E,\lambda)$ is a SDP indeed. 
In order to state the dual SDP problem, define $X \in \cM_{2d^2} \oplus \cM_d \oplus \cM_d \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_d \oplus \bigoplus_{i=1}^m \cM_d$ to be \bea X & := & \begin{pmatrix} A & M \\ M^\ast & B \end{pmatrix} \oplus w_0 \oplus w_1 \oplus Z_0 \oplus Z_1 \oplus \nonumber \\ & & \bigoplus_{i=1}^m F_i \oplus \bigoplus_{i=1}^m \widetilde{F}_i \oplus \bigoplus_{i=1}^m G_i \oplus H \oplus \widetilde{H}.\nonumber \eea Using the fact that $\tr{\phi^\ast(Y)X} = \tr{Y\phi(X)}$ lets us construct $\phi$ such that $\phi(X) \in \C \oplus \C \oplus \cM_{d^2} \oplus \cM_{d^2} \oplus \bigoplus_{i=1}^m \cM_{d^2}$ is \bea \phi(X) & = & \tr{w_0} \oplus \tr{w_1} \oplus \left(A+Z_0-\1 \otimes w_0\right) \oplus \left(B+Z_1-\1 \otimes w_1\right) \oplus \nonumber \\ && \bigoplus_{i=1}^m \left( M+M^\ast + \1 \otimes \left(F_i-\widetilde{F}_i \right) +G_i+\1 \otimes \left(H-\widetilde{H}\right)\right).\nonumber \eea \end{proof} \begin{proposition} For the above SDP $\left(\phi, C, D\right)$ the Slater-type strong duality holds, such that \begin{equation} \sup_X \tr{CX} = \inf_Y \tr{DY}. \end{equation} \end{proposition} \begin{proof} There is an interior point $X > 0$ that fulfills $\phi(X) = D$ and a $Y=Y^\ast$ such that $\phi^\ast(Y) \geq C$. By Slater's theorem strong duality holds for the SDP $\left(\phi, C, D\right)$. \end{proof} Using Thm.~\ref{thm:SDP} it is therefore possible to explicitly state the SDP that yields the information-disturbance tradeoff curve for any general POVM in the case where the measurement-error is quantified by the worst-case $l_\infty$-distance and the disturbance is quantified by the diamond norm. 
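On the measurement-error side, the constraint $\norm{I_i^\ast(\1)-E_i}_\infty \leq \lambda$ and the distance $\delta_{l_\infty}$ admit a closed form: for Hermitian $A$, the supremum of $|\tr{\rho A}|$ over density matrices equals the spectral norm $\norm{A}_\infty$, so $\delta_{l_\infty} = \max_i \norm{E_i-E_i'}_\infty$. A minimal numpy sketch (the helper name and the noise model are ours, for illustration):

```python
import numpy as np

def delta_linf(E, E_prime):
    """Worst-case l_infinity distance between two POVMs E and E'.
    For Hermitian A, sup_rho |tr(rho A)| over density matrices equals the
    spectral norm of A, so the supremum over states can be evaluated in
    closed form as max_i ||E_i - E_i'||_inf."""
    return max(np.linalg.norm(Ei - Epi, ord=2)  # ord=2: largest singular value
               for Ei, Epi in zip(E, E_prime))

# Example: a projective qubit measurement versus a noisy version
# E'_i = t E_i + (1 - t) 1/2 (an illustrative noise model).
t = 0.7
E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
E_noisy = [t * Ei + (1 - t) * np.eye(2) / 2 for Ei in E]
# Here delta_linf(E, E_noisy) = (1 - t)/2 = 0.15.
```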
\begin{figure}[ht] \centering \includegraphics[clip, trim=4cm 9cm 4cm 9cm, width=0.900\textwidth]{Plot_SDP_SIC_POVM_d2.pdf} \caption{The information-disturbance tradeoff for a qubit SIC POVM target measurement.} \label{fig:SDP_dim2_4_SIC_POVM} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, trim=4cm 9cm 4cm 9cm, width=0.900\textwidth]{Plot_SDP_SIC_POVM_d3.pdf} \caption{The information-disturbance tradeoff for a qutrit SIC POVM target measurement.} \label{fig:SDP_dim3_9_SIC_POVM} \end{figure} \vspace*{5pt} \paragraph{\bf SIC POVM} As it is a prominent object in various fields of quantum information theory, this section analyzes the example of a symmetric informationally complete (SIC) POVM as the target measurement. A SIC POVM is defined by a set of $d^2$ subnormalized rank-$1$ projectors $\left\{P_i/d\right\}_{i=1}^{d^2}$, which have equal pairwise Hilbert-Schmidt inner products, $\tr{P_iP_j}/d^2=1/\big(d^2(d+1)\big)$ for $i \neq j$. Figures~\ref{fig:SDP_dim2_4_SIC_POVM} and \ref{fig:SDP_dim3_9_SIC_POVM} show the information-disturbance tradeoff for a qubit and a qutrit SIC POVM as the target measurement, respectively. In two dimensions, we considered the SIC POVM represented by the four Bloch vectors $(0,0,1)$, $(2\sqrt{2}/3, 0,-1/3)$, $(-\sqrt{2}/3, \sqrt{2/3},-1/3)$ and $(-\sqrt{2}/3, -\sqrt{2/3},-1/3)$. In dimension three, the nine explicit (unnormalized) vectors of the SIC POVM under consideration are $(0,1,-1)$, $(0,1,-\eta)$, $(0,1,-\eta^2)$, $(-1,0,1)$, $(-\eta,0,1)$, $(-\eta^2,0,1)$, $(1,-1,0)$, $(1,-\eta,0)$ and $(1, -\eta^2,0)$ with $\eta = \exp(2\pi i/3)$. To solve the SDP stated in Thm.~\ref{thm:SDP} for this particular example, we used cvx, a package for specifying and solving convex programs \cite{cvx, Grant_Boyd_2008} in {MATLAB} \cite{Matlab}.
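The SIC property of the qubit Bloch vectors listed above can be verified directly: with $P_i=(\1+\vec{b}_i\cdot\vec{\sigma})/2$, the elements $E_i=P_i/2$ sum to the identity and all pairwise overlaps equal $1/\big(d^2(d+1)\big)=1/12$. A minimal numpy sketch:

```python
import itertools
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Bloch vectors of the qubit SIC POVM from the text
bloch = [
    (0.0, 0.0, 1.0),
    (2 * np.sqrt(2) / 3, 0.0, -1 / 3),
    (-np.sqrt(2) / 3, np.sqrt(2 / 3), -1 / 3),
    (-np.sqrt(2) / 3, -np.sqrt(2 / 3), -1 / 3),
]

d = 2
P = [(np.eye(2) + bx * sx + by * sy + bz * sz) / 2 for bx, by, bz in bloch]
E = [Pi / d for Pi in P]  # subnormalized projectors P_i / d

# The POVM elements sum to the identity ...
assert np.allclose(sum(E), np.eye(2))
# ... and all pairwise overlaps are tr(P_i P_j)/d^2 = 1/(d^2 (d+1)) = 1/12.
for i, j in itertools.combinations(range(4), 2):
    assert np.isclose(np.trace(E[i] @ E[j]).real, 1 / 12)
```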
The solution of the SDP is compared to an instrument similar to the one found in Thm.~\ref{thm:universality}, consisting of the POVM $E' = tE+(1-t) \1/d$, $t\in [0,1]$, together with the L\"uders channel; the two curves are found to agree. The symmetry of the SIC POVM most likely leads to this agreement. However, further investigation would be necessary to get a better understanding of this observation. \section*{Acknowledgment} The authors would like to thank Teiko Heinosaari for many useful comments. AKH's work is supported by the Elite Network of Bavaria through the PhD program of excellence \textit{Exploring Quantum Matter}. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915. \newpage \appendix \section*{Appendix} \paragraph{\bf Proof that average- and worst-case constructions satisfy Assumption~\ref{assum:1} and Assumption~\ref{assum:2}.} \begin{lemma} If $\tilde{\Delta}:\cS_d\times\cS_d\rightarrow[0,\infty]$ satisfies \begin{enumerate}[(i)] \item $\tilde{\Delta}(\rho,\rho)=0$, \item convexity in its first argument and \item unitary invariance, \end{enumerate} then the worst-case as well as the average-case construction \begin{eqnarray*} \Delta_\infty(\Phi)&:=& \sup_{\rho\in S} \tilde{\Delta}\left(\Phi(\rho),\rho\right) \ \ \text{ and}\\ \Delta_\mu(\Phi)&:=& \int_{\cS_d} \tilde{\Delta}\left(\Phi(\rho),\rho\right)\; \mathrm{d}\mu(\rho), \end{eqnarray*} with $\mu$ a unitarily invariant measure on $\cS_d$ and $S\subseteq\cS_d$ a unitarily closed subset, satisfy Assumption~\ref{assum:1}.
\end{lemma} \begin{proof} Let $\tilde{\Delta}:\cS_d\times\cS_d\rightarrow[0,\infty]$ be such that it \begin{enumerate}[(i)] \item satisfies $\tilde{\Delta}(\rho,\rho)=0$, \item is convex in its first argument, i.e., for any quantum state $\sigma, \sigma', \rho \in \cS_d$ \be \tilde{\Delta}\left( \lambda \sigma + (1-\lambda) \sigma' , \rho \right) \leq \lambda \tilde{\Delta} \left(\sigma, \rho \right) + (1-\lambda) \tilde{\Delta} \left( \sigma', \rho \right) \ \ \forall \lambda \in \left[0,1\right], \nonumber \ee \item and is unitarily invariant, i.e., for any quantum state $\sigma, \rho \in \cS_d$ \be \tilde{\Delta}\left( U^\ast \sigma U, U^\ast \rho U \right) = \tilde{\Delta}\left( \sigma, \rho \right) \ \ \forall \text{ unitaries } U \in \cM_d. \nonumber \ee \end{enumerate} Then its worst case $\Delta_\infty$ satisfies \begin{enumerate}[(a)] \item $\Delta_\infty(\id) = 0$, since \begin{equation} \Delta_\infty(\id) = \sup_{\rho\in S} \tilde{\Delta}\left(\id(\rho),\rho\right) = \sup_{\rho\in S} \tilde{\Delta}\left(\rho,\rho\right) = 0,\nonumber \end{equation} \item is convex, i.e., for every quantum channel $\Phi, \Phi' \in \cT_d$ \begin{equation} \Delta_\infty \left( \lambda \Phi + (1-\lambda) \Phi' \right) \leq \lambda \Delta_\infty \left( \Phi \right) + (1-\lambda) \Delta_\infty \left( \Phi' \right) \ \ \forall \lambda \in \left[0,1\right],\nonumber \end{equation} because \begin{eqnarray*} \Delta_\infty \left( \lambda \Phi + (1-\lambda) \Phi'\right) &=& \sup_{\rho\in S} \tilde{\Delta}\left(\lambda \Phi(\rho) + (1-\lambda) \Phi'(\rho),\rho\right) \\ &\leq& \lambda \sup_{\rho\in S} \tilde{\Delta}\left( \Phi(\rho), \rho \right) + (1-\lambda) \sup_{\rho\in S} \tilde{\Delta}\left(\Phi'(\rho),\rho\right) \\ &=& \lambda \Delta_\infty \left( \Phi \right) + (1-\lambda) \Delta_\infty \left( \Phi' \right), \end{eqnarray*} \item and is basis-independent, i.e., for every unitary $U \in \cM_d$ and every channel $\Phi \in \cT_d$, we have that \begin{equation} \Delta_\infty \left(U
\Phi\left(U^\ast \cdot U \right) U^\ast \right) = \Delta_\infty(\Phi),\nonumber \end{equation} since \begin{eqnarray*} \Delta_\infty \left(U \Phi\left(U^\ast \cdot U \right) U^\ast \right) &=& \sup_{\rho\in S} \tilde{\Delta}\left(U \Phi\left(U^\ast \rho U \right) U^\ast ,\rho\right) \\ &=& \sup_{\rho\in S} \tilde{\Delta}\left( \Phi\left(U^\ast \rho U \right) ,U^\ast\rho U \right) \\ &=& \sup_{\rho\in S} \tilde{\Delta}\left( \Phi\left( \rho \right) ,\rho \right) \\ &=& \Delta_\infty(\Phi). \end{eqnarray*} \end{enumerate} The average case $\Delta_\mu$ satisfies \begin{enumerate}[(a)] \item $\Delta_\mu(\id) = 0$, since \begin{equation} \Delta_\mu(\id) = \int_{\cS_d} \tilde{\Delta}\left(\id(\rho),\rho\right)\; \mathrm{d}\mu(\rho) = \int_{\cS_d} \tilde{\Delta}\left(\rho,\rho\right)\; \mathrm{d}\mu(\rho) = 0,\nonumber \end{equation} \item is convex, i.e., for every quantum channel $\Phi, \Phi' \in \cT_d$ \begin{equation} \Delta_\mu \left( \lambda \Phi + (1-\lambda) \Phi' \right) \leq \lambda \Delta_\mu \left( \Phi \right) + (1-\lambda) \Delta_\mu \left( \Phi' \right) \ \ \forall \lambda \in \left[0,1\right],\nonumber \end{equation} because \begin{eqnarray*} \Delta_\mu \left( \lambda \Phi + (1-\lambda) \Phi'\right) &=& \int_{\cS_d} \tilde{\Delta}\left(\lambda \Phi(\rho) + (1-\lambda) \Phi'(\rho),\rho\right)\; \mathrm{d}\mu(\rho) \\ &\leq& \lambda \int_{\cS_d} \tilde{\Delta}\left( \Phi(\rho), \rho \right) \; \mathrm{d}\mu(\rho) + (1-\lambda) \int_{\cS_d} \tilde{\Delta} \left(\Phi'(\rho),\rho\right) \; \mathrm{d}\mu(\rho)\\ &=& \lambda \Delta_\mu \left( \Phi \right) + (1-\lambda) \Delta_\mu \left( \Phi' \right), \end{eqnarray*} \item and is basis-independent, i.e., for every unitary $U \in \cM_d$ and every channel $\Phi \in \cT_d$, we have that \begin{equation} \Delta_\mu \left(U \Phi\left(U^\ast \cdot U \right) U^\ast \right) = \Delta_\mu(\Phi),\nonumber \end{equation} since \begin{eqnarray*} \Delta_\mu \left(U \Phi\left(U^\ast \cdot U \right) U^\ast \right) &=& 
\int_{\cS_d} \tilde{\Delta}\left(U \Phi\left(U^\ast \rho U \right) U^\ast ,\rho\right)\; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\Delta}\left( \Phi\left(U^\ast \rho U \right) ,U^\ast\rho U \right)\; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\Delta}\left( \Phi\left( \rho \right) ,\rho \right)\; \mathrm{d}\mu(\rho) \\ &=& \Delta_\mu(\Phi), \end{eqnarray*} where we have used the fact that $\mu$ is a unitarily invariant measure on $\cS_d$. \end{enumerate} The worst-case construction as well as the average-case construction therefore satisfy Assumption~\ref{assum:1} as claimed. \end{proof} \begin{lemma} If $\tilde{\delta}:\cP_d\times\cP_d\rightarrow[0,\infty]$ on the space of probability distributions $\cP_d:=\big\{q\in\R^d|\sum_{i=1}^d q_i=1\wedge \forall i: q_i\geq 0\big\}$ applied to the target distribution $p_i:=\langle i|\rho|i\rangle$ and the actually measured distribution $p_i':=\tr{\rho E_i'}$ satisfies \begin{enumerate}[(i)] \item $\tilde{\delta}(q,q)=0$, \item convexity in its second argument and \item invariance under joint permutations, \end{enumerate} then the worst-case as well as the average-case construction \bea \delta_{\infty}(E')&:=&\sup_{\rho\in S} \tilde{\delta}(p,p'),\nonumber\\ \delta_{\mu}(E')&:=&\int_{\cS_d} \tilde{\delta}(p,p') \; \mathrm{d}\mu(\rho),\nonumber \eea both satisfy Assumption~\ref{assum:2}. 
\end{lemma} \begin{proof} Let $\tilde{\delta}:\cP_d\times\cP_d\rightarrow[0,\infty]$ be such that it \begin{enumerate}[(i)] \item satisfies $\tilde{\delta}(q,q)=0$, \item is convex in its second argument, i.e., for every probability distribution $p,q,q' \in \cP_d$ \begin{equation} \tilde{\delta}( p , \lambda q + (1-\lambda) q') \leq \lambda \tilde{\delta}(p,q) +(1-\lambda) \tilde{\delta}(p,q')\ \ \forall \lambda \in [0,1],\nonumber \end{equation} \item and is invariant under joint permutations, i.e., for every quantum state $\rho \in \cS_d$ and every POVM $E,E' \in \cE_d$ \begin{equation} \tilde{\delta}\left( \left( \tr{\rho U_{\pi}^\ast E_{\pi(i)} U_\pi } \right)_{i=1}^d, \left( \tr{\rho U_\pi^\ast E_{\pi(i)}' U_\pi} \right)_{i=1}^d \right) = \tilde{\delta}\left( \left( \tr{\rho E_{i} } \right)_{i=1}^d, \left( \tr{\rho E_{i}' } \right)_{i=1}^d \right).\nonumber \end{equation} \end{enumerate} Then its worst case $\delta_\infty$ satisfies \begin{enumerate}[(a)] \item $\delta_\infty \left( \left( \kb{i}{i} \right)_{i=1}^d \right) = 0$, since \begin{equation*} \delta_\infty \left( \left( \kb{i}{i} \right)_{i=1}^d \right) = \sup_{\rho\in S} \tilde{\delta}\left(\left( \tr{\rho\kb{i}{i}} \right)_{i=1}^d ,\left( \tr{\rho\kb{i}{i}} \right)_{i=1}^d \right)=0, \end{equation*} \item is convex, i.e., for any POVM $Q,Q' \in \cE_d$ \begin{equation*} \delta_\infty \left( \lambda Q + (1-\lambda)Q' \right) \leq \lambda \delta_\infty \left( Q \right) + (1-\lambda)\delta_\infty \left( Q' \right) \ \ \forall \lambda \in [0,1], \end{equation*} because \begin{eqnarray*} \delta_\infty \left( \lambda Q + (1-\lambda)Q' \right) &=& \sup_{\rho\in S} \tilde{\delta}(p,\lambda q+(1-\lambda)q') \\ &\leq& \lambda \sup_{\rho\in S} \tilde{\delta}(p, q) + (1-\lambda) \sup_{\rho\in S} \tilde{\delta}(p,q') \\ &=& \lambda \delta_\infty \left( Q \right) + (1-\lambda)\delta_\infty \left( Q' \right), \end{eqnarray*} where we have denoted the corresponding probability distributions as $q_i:= \tr{\rho Q_i}$ and $q_i':= \tr{\rho
Q_i'}$. \item is permutation-invariant, i.e., for every permutation $\pi\in S_d$ and any POVM $E \in\cE_d$ \be \delta_\infty \left(\left( U_\pi^\ast E_{\pi(i)} U_\pi \right)_{i=1}^d \right)=\delta_\infty \left( \left(E_{i}\right)_{i=1}^d \right), \nonumber \ee where $U_\pi$ is the permutation matrix that acts as $U_\pi |i\rangle=|\pi(i)\rangle$, since \begin{eqnarray*} \delta_\infty \left(\left( U_\pi^\ast E_{\pi(i)} U_\pi \right)_{i=1}^d\right) &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{\rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{\rho U_\pi^\ast E_{\pi(i)} U_\pi} \right)_{i=1}^d\right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{\rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{U_\pi \rho U_\pi^\ast E_{\pi(i)} } \right)_{i=1}^d\right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{U_\pi^\ast \rho U_\pi \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{ \rho U_\pi \kb{i}{i}U_\pi^\ast} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{ \rho \kb{\pi(i)}{\pi(i)}} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left(\left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d\right) \\ &=& \delta_\infty\left(\left(E_{i}\right)_{i=1}^d\right), \end{eqnarray*} \item and it satisfies for every diagonal unitary $D\in\cM_d$ and any POVM $E\in\cE_d$ \be \delta_\infty \left( (D^\ast E_i D)_{i=1}^d \right)=\delta_\infty \left( (E_i)_{i=1}^d \right), \nonumber \ee because \begin{eqnarray*} \delta_\infty \left( (D^\ast E_i D)_{i=1}^d \right) &=& \sup_{\rho\in S} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho D^\ast E_{i} D } \right)_{i=1}^d \right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ D \rho D^\ast E_{i} } \right)_{i=1}^d \right) \\ &=& 
\sup_{\rho\in S} \tilde{\delta}\left( \left(\tr{ D \rho D^\ast \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left( \left(\tr{ \rho D^\ast \kb{i}{i}D} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \\ &=& \sup_{\rho\in S} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \\ &=& \delta_\infty \left( (E_i)_{i=1}^d \right). \end{eqnarray*} \end{enumerate} Similarly, its average-case version $\delta_\mu$ satisfies \begin{enumerate}[(a)] \item $\delta_\mu \left( \left( \kb{i}{i} \right)_{i=1}^d \right) = 0$, since \begin{equation*} \delta_\mu \left( \left( \kb{i}{i} \right)_{i=1}^d \right) = \int_{\cS_d} \tilde{\delta}\left(\left( \tr{\rho \kb{i}{i}} \right)_{i=1}^d ,\left( \tr{\rho \kb{i}{i}} \right)_{i=1}^d \right)\; \mathrm{d}\mu(\rho)=0, \end{equation*} \item is convex, i.e., for any POVMs $Q,Q' \in \cE_d$ \begin{equation*} \delta_\mu \left( \lambda Q + (1-\lambda)Q' \right) \leq \lambda \delta_\mu \left( Q \right) + (1-\lambda)\delta_\mu \left( Q' \right) \ \ \forall \lambda \in [0,1], \end{equation*} because \begin{eqnarray*} \delta_\mu \left( \lambda Q + (1-\lambda)Q' \right) &=& \int_{\cS_d} \tilde{\delta}(p,\lambda q+(1-\lambda)q')\; \mathrm{d}\mu(\rho) \\ &\leq& \lambda \int_{\cS_d} \tilde{\delta}(p, q) \; \mathrm{d}\mu(\rho) + (1-\lambda)\int_{\cS_d} \tilde{\delta}(p,q')\; \mathrm{d}\mu(\rho) \\ &=& \lambda \delta_\mu \left( Q \right) + (1-\lambda)\delta_\mu \left( Q' \right), \end{eqnarray*} where we have denoted the corresponding probability distributions by $p_i:=\tr{\rho \kb{i}{i}}$, $q_i:= \tr{\rho Q_i}$ and $q_i':= \tr{\rho Q_i'}$. \item is permutation-invariant, i.e., 
for every permutation $\pi\in S_d$ and any $E\in\cE_d$ \be \delta_\mu \left(\left( U_\pi^\ast E_{\pi(i)} U_\pi \right)_{i=1}^d \right)=\delta_\mu \left( \left(E_{i}\right)_{i=1}^d \right) \nonumber \ee where $U_\pi$ is the permutation matrix that acts as $U_\pi |i\rangle=|\pi(i)\rangle$, since \begin{eqnarray*} \delta_\mu \left(\left( U_\pi^\ast E_{\pi(i)} U_\pi \right)_{i=1}^d\right) &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{\rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{\rho U_\pi^\ast E_{\pi(i)} U_\pi} \right)_{i=1}^d\right) \; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{\rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{U_\pi \rho U_\pi^\ast E_{\pi(i)} } \right)_{i=1}^d\right) \; \mathrm{d}\mu(\rho)\\ &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{U_\pi^\ast \rho U_\pi \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right)\; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{ \rho U_\pi \kb{i}{i}U_\pi^\ast} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right) \; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{ \rho \kb{\pi(i)}{\pi(i)}} \right)_{i=1}^d,\left(\tr{ \rho E_{\pi(i)} } \right)_{i=1}^d\right)\; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left(\left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d\right) \; \mathrm{d}\mu(\rho) \\ &=& \delta_\mu \left(\left(E_{i}\right)_{i=1}^d\right), \end{eqnarray*} \item and it satisfies for every diagonal unitary $D\in\cM_d$ and any $E\in\cE_d$ \be \delta_\mu \left( (D^\ast E_i D)_{i=1}^d \right)=\delta_\mu \left( (E_i)_{i=1}^d \right), \nonumber \ee because \begin{eqnarray*} \delta_\mu \left( (D^\ast E_i D)_{i=1}^d \right) &=& \int_{\cS_d} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho D^\ast E_{i} D } \right)_{i=1}^d \right)\; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ D \rho 
D^\ast E_{i} } \right)_{i=1}^d \right) \; \mathrm{d}\mu(\rho)\\ &=& \int_{\cS_d} \tilde{\delta}\left( \left(\tr{ D \rho D^\ast \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \; \mathrm{d}\mu(\rho) \\ &=& \int_{\cS_d} \tilde{\delta}\left( \left(\tr{ \rho D^\ast \kb{i}{i}D} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \; \mathrm{d}\mu(\rho)\\ &=& \int_{\cS_d} \tilde{\delta}\left( \left(\tr{ \rho \kb{i}{i}} \right)_{i=1}^d,\left(\tr{ \rho E_{i} } \right)_{i=1}^d \right) \; \mathrm{d}\mu(\rho)\\ &=& \delta_\mu \left( (E_i)_{i=1}^d \right). \end{eqnarray*} \end{enumerate} Both the worst-case and the average-case constructions therefore satisfy Assumption~\ref{assum:2}. \end{proof} \paragraph{\bf Proof of Corollary~\ref{cor:reduc}.} \begin{proof} The eigenvalues of $J_{T_i}$, $i=1,2$, can be obtained from the expectation values of the mutually orthogonal projectors, i.e., \be x_1=\frac{\tr{(P_x\otimes\1) J_T}}{\tr{P_x}}\quad\text{and}\quad x_2=\frac{\tr{(\1\otimes P_x) J_T}}{\tr{P_x}},\quad x\in\{a,b,c\}. \nonumber \ee Since we know that $a,b,c$ are related to $\alpha,\beta,\gamma$ via $\alpha=d^2 a,\ \beta=b-c,\ \gamma=d(c-a)$, we get \begin{eqnarray} \alpha_1 &=& d^2 \frac{\tr{(P_a\otimes\1) J_T}}{\tr{P_a}} \nonumber \\ &=& d^2 \frac{\tr{\left( \1_{d^3} - \sum_{i=1}^d |ii\rangle\langle ii| \otimes \1_d \right) J_T}}{d^2-d}. 
\nonumber \end{eqnarray} Similarly, \begin{eqnarray} \beta_1 &=& \frac{\tr{(P_b\otimes\1) J_T}}{\tr{P_b}} - \frac{\tr{(P_c\otimes\1) J_T}}{\tr{P_c}} \nonumber \\ &=& \frac{\tr{\left(\frac1d\sum_{i,j=1}^d |ii\rangle\langle jj|\otimes\1_d \right) J_T}}{1} \nonumber \\ && - \frac{\tr{\left(\sum_{i=1}^d |ii\rangle\langle ii| \otimes \1_d - \frac1d\sum_{i,j=1}^d |ii\rangle\langle jj| \otimes \1_d \right) J_T}}{d-1}, \nonumber \end{eqnarray} and \begin{eqnarray} a_2 &=& \frac{\tr{(\1\otimes P_a) J_T}}{\tr{P_a}} \nonumber \\ &=& \frac{\tr{\left(\1_{d^3}-\1_d \otimes \sum_{i=1}^d |ii\rangle\langle ii|\right) J_T}}{d^2-d}. \nonumber \end{eqnarray} Using the diagrammatic notation introduced earlier, i.e., \begin{align*} \1_{d^3} &=:\ \Ga & \1_d\otimes\sum_{i=1}^d |ii\rangle\langle ii| &=:\ \Gb\\ \sum_{i,j=1}^d |ii\rangle\langle jj|\otimes\1_d &=:\ \Gc & \sum_{i=1}^d |ii\rangle\langle ii|\otimes\1_d &=:\ \Gd \end{align*} together with the isomorphic representation from Lemma~\ref{lem:iso}, the claim follows immediately. \end{proof} \bibliography{Information_Disturbance_Tradeoff_Literature}{} \bibliographystyle{ieeetr} \end{document}
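The hypotheses (i)–(iii) imposed on $\tilde{\delta}$ in the lemma above are easy to sanity-check numerically for a concrete choice of distance. The following Python sketch is an illustration added here, not part of the paper: it takes the total-variation distance on probability vectors as one admissible example of such a $\tilde{\delta}$ and verifies that it vanishes on the diagonal, is convex in its second argument, and is invariant under a joint permutation of both arguments.

```python
# Illustrative check (hypothetical example, not from the paper): the
# total-variation distance tv(p, q) = (1/2) * sum_i |p_i - q_i| satisfies
# hypotheses (i)-(iii) of the lemma on the level of probability vectors.
import itertools
import random

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def rand_dist(d):
    w = [random.random() for _ in range(d)]
    s = sum(w)
    return [x / s for x in w]

random.seed(0)
d = 4
p, q, q2 = rand_dist(d), rand_dist(d), rand_dist(d)

# (i) tv(q, q) = 0
assert tv(q, q) == 0.0

# (ii) convexity in the second argument
for lam in (0.0, 0.25, 0.5, 0.9, 1.0):
    mix = [lam * a + (1 - lam) * b for a, b in zip(q, q2)]
    assert tv(p, mix) <= lam * tv(p, q) + (1 - lam) * tv(p, q2) + 1e-12

# (iii) invariance under a joint permutation of both arguments
for perm in itertools.permutations(range(d)):
    pp = [p[i] for i in perm]
    qp = [q[i] for i in perm]
    assert abs(tv(pp, qp) - tv(p, q)) < 1e-12
```

Any $\tilde{\delta}$ passing these three checks on probability vectors inherits, by the lemma, properties (a)–(d) for both $\delta_\infty$ and $\delta_\mu$.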
NorthLink Ferries Scoops National Fair Trade Award Tuesday 1st December 2015 NorthLink Ferries has been named as the joint winner of a national award in recognition of the company’s commitment to Fair Trade. The award was presented at the Scottish Fair Trade Awards to the Northern Isles ferry service for its partnership with Aberdeen FairTrade Steering Group and Orkney Fair Trade Group. The Campaign of the Year certificate was awarded to Peter Hutchinson, Customer Service Director at NorthLink Ferries, for the integrated campaign, which saw Aberdeen’s Fair Trade banana mascot travel to Orkney on MV Hamnavoe. NorthLink Ferries’ commitments to Fair Trade include the provision of Fairtrade coffee, tea and sugar sourced from Aberdeen-based Caber Coffee, Fairtrade bananas supplied by JW Gray in Kirkwall and Strachans in Aberdeen, and the introduction of new staff polo shirts manufactured from Fairtrade cotton. Speaking of the award win, Peter said: “We are extremely pleased to be recognised for our commitment to using Fair Trade on board. “Working collaboratively with Aberdeen Fairtrade Steering Group and Orkney Fair Trade Group, the partnership is a great achievement for all involved and I would like to thank both groups for their support and hard work in helping us become a Fair Trade organisation.” As a testament to the company’s well-developed ethical business policy, NorthLink Ferries was named as Aberdeen’s flagship business that supports Fair Trade for 2015 earlier this year. Gill Smee, Chair of Orkney Fair Trade Group, added: “It is great to work with fellow Fair Trade enthusiasts and to see NorthLink Ferries rewarded. 
The company has put a lot of effort into buying fair and local, sourcing many of their hotel goods from both Orkney and Fairtrade suppliers.” Sue Good, Chair of Aberdeen Fairtrade Steering Group, said: “We were very happy to name NorthLink Ferries as Aberdeen’s flagship business for 2015 and are now delighted that the exploits of our mascot, FT Banana, have been recognised as raising awareness of Fair Trade activity in Scotland.” For more information about NorthLink Ferries please visit
Use this animation maker to create a text animation featuring your business's logo! You don't need any design software to customize this text animation for your brand. Simply type your text into the online animation maker, select colors for the different elements, and upload your image or logo. The expanding circles, shrinking text inside of a frame, and logo reveal will impress your customers! Use this animation maker to create text animations for your social media feeds, video intros, or for important presentations! Take a look at this intro video maker by Placeit, featuring a reflective white space in which your logo design appears from the side, floating. This logo animation maker is an excellent choice for you to promote your latest brand in a smarter way! Start now; all you need to do is drag-and-drop your design image file over the interface. Easy, wasn't it? Try another Logo Video Maker! Use this motion graphics maker to create a text animation for Instagram posts, websites, business presentations, or YouTube videos. Stop struggling with After Effects and easily customize this text animation template in seconds! Simply enter your text and select the colors for the background and the graphics. Try this other Text Animation Maker with Geometric Graphics for Corporate Intros! Are you part of a CrossFit gym or fitness center that takes its athletes to the edge? With this text animation maker with jerk boxes, announce your sports event and invite them to workouts for beginners or pros. Just upload your logo, type in your own text and customize the background and graphics colors. Use your video on your Instagram Stories or Facebook profile! Use this animation maker to create your own text animation for your Instagram Stories or Facebook feed! This animation template features a person typing a message on their phone. To customize it, enter your own text and pick a color for every element. 
Click on generate to watch a preview, and proceed to download your custom animated video! Want to try something different? Use this Text Animation Maker to Create a Vertical Loop Video with Abstract Shapes! This text animation app is an excellent tool for you to create animated videos in a simple way! You just have to edit the text and background with the menu on the left side of the page. Then click on the Generate button and see the result! Use Placeit's text animation video maker to create an online text animation in a simple way. Amaze your audience with Placeit's motion graphics template. Try another Kinetic Typography Generator! Take a look at this awesome logo reveal by Placeit; it features a logo coming out of a spinning liquid and then being absorbed by it. Use it now and showcase your logo intro in a more spectacular way! You just have to drag-and-drop your jpeg or png image file over the interface, and we will process it for you in just a moment. Don't forget to choose a color for the video. Wasn't that cool? Try a Logo Intro!
Pediatric Dentist - Torrington, Dr. Lawrence Y. Lee, DDS, MS 148 Migeon Ave, Torrington, CT 06790 To request appointment availability, please call our office at (860) 482-9578. Questions or Comments? We encourage you to contact us whenever you have an interest or concern about our services.
\begin{document} \title{Local convergence of Newton's method under majorant condition} \author{ O. P. Ferreira\thanks{IME/UFG, CP-131, CEP 74001-970 - Goi\^ania, GO, Brazil ({\tt orizon@mat.ufg.br}). The author was supported by CNPq Grants 302618/2005-8 and 475647/2006-8, PRONEX--Optimization(FAPERJ/CNPq) and FUNAPE/UFG.} } \date{May 13, 2009} \maketitle \begin{abstract} A local convergence analysis of Newton's method for solving nonlinear equations, under a majorant condition, is presented in this paper. Without assuming convexity of the derivative of the majorant function, which relaxes the Lipschitz condition on the operator under consideration, we establish convergence, the largest region of uniqueness of the solution, the optimal convergence radius and results on the convergence rate. In addition, two special cases of the general theory are presented as applications. \noindent {\bf Keywords:} Newton's method, majorant condition, local convergence, Banach space. \end{abstract} \section{Introduction}\label{sec:int} Newton's method and its variants are powerful tools for solving nonlinear equations in real or complex Banach spaces. In the last few years, a number of papers have dealt with the local and semi-local convergence analysis of Newton's method and its variants by relaxing the assumption of Lipschitz continuity of the derivative of the function which defines the nonlinear equation under consideration; see \cite{C08}, \cite{C10}, \cite{Cl08}, \cite{F08}, \cite{FG09}, \cite{FS09}, \cite{Hu04}, \cite{W00}, \cite{W03}. In \cite{F08} and \cite{W00}, under a majorant condition and a generalized Lipschitz condition, respectively, local convergence, the quadratic rate and an estimate of the best possible convergence radius of Newton's method were established, as well as uniqueness of the solution for the nonlinear equation in question. 
In the analysis presented in \cite{F08}, \emph{convexity} of the derivative of the scalar majorant function was assumed, while in \cite{W00} the positive integrable function which defines the generalized Lipschitz condition was assumed to be {\it nondecreasing}. These assumptions seem natural in the local analysis of Newton's method. Nevertheless, convergence, uniqueness, the superlinear rate and an estimate of the best possible convergence radius will be established in this paper without assuming convexity of the derivative of the majorant function or that the function which defines the generalized Lipschitz condition is nondecreasing. In particular, this analysis shows that these two assumptions are needed only to obtain the quadratic convergence rate of the sequence generated by Newton's method. Also, as in \cite{F08}, the analysis presented here provides a clear relationship between the majorant function and the nonlinear operator under consideration. Besides improving the convergence theory, this analysis permits us to obtain two important special cases, namely \cite{Hu04} and \cite{W03} (see also \cite{F08} and \cite{W00}), as applications. It is worth pointing out that the majorant condition used here is equivalent to Wang's condition (see \cite{W03}) whenever the derivative of the majorant function is convex. The organization of the paper is as follows. In Section \ref{sec:int.1}, some notation and a basic result used in the paper are presented. In Section \ref{lkant}, the main result is stated, and in Section \ref{sec:PR} some properties of the majorant function are established and the main relationships between the majorant function and the nonlinear operator used in the paper are presented. In Section \ref{sec:UnOpBall}, the uniqueness of the solution and the optimal convergence radius are obtained. 
In Section \ref{sec:proof} the main result is proved, and two applications of this result are given in Section \ref{apl}. Some final remarks are made in Section~\ref{fr}. \subsection{Notation and auxiliary results} \label{sec:int.1} The following notation and results are used throughout our presentation. Let $\banacha$,\, $\banachb$ be Banach spaces. The open and closed balls with center $x$ and radius $\delta$ are denoted, respectively, by $$ B(x,\delta) = \{ y\in X ;\; \|x-y\|<\delta \}\;\;\; \mbox{and}\;\;\;B[x,\delta] = \{ y\in X ;\; \|x-y\|\leqslant \delta \}. $$ Let $\Omega\subseteq \banacha$ be an open set. The Fr\'echet derivative of $F:{\Omega}\to \banachb$ at $x$ is the linear map $F'(x):\banacha \to \banachb$. \begin{lemma}[Banach's Lemma] \label{lem:ban} Let $B:\banacha \to \banacha$ be a bounded linear operator. If $I:\banacha \to \banacha$ is the identity operator and $\|B-I\|<1$, then $B$ is invertible and $ \|B^{-1}\|\leq 1/\left(1- \|B-I\|\right). $ \end{lemma} \section{Local analysis for Newton's method } \label{lkant} Our goal is to state and prove a local convergence theorem for Newton's method which generalizes Theorem~2.1 of \cite{F08}. First, we will prove some results regarding the scalar majorant function, which relaxes the Lipschitz condition. Then we will establish the main relationships between the majorant function and the nonlinear function. We will also prove the uniqueness of the solution in a suitable region and the optimal ball of convergence. Finally, we will show the well-definedness of Newton's method and its convergence; results on the convergence rates will also be given. The statement of the theorem~is: \begin{theorem}\label{th:nt} Let $\banacha$, $\banachb$ be Banach spaces, $\Omega\subseteq \banacha$ an open set and $F:{\Omega}\to \banachb$ a continuously differentiable function. Let $x_*\in \Omega$, $R>0$ and $\kappa:=\sup\{t\in [0, R): B(x_*, t)\subset \Omega\}$. 
Suppose that $F(x_*)=0$, $F '(x_*)$ is invertible and there exists a continuously differentiable function $f:[0,\; R)\to \mathbb{R}$ such~that \begin{equation}\label{Hyp:MH} \left\|F'(x_*)^{-1}\left[F'(x)-F'(x_*+\tau(x-x_*))\right]\right\| \leq f'\left(\|x-x_*\|\right)-f'\left(\tau\|x-x_*\|\right), \end{equation} for all $\tau \in [0,1]$, $x\in B(x_*, \kappa)$ and \begin{itemize} \item[{\bf h1)}] $f(0)=0$ and $f'(0)=-1$; \item[{\bf h2)}] $f'$ is strictly increasing. \end{itemize} Let $\nu:=\sup\{t\in [0, R): f'(t)< 0\},$ $\rho:=\sup\{\delta\in (0, \nu): [f(t)/f'(t)-t]/t<1,\, t\in (0, \delta)\} $ and $$ r:=\min \left\{\kappa, \,\rho \right\}. $$ Then the sequences with starting points $x_0\in B(x_*, r)\setminus\{x_*\}$ and $t_0=\|x_0-x_*\|$, respectively, namely \begin{equation} \label{eq:DNS} x_{k+1} ={x_k}-F'(x_k) ^{-1}F(x_k), \qquad t_{k+1} =|t_k-f(t_k)/f'(t_k)|, \qquad k=0,1,\ldots\,, \end{equation} are well defined; $\{t_k\}$ is strictly decreasing, is contained in $(0, r)$ and converges to $0$, and $\{x_k\}$ is contained in $B(x_*,r)$ and converges to the point $x_*$, which is the unique zero of $F$ in $B(x_*, \sigma)$, where $\sigma:=\sup\{t\in (0, \kappa): f(t)< 0\}$, and there hold: \begin{equation} \label{eq:q2} \lim_{k \to \infty}\left[\|x_{k+1}-x_*\|\big{/}\|x_k-x_*\|\right]=0, \qquad \lim_{k \to \infty}[t_{k+1}/t_k]=0. \end{equation} Moreover, if $f(\rho)/(\rho f'(\rho))-1=1$ and $\rho<\kappa$, then $r=\rho$ is the best possible convergence radius. \noindent If, additionally, given $0\leq p\leq1$ \begin{itemize} \item[{\bf h3)}] the function $(0,\, \nu) \ni t \mapsto [f(t)/f'(t)-t]/t^{p+1}$ is strictly increasing, \end{itemize} then the sequence $\{t_{k+1}/t_k^{p+1}\}$ is strictly decreasing and there holds \begin{equation} \label{eq:q3} \|x_{k+1}-x_*\| \leq \big[t_{k+1}/t_k^{p+1}\big]\,\|x_k-x_*\|^{p+1}, \qquad k=0,1,\ldots\,. \end{equation} \end{theorem} \begin{remark} \label{r:rqc} The first equation in \eqref{eq:q2} means that $\{x_k\}$ converges superlinearly to $x_*$. 
Moreover, because the sequence $\{t_{k+1}/t_k^{p+1}\}$ is strictly decreasing, we have $ t_{k+1}/ t_k^{p+1}\leq t_1/t_0^{p+1} $ for $k=0,1, \ldots$. So, the inequality in \eqref{eq:q3} implies $\|x_{k+1}-x_*\| \leq [t_{1}/t_0^{p+1}]\|x_k-x_*\|^{p+1}$ for $ k=0,1,\ldots\, $. As a consequence, if $p=0$ then $\|x_{k}-x_*\| \leq t_0[t_1/t_0]^k$ for $k=0,1,\ldots\,$, and if $0<p\leq 1$ then $$ \|x_{k}-x_*\| \leq t_0\left(t_1/t_0\right)^{[(p+1)^k-1]/p}, \qquad k=0,1,\ldots\,. $$ \end{remark} \begin{example} \label{ex:mf} The following continuously differentiable functions satisfy {\bf h1}, {\bf h2} and {\bf h3}: \begin{itemize} \item[{i)}] $f: [0, +\infty)\to \mathbb{R}$ such that $f(t)=t^{1+p}-t $; \item[{ii)}] $f: [0, +\infty)\to \mathbb{R}$ such that $f(t)=\mbox{e}^{-t}+t^2-1$. \end{itemize} For $0<p<1$, the derivative of the first function is not convex, and neither is the derivative of the second. \end{example} Similarly to the proof of Proposition $2.6$ in \cite{F08}, whenever $f$ has a convex derivative $f'$ we can prove that {\bf h3} holds with $p=1$. In this case, the Newton sequence converges with quadratic rate. Indeed, the next example shows that the convexity of $f'$ was necessary in \cite{F08} to obtain the quadratic convergence rate. \begin{example} Let $g: \mathbb{R}\to \mathbb{R}$ be given by $g(t)=t^{5/3}-t.$ Note that $g(0)=0$, $g'(0)=-1$ and, letting $p=2/3$ in Example \ref{ex:mf}, the function $f$ is a majorant function for $g$. Newton's method applied to $g$ with starting point $t_0$ ``near'' $0$ generates the following sequence: $$ t_{k+1}=\left(2\,t_{k}^{5/3}\right)\big{/}\left(5\,t_{k}^{2/3}-3\right), \qquad k=0,1,\ldots\,. $$ Theorem \ref{th:nt} implies that the sequence $\{t_k\}$ converges to $0$ with superlinear rate. It is easy to see that $\{t_k\}$ does not converge to $0$ with quadratic rate. So, in particular, it follows from \cite{F08} that there is no majorant function with convex derivative for the function $g$. 
\end{example} From now on, we assume that the hypotheses of Theorem \ref{th:nt} hold, with the exception of {\bf h3}, which will be considered to hold only when explicitly stated. \subsection{Preliminary results} \label{sec:PR} In this section, we will prove all statements in Theorem~\ref{th:nt} regarding the sequence $\{t_k\}$ associated to the majorant function. The main relationships between the majorant function and the nonlinear operator will also be established, as well as the results in Theorem~\ref{th:nt} related to the uniqueness of the solution and the optimal convergence radius. \subsubsection{The scalar sequence} \label{sec:PMF} In this section, we will prove the statements in Theorem~\ref{th:nt} involving $\{t_k\}$. First, we will prove that the constants $\kappa$, $\nu$, $\rho$ and $\sigma$ are positive. We begin by proving that $\kappa$, $\nu$ and $\sigma$ are positive. \begin{proposition} \label{pr:incr1} The constants $ \kappa,\, \nu $ and $\sigma$ are positive and $t-f(t)/f'(t)<0$ for all $t\in (0,\,\nu).$ \end{proposition} \begin{proof} Since $\Omega$ is open and $x_*\in \Omega$, we can immediately conclude that $\kappa>0$. As $f'$ is continuous in $0$ with $f'(0)=-1$, there exists $\delta>0$ such that $f'(t)<0$ for all $t\in (0,\, \delta).$ So, $\nu>0$. Now, because $f(0)=0$ and $f'(0)=-1$, there exists $\delta>0$ such that $f(t)<0$ for all $t\in (0, \delta)$. Hence $\sigma>0$. It remains to show that $t-f(t)/f'(t)<0$ for all $t\in (0,\,\nu).$ Since $f'$ is strictly increasing, $f$ is strictly convex. So, $ 0=f(0)>f(t)-tf'(t) $ for $t\in (0,\, R).$ If $t\in (0, \,\nu)$ then $f'(t)<0$, which, combined with the last inequality, yields the desired inequality. \end{proof} According to {\bf h2} and the definition of $\nu$, we have $f'(t)< 0$ for all $t\in[0, \,\nu)$. Therefore, the Newton iteration map for $f$ is well defined in $[0,\, \nu)$. 
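Before formalizing this iteration map, a small numerical sketch may help. The Python snippet below is an illustration added here, not part of the paper: it uses the majorant function $f(t)=t^{5/3}-t$ of Example \ref{ex:mf} with $p=2/3$, iterates $t_{k+1}=|t_k-f(t_k)/f'(t_k)|$, and checks that the ratios $t_{k+1}/t_k$ decrease toward zero while $t_{k+1}/t_k^2$ grows, i.e., the rate is superlinear but, for this $f$, not quadratic.

```python
# Illustrative sketch (not from the paper): the scalar Newton iteration
# t_{k+1} = |t_k - f(t_k)/f'(t_k)| for the majorant function
# f(t) = t**(5/3) - t of Example ex:mf with p = 2/3.
def f(t):
    return t ** (5 / 3) - t

def fp(t):
    return (5 / 3) * t ** (2 / 3) - 1  # f'(t) < 0 on (0, nu)

def majorant_sequence(t0, steps=6):
    """Iterate t_{k+1} = |t_k - f(t_k)/f'(t_k)| starting from t0."""
    ts = [t0]
    for _ in range(steps):
        t = ts[-1]
        ts.append(abs(t - f(t) / fp(t)))
    return ts

ts = majorant_sequence(0.1)
ratios = [ts[k + 1] / ts[k] for k in range(len(ts) - 1)]
# superlinear rate: t_{k+1}/t_k is strictly decreasing toward 0 ...
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
# ... while t_{k+1}/t_k**2 grows, so the rate is not quadratic
quad = [ts[k + 1] / ts[k] ** 2 for k in range(len(ts) - 1)]
assert quad[-1] > quad[0]
```

The starting point $t_0=0.1$ and the number of steps are arbitrary choices for the illustration; any $t_0$ small enough that $f'(t_0)<0$ behaves the same way.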
Let us call it $n_f$: \begin{equation} \label{eq:def.nf} \begin{array}{rcl} n_f:[0,\, \nu)&\to& (-\infty, \, 0]\\ t&\mapsto& t-f(t)/f'(t). \end{array} \end{equation} \begin{proposition} \label{pr:incr3} $ \lim_{t\to 0}|n_f(t)|/t=0. $ As a consequence, $\rho>0 $ and $|n_f(t)|<t$ for all $ t\in (0, \, \rho)$. \end{proposition} \begin{proof} Using definition \eqref{eq:def.nf}, Proposition \ref{pr:incr1}, $f(0)=0$, and the definition of $\nu$, a simple algebraic manipulation gives \begin{equation} \label{eq:rho} \frac{|n_f(t)|}{t}= [f(t)/f'(t)-t]/t=\frac{1}{f'(t)} \frac{f(t)-f(0)}{t-0}-1, \qquad t\in (0,\,\nu). \end{equation} Because $f'(0)=-1\neq 0$, the first statement follows by taking the limit in~\eqref{eq:rho} as $t$ goes to $0$. Since $\lim_{t\to 0}|n_f(t)|/t=0$, the first equality in \eqref{eq:rho} implies that there exists $\delta>0$ such that $$ 0<[f(t)/f'(t)-t]/t<1, \qquad \; t\in (0, \delta). $$ So, we conclude that $\rho$ is positive. Therefore, the first equality in \eqref{eq:rho} together with the definition of $\rho$ implies that $|n_f(t)|/t=[f(t)/f'(t)-t]/t<1,$ for all $t\in (0, \rho)$, as required. \end{proof} Using \eqref{eq:def.nf}, it is easy to see that the sequence $\{t_k \}$ is equivalently defined as \begin{equation} \label{eq:tknk} t_0=\|x_0-x_*\|, \qquad t_{k+1}=|n_f(t_k)|, \qquad k=0,1,\ldots\, . \end{equation} \begin{corollary} \label{cr:kanttk} The sequence $\{t_k\}$ is well defined, is strictly decreasing and is contained in $(0, \rho)$. Moreover, $\{t_k\}$ converges to $0$ with superlinear rate, i.e., $ \lim_{k\to \infty}t_{k+1}/t_k=0. $ If, additionally, {\bf h3} holds then the sequence $\{t_{k+1}/t_k^{p+1}\}$ is strictly decreasing. \end{corollary} \begin{proof} Since $0<t_0=\|x_0-x_*\|<r\leq\rho$, using Proposition~\ref{pr:incr3} and \eqref{eq:tknk} it is simple to conclude that $\{t_k\}$ is well defined, is strictly decreasing and is contained in $(0, \rho)$. So, we have proved the first statement of the corollary.
Because $\{t_k \}\subset (0, \rho)$ is strictly decreasing, it converges. So, $\lim_{k\to \infty}t_{k}=t_*$ with $0\leq t_*<\rho$, which together with \eqref{eq:tknk} implies $0\leq t_{*}=|n_f(t_*)|$. But if $t_*\neq 0$ then Proposition~\ref{pr:incr3} implies $|n_f(t_*)|<t_*$; hence $t_*=0$. Now, since $\lim_{k\to \infty}t_{k}=0$, the definition of $\{t_k\}$ in \eqref{eq:tknk} and the first statement in Proposition~\ref{pr:incr3} imply that $\lim_{k\to \infty}t_{k+1}/t_k=\lim_{k\to \infty}|n_f(t_k)|/t_k=0$, and the second statement is proved. Since $\{t_k \}$ is strictly decreasing, the last statement is an immediate consequence of {\bf h3}. \end{proof} \subsubsection{Relationship between the majorant function and the nonlinear operator} \label{sec:MFNLO} In this section, we will present the main relationships between the majorant function $f$ and the nonlinear operator $F$. \begin{lemma} \label{wdns} If \,\,$\| x-x_*\|<\min\{\kappa, \nu\}$, then $F'(x) $ is invertible and $$ \|F'(x)^{-1}F'(x_*)\|\leqslant 1/|f'(\| x-x_*\|)|. $$ In particular, $F'$ is invertible in $B(x_*, r)$. \end{lemma} \begin{proof} The proof of this lemma requires neither assumption {\bf h3} nor the convexity of the derivative of the majorant function. The proof follows the same pattern as that of Lemma~2.9 of \cite{F08}. \end{proof} The Newton iterate at a point is the zero of the linearization of $F$ at that point. So, we study the linearization error at a point in $\Omega$: \begin{equation}\label{eq:def.er} E_F(x,y):= F(y)-\left[ F(x)+F'(x)(y-x)\right],\qquad y,\, x\in \Omega. \end{equation} We will bound this error by the corresponding linearization error of the majorant function $f$: \begin{equation}\label{eq:def.erf} e_f(t,u):= f(u)-\left[ f(t)+f'(t)(u-t)\right],\qquad t,\,u \in [0,R). \end{equation} \begin{lemma} \label{pr:taylor} If $\|x_*-x\|< \kappa$, then $ \|F'(x_*)^{-1}E_F(x, x_*)\|\leq e_f(\|x-x_*\|, 0).
$ \end{lemma} \begin{proof} The proof of this lemma requires neither assumption {\bf h3} nor the convexity of the derivative of the majorant function. The proof follows the same pattern as that of Lemma~2.10 of \cite{F08}. \end{proof} Lemma \ref{wdns} guarantees, in particular, that $F'$ is invertible in $B(x_*, r)$ and, consequently, that the Newton iteration map is well defined. Let $N_{F}$ denote the Newton iteration map for $F$ in that region: \begin{equation} \label{NF} \begin{array}{rcl} N_{F}:B(x_*, r) &\to& \banachb\\ x&\mapsto& x-F'(x)^{-1}F(x). \end{array} \end{equation} Now, we establish an important relationship between the Newton iteration maps $n_{f}$ and $ N_{F}$. As a consequence, we obtain that $B(x_*, r)$ is invariant under $ N_{F}$. This result will be important to guarantee that Newton's method is well defined. \begin{lemma} \label{le:cl} If $\|x-x_*\|< r$ then $ \|N_F(x)-x_*\|\leq |n_f(\|x-x_*\|)|. $ As a consequence, $$N_{F}(B(x_*, r))\subset B(x_*, r).$$ \end{lemma} \begin{proof} Since $F(x_*)=0$, the inequality is trivial for $x=x_*$. Now assume that $0<\|x-x_*\|<r$. Lemma \ref{wdns} implies that $F'(x) $ is invertible. Thus, because $F(x_*)=0$, direct manipulation yields $$ x_*-N_F(x)=-F'(x)^{-1}\left[ F(x_*)-F(x)-F'(x)(x_*-x)\right]= -F'(x)^{-1}E_F(x,x_*). $$ Using the above equation, Lemma \ref{wdns} and Lemma \ref{pr:taylor}, it is easy to conclude that $$ \|x_*-N_F(x)\|\leq\| -F'(x)^{-1}F'(x_*)\|\| F'(x_*)^{-1}E_F(x,x_*)\|\leq e_f(\|x-x_*\|, 0)/|f'(\|x-x_*\|)|. $$ On the other hand, taking into account that $f(0)=0$, the definitions of $e_f$ and $n_f$ imply that $$ e_f(\|x-x_*\|, 0)/|f'(\|x-x_*\|)|=|n_f(\|x-x_*\|)|. $$ So, the first statement follows by combining the two expressions above. Take $x\in B(x_*, r)$. Since $\|x-x_*\|<r$ and $ r\leq \rho$, the first part together with the second part of Proposition~\ref{pr:incr3} implies that $ \|N_F(x)-x_*\|\leq |n_f(\|x-x_*\|)|<\|x-x_*\| $ and the last result follows.
\end{proof} \begin{lemma} \label{le:cl2} If {\bf h3} holds and $\|x-x_*\|\leq t<r$ then $ \|N_F(x)-x_*\|\leq [ |n_f(t)|/t^{p+1}]\,\|x-x_*\|^{p+1}. $ \end{lemma} \begin{proof} The inequality is trivial for $x=x_*$. If $0<\|x-x_*\|\leq t$ then assumption {\bf h3} and \eqref{eq:def.nf} give $|n_f(\|x-x_*\|)|/\|x-x_*\|^{p+1}\leq |n_f(t)|/t^{p+1}$. So, using Lemma~\ref{le:cl} the statement follows. \end{proof} \subsection{Uniqueness and optimal convergence radius} \label{sec:UnOpBall} In this section we will obtain the uniqueness of the solution and the optimal convergence radius. \begin{lemma} \label{pr:uniq} The point $x_*$ is the unique zero of $F$ in $B(x_*, \sigma)$. \end{lemma} \begin{proof} The proof of this lemma requires neither assumption {\bf h3} nor the convexity of the derivative of the majorant function. The proof follows the same pattern as that of Lemma~2.13 of \cite{F08}. \end{proof} \begin{lemma} \label{pr:best} If $f(\rho)/(\rho f'(\rho))-1=1$ and $\rho < \kappa$, then $r=\rho$ is the optimal convergence radius. \end{lemma} \begin{proof} The proof follows the same pattern as that of Lemma~2.15 of \cite{F08}. \end{proof} \subsection{The Newton sequence} \label{sec:proof} In this section, we will prove the statements in Theorem~\ref{th:nt} involving the Newton sequence $\{x_k\}$. First, note that the first equation in \eqref{eq:DNS} together with \eqref{NF} implies that the sequence $\{x_k\}$ satisfies \begin{equation} \label{NFS} x_{k+1}=N_F(x_k),\qquad k=0,1,\ldots \,, \end{equation} which is indeed an equivalent definition of this sequence. \begin{proposition}\label{pr:nthe} The sequence $\{x_k\}$ is well defined, is contained in $B(x_*,r)$ and converges to $x_*$, the unique zero of $F$ in $B(x_*, \sigma)$, and there holds: \begin{equation} \label{eq:q2e} \lim_{k \to \infty}\left[\|x_{k+1}-x_*\|\big{/}\|x_k-x_*\|\right]=0.
\end{equation} If, additionally, {\bf h3} holds then the sequences $\{x_k\}$ and $\{t_k\}$ satisfy \begin{equation} \label{eq:q3e} \|x_{k+1}-x_*\| \leq \big[t_{k+1}/t_k^{p+1}\big]\,\|x_k-x_*\|^{p+1}, \qquad k=0,1,\ldots\,. \end{equation} \end{proposition} \begin{proof} As $x_0\in B(x_*,r)$ and $r\leq \nu$, combining \eqref{NFS}, the inclusion $N_{F}(B(x_*, r)) \subset B(x_*, r)$ in Lemma~\ref{le:cl} and Lemma~\ref{wdns}, it is easy to conclude that $\{x_k\}$ is well defined and remains in $B(x_*,r)$. We are going to prove that $\{x_k \}$ converges to $x_*$. Since $\|x_{k}-x_*\|<r\leq \rho$, for $ k=0,1,\ldots \,$, we obtain from \eqref{NFS}, Lemma~\ref{le:cl} and Proposition~\ref{pr:incr3} that \begin{equation}\label{eq:conv1} \|x_{k+1}-x_*\|=\|N_F(x_k)-x_*\|\leq |n_f(\|x_{k}-x_*\|)|<\|x_{k}-x_*\|,\qquad k=0,1,\ldots \,. \end{equation} So, $\{\|x_{k}-x_*\| \}$ is strictly decreasing and convergent. Let $\ell_*=\lim_{k\to \infty}\|x_{k}-x_*\|$. Because $\{\|x_{k}-x_*\| \}$ lies in $(0, \,\rho)$ and is strictly decreasing, we have $0\leq \ell_*<\rho$. Thus, the continuity of $n_f$ on $[0, \rho)$ and \eqref{eq:conv1} imply $0\leq \ell_{*}=|n_f(\ell_*)|$, and from Proposition~\ref{pr:incr3} we have $\ell_{*}=0$. Therefore, the convergence of $\{x_k \}$ to $x_*$ is proved. The uniqueness was proved in Lemma~\ref{pr:uniq}. To prove the equality in \eqref{eq:q2e}, note that equation \eqref{eq:conv1} implies $$ \left[\|x_{k+1}-x_*\|\big{/}\|x_{k}-x_*\|\right]\leq \left[|n_f(\|x_{k}-x_*\|)|\big{/}\|x_{k}-x_*\|\right], \qquad k=0,1, \ldots. $$ Since $\lim_{k\to \infty}\|x_{k}-x_*\|=0$, the desired equality follows from the first statement in Proposition~\ref{pr:incr3}. Now we will show \eqref{eq:q3e}. First, we will prove by induction that the sequences $\{t_k \}$ and $\{x_k \}$, defined in \eqref{eq:tknk} and \eqref{NFS}, respectively, satisfy \begin{equation}\label{eq:mjs} \|x_{k}-x_*\|\leq t_k, \qquad k=0,1, \ldots.
\end{equation} Because $t_0=\|x_0-x_*\|$, the above inequality holds for $k=0$. Now, assume that $\|x_{k}-x_*\|\leq t_k$. Using \eqref{NFS}, Lemma~\ref{le:cl2}, the induction assumption and \eqref{eq:tknk}, we obtain that $$ \|x_{k+1}-x_*\|=\|N_F(x_k)-x_*\|\leq \frac{|n_f(t_k)|}{t_{k}^{p+1}}\,\|x_{k}-x_*\|^{p+1}\leq|n_f(t_k)|=t_{k+1}, $$ and the proof by induction is complete. Therefore, it is easy to see that the desired inequality follows by combining \eqref{NFS}, \eqref{eq:mjs}, Lemma~\ref{le:cl2} and \eqref{eq:tknk}. \end{proof} The proof of Theorem~\ref{th:nt} follows from Corollary~\ref{cr:kanttk}, Lemmas~\ref{pr:uniq} and \ref{pr:best} and Proposition~\ref{pr:nthe}. \section{Special Cases} \label{apl} In this section, we will present two special cases of Theorem~\ref{th:nt}. \subsection{Convergence result under H\"{o}lder-like condition} In this section we will present the convergence theorem for Newton's method under an affine invariant H\"{o}lder-like condition, which has appeared in \cite{Hu04} and \cite{W03}. \begin{theorem} \label{th:HV} Let $\banacha$, $\banachb$ be Banach spaces, $\Omega\subseteq \banacha$ an open set and $F:{\Omega}\to \banachb$ a continuously differentiable function. Let $x_*\in \Omega$ and $\kappa:=\sup\{t\in [0, R): B(x_*, t)\subset \Omega\}$. Suppose that $F(x_*)=0$, $F '(x_*)$ is invertible and there exist constants $K>0$ and $ 0< p \leq 1$ such that \begin{equation} \label{eq:hc} \left\|F'(x_*)^{-1}\left[F'(x)-F'(x_*+\tau(x-x_*))\right]\right\|\leq K(1-\tau^p) \|x-x_*\|^p, \qquad x\in B(x_*, \kappa), \quad \tau \in [0,1]. \end{equation} Let $r=\min \{\kappa, \,[(p+1)/((2p+1)K)]^{1/p}\}$.
Then, the sequences with starting point $x_0\in B(x_*, r)\setminus\{x_*\}$ and $t_0=\|x_0-x_*\|$, respectively, $$ x_{k+1} ={x_k}-F'(x_k) ^{-1}F(x_k), \qquad t_{k+1} =\frac{K\,p\,t_{k}^{p+1}}{(p+1)[1-K\,t_k^{p}]},\qquad k=0,1,\ldots\,, $$ are well defined; $\{t_k\}$ is strictly decreasing, is contained in $(0, r)$ and converges to $0$; $\{x_k\}$ is contained in $B(x_*,r)$ and converges to $x_*$, which is the unique zero of $F$ in $B(x_*, \,[(p+1)/K]^{1/p})$; and there holds $$ \|x_{k+1}-x_*\| \leq \frac{K\,p}{(p+1)[1-K\,t_k^{p}]}\,\|x_{k}-x_*\|^{p+1}, \qquad k=0,1,\ldots\,. $$ Moreover, if $[(p+1)/((2p+1)K)]^{1/p}<\kappa$ then $r=[(p+1)/((2p+1)K)]^{1/p}$ is the best possible convergence radius. \end{theorem} \begin{proof} It is straightforward to prove that $F$, $x_*$ and $f:[0, \kappa)\to \mathbb{R}$, defined by $ f(t)=Kt^{p+1}/(p+1)-t, $ satisfy the inequality \eqref{Hyp:MH} and the conditions {\bf h1}, {\bf h2} and {\bf h3} in Theorem \ref{th:nt}. In this case, it is easy to see that $\rho$ and $\nu$, as defined in Theorem \ref{th:nt}, satisfy $$ \rho=[(p+1)/((2p+1)K)]^{1/p} \leq \nu=[1/K]^{1/p}, $$ and, as a consequence, $r=\min \{\kappa,\; [(p+1)/((2p+1)K)]^{1/p}\}$. Moreover, $f(\rho)/(\rho f'(\rho))-1=1$, $f(0)=f([(p+1)/K]^{1/p})=0$ and $f(t)<0$ for all $t\in (0,\, [(p+1)/K]^{1/p})$. Therefore, the result follows by invoking Theorem~\ref{th:nt}. \end{proof} \begin{remark} Since Theorem~\ref{th:HV} is a special case of Theorem~\ref{th:nt}, it follows from Remark~\ref{r:rqc} that $$ \|x_{k}-x_*\| \leq \left[ \frac{K\,p\,\|x_0-x_*\|^{p}}{(p+1)[1-K\,\|x_0-x_*\|^{p}]}\right]^{[(p+1)^k-1]/p}\,\|x_0-x_*\|, \qquad k=0,1,\ldots . $$ \end{remark} \begin{remark} If $F:{\Omega}\to \banachb$ satisfies the Lipschitz condition $ \|F'(x)-F'(y)\| \leq L \|x-y\|, $ for all $ x,\,y\in \Omega,$ where $L>0$, then it also satisfies the condition \eqref{eq:hc} with $p=1$ and $K=L\| F'(x_*)^{-1}\|$.
In this case, the best possible convergence radius for Newton's method is $r=2/(3L\|F '(x_*)^{-1} \|)$, see~\cite{r74} and \cite{WCR77}. We point out that the convergence radii of affine invariant theorems are insensitive to invertible linear transformations of $F$. On the other hand, theorems under the Lipschitz condition are sensitive to them, see \cite{TW79}. For more details about affine invariant theorems on Newton's method see \cite{DHB} (see also \cite{DH}). \end{remark} \subsection{Convergence result under generalized Lipschitz condition} In this section, we will present a local convergence theorem for Newton's method under a generalized Lipschitz condition due to X. Wang, which appeared in \cite{W03} (see also \cite{W00}). It is worth pointing out that the result in this section does not assume that the function which defines the generalized Lipschitz condition is nondecreasing. \begin{theorem} \label{th:XWT} Let $\banacha$, $\banachb$ be Banach spaces, $\Omega\subseteq \banacha$ an open set and $F:{\Omega}\to \banachb$ a continuously differentiable function. Let $x_*\in \Omega$ and $\kappa:=\sup\{t\in [0, R): B(x_*, t)\subset \Omega\}$. Suppose that $F(x_*)=0$, $F '(x_*)$ is invertible and there exists a positive integrable function $L:[0,\; R)\to \mathbb{R}$ such that \begin{equation}\label{Hyp:XW} \left\|F'(x_*)^{-1}\left[F'(x)-F'(x_*+\tau(x-x_*))\right]\right\| \leq \int^{\|x-x_*\|}_{\tau\|x-x_*\|} L(u){\rm d}u, \end{equation} for all $\tau \in [0,1]$, $x\in B(x_*, \kappa)$. Let $\bar{\nu}> 0 $ be the constant defined by $$ \bar{\nu}:=\sup \left\{t\in [0, R): \displaystyle\int_{0}^{t}L(u){\rm d}u-1< 0\right\}, $$ and let $\bar{\rho}> 0 $ and $\bar{r}>0$ be the constants defined by $$ \bar{\rho}:=\sup \left\{t\in (0, \bar{\nu}): \displaystyle\int^{t}_{0}L(u)u {\rm d}u\Big{/}\left[t\left(1-\displaystyle\int^{t}_{0}L(u){\rm d}u\right)\right]<1\right\}, \qquad \bar{r}=\min \left\{\kappa, \bar{\rho}\right\}.
$$ Then, the sequences with starting point $x_0\in B(x_*, \bar{r})\setminus\{x_*\}$ and $t_0=\|x_0-x_*\|$, respectively, $$ x_{k+1} ={x_k}-F'(x_k) ^{-1}F(x_k), \qquad t_{k+1} =\displaystyle\int^{t_k}_{0}L(u)u {\rm d}u\Big{/}\left(1-\displaystyle\int^{t_k}_{0}L(u){\rm d}u\right),\qquad k=0,1,\ldots\,, $$ are well defined, $\{t_k\}$ is strictly decreasing, is contained in $(0, \bar{r})$ and converges to $0$, $\{x_k\}$ is contained in $B(x_*,\bar{r})$ and converges to $x_*$, which is the unique zero of $F$ in $B(x_*, \bar{\sigma})$, where $$ \bar{\sigma}:=\sup\left \{t\in(0, \kappa): \int^{t}_{0}L(u)(t-u){\rm d}u- t< 0 \right\}, $$ and there hold: $\lim_{k\to \infty}t_{k+1}/t_k=0$ and $ \lim_{k\to \infty}[\|x_{k+1}-x_*\|/\|x_k-x_*\|]=0. $ Moreover, if $$ \displaystyle\int^{\bar{\rho}}_{0}L(u)u {\rm d}u\Big{/}\left[\bar{\rho}\left(1-\displaystyle\int^{\bar{\rho}}_{0}L(u){\rm d}u\right)\right]= 1, $$ and ${\bar \rho}<\kappa$, then $\bar{r}=\bar{\rho}$ is the best possible convergence radius. \noindent If, additionally, for a given $0\leq p\leq1$, \begin{itemize} \item[{\bf h)}] the function $ (0,\, \bar{\nu}) \ni t \mapsto t^{1-p}L(t) $ is nondecreasing, \end{itemize} then the sequence $\{t_{k+1}/t_k^{p+1}\}$ is strictly decreasing and there holds \begin{equation} \label{eq:wq3} \|x_{k+1}-x_*\| \leq \big[t_{k+1}/t_k^{p+1}\big]\,\|x_k-x_*\|^{p+1}, \qquad k=0,1,\ldots\,. \end{equation} \end{theorem} \begin{proof} Let ${\bar f}:[0, \kappa)\to \mathbb{R}$ be the differentiable function defined by \begin{equation} \label{eq:wf} {\bar f}(t)=\int_{0}^{t}L(u)(t-u){\rm d}u-t. \end{equation} Note that the derivative of the function ${\bar f}$ is given by $$ {\bar f}'(t)=\int_{0}^{t}L(u){\rm d}u-1. $$ Because $L$ is integrable, ${\bar f}'$ is continuous (in fact, ${\bar f}'$ is absolutely continuous). So, it is easy to see that \eqref{Hyp:XW} becomes \eqref{Hyp:MH} with $f'={\bar f}'$. Moreover, because $L$ is positive, the function $f={\bar f}$ satisfies the conditions {\bf h1} and {\bf h2} in Theorem \ref{th:nt}.
Direct algebraic manipulation yields $$ \frac{1}{t^{p+1}} \left[\frac{{\bar f}(t)}{{\bar f}'(t)}-t\right]=\left[ \frac{1}{t^{p+1}}\displaystyle\int^{t}_{0}L(u)u{\rm d}u\right] \frac{1}{|{\bar f}'(t)|}. $$ If assumption {\bf h} holds, then Lemma~$2.2$ of \cite{W03} implies that the first term on the right-hand side of the above equation is nondecreasing in $(0,\, \bar{\nu})$. Now, since $1/|{\bar f}'|$ is strictly increasing in $(0,\, \bar{\nu})$, the above equality implies that {\bf h3} in Theorem~\ref{th:nt}, with $f={\bar f}$, also holds. Therefore, the result follows from Theorem~\ref{th:nt} with $f={\bar f}$, $\nu=\bar{\nu}$, $\rho=\bar{\rho}$, $r=\bar{r}$ and $\sigma=\bar{\sigma}$. \end{proof} \begin{remark} Since Theorem~\ref{th:XWT} is a special case of Theorem~\ref{th:nt}, it follows from Remark~\ref{r:rqc} that if $p=0$ then $\|x_{k}-x_*\| \leq q^{k}\,\|x_0-x_*\|$, for $k=0,1,\ldots$, and if $0<p\leq 1$ then $$ \|x_{k}-x_*\| \leq q^{[(p+1)^k-1]/p}\,\|x_0-x_*\|, \qquad k=0,1,\ldots , $$ where $$ q=\displaystyle\int^{\|x_0-x_*\|}_{0}L(u)u {\rm d}u\Big{/}\left[\|x_0-x_*\|\left(1-\displaystyle\int^{\|x_0-x_*\|}_{0}L(u){\rm d}u\right)\right]. $$ \end{remark} \begin{remark} It was shown in \cite{W00} that if $L$ is positive and nondecreasing then the sequence generated by Newton's method converges with quadratic rate. From Theorem~\ref{th:XWT} we conclude that the assumption that $L$ is nondecreasing is needed only to obtain the quadratic convergence rate of the Newton sequence. This result was also obtained in \cite{W03}. Finally, we observe that if the positive integrable function $L:[0,\; R)\to \mathbb{R}$ is nondecreasing, then the strictly increasing function $f':[0,\; R)\to \mathbb{R}$, defined by $$ f'(t)=\int_{0}^{t}L(u){\rm d}u-1, $$ is convex. In this case, it is not hard to prove that the inequalities \eqref{Hyp:MH} and \eqref{Hyp:XW} are equivalent.
On the other hand, if $f'$ is strictly increasing but not necessarily convex, then the inequalities \eqref{Hyp:MH} and \eqref{Hyp:XW} need not be equivalent. Indeed, there exist strictly increasing continuous functions with derivative zero almost everywhere, see \cite{T78} (see also \cite{OW07}). Note that these functions are not absolutely continuous, so they cannot be represented by an integral. \end{remark} \section{Final remarks } \label{fr} Theorem \ref{th:XWT} has many interesting special cases, including Smale's theorem on Newton's method for analytic functions (see \cite{S86}); see \cite{W03}. Theorem \ref{th:nt} has the Nesterov-Nemirovskii theorem on Newton's method for self-concordant functions (see \cite{NN-94}) as a special case, see \cite{F08}.
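As a concrete sanity check of the quantities in Theorem~\ref{th:HV}, the following Python sketch (with the arbitrary illustrative choice $K=1$, $p=1$, i.e. $f(t)=t^2/2-t$) verifies that the scalar majorant sequence contracts to $0$ from a starting point below the radius $\rho=[(p+1)/((2p+1)K)]^{1/p}=2/3$, while the very first step already moves away from $0$ for a starting point just above it:

```python
K, p = 1.0, 1.0  # arbitrary illustration parameters, not from the theorem
rho = ((p + 1) / ((2 * p + 1) * K)) ** (1 / p)  # optimal radius, here 2/3

def t_next(t):
    # Scalar majorant iteration from Theorem th:HV:
    # t_{k+1} = K p t^{p+1} / ((p+1)(1 - K t^p))
    return K * p * t ** (p + 1) / ((p + 1) * (1 - K * t ** p))

# Below the radius the sequence is strictly decreasing toward 0 ...
t, seq = 0.5, [0.5]
for _ in range(6):
    t = t_next(t)
    seq.append(t)

# ... while just above it the first step already increases the distance.
print(rho, seq[-1], t_next(0.7))
```

With $t_0=0.5<\rho$ the sequence collapses to machine zero within a few steps, while $t_0=0.7>\rho$ gives $t_1>t_0$, in line with the optimality of the radius.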
\begin{document} \title[Packed words and quotient rings] {Packed words and quotient rings} \author{Dani\"el Kroes and Brendon Rhoades} \address {Department of Mathematics \newline \indent University of California, San Diego \newline \indent La Jolla, CA, 92093-0112, USA} \email{dkroes@ucsd.edu, bprhoades@ucsd.edu} \begin{abstract} The coinvariant algebra is a quotient of the polynomial ring $\QQ[x_1,\ldots,x_n]$ whose algebraic properties are governed by the combinatorics of permutations of length $n$. A word $w = w_1 \dots w_n$ over the positive integers is {\em packed} if whenever $i > 1$ appears as a letter of $w$, so does $i-1$. We introduce a quotient $S_n$ of $\QQ[x_1,\ldots,x_n]$ which is governed by the combinatorics of packed words. We relate our quotient $S_n$ to the generalized coinvariant rings of Haglund, Rhoades, and Shimozono as well as the superspace coinvariant ring. \end{abstract} \maketitle \section{Introduction} Consider the polynomial ring $\QQ[\xx_n] := \QQ[x_1,\ldots,x_n]$ in $n$ variables. The symmetric group $\symm_n$ acts on $\QQ[\xx_n]$ by variable permutation. It is known that the corresponding invariant subring $\QQ[\xx_n]^{\symm_n}$ of \emph{symmetric polynomials} has algebraically independent homogeneous generators $e_1(\xx_n),\ldots,e_n(\xx_n)$, where \[ e_d := \sum_{1 \leq i_1 < \ldots < i_d \leq n} x_{i_1} \cdots x_{i_d} \] is the degree $d$ \emph{elementary symmetric polynomial}. The \emph{invariant ideal} is the ideal generated by the symmetric polynomials with vanishing constant term: \[ I_n := \langle \QQ[\xx_n]_+^{\symm_n} \rangle = \langle e_1,\ldots,e_n \rangle. \] The \emph{coinvariant algebra} $R_n := \QQ[\xx_n]/I_n$ has long been studied. In particular, as an ungraded $\symm_n$-module, $R_n \cong_{\symm_n} \QQ[\symm_n]$ coincides with the regular representation of $\symm_n$.
Moreover, the Hilbert series $\Hilb(R_n;q) = (1+q)(1+q+q^2)\cdots(1+q+\ldots+q^{n-1})$ coincides with the generating function of both the inversion and major index statistics on permutations. Traditionally studied in physics, the superspace ring $\Omega_n$ has received significant recent attention in coinvariant theory \cite{RW-module, Zabrocki}. For a positive integer $n$, {\em superspace} of rank $n$ is the tensor product \[ \Omega_n := \QQ[x_1, \dots, x_n] \otimes \wedge \{ \theta_1, \dots , \theta_n \} \] of a rank $n$ polynomial ring with a rank $n$ exterior algebra. The group $\symm_n$ acts diagonally on $\Omega_n$, viz. $w.x_i := x_{w(i)}$, $w.\theta_i := \theta_{w(i)}$. Let $(\Omega_n)^{\symm_n}_+ \subseteq \Omega_n$ denote the space of $\symm_n$-invariants with vanishing constant term and let $\langle (\Omega_n)^{\symm_n}_+ \rangle \subseteq \Omega_n$ be the ideal generated by this subspace. Considering commuting and anticommuting variables separately, the {\em superspace coinvariant ring} $\Omega_n/\langle (\Omega_n)^{\symm_n}_+ \rangle$ carries a bigraded action of $\symm_n$. One recovers the classical coinvariant algebra by setting the $\theta$-variables to zero. The Fields Institute Combinatorics Group conjectured (see \cite{Zabrocki}) a formula for the bigraded $\symm_n$-Frobenius image of the superspace coinvariant ring. Let $\mathrm{SYT}(n)$ be the family of standard Young tableaux with $n$ boxes. Given $T \in \mathrm{SYT}(n)$, let $\mathrm{des}(T)$ be the number of descents in $T$ and let $\mathrm{maj}(T)$ be its major index. We use the $q$-analog notation \begin{equation*} [n]_q := 1 + q + \cdots + q^{n-1}, \quad [n]!_q := [n]_q [n-1]_q \cdots [1]_q, \quad {n \brack k}_q := \frac{[n]!_q}{[k]!_q \cdot [n-k]!_q}.
\end{equation*} The Fields Group conjectured \cite{Zabrocki} that \begin{equation} \label{superspace-frobenius} \grFrob( \Omega_n/\langle (\Omega_n)^{\symm_n}_+ \rangle; q, z) = \sum_{k = 1}^n z^{n-k} \cdot C_{n,k}(\xx;q) \end{equation} where \begin{equation} \label{c-function-definition} C_{n,k}(\xx; q) := \sum_{T \in \mathrm{SYT}(n)} q^{\mathrm{maj}(T) + {n-k \choose 2} + (n-k) \cdot \mathrm{des}(T)} {\mathrm{des}(T) \brack n-k}_q s_{\mathrm{shape}(T)'}(\xx). \end{equation} Here $s_{\mathrm{shape}(T)'}(\xx)$ is the Schur function corresponding to the conjugate of the shape of $T$. The symmetric function $C_{n,k}(\xx;q)$ appearing in Equation~\eqref{superspace-frobenius} has appeared in the literature before. Combinatorially, it is the $t = 0$ specialization of the function $\Delta'_{e_{k-1}} e_n$ appearing in the Haglund-Remmel-Wilson {\em Delta Conjecture} \cite{HRW, DM}. Algebraically, it is (up to a minor twist) the graded Frobenius image of the generalized coinvariant algebras $R_{n,k}$ introduced by Haglund-Rhoades-Shimozono \cite{HRS}. Geometrically, it is (up to the same minor twist) the graded Frobenius image of the cohomology representation afforded by the $\symm_n$-action on the variety $X_{n,k}$ of $n$-tuples of lines $(\ell_1, \dots, \ell_n)$ spanning $\mathbb{C}^k$ \cite{PR}. Despite these varied interpretations of $C_{n,k}(\xx;q)$, the formula \eqref{superspace-frobenius} remains conjectural as of this writing. Whereas algebraic properties of the classical coinvariant ring are governed by permutations, the superspace coinvariants appear to be governed by packed words. A word $w = w_1 \dots w_n$ over the positive integers is {\em packed} if, for all $i > 0$, whenever $i+1$ appears as a letter in $w$, so does $i$. Let $\WWW_n$ be the family of packed words of length $n$. For example, we have \begin{equation*} \WWW_3 = \{123, 213, 132, 231, 312, 321, 112, 121, 211, 122, 212, 221, 111\}. 
\end{equation*} Packed words in $\WWW_n$ are in natural bijection with the family $\OP_n$ of all ordered set partitions of $[n]$ and have appeared in various settings including Hopf algebras \cite{NT} and polytopes \cite{CL}. The symmetric group $\symm_n$ acts on the set $\WWW_n$ by letter permutation. The conjecture \eqref{superspace-frobenius} implies that \begin{equation} \label{ungraded-superspace} \Omega_n/\langle (\Omega_n)^{\symm_n}_+ \rangle \cong \QQ[\WWW_n] \otimes \mathrm{sign} \end{equation} as ungraded $\symm_n$-modules, where $\mathrm{sign}$ is the 1-dimensional sign representation of $\symm_n$. Proving the isomorphism \eqref{ungraded-superspace} remains an open problem. Even the dimension equality $\dim \Omega_n/\langle (\Omega_n)^{\symm_n}_+ \rangle = |\WWW_n|$ is presently out of reach. Motivated by the conjecture \eqref{superspace-frobenius}, we define the following family of singly-graded $\symm_n$-modules $S_n$ which provably have vector space dimension $|\WWW_n|$ and satisfy algebraic properties similar to \eqref{ungraded-superspace} and \eqref{superspace-frobenius}. We let $e_d^{(i)} := e_d(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n)$ be the degree $d$ elementary symmetric polynomial with the variable $x_i$ omitted. \begin{definition} \label{sn-quotient-definition} Let $J_n \subseteq \QQ[\xx_n]$ be the ideal \[ J_n = \langle x_i^d \cdot e_{n-r}^{(i)} \ : \ 1 \leq i \leq n, \, 1 \leq r \leq d \rangle \] and let $S_n := \QQ[\xx_n]/J_n$ be the corresponding quotient ring. \end{definition} By convention, the degree 0 elementary symmetric polynomial is 1, so that $J_n$ contains the variable powers $x_i^n$. Additionally, we use the convention that $e_d \equiv 0$ for $d < 0$. Although the generators of the ideal $J_n$ may appear unusual, they will arise naturally from the perspective of orbit harmonics as follows. More precisely, suppose $X \subseteq \QQ^n$ is a finite locus of points. 
Consider the ideal \begin{equation} \mathbf{I}(X) := \{ f \in \QQ[\xx_n] \,:\, f(\mathbf{x}) = 0 \text{ for all $\mathbf{x} \in X$} \} \end{equation} of polynomials in $\QQ[\xx_n]$ which vanish on $X$ and let \begin{equation} \mathbf{T}(X) := \langle \tau(f) \,:\, f \in \mathbf{I}(X) - \{0\} \rangle, \end{equation} where $\tau(f)$ denotes the highest degree component of a nonzero polynomial $f \in \QQ[\xx_n]$. The homogeneous ideal $\mathbf{T}(X)$ is the {\em associated graded} ideal of $\mathbf{I}(X)$ and we have isomorphisms of $\QQ$-vector spaces \begin{equation} \QQ[X] \cong \QQ[\xx_n]/\mathbf{I}(X) \cong \QQ[\xx_n]/\mathbf{T}(X) \end{equation} which are isomorphisms of ungraded $\symm_n$-modules when $X$ is closed under the natural action of $\symm_n$ on $\QQ^n$; the quotient $\QQ[\xx_n]/\mathbf{T}(X)$ has the additional structure of a graded $\symm_n$-module. Given $n$ distinct rational parameters $\alpha_1, \dots, \alpha_n$, we have a natural point locus $X_n \subseteq \QQ^n$ in bijection with $\WWW_n$, namely \begin{equation} X_n = \{ (\beta_1, \dots, \beta_n) \in \QQ^n \,:\, \{ \beta_1, \dots, \beta_n \} = \{\alpha_1, \dots, \alpha_k\} \text{ for some $k$} \}. \end{equation} It will develop that \begin{equation} \label{harmonic-identification} \mathbf{T}(X_n) = J_n. \end{equation} In other words, the quotient $S_n = \QQ[\xx_n]/\mathbf{T}(X_n)$ is the graded quotient of $\QQ[\xx_n]$ arising from the packed word locus $X_n$. Equation~\eqref{harmonic-identification} may be viewed as a more natural, if less computationally useful, alternative to Definition~\ref{sn-quotient-definition}. We prove the following facts regarding the module $S_n$. \begin{itemize} \item The ungraded $\symm_n$-structure of $S_n$ coincides with the natural $\symm_n$-action on $\WWW_n$ (without sign twist) \begin{equation} \label{ungraded-sn-structure} S_n \cong \QQ[\WWW_n]. 
\end{equation} \item The graded $\symm_n$-structure is described by \begin{equation} \label{sn-graded-frobenius-image} \grFrob(S_n; q) = \sum_{k = 1}^n q^{n-k} \cdot (\mathrm{rev}_q \circ \omega) C_{n,k}(\xx;q). \end{equation} Here $\mathrm{rev}_q$ is the operator on polynomials in $q$ which reverses their coefficient sequences and $\omega$ is the symmetric function involution which trades $e_n(\xx)$ for $h_n(\xx)$. \end{itemize} Finding an algebraic explanation for the similarity between the provable \eqref{sn-graded-frobenius-image} and the conjectural \eqref{superspace-frobenius} could shed light on a proof of \eqref{superspace-frobenius}. The outline of the paper is as follows. In Section \ref{sec-background} we cover some of the necessary background, including symmetric functions, representation theory of the symmetric group, and Gr\"obner theory. In Section \ref{sec-combinatorics} we describe a bijection between ordered set partitions in $\OP_n$ and certain sequences $(c_1, \dots, c_n)$ of nonnegative integers. This bijection will translate to a bijection between a monomial basis of $S_{n}$ and $\WWW_{n}$. In Section \ref{sec-algebra} we use this bijection to prove \eqref{ungraded-sn-structure} and its graded refinement Equation~\eqref{sn-graded-frobenius-image}. In Section \ref{sec-conclusion} we end with some concluding remarks and open questions. \section{Background}\label{sec-background} \subsection{Symmetric functions and the representation theory of $\symm_n$} A \emph{partition} $\lambda$ of size $n$, denoted $\lambda \vdash n$, is a sequence $\lambda = (\lambda_1,\ldots,\lambda_m)$ of integers $\lambda_1 \geq \ldots \geq \lambda_m > 0$ with $\lambda_1+\ldots+\lambda_m=n$. Let $\xx = (x_1,x_2,x_3,\ldots)$ be an infinite set of variables and let $\Lambda \subseteq \QQ[[\xx]]$ be the \emph{ring of symmetric functions}. 
It is known that the degree $n$ homogeneous piece of $\Lambda$ has a basis given by the \emph{Schur functions} $s_{\lambda}(\xx)$ where the index $\lambda$ ranges over all partitions of size $n$. For thorough definitions of the ring of symmetric functions and Schur functions, we refer to \cite{Sagan}. We will now recall the fundamentals of the representation theory of $\symm_n$. The irreducible representations of $\symm_n$ are naturally in bijection with the partitions $\lambda \vdash n$. For every such $\lambda$, the corresponding irreducible $\symm_n$-module is denoted by $S^{\lambda}$. Every $\symm_n$-module $V$ decomposes as \[ V = \bigoplus_{\lambda \vdash n} \left(S^{\lambda}\right)^{\oplus c_{\lambda}} \] for some integers $c_{\lambda} \geq 0$. The \emph{Frobenius character} of $V$ is the symmetric function \[ \Frob(V) = \sum_{\lambda \vdash n} c_{\lambda} \cdot s_{\lambda}(\xx). \] Lastly, let $V$ be a graded vector space such that for every $d \geq 0$ the degree $d$ homogeneous component $V_d$ is finite dimensional. The \emph{Hilbert series} of $V$ is the power series in $q$ given by \[ \Hilb(V;q) = \sum_{d \geq 0} \dim(V_d) \cdot q^d. \] If further $V$ carries a graded $\symm_n$-action, we define the \emph{graded Frobenius character} by \[ \grFrob(V;q) = \sum_{d \geq 0} \Frob(V_d) \cdot q^d. \] More details on the representation theory of $\symm_n$ can be found in \cite{Sagan}. \subsection{Gr\"obner theory} In this section we will review some of the Gr\"obner theory used in this paper. The starting point of Gr\"obner theory is a polynomial ring $k[\xx_n]$ over a field $k$ equipped with a total order $<$ on its monomials that satisfies: \begin{itemize} \item[1.] $1 \leq m$ for any monomial $m$; \item[2.] for monomials $m_1 < m_2$ and any monomial $m$ we have $m \cdot m_1 < m \cdot m_2$. \end{itemize} Such a total order is called a \emph{monomial order}. 
Given a monomial order $<$, for any $0 \neq f \in k[\xx_n]$ we define the \emph{leading monomial} $\mathrm{LM}(f)$ to be the monomial $m$ such that $m$ has nonzero coefficient in $f$ and $m' \leq m$ for any monomial $m'$ with nonzero coefficient in $f$. For an ideal $I \leq k[\xx_n]$ we let $\mathrm{LM}(I)$ be the ideal generated by the leading monomials of all nonzero $f \in I$. It is known \cite{CLOS} that a basis for the $k$-vector space $k[\xx_n]/I$ is given by all monomials $m$ that do not belong to $\mathrm{LM}(I)$, which is equivalent to $m$ not being divisible by any monomial of the form $\mathrm{LM}(f)$ with $f \in I$. This basis for $k[\xx_n]/I$ is called the \emph{standard monomial basis} with respect to the total order $<$. The monomial order on $\QQ[\xx_n]$ used in this paper is the \emph{lexicographic order}. In this order, two distinct monomials $m_1 = x_1^{a_1} \cdots x_n^{a_n}$ and $m_2 = x_1^{b_1} \cdots x_n^{b_n}$ are compared as follows: let $j \in \{1,2,\ldots,n\}$ be minimal such that $a_j \neq b_j$; then $m_1 < m_2$ if and only if $a_j < b_j$. \section{The combinatorial bijection} \label{sec-combinatorics} In this section we will establish a bijection between ordered set partitions and coinversion codes. The starting point will be a bijection established by Rhoades and Wilson \cite[Thm. 2.2]{RW}. Let $\OP_{n,k}$ be the family of ordered set partitions $(B_1 \mid \cdots \mid B_k)$ of $[n]$ into $k$ blocks. Given an ordered set partition $\sigma = (B_1 \ | \ \cdots \ | \ B_k) \in \OP_{n,k}$, define a sequence $\mathbf{code}(\sigma) = (c_1,\ldots,c_n)$ as follows. If $1 \leq i \leq n$ and $i \in B_j$, then \[ c_i = \begin{cases} |\{\ell > j \ : \ \min(B_\ell) > i\}| & \text{if } i = \min(B_j); \\ |\{\ell > j \ : \ \min(B_\ell) > i\}| + (j-1) & \text{if } i \neq \min(B_j). \end{cases} \] The sequence $\mathbf{code}(\sigma)$ was called the {\em coinversion code} of $\sigma$ in \cite{RW}. 
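For readers who wish to experiment, the definition of $\mathbf{code}$ is straightforward to implement. The following Python sketch (with a hypothetical function name, and with blocks indexed from $0$ so that the paper's summand $j-1$ becomes the $0$-based block index) computes the coinversion code of an ordered set partition given as a list of sets:

```python
def coinversion_code(blocks):
    """Coinversion code of an ordered set partition (B_1 | ... | B_k).

    `blocks` is a list of sets partitioning {1, ..., n}; block indices
    below are 0-based, so the paper's summand (j - 1) becomes the
    0-based index of the block containing i."""
    n = sum(len(B) for B in blocks)
    mins = [min(B) for B in blocks]
    code = []
    for i in range(1, n + 1):
        j = next(b for b, B in enumerate(blocks) if i in B)  # 0-based block of i
        c = sum(1 for m in mins[j + 1:] if m > i)  # later blocks with larger minimum
        if i != mins[j]:
            c += j  # 0-based j equals the paper's (j - 1)
        code.append(c)
    return tuple(code)
```

For instance, the ordered set partition $(6 \mid 13 \mid 257 \mid 4)$ of $[7]$ has coinversion code $(2,1,2,0,2,0,2)$.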
This is a variant of the classical {\em Lehmer code} on permutations in the case $k = n$. The coinversion codes $\mathbf{code}(\sigma)$ of ordered set partitions $\sigma \in \OP_{n,k}$ were characterized in \cite{RW} as follows. Given a subset $S = \{s_1 < \cdots < s_d \} \subseteq [n]$, define the {\em skip sequence} by $\gamma(S) = (\gamma_1, \dots, \gamma_n)$ where \begin{equation} \gamma_i = \begin{cases} i - j + 1 & \text{if $i = s_j \in S$} \\ 0 & \text{if $i \notin S$.} \end{cases} \end{equation} Also let $\gamma(S)^* = (\gamma_n, \dots, \gamma_1)$ be the {\em reverse skip sequence}. For example, if $n = 7$ and $S = \{2,3,6\}$ we have $\gamma(S) = (0,2,2,0,0,4,0)$ and $\gamma(S)^* = (0,4,0,0,2,2,0).$ \begin{theorem} (\cite[Thm. 2.2]{RW}) \label{RW-bijection} Let $1 \leq k \leq n$. The map $\sigma \mapsto \mathbf{code}(\sigma)$ is a bijection from ordered set partitions of $[n]$ with $k$ blocks to the family of nonnegative integer sequences $(c_1,\ldots,c_n)$ such that \begin{itemize} \item for all $1 \leq i \leq n$ we have $c_i < k$, \item for any subset $S \subseteq [n]$ with $|S| = n-k+1$, the componentwise inequality $\gamma(S)^* \leq (c_1,\ldots,c_n)$ fails to hold. \end{itemize} \end{theorem} For future reference we recall the inverse map introduced in the proof of the above theorem. This inverse map uses the following insertion procedure. For $(B_1 \ | \ \cdots \ | \ B_k)$ a sequence of $k$ (possibly empty) sets of positive integers we define the \emph{coinversion labels} as follows. First, label the empty sets $0,1,\ldots,j$ from right to left, and then label the nonempty sets $j+1,\ldots,k-1$ from left to right. For a sequence $(c_1,\ldots,c_n)$ satisfying the conditions in Theorem~\ref{RW-bijection}, we construct an ordered set partition as follows. Start with a sequence $(\emptyset \ | \ \cdots \ | \ \emptyset)$ of $k$ copies of the empty set, and for $i = 1,2,\ldots,n$ insert the number $i$ in the block with label $c_i$ under the coinversion labeling. 
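The insertion procedure just described can likewise be sketched in Python (again with a hypothetical function name; blocks are kept as lists, and the coinversion labels are recomputed before each insertion):

```python
def from_code(code, k):
    """Inverse of the coinversion code: rebuild an ordered set partition
    from (c_1, ..., c_n) via the insertion procedure described above."""
    blocks = [[] for _ in range(k)]
    for i, c in enumerate(code, start=1):
        # Coinversion labels: empty blocks get 0, 1, ..., j from right to
        # left, then nonempty blocks get j+1, ..., k-1 from left to right.
        empty = [b for b in range(k) if not blocks[b]]
        label = {b: lab for lab, b in enumerate(reversed(empty))}
        nonempty = [b for b in range(k) if blocks[b]]
        for offset, b in enumerate(nonempty):
            label[b] = len(empty) + offset
        # Insert i into the block carrying label c_i.
        target = next(b for b in range(k) if label[b] == c)
        blocks[target].append(i)
    return [tuple(B) for B in blocks]
```

Applied to the sequence $(2,1,2,0,2,0,2)$ with $k = 4$, this recovers the ordered set partition $(6 \mid 13 \mid 257 \mid 4)$.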
For example, let $n = 7$, $k = 4$ and consider the sequence $c = (2,1,2,0,2,0,2)$. The resulting ordered set partition will be $(6 \ | \ 13 \ | \ 257 \ | \ 4)$, as shown by the following process, starting with the labeled sequence of blocks $(\emptyset^3 \ | \ \emptyset^2 \ | \ \emptyset^1 \ | \ \emptyset^0)$. \[ \begin{tabular}{c|c|c} $i$ & $c_i$ & updated labeled sequence of blocks \\ \hline $1$ & $2$ & $(\emptyset^2 \ | \ 1^3 \ | \ \emptyset^1 \ | \ \emptyset^0)$ \\ $2$ & $1$ & $(\emptyset^1 \ | \ 1^2 \ | \ 2^3 \ | \ \emptyset^0)$ \\ $3$ & $2$ & $(\emptyset^1 \ | \ 13^2 \ | \ 2^3 \ | \ \emptyset^0)$ \\ $4$ & $0$ & $(\emptyset^0 \ | \ 13^1 \ | \ 2^2 \ | \ 4^3)$ \\ $5$ & $2$ & $(\emptyset^0 \ | \ 13^1 \ | \ 25^2 \ | \ 4^3)$ \\ $6$ & $0$ & $(6^0 \ | \ 13^1 \ | \ 25^2 \ | \ 4^3)$ \\ $7$ & $2$ & $(6^0 \ | \ 13^1 \ | \ 257^2 \ | \ 4^3)$ \end{tabular} \] In our algebraic analysis of $S_n$ we will need a version of this insertion which maps the family of ordered set partitions of $[n]$ with \emph{at least} $k$ blocks bijectively onto a certain collection $(c_1, \dots, c_n)$ of length $n$ `code words' over the nonnegative integers. In the bijection $\mathbf{code}$ of Theorem~\ref{RW-bijection}, the ordered set partition $(1|2|\cdots|m,m+1,\ldots,n)$ has code $(0,0,\ldots,0)$ for any number of blocks $m$, so we cannot simply take the union of these maps for $m \geq k$. We resolve the problem in the above paragraph by working with a different version of the coinversion code, which we will call the \emph{boosted coinversion code}. For an ordered set partition $\sigma = (B_1 \ | \ \cdots \ | \ B_k)$ we define $\overline{\mathbf{code}}(\sigma) = (c_1,\ldots,c_n)$ as follows. Suppose $1 \leq i \leq n$ and $i \in B_j$, then \[ c_i = \begin{cases} |\{\ell > j \ : \ \min(B_\ell) > i\}| & \text{if } i = \min(B_j); \\ |\{\ell > j \ : \ \min(B_\ell) > i\}| + j & \text{if } i \neq \min(B_j). 
\end{cases} \] Compared to the coinversion codes from before, the difference is that all the numbers corresponding to non-minimal elements of blocks are raised by one, and we say that these numbers are \emph{boosted}. The remainder of the section will be devoted to the proof of the following theorem. \begin{theorem}\label{thm-combinatorial-bijection} Let $1 \leq k \leq n$. The map $\sigma \mapsto \overline{\mathbf{code}}(\sigma)$ is a bijection from the set of ordered set partitions of $[n]$ with at least $k$ blocks to the family of nonnegative integer sequences $(c_1,\ldots,c_n)$ such that \begin{itemize} \item for all $1 \leq i \leq n$ we have $c_i < n$, \item for any subset $S \subseteq [n]$ with $|S| = n-k+1$, the componentwise inequality $\gamma(S)^* \leq (c_1,\ldots,c_n)$ fails to hold, \item for any $1 \leq i,d \leq n$ and any $T \subseteq [n-1]$ with $|T| = n-d$ and $\gamma(T)^* = (\gamma_{n-1},\ldots,\gamma_1)$, the componentwise inequality $(\gamma_{n-1},\ldots,\gamma_i,d,\gamma_{i-1},\ldots,\gamma_1) \leq (c_1,\ldots,c_n)$ fails to hold. \end{itemize} \end{theorem} The proof of the necessity of these conditions will be similar to the proof of \cite[Thm. 2.2]{RW}. For the sufficiency of the conditions we use an insertion map similar to that considered above. We begin by showing that both the number of blocks of an ordered set partition of $[n]$ and its classical coinversion code can be recovered from its boosted coinversion code. \begin{lemma} Let $\sigma$ be an ordered set partition of $[n]$. Given the boosted coinversion code $\overline{\mathbf{code}}(\sigma)$ one can recover the coinversion code $\mathbf{code}(\sigma)$, as well as the number of blocks of $\sigma$. \end{lemma} \begin{proof} Note that the second part is immediate once we have recovered $\mathbf{code}(\sigma)$, as the number of blocks will be equal to the number of unboosted numbers, which is easily found by comparing $\overline{\mathbf{code}}(\sigma)$ and $\mathbf{code}(\sigma)$. 
Given a boosted coinversion code $(c_1,\ldots,c_n)$ corresponding to an ordered set partition with $\ell$ blocks (where $\ell$ is unknown), we can think of creating the ordered set partition by following the same procedure as described before, with the only difference that the labels of all the nonempty blocks should be raised by one. No matter what, at some point we will fill the last empty block with some number $i$, which necessarily has $c_i = 0$. Additionally, from the boosting, it is clear that $c_j > 0$ for all $j > i$, hence we can recover $i$ by looking for the last entry in our sequence that equals $0$. Now, assume that we have identified that $i_1 < \ldots < i_j$ are minimal in their block and that all other numbers in $[i_1,n]$ are not minimal in their block. If $i_1 = 1$ we are done. Otherwise, it is clear that we must have at least $j+1$ blocks (as clearly $1$ will be minimal in its block). Now, let $i_0 < i_1$ be the largest number that is also minimal in its block. By the inverse map, this must correspond to some index with $c_{i_0} \leq j$, as at the time of inserting $i_0$ there are exactly $j+1$ empty blocks, labeled $0,1,\ldots,j$. Additionally, for any $i_0 < i < i_1$, at the time of insertion there will be exactly $j$ empty blocks, hence the coinversion label of $i$ will be at least $j+1$ (because of the boosting). Therefore, given $\overline{\mathbf{code}}(\sigma)$ we can recognize $i_0$ as the largest index $i_0 < i_1$ with $c_{i_0} \leq j$. By induction we are done. \end{proof} Explicitly, the procedure above is as follows. Given a sequence $(c_1,\ldots,c_n)$, trace the sequence from right to left, marking the first $0$, then the first $0$ or $1$, then the first $0$, $1$ or $2$, and so on. Now decrease all the unmarked numbers by $1$; this recovers the coinversion code. We call this procedure the \emph{unboosting} of a sequence $(c_1,\ldots,c_n)$. As an example, consider the boosted coinversion code $c = (2,4,2,4,0,0,1,4)$. 
Working from right to left we mark $c_6$ as it is the first $0$, then $c_5$ as it is at most $1$, then $c_3$ as it is the next number at most $2$ and finally $c_1$ as it is the next number that is at most $3$. Therefore, the number of blocks is equal to $4$ and the unboosted coinversion code is given by $(2,3,2,3,0,0,0,3)$. Applying the earlier bijection this coinversion code corresponds to $(37 \ | \ 124 \ | \ 6 \ | \ 58)$. We are now ready to prove the main result of this section. \begin{proof}[Proof of Theorem \ref{thm-combinatorial-bijection}] We first prove the necessity of the conditions. Let $\sigma$ be an ordered set partition of $[n]$ with at least $k$ blocks and let $\overline{\mathbf{code}}(\sigma) = (c_1, \dots, c_n)$ be its boosted coinversion code. \begin{itemize} \item If $i$ is minimal in its block, $c_i$ will be at most the number of blocks following the block containing $i$, which is at most $n-1$. If $i$ is not minimal we have at most $n-1$ blocks, and if $i \in B_j$ we have \[ c_i = j + |\{\ell > j \ : \ \min(B_\ell)>i\}| \leq j + |\{\ell > j \ : \ \text{the $\ell^{\text{th}}$ block exists}\}| \leq n-1. \] \item Suppose $S = \{n+1-t_{n-k+1},\ldots,n+1-t_1\}$ (with $t_1 < \ldots < t_{n-k+1}$) satisfies $\gamma(S)^* \leq (c_1,\ldots,c_n)$. We show that none of the numbers $\{t_1,\ldots,t_{n-k+1}\}$ is minimal in its block of $\sigma$, contradicting that $\sigma$ has at least $k$ blocks. If $t_{n-k+1}$ is minimal in its block, then \begin{align*} c_{t_{n-k+1}} &= |\{ \ell > t_{n-k+1} : \begin{smallmatrix} \ell \text{ is minimal in its block and} \\ \text{occurs to the right of } t_{n-k+1} \text{ in } \sigma \end{smallmatrix} \}| \\ &\leq |\{t_{n-k+1}+1,\ldots,n-1,n\}| = n - t_{n-k+1}. \end{align*} However, the term in $\gamma(S)^*$ in position $t_{n-k+1}$ equals $n-t_{n-k+1} + 1$, hence we conclude that $t_{n-k+1}$ is not minimal in its block. 
Now, if $t_{n-k}$ were minimal in its block, we would have \begin{align*} c_{t_{n-k}} &= |\{ \ell > t_{n-k} : \ell \text{ is minimal in its block and occurs to the right of } t_{n-k} \text{ in } \sigma \}| \\ &\leq |\{t_{n-k}+1,\ldots,n-1,n\}-\{t_{n-k+1}\}| = n - t_{n-k} - 1. \end{align*} But again, the term in $\gamma(S)^*$ in position $t_{n-k}$ equals $n - t_{n-k}$, which shows that $t_{n-k}$ cannot be minimal in its block either. An inductive argument now shows that none of $\{t_1,\ldots,t_{n-k+1}\}$ is minimal in its block. \item For $d = n$ this is equivalent to the fact that $c_i < n$ for all $i$, so assume $1 \leq d < n$. Assume for contradiction that $(\gamma_{n-1},\ldots,\gamma_{i},d,\gamma_{i-1},\ldots,\gamma_1) \leq (c_1,\ldots,c_n)$ where $(\gamma_{n-1}, \dots, \gamma_1) = \gamma(T)^*$ for some $T \subseteq [n-1]$ of size $|T| = n-d$. Since $c_{n+1-i} \geq d$, this implies that $\sigma$ has at least $d$ blocks. Let $T = \{i_1 < \ldots < i_t \leq n+1-i < i_{t+1} < \ldots < i_{t+s}\}$. By the same argument used in the previous bullet, we see that all $n+1-i_j$ with $j \leq t$ are not minimal in their block. Now we consider two cases. \begin{itemize} \item If $n+1-i$ is not minimal in its block either, we can continue the argument as in the previous case to show that none of $n+1-i_j$ is minimal in its block. In particular we have $1 + (n-d)$ elements that are not minimal in their respective blocks, contradicting the fact that $\sigma$ has at least $d$ blocks. \item Now suppose that $n+1-i$ is minimal in its block. Since $c_{n+1-i} = d$, this implies that among $\{n+2-i,\ldots,n\}$ at least $d$ numbers are also minimal in their respective blocks. In particular, there are at least $d$ numbers that are not of the form $n+1-j$ with $j \in T$. But this implies that $T$ has size at most $(n-1) - d < n-d$, which is a contradiction. \end{itemize} \end{itemize} Now, we show that these conditions are sufficient. 
Given a sequence $(c_1,\ldots,c_n)$ we can first unboost the sequence (as we can apply this procedure to every sequence of nonnegative integers) to determine how many blocks our intended ordered set partition must have. Given this extra information, we can basically run the same inverse map as before, with the exception that we should increase the label of every nonempty block by $1$. It now suffices to check that we don't run into any trouble by doing so. Our proof will go through the following steps. \begin{itemize} \item First we will show that the unboosting procedure concludes that there are at most $n-k$ boosted numbers, as this will ensure that the ordered set partition we aim for has at least $k$ blocks. \item Then we will inductively show that we can basically run the same inverse map as before. \begin{itemize} \item First we show that the conclusion of the unboosting is that $1$ is unboosted, ensuring we have enough blocks to insert $1$ as a minimal element in its block. \item After that we will show that if the first $j-1$ numbers have been placed, we can place $j$ following the appropriate procedure. This argument will depend on whether $j$ is supposed to be minimal in its block or not (something that we know from the unboosting procedure). \end{itemize} \end{itemize} We will now prove each of these steps. \begin{itemize} \item Assume that we have $t$ boosted numbers $c_{n+1-i_j}$ (with $i_1 < \ldots < i_t$) and assume that $t \geq n-k+1$. Let $S = \{i_1,\ldots,i_{n-k+1}\}$; then we claim that $(c_1,\ldots,c_n) \geq \gamma(S)^*$. If $i \not \in S$, we have $\gamma(S)^*_{n+1-i} = 0$, so $c_{n+1-i} \geq \gamma(S)^*_{n+1-i}$ indeed holds. Furthermore, for $i = i_j$, by assumption on $c_{n+1-i_j}$ there are $(i_j-j)$ unboosted numbers to the right of $n+1-i_j$. Therefore, since $c_{n+1-i_j}$ was boosted, we have $c_{n+1-i_j} \geq i_j-j+1 = \gamma(S)^*_{n+1-i_j}$, as desired. 
\item As mentioned before, we now show that we can run the inverse map without any issues. \begin{itemize} \item If $c_1 = 0$ it is clear that we can insert $1$, so assume $c_1 = d$ with $1 \leq d \leq n-1$. Our goal is to show that in the unboosting procedure we conclude that $1$ has to be minimal in its block. As $c_1 = d$ this happens precisely if the procedure shows that among $\{2,3,\ldots,n\}$ at least $d$ numbers were not boosted. For the sake of contradiction, assume that we have at least $n-d$ boosted numbers, and let the largest $n-d$ be $n+1-i_1 > n+1-i_2 > \ldots > n+1-i_{n-d}$. Let $T = \{i_1,\ldots,i_{n-d}\}$; then by a similar argument to before we have $(c_1,c_2,\ldots,c_n) \geq (d,\gamma_{n-1},\ldots,\gamma_1)$ where $(\gamma_{n-1},\ldots,\gamma_1) = \gamma(T)^*$. \item Assume that the inverse map successfully inserted all the numbers in $[j-1]$ (with $j \geq 2$) and that we now try to insert $j$ according to $c_j$. First assume that $c_j = t$ is unboosted. Since this is unboosted, there are still at least $t$ unboosted numbers among $\{c_{j+1},\ldots,c_n\}$. As so far only indices corresponding to unboosted numbers have been inserted in empty blocks, and the total number of blocks is the number of unboosted numbers, we have at least $t+1$ empty blocks at this point. As a result, there will be some empty block labeled with $t$, so we can insert $j$ into an empty block, as desired. Hence, assume that $c_j$ was boosted. Suppose that at the time we still have $t$ nonempty blocks; then by the unboosting procedure we know that $c_j \geq t+1$, so we can insert $j$ appropriately, unless $c_j$ is too big. In other words, the only thing that can go wrong is that there were $\ell$ unboosted numbers (hence $\ell$ blocks in the ordered set partition), but that $c_j \geq \ell+1$. Let $n+1-i_1 > \ldots > n+1-i_a > j > n+1-i_{a+1} > \ldots > n+1-i_{n-\ell-1}$ be all the boosted numbers. 
But then, for $T = \{i_1,\ldots,i_{n-\ell-1}\}$ of size $n - (\ell+1)$, with $\gamma(T)^* = (\gamma_{n-1},\ldots,\gamma_1)$, we have $(c_1,\ldots,c_n) \geq (\gamma_{n-1},\ldots,\gamma_{n-j+1},\ell+1,\gamma_{n-j},\ldots,\gamma_1)$, a contradiction. \qedhere \end{itemize} \end{itemize} \end{proof} \section{The algebraic quotient} \label{sec-algebra} Recall from the introduction that a word $w = w_1w_2\cdots w_n$ on the alphabet $\ZZ_{>0}$ is \emph{packed} if whenever $i+1$ appears, then so does $i$. It will be convenient for our inductive arguments to consider packed words in which every letter in some segment $1 \leq i \leq k$ must appear. To this end, we define \begin{equation} \WWW_{n,k} := \{ \text{length $n$ packed words $w = w_1 w_2 \dots w_n$ \,:\, the letters $1, 2, \dots, k$ appear in $w$} \}. \end{equation} Words in $\WWW_{n,k}$ are in bijection with ordered set partitions of $[n]$ with at least $k$ blocks. We have the further identifications $\WWW_{n,1} = \WWW_n$ and $\WWW_{n,n} = \symm_n.$ The symmetric group $\symm_n$ acts on $\WWW_{n,k}$ by the rule $\sigma \cdot (w_1 \dots w_n) := w_{\sigma(1)} \dots w_{\sigma(n)}$. The quotient rings $S_{n,k}$ of the following definition will give a graded refinement of this action. Their defining ideals $J_{n,k}$ contain the ideal $J_n$ defining the ring $S_n$ appearing in the introduction. \begin{definition} Let $J_{n,k} \subseteq \QQ[\xx_n]$ be the ideal \[ J_{n,k} := J_n + \langle e_n ,e_{n-1}, \ldots, e_{n-k+1} \rangle \] and let $S_{n,k} := \mathbb{Q}[\xx_n]/J_{n,k}$ be the corresponding quotient ring. \end{definition} Each of the quotients $S_{n,k}$ is a graded $\symm_n$-module. Their defining ideals are nested according to $J_n = J_{n,n} \subseteq J_{n,n-1} \subseteq \cdots \subseteq J_{n,1}$. We study $S_{n,k}$ by making use of a point locus $X_{n,k} \subseteq \QQ^n$ corresponding to $\WWW_{n,k}.$ Fix $n$ distinct rational numbers $\alpha_1, \dots, \alpha_n \in \QQ$. 
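For small $n$, the sets $\WWW_{n,k}$ are easy to enumerate by brute force, which is convenient for checking the counting claims above against ordered set partitions. A short Python sketch (with a hypothetical helper name):

```python
from itertools import product

def packed_words(n, k):
    """Brute-force enumeration of W_{n,k}: length-n packed words over the
    positive integers in which each of the letters 1, ..., k appears."""
    out = []
    for w in product(range(1, n + 1), repeat=n):
        letters = set(w)
        # packed: the letters used are exactly 1, ..., max(w);
        # membership in W_{n,k}: the letters 1, ..., k all appear
        if letters == set(range(1, max(w) + 1)) and set(range(1, k + 1)) <= letters:
            out.append(w)
    return out
```

For example, $|\WWW_{3,3}| = 3! = 6$, $|\WWW_{3,2}| = 12$, and $|\WWW_{3,1}| = 13$, matching the counts of ordered set partitions of $[3]$ with at least $3$, $2$, and $1$ blocks, respectively.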
For any packed word $w_1 \dots w_n \in \WWW_{n,k}$, we have a corresponding point $(\alpha_{w_1}, \dots, \alpha_{w_n}) \in \QQ^n$. We let $X_{n,k} \subseteq \QQ^n$ be the family of points corresponding to all packed words in $\WWW_{n,k}.$ The set $X_{n,k} \subseteq \QQ^n$ is closed under the coordinate-permuting action of $\symm_n$ and we have an identification $\QQ[\WWW_{n,k}] \cong \QQ[X_{n,k}]$. As explained in the introduction, we have isomorphisms of ungraded $\symm_n$-modules \begin{equation*} \QQ[\WWW_{n,k}] \cong \QQ[X_{n,k}] \cong \QQ[\xx_n]/\mathbf{I}(X_{n,k}) \cong \QQ[\xx_n]/\mathbf{T}(X_{n,k}). \end{equation*} It turns out that $\mathbf{T}(X_{n,k})$ coincides with $J_{n,k}$. \begin{theorem} \label{thm-ungraded-structure} For any $1 \leq k \leq n$, we have the ideal equality $J_{n,k} = \mathbf{T}(X_{n,k})$. Consequently, we have an isomorphism of ungraded $\symm_n$-modules $\QQ[\WWW_{n,k}] \cong S_{n,k}$. \end{theorem} \begin{proof} To show that $J_{n,k} \subseteq \mathbf{T}(X_{n,k})$, it suffices to show that every generator of $J_{n,k}$ arises as the highest degree component of some polynomial in $\mathbf{I}(X_{n,k}).$ Fix $1 \leq i \leq n$ and $1 \leq r \leq d$; we begin by showing that the generator $x_i^d e_{n-r}^{(i)}$ lies in $\mathbf{T}(X_{n,k}).$ Note that if $(x_1,\ldots,x_n) \in X_{n,k}$, we either have $x_i \in \{\alpha_1,\ldots,\alpha_d\}$, or for each $1 \leq j \leq d$ the number $\alpha_j$ must appear among $\{x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n\}$. 
We let $t$ be a new variable, and define the rational function \[ f(x_1,\ldots,x_n,t) := (x_i-\alpha_1) \cdots (x_i-\alpha_d) \cdot \frac{(1-tx_1)\cdots(1-tx_{i-1})(1-tx_{i+1})\cdots(1-tx_n)}{(1-t\alpha_1)\cdots(1-t\alpha_d)} \] and expanding this function in terms of the parameter $t$ yields \begin{align*} f(x_1,&\ldots,x_n,t) = \\ &(x_i-\alpha_1) \cdots (x_i-\alpha_d) \cdot \sum_{r \geq 0} \left(\sum_{a+b=r} (-1)^a e_a^{(i)} \cdot h_b(\alpha_1,\ldots,\alpha_d)\right) t^r. \end{align*} Specialization of $f(x_1,\ldots,x_n,t)$ at $(x_1,\ldots,x_n) = (\beta_1,\ldots,\beta_n)$ yields an element of $\QQ[[t]]$. We analyze this specialization when $(\beta_1,\ldots,\beta_n) \in X_{n,k}$. If $\beta_i \in \{\alpha_1,\ldots,\alpha_d\}$, then $f(\beta_1,\ldots,\beta_n,t) = 0$. Otherwise, $d$ of the factors in the numerator of $f$ will cancel with the $d$ factors in the denominator, so that $f(\beta_1,\ldots,\beta_n,t)$ is a polynomial of degree $(n-1)-d$ in $t$. Either way, the coefficient of $t^{n-r}$ in $f(x_1,\ldots,x_n,t)$ vanishes on $X_{n,k}$, so that \[ (x_i-\alpha_1) \cdots (x_i-\alpha_d) \cdot \left(\sum_{a+b=n-r} (-1)^a e_a^{(i)} \cdot h_b(\alpha_1,\ldots,\alpha_d)\right) \in \mathbf{I}(X_{n,k}) \] and taking the highest degree component gives \[ x_i^d \cdot (-1)^{n-r} e_{n-r}^{(i)} \in \mathbf{T}(X_{n,k}). \] The remaining generators $e_d$ (for $d > n-k$) are handled by a similar argument. We consider the rational function \begin{align*} g(x_1, \dots, x_n, t) &:= \frac{(1-t x_1)(1 - t x_2) \cdots (1 - t x_n)}{(1 - t \alpha_1)(1 - t\alpha_2) \cdots (1 - t \alpha_k) } \\ &= \sum_{r \geq 0} \left( \sum_{a + b = r} (-1)^a e_a \cdot h_b(\alpha_1, \dots, \alpha_k) \right) \cdot t^r. \end{align*} Evaluating $(x_1, \dots, x_n)$ at a point in $X_{n,k}$ forces the $k$ factors in the denominator to cancel with $k$ factors in the numerator, yielding a polynomial of degree $n-k$ in $t$. 
For any $d > n-k$, we conclude that \[ \sum_{a + b = d} (-1)^a e_a \cdot h_b(\alpha_1, \dots, \alpha_k) \in \mathbf{I}(X_{n,k}), \] which implies \[ e_d \in \mathbf{T}(X_{n,k}). \] This proves the containment $J_{n,k} \subseteq \mathbf{T}(X_{n,k}),$ so that \begin{equation} \label{inequality-chain} \dim \QQ[\xx_n]/J_{n,k} \geq \dim \QQ[\xx_n]/\mathbf{T}(X_{n,k}) = |\WWW_{n,k}|. \end{equation} In light of Equation~\eqref{inequality-chain}, to prove the desired equality $J_{n,k} = \mathbf{T}(X_{n,k})$ it is enough to show that $\dim(\QQ[\xx_n]/J_{n,k}) \leq |\WWW_{n,k}|$. This is a Gr\"obner theory argument. Since the elementary symmetric polynomials $e_n, e_{n-1}, \dots, e_{n-k+1}$ in the full variable set $\{x_1, \dots, x_n\}$ lie in $J_{n,k}$, \cite[Lem. 3.4]{HRS} implies that for any subset $S \subseteq [n]$ with $|S| = n-k+1$, the {\em Demazure character} $\kappa_{\gamma(S)}$ corresponding to the length $n$ sequence $\gamma(S)$ also lies in $J_{n,k}$. The lexicographical leading monomial of $\kappa_{\gamma(S)}$ has exponent sequence $\gamma(S)^*$. Similarly, for $1 \leq i, d \leq n$, since \begin{equation*} x_i^d \cdot e_{n-d}^{(i)}, \dots, x_i^d \cdot e_{n-1}^{(i)} \in J_{n,k}, \end{equation*} for any $T \subseteq [n-1]$ of size $|T| = n-d$, \cite[Lem. 3.4]{HRS} again implies that \begin{equation*} x_i^d \cdot \kappa_{\gamma(T)}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) \in J_{n,k}. \end{equation*} Writing $\gamma(T)^* = (\gamma_{n-1}, \dots, \gamma_1)$, the lexicographical leading monomial of $x_i^d \cdot \kappa_{\gamma(T)}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n)$ has exponent sequence $(\gamma_{n-1}, \dots, \gamma_i, d, \gamma_{i-1}, \dots, \gamma_1)$. It follows that \begin{quote} the exponent sequence $(c_1, \dots, c_n)$ of any member of the standard monomial basis of $\QQ[\xx_n]/J_{n,k}$ satisfies the conditions in the statement of Theorem~\ref{thm-combinatorial-bijection}. 
\end{quote} Theorem~\ref{thm-combinatorial-bijection} implies the desired dimension bound $\dim \QQ[\xx_n]/J_{n,k} \leq |\WWW_{n,k}|$, completing the proof. \end{proof} The standard monomial basis of $S_{n,k}$ is governed by coinversion codes. \begin{corollary} The standard monomial basis of $S_{n,k}$ with respect to the lexicographical term ordering consists of the monomials $x_1^{c_1} \cdots x_n^{c_n}$ where $(c_1, \dots, c_n) = \overline{\mathbf{code}}(\sigma)$ is the boosted coinversion code of some ordered set partition $\sigma$ of $[n]$ with at least $k$ blocks. \end{corollary} \begin{proof} This follows from Theorem~\ref{thm-combinatorial-bijection} and the last paragraph of the above proof. \end{proof} Our next goal is to derive the graded $\symm_n$-module structure of the quotients $S_{n,k}$. This result is stated most cleanly in terms of the following rings defined by Haglund, Rhoades, and Shimozono \cite{HRS}. \begin{definition} \label{rnk-quotient-definition} Let $1 \leq k \leq n$ be integers. Define the ideal $I_{n,k} \subseteq \QQ[\xx_n]$ by \[ I_{n,k} := \langle x_1^k, x_2^k, \ldots, x_n^k, e_n, e_{n-1}, \ldots, e_{n-k+1} \rangle \] and let $R_{n,k} := \QQ[\xx_n]/I_{n,k}$ be the corresponding quotient ring. \end{definition} We can now state the graded $\symm_n$-module structure of $S_{n,k}$ in terms of the graded $\symm_n$-module structure of these $R_{n,k}$, which have been extensively studied in \cite{HRS}. \begin{theorem} \label{thm-graded-module-refined} As graded $\symm_n$-modules we have \[ S_{n,k} \cong R_{n,n}\langle 0 \rangle \oplus R_{n,n-1}\langle -1 \rangle \oplus \cdots \oplus R_{n,k}\langle -n+k \rangle. \] \end{theorem} We are now ready to prove Theorem \ref{thm-graded-module-refined}. \begin{proof} We proceed by descending induction on $k$. In the case $k = n$, we claim that $J_{n,n} = I_{n,n} = \langle e_1, \dots, e_n \rangle$ is the classical invariant ideal so that $S_{n,n} = R_{n,n}$. 
Indeed, each elementary symmetric polynomial $e_d$ appears as a generator of $J_{n,n}$. On the other hand, Theorem~\ref{thm-ungraded-structure} implies that $\dim S_{n,n} = n! = \dim R_{n,n}$. This finishes the proof in the case $k = n$. Now suppose $1 \leq k \leq n-1$. We exhibit a short exact sequence of $\symm_n$-modules \begin{equation} \label{exact-sequence} 0 \rightarrow R_{n,k} \overset{\varphi}{\rightarrow} S_{n,k} \overset{\pi}{\rightarrow} S_{n,k+1} \rightarrow 0, \end{equation} where $\varphi$ is homogeneous of degree $n-k$ and $\pi$ is homogeneous of degree $0$. The exactness of this sequence implies \[ S_{n,k} \cong S_{n,k+1} \oplus R_{n,k}\langle -n+k \rangle, \] proving the theorem by induction. Since every generator of $J_{n,k+1}$ is also a generator of $J_{n,k}$, we may take $\pi: S_{n,k} \twoheadrightarrow S_{n,k+1}$ to be the canonical projection. We have a map \begin{equation} \widetilde{\varphi}: \QQ[\xx_n] \rightarrow S_{n,k} \end{equation} given by multiplication by $e_{n-k}$ followed by projection onto $S_{n,k}$. We verify that $\widetilde{\varphi}$ descends to a map $\varphi: R_{n,k} \rightarrow S_{n,k}$ by showing that $\widetilde{\varphi}$ sends every generator of $I_{n,k}$ to zero. Indeed, we have $\widetilde{\varphi}(e_j(x_1, \dots, x_n)) = 0$ for any $j > n-k$ since $e_j(x_1, \dots, x_n)$ is a generator of $J_{n,k}$. Furthermore, for $1 \leq i \leq n$ we have \[ \widetilde{\varphi}(x_i^k) = x_i^k e_{n-k} = x_i^k e_{n-k}^{(i)} + x_i^{k+1} e_{n-k-1}^{(i)} = 0, \] where the final equality follows because both $x_i^{k} e_{n-k}^{(i)}$ and $x_i^{k+1} e_{n-k-1}^{(i)}$ are generators of $J_{n,k}$. We conclude that $\widetilde{\varphi}$ descends to a map $\varphi: R_{n,k} \rightarrow S_{n,k}$ of $\symm_n$-modules which is homogeneous of degree $n-k$. It is clear that $\varphi$ surjects onto the kernel of $\pi$. 
The exactness of the sequence \eqref{exact-sequence} follows from the dimensional equality \[ \dim(S_{n,k}) = |\WWW_{n,k}| = |\WWW_{n,k+1}| + |\OP_{n,k}| = \dim(S_{n,k+1}) + \dim(R_{n,k}). \qedhere \] \end{proof} The graded Frobenius image of $S_{n,k}$ is most naturally stated in terms of the $C$-functions defined in Equation~\eqref{c-function-definition}. \begin{corollary} For any $1 \leq k \leq n$, the graded Frobenius image of $S_{n,k}$ is given by \begin{equation} \grFrob(S_{n,k}; q) = \sum_{j = k}^n q^{n-j} \cdot (\omega \circ \mathrm{rev}_q) C_{n,j}(\xx;q). \end{equation} \end{corollary} \begin{proof} Apply \cite[Thm. 6.11]{HRS} and Theorem~\ref{thm-graded-module-refined}. \end{proof} \section{Conclusion} \label{sec-conclusion} In this paper we have described a quotient $S_n$ of $\QQ[\xx_n]$ whose algebraic properties are governed by the combinatorics of packed words in $\WWW_n$. The ring $S_n$ has provable algebraic properties which are similar to conjectural properties of the superspace coinvariant ring $\Omega_n / \langle (\Omega_n)^{\symm_n}_+ \rangle$. With an eye towards proving these conjectures, it would be desirable to have a more direct connection between the packed word quotient $S_n$ and the superspace coinvariant ring. Generalized coinvariant rings related to delta operators have connections to cohomology theory. In the context of the rings $R_{n,k}$ of Definition~\ref{rnk-quotient-definition}, Pawlowski and Rhoades \cite{PR} showed that $H^{\bullet}(X_{n,k};\QQ) = R_{n,k}$, where $X_{n,k}$ is the variety of $n$-tuples $(\ell_1, \dots, \ell_n)$ of 1-dimensional subspaces of $\CC^k$ which satisfy $\ell_1 + \cdots + \ell_n = \CC^k$. Rhoades and Wilson \cite{RW} refined this result by considering the open subvariety $X^{(r)}_{n,k}$ obtained by requiring that the sum $\ell_1 + \cdots + \ell_r$ of the first $r$ lines be direct. In light of \cite{PR, RW}, it is natural to ask for a geometric perspective on the ring $S_n$ appearing in this paper. 
\begin{problem} \label{geometry-problem} Find a variety $Y_n$ whose rational cohomology ring $H^{\bullet}(Y_n; \QQ)$ is isomorphic to $S_n$. \end{problem} The results in \cite{PR} suggest that $Y_n$ could be taken to be an open subvariety of the $n$-fold Cartesian product $(\mathbb{P}^{k-1})^n$ of $(k-1)$-dimensional projective space with itself, with the property that the cohomology map $i^*: H^{\bullet}((\mathbb{P}^{k-1})^n;\QQ) \rightarrow H^{\bullet}(Y_n;\QQ)$ induced by the inclusion $i: Y_n \hookrightarrow (\mathbb{P}^{k-1})^n$ is surjective. \section{Acknowledgements} B. Rhoades was partially supported by NSF Grant DMS-1953781. The authors are grateful to Christopher O'Neill for helpful discussions about this project.
- The Outside Story Why BTS Getting #1 On the Hot100 is Important Written by Jessica Voong | Instagram | Twitter Hundreds of thousands, maybe millions, of fans, including myself, were anxiously awaiting the results of the Billboard Hot100. We all wanted to know if our streaming, buying, and radio requests finally worked in BTS' favor. I probably have far too many copies of Dynamite; I could give each house in my neighborhood a copy. This morning at 10:42AM PST, the Billboard Charts released the results of the top songs in the U.S. from the previous week. That's right folks, your eyes are not deceiving you. BTS grabbed the number one spot on the Billboard Hot100, a first for the all-Korean act. Their previous title track, 'ON,' grabbed the number four spot earlier this year, with little to no radio airplay. Now you're probably wondering, "Okay, they grabbed the #1 spot. Why does it matter?" They are only the second Asian act ever to achieve this. The last to reach number one was Kyu Sakamoto in 1963, with the song "Sukiyaki." Within the western music industry, it's rare to see Asian/Asian American musicians rise to become global phenomena. BTS has been one of the only acts to take the western music industry by storm, and many industry insiders and journalists still try to debunk the reasons why they're so successful, especially for an act that sings primarily in Korean. "But Jessica...this song is completely in English." Yep, it is. It's their first song sung entirely in English. Some may say, "Well, they're selling out if it's all in English," and to be honest (just my opinion), sometimes you have to play the game because the industry is unjust, frayed, and broken. this is why bts fans spend so much time paying attention to/learning about industry metrics and awards shows—things that we know are deeply flawed and sometimes poor measures of achievement. because gradual changes in these dinosaur-like institutions add up to something bigger.
RM (BTS' frontman and leader) said in USA Today, "Many things have changed and during the process of making our album which we will release later this year, we just kinda met this song as destiny. When we first listened to the demo and lyrics and the vibes and everything was so perfect. We thought “Why not keep it this way [In full English]?” Some have said things have changed and this is a new challenge for us as well. We’re giving a shot." And there's nothing wrong with wanting to challenge yourself and trying something new; that's how artists grow. I mean, I'd love to see a western artist try to sing in Korean. Some of you might be wondering, "If BTS is one of the most popular groups worldwide, then why aren't they on the radio?" Brian Byrne's article, "Radio, Why Won't You Play BTS?" states, "The industry often fears that non-English songs will lead listeners to tune out. “A radio programmer wants songs that the listener is going to sing along or rap along with,” said Chris Molanphy, a chart analyst, pop critic, and host of the Slate podcast “Hit Parade.” “They want engagement from the listener. Call it xenophobia, that's certainly part of it. But when you look at the scant history of non-English language hits over the years, the fact that BTS would be facing this challenge is not all that surprising.”" Dynamite has been one of the biggest pushes the ARMY (BTS's fandom) has ever made for a single in terms of radio airplay, and if you look at BTS' previous tracks, ON and Boy with Luv, the airplay for those tracks was far less than that of other western artists. You see friends, they aren't just making history for themselves or the K-pop industry but for Asians/Asian Americans. As PoC, we constantly have to prove ourselves and our worth, and work twice as hard to even get our foot in the door. BTS has proven time and time again how hard they work and how devoted they are to their music, but when will they finally get the recognition that they really deserve?
*cough* I'm looking at you, Grammys *cough* Don't get me wrong, I honestly think the Recording Academy loves BTS, the only thing is, I don't know if the Academy members will take the time to truly dive into their music and see why the rest of the world loves them.
02 January 2018 A SysAdmin’s Essential Guide to Linux Workstation Security ebook, Free Ebooks, linux, security This document is aimed at teams of systems administrators who use Linux workstations to access and manage your project’s IT infrastructure. If your systems administrators are remote workers, you may use this set of guidelines to help ensure that their workstations pass core security requirements, in order to reduce the risk that they become attack vectors against the rest of your IT infrastructure. We’re sharing this document as a way to bring the benefits of open-source collaboration to IT policy documentation. If you find it useful, we hope you’ll contribute to its development by making a fork for your own organization and sharing your improvements. Download here
To get a resolution to Darrelle Revis’ five-plus-week holdout, the New York Jets signed the all-pro cornerback to a new deal on September 6th, despite Revis having three years to go on his rookie contract, which wasn’t exactly a bad deal for the player. The Jets ponied up a 4 yr/$46M contract to their disgruntled star, with a whopping $32M guaranteed, despite the player having absolutely no negotiating leverage beyond his greatness, and his agents’ ability to scare the Jets by scaring the fan base that the league’s only true lockdown corner would not be in uniform come game 1 against Baltimore. I am always pro-player in money disputes with management, in football and hockey especially. In those sports, careers can go by in a blink, and management, in general, would have no qualms about turning the tables on a player who has given heart and soul to the team, once his play starts to decline. In football, where the money is not all guaranteed, teams are the most cold-hearted when it comes to cutting ties. Football is the only sport where players actually give back money to their teams, or renegotiate their contracts on the team’s terms, if they feel their team may cut them and force them to uproot their families if the player does not comply. Good for Darrelle. He got his money. Did he exploit the Jets? Absolutely. Did Nick Mangold need to do so to get his new $57M contract? No. Will David Harris need to pull a lengthy holdout to get paid? That’s highly doubtful. Woody Johnson has 27 billion dollars. Everybody’s going to get paid. Again, I favor the players. But when I heard that Darrelle Revis was stopped on Friday doing 80 MPH in a 40 MPH zone, and the excuse he gave, the whole Revis situation, in my mind, had to be revisited, with no praise forthcoming in Revis’ direction.
Updated: Friday, 15 Oct 2010, 6:13 PM EDT Published: Friday, 15 Oct 2010, 6:13 PM EDT (NewsCore) – New York Jets star cornerback Darrelle Revis was ticketed for speeding and careless driving by police in New Jersey, the Newark Star-Ledger reported Friday. Revis was reportedly pulled over Thursday in Livingston, northern New Jersey, as he headed to nearby Florham Park, where the Jets’ facilities are located. He was driving 81 mph in a 40 mph zone, according to the Livingston Municipal Court. Livingston Police Capt. Jeffrey Payne described the traffic stop as “uneventful, normal.” Since he was traveling more than 40 mph over the speed limit, Revis will have to appear in court Nov. 4 before resolving the issue. “I’m not a speedster; I was just trying to get here to work,” said 25-year-old Revis, who is listed as questionable for Sunday’s game at Denver because of a hamstring injury. Sorry if I am especially sensitive to this issue, but my wife and daughter were in a serious accident at the hands of a careless, reckless, speeding driver. And Revis, trying to pull out a common man’s excuse, that he was late to work, is greatly deluded. First off, Revis was 36 days late to work. He doesn’t really get to be late to work, after that, after having $32M in guaranteed money bestowed on him. No one ever said these guys were smart. Obviously, Revis, as smart as some may feel he is, a defensive coordinator on the field and whatnot, is football smart only. We are about 3 weeks removed from Rex Ryan publicly calling for the Jets to stop “being that team” and stop embarrassing their owner with knucklehead decision making away from the field. Should he have benched Braylon Edwards? Never. And he was right on that, and so were we. We needed him to win, we need to win, therefore, the player must do what he is paid to do: play. The courts dispense justice, not the Jets.
I hope the Jets play Revis, as I am sick of hearing about his hamstring, which he hurt because he did not attend training camp. And I hope he actually contributes this week to their passing defense. Last week, the Jets hid Revis from Randy Moss, and Revis still got burned by Percy Harvin, and on one play, was tossed to the turf by Harvin like a rag doll. Revis might be the best Jet, when healthy, but he hasn’t been healthy, and that can be squarely attributed to the player’s greed. So Revis? Get to work early. You are paid well enough. And do everything right, the way Braylon Edwards has done since getting in hot water. Because if it’s not about your football greatness and the glory you bring to the New York Jets, then we are all sick of hearing about you. Tough game for the Jets today in Denver. They suck on the west coast, historically. J-E-T-S!!!!!!
TITLE: Determine whether $24x^5-30x^4+5=0$ is solvable by radicals over $\mathbb{Q}$ QUESTION [4 upvotes]: Determine whether $24x^5-30x^4+5=0$ is solvable by radicals over $\mathbb{Q}$. My try: There is a theorem that a polynomial can be solved by radicals if and only if its Galois group is a solvable group. Here the polynomial is irreducible over $\mathbb{Q}$ by Eisenstein's criterion. If $\beta$ is a root of the polynomial, then what is $\mathrm{Gal}(\mathbb{Q}(\beta)/\mathbb{Q})$? REPLY [2 votes]: I learnt this technique recently on MSE. It is based on a general theorem of Galois: Theorem: If $p(x)$ is an irreducible polynomial of prime degree with rational coefficients, then it is solvable by radicals if and only if all the roots of this polynomial can be expressed as rational functions of any two of its roots. This has a nice corollary: if an irreducible polynomial of prime degree is solvable by radicals and has two real roots, then all the other roots, being rational functions of these roots, must also be real. So we have Corollary: If $p(x)$ is an irreducible polynomial of prime degree with rational coefficients and it is solvable by radicals, then either it has exactly one real root or all of its roots are real. This handles many polynomials of prime degree, and your example fits here: it has exactly three real roots (its derivative $120x^3(x-1)$ has only two real zeros, so there are at most three, and sign changes show there are exactly three), and thus it is not solvable by radicals. The tough case is when such a polynomial has only one real root, since it may still fail to be solvable by radicals (for example, $x^5-x-16$).
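The count of real roots used in the answer is easy to verify numerically: $f'(x) = 120x^4 - 120x^3 = 120x^3(x-1)$ vanishes only at $x = 0$ and $x = 1$, and $f$ changes sign on $(-1, 0)$, $(0.5, 1)$, and $(1, 2)$. A small bisection sketch, offered as an illustrative check only (the brackets are read off from the sign table, not from the original answer):

```python
def f(x):
    """The quintic from the question: 24x^5 - 30x^4 + 5."""
    return 24 * x**5 - 30 * x**4 + 5

def bisect(a, b, tol=1e-12):
    """Find a root of f in [a, b], assuming f changes sign on the bracket."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# f' = 120 x^3 (x - 1) has zeros only at 0 and 1, so f has at most three
# real roots; each of these brackets contains a sign change of f.
roots = [bisect(*br) for br in [(-1, 0), (0.5, 1), (1, 2)]]
assert all(abs(f(r)) < 1e-6 for r in roots)
```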
High hopes are placed on the intensification of cattle ranching to reconcile agriculture and forest conservation in Latin America. A new study published in Sustainability by GLP Member Erasmus zu Ermgassen and colleagues looks at six on-the-ground efforts in the Brazilian Amazon to make this a reality. The team surveyed six initiatives which have used a range of technologies (rotational grazing, legume-grass mixes, agroforestry) to improve the productivity of beef and dairy production by 30–490% on >500,000 ha of pasture, while supporting compliance with the Brazilian Forest Code (Figure 1). High-productivity cattle ranching requires some initial investment (US$ 410–2180/ha), with average pay-back times of 2.5–8.5 years. Despite these promising results, intensification was not profitable under all conditions, and several barriers to the sustainable development of the cattle sector exist. The paper therefore sets out three key conditions which are required to mainstream sustainable cattle ranching in the Amazon: (1) Large-scale knowledge transfer—long-term funding and support is required for farmer-centered agricultural extension services, which increase awareness of high-yielding technologies and support small- and large-holders alike to adopt appropriate farming practices. (2) Financial support for sustainable ranching—the field data suggest that high-yielding cattle ranching requires higher up-front investment than the figures (US$ 22–609/ha) used to estimate the cost of implementing Brazil's commitment to the United Nations Framework Convention on Climate Change (which includes efforts to reduce deforestation and increase cattle productivity through the restoration of 15 million hectares of degraded pasture). Rural credit lines should help farmers not only increase agricultural production, but also meet the costs of Forest Code compliance.
Market signals also matter: price premiums for good agricultural practices would encourage uptake. (3) Increased transparency in cattle supply chains—efforts by some slaughterhouses to monitor direct suppliers are a step in the right direction, but do not go far enough. All slaughterhouses should monitor both indirect and direct cattle suppliers. Figure 1. (a) High-yielding cattle pasture (right) on a Novo Campo Program farm, one month after replanting, compared with conventional, unreformed pasture (left); (b) Stocking rate in intensified pasture plots for the period January 2013–September 2014. The grey dashed line represents the mean stocking rate for farms in the region.
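As a rough illustration of what the study's reported ranges imply (my own arithmetic, not a figure from the paper, and the pairing of investment with payback time is an assumption): a simple undiscounted payback model divides the up-front investment by the annual net benefit of intensification, so the reported US$ 410–2180/ha investments and 2.5–8.5 year paybacks imply annual net benefits on the order of tens to hundreds of dollars per hectare:

```python
# Simple (undiscounted) payback model: payback_years = cost / annual_benefit,
# so annual_benefit = cost / payback_years. Pairings below are illustrative.
investment = {"low": 410.0, "high": 2180.0}  # US$/ha, from the study
payback = {"slow": 8.5, "fast": 2.5}         # years, from the study

def annual_benefit(cost, years):
    """Implied annual net benefit (US$/ha/yr) under simple payback."""
    return cost / years

low_end = annual_benefit(investment["low"], payback["slow"])    # ~US$48/ha/yr
high_end = annual_benefit(investment["high"], payback["fast"])  # ~US$872/ha/yr
```

This ignores discounting and year-to-year variation, but it gives a feel for the scale of returns behind the headline numbers.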
Hudson Happenings Hudson River Presbytery In This Issue From Your Interim General Presbyter Presbytery News & Events Congregational News and Events Regional and National News Regional and National Events Take Action Resources for Education, Justice, Liturgy and More! Seeking and Finding Employment Opportunities Share Hudson Happenings March 29, 2018 Dear Friends, In "L.A. Prayer," Francisco X. Alarcón reflects on the 1992 L.A. riots that followed the police beating and tasering of Rodney King. The fulcrum of the poem is built from these two stanzas: the more we run the more we burn o god show us the way lead us As we gather to remember and hear Scripture's testimony of Jesus' final days -- the meal, the betrayal, the trial, the crucifixion, and the empty tomb -- we have the opportunity to contemplate our own choices -- how we "are" in the world; from what do we hide? for what do we stand up? and how can our lives contribute to the new life God intends for all? This poem is a song of anxiety and hope in the vocative mood. A lower-case god pointing to a path forward that must be discerned and tried. o god show us the way. The next issue of Hudson Happenings will be published on Thursday, April 19. Please email your submissions to noelle@hudrivpres.org by Tuesday, April 17. Peace, Noelle From Your Interim General Presbyter Dear Friends, HRU (Hudson River University) was a huge success as judged by the numbers and evaluations! Over 150 individuals - members, deacons, ruling elders and pastors from nearly all of our 79 congregations - gathered at Stony Point on March 17 to learn, grow and connect. We gathered a faculty of 18 from around the presbytery and beyond, sharing their knowledge and experience with others. There was a great deal of energy and excitement when we started the day off with prayer, gathered together for lunch and throughout the five classes offered each session.
I want to thank all those who came and especially the faculty for your preparation and teaching and your willingness to share your skills and abilities, your knowledge and experience! [Read more] Presbytery News & Events Installation for the Rev. Abbie Huff, April 14 You are cordially invited to the Service of Installation for The Reverend Abbie Huff as the new pastor of Germonds Presbyterian Church Saturday, April 14th 10:30 a.m. Germonds Presbyterian Church 39 Germonds Road, New City, NY 10956 Light fare will follow in Jensen Hall. We hope you can make it. Come celebrate with us! No RSVP necessary. Walkers from FPC Washingtonville Habitat Walk for Housing, April 22 PresbyBuild invites you to the Walk for Housing on Sunday, April 22. Registration is at 12:30PM and the Walk begins at 1:30PM from Washington's Headquarters, 84 Liberty St, Newburgh, NY. The attached forms for Teams and Walkers provide information about forming Walker Teams and fundraising as we, a Resurrection People, put God's love into action by building houses, community, hope ... and joy. Questions? Contact Deke Spierling dspierling@hvc.rr.com. General information about the walk may be found here. You've Got Talent! Shine for A Good Cause, May 19 The 12th Annual PresbyBuild Talent Show in support of Habitat for Humanity of Greater Newburgh will be held on Saturday afternoon, May 19th at First Presbyterian Church of Montgomery in Montgomery, NY. The show will run approximately from 3-4:30pm. Refreshments will be at 2pm. We are seeking young and old, new and past performers. Many members of your congregation have donated their talents to this fundraising cause in the past and we would like to invite you and your congregation back so we will once again showcase a great afternoon of talent. We ask that our PresbyBuild churches and friends submit the name of an act, which will represent their church or themselves. Please send the information by April 1st. Learn more!
Solarize Our Congregation is a new program that covers eight counties including Westchester, Rockland, Putnam, Dutchess, Columbia, Orange, Sullivan, and Ulster. This is a result of forward-thinking efforts by the HRP Green Partnership to make a difference. It's a new twist on the Solarize model. With this program, the Solarize opportunity is open not just to the 79 churches of the Hudson River Presbytery but to all their members and friends as well--homeowners and commercial property owners. HRP Green is working with two solar providers, NYS Solar Farm for Columbia, Orange, Ulster, Putnam, Dutchess and Sullivan counties and Sunrise Solar Solutions for Westchester and Rockland, to answer your questions and provide solar installations. If you are interested in learning if your church, residence or business would be a good candidate for solar, you can schedule your own personal solar feasibility assessment at a church information session (see below). Light snacks will be provided. If you'd like more information on solarizing your congregation but can't attend an in-person session, you can sign up at or contact kathydean64@gmail.com. In addition, the Solar Installers will make a $240 donation to a church for each residential or commercial installation by a member/friend/contact of the church. The donations will be processed at the end of the campaign.
Important Deadlines: Residential customer contracts must be signed by June 4, 2018 Churches and commercial customer contracts must be signed by October 4, 2018 Information Sessions for Westchester and Rockland Counties -- all with Sunrise Solar April 22 at 11:15am at Bedford Presbyterian Church April 30 at 7:30pm at New Hempstead Presbyterian Church Information Sessions for Columbia, Orange, Ulster, Putnam, Dutchess and Sullivan counties -- all with New York State Solar Farm March 18 at 11:30am at First Presbyterian Church of Beacon April 9 at 7pm at First Presbyterian Church of Chester April 15 at 11:30am at First Presbyterian Church of Port Jervis April 23 at 7pm at First Presbyterian Church of Marlboro April 29 at 11:45am at Calvary Presbyterian Church of Newburgh. Attention Undergraduate Students and Seminarians! Wurffel-Sills Scholarship and Interest-Free Loan, Deadline April 1 Learn more about the 2018-2019 Wurffel-Sills Scholarship and Interest-Free Loan Program Application. The application is open to all members of any Presbyterian Church within the Synod of the Northeast who will be attending school beginning in the Fall of 2018 for either undergraduate or seminary studies only. The deadline to apply is April 1, 2018. New applicant's form. Students who are reapplying please use this form. The forms contain additional information on the scholarship and loan. Please share this information with all of your churches, members, and anyone you know of who could use a little financial assistance with their higher educational goals. If you have questions, please contact Stacey Galloway at the synod office 315-446-5990 or email at Stacy.Galloway@synodne.org. Congregational News and Events Twice-Blessed Thrift Shop Ribbon Cutting, April 7 The First Presbyterian Church of Wappingers Falls will celebrate the Grand Re-opening of the Twice Blessed Thrift Shop on Saturday, April 7, 2018 from 9:00AM - 1:00PM.
Come for the Ribbon Cutting Ceremony at 9 AM, then stay to shop the store's inventory of clothing for the whole family, housewares and jewelry - for bargain prices! Light refreshments will be served. All are welcome! The Thrift Shop will be open thereafter 9 AM - 1 PM on Fridays and Saturdays. The Thrift Shop is located at FPC Wappingers Falls, 2568 South Ave, Wappingers Falls. Learn more about the beautiful story of the thrift shop and how the church and community volunteers helped the store transition to emergency housing and then re-open in the church. It's a great example of community-based mission in action! Talent Show to Benefit Youth Economic Group of RMM, April 7 Hortonville Presbyterian Church will host their 32nd annual Talent Show on Saturday, April 7 at 7:00pm at the church on CR 131 in Hortonville. A donation of $5 benefits the YEG (Youth Economic Group) of RMM. For more information, including signing up to perform, call Jane Orcutt at 845-887-4346. Download the flyer and share widely! Roscoe Rummage Sale, April 18-21, 2018 Roscoe Presbyterian Church is holding its Semi-Annual Rummage Sale April 18 to April 21, 2018 at the church, located on Old Route 17 at the intersection of County Road 179 and Stewart Ave. in Roscoe. The sale will be open Wed. April 18: 9:30AM - 6:30PM Thurs/Friday April 19/20: 9:30AM - 5:00PM Saturday April 21: 9:30AM - 1:00PM For information or questions contact April McArthur - (845) 292-4974 or Nancy Nissen - (607) 498-5144. Five Wishes Living Will Workshop & Community Meal, April 21 Do you want your dying to be consistent with your living, a reflection of who you are? Having a living will is one way to help make it so. Please join us at Pleasant Plains Presbyterian Church on Saturday, April 21, 2018, from 10:00am-2:00pm for a workshop on the Five Wishes living will document.
Five Wishes is unique among living wills, in that it speaks to the whole person, their emotional, spiritual and personal needs, as well as medical and legal issues. The workshop will guide participants through completion of the Five Wishes document, and offer advice about how to have those often difficult conversations with families and physicians. The event will finish with a community meal to celebrate everyone's accomplishment. Thanks to a grant from the HRP Challenge to Change Fund, the event is free; space is limited, however, so registration is required. Please contact Church Coordinator Alayne at 845-889-4019 or pleasantpchurch@optimum.net to sign up. Pleasant Plains Presbyterian is located at the corner of Hollow and Fiddlers Bridge Roads in Staatsburg. What better way to "live resurrection" than to pull life-giving and life-affirming moments out of our final days? We hope to see you there. Coffee House to Support RMM, April 21 Annual Coffee House in support of Rural & Migrant Ministry is to take place at Hitchcock Presbyterian Church on Saturday, April 21 from 7:00-9:00PM. Also supported by the Church of St. James the Less (Episcopal) in Scarsdale. Come for a fun evening to hear talented musicians and some surprises. All proceeds will go to RMM to support programs for the young people in the Youth Arts Group (YAG) and the Youth Economic Group (YEG). Tickets are $20 for adults and $15 for seniors and students. For more info contact Rene Thiel at 914-548-2163 Regional and National News Presbyterians Join Hundreds of Thousands in the March for Our Lives Across the country and the world Saturday, hundreds of thousands of people took part in rallies and demonstrations against gun violence. The March for Our Lives was organized by students in various communities. And Hudson River Presbytery members were among them! Check out this PNS story that features HRP's participation and an interview with Rob Trawick. 
Stated Clerk Files Amicus Brief Supporting DACA The Presbyterian Church (U.S.A.) General Assembly Stated Clerk J. Herbert Nelson, II has signed an amicus curiae brief in a suit filed in the Northern District of California to enjoin the federal government from ending the Deferred Action for Childhood Arrivals (DACA) program. Read the PNS story. Chantal Atnip Taps Ken Hockenberry as GA Vice Moderator Running Mate Read the full PNS story. Hudson Valley Presbyterian Churches Get Big Break to Go Solar HRPGreen's Solarize Our Congregation program made headline news in LoHud.com with interviews with Kathy Dean, Jeff Geary and a great photo of Ben Larson-Wolbrink. This article illustrated the church at work together with its neighbors, caring for creation and its future. Read the full story. Regional and National Events Black Women's March, April 7 Black Women's March: Continuing the Legacy of Harriet Tubman! We March Out of Love, Care and Concern for Our People!!! April 7th, 2018: CVS parking lot, corner of Broadway and Route 119, Tarrytown, New York. All are welcome. The march is 3.1 miles across the Mario Cuomo/Tappan Zee Bridge. More details on the march and schedule of events. BlackLine is a 24-hour hotline geared towards the Black, Black LGBTQI, Brown, Native and Muslim community providing counseling and assistance in situations of police or vigilante violence. Thank you to Carla Lesh of FPC Highland for bringing the march to our attention. A World Uprooted, Ecumenical Advocacy Days, April 20-23 Register now for Ecumenical Advocacy Days, which brings together members of the PC(USA) and other Christian denominations for shared learning and joint advocacy in Washington, DC. This year's focus is "A World Uprooted". As the U.S. government continues to debate the future of migrants, refugees and displaced people living in this country, this event will address the issue head-on. Ecumenical Advocacy Training Weekend April 20-23 in Washington, DC. Register here.
The PC(USA) kicks off EADs on Friday, April 20 with a special Compassion, Peace and Justice training day for Presbyterians at NY Avenue Presbyterian Church. This Presbyterian-specific opportunity will give you the chance to network with others in congregations and denominational staff while learning more about how the PC(USA) is engaged in justice work. Learn more. Nat'l Assoc. of Presbyterian Clergywomen Triennial, April 23-26, 2018 The National Association of Presbyterian Clergywomen will be hosting its Triennial in April 2018 at Montreat Retreat Center. Registration is now open. (N.B. NAPC's website has three pages on the Triennial, two with information and one with the registration form. Look on the upper right side of the menu for Triennial Conference to access these pages.) We invite you to share this information with all clergy as well as lay leadership within your Presbytery. Princeton Youth Ministry Forum, April 24-27 The Forum gathers Christian leaders who are passionate about young people and the church. Youth ministers, Christian educators, pastors, ministry innovators, and emerging scholars from the USA and beyond gather annually for this 4-day event on the campus of Princeton Theological Seminary. This year, our theme is IMAGE. We will explore what it means to practice youth ministry in a visual age. Hear from speakers like Matthew Milliner, Brian Bantum, and Katie Douglass. Renew your energy for ministry as you create vocational friendships, stretch your imagination, and give Sabbath to your soul. Learn more and register now! Attention Christian Educators! The Northeast Regional Association of Presbyterian Church Educators (NEACE) Annual Meeting will be May 1 - May 3 at Linwood Retreat Center in Rhinebeck, NY. It will focus on partnering with immigrant youth in social transformation and feature the Rev. Richard C. Witt, Rural and Migrant Ministry's Executive Director, and Andres Chammorro of RMM's Youth Empowerment Program.
Download the flyer with registration info and spread the word! Mediation Skills Training, May 7-11 The Lombard Mennonite Peace Center is sponsoring a Mediation Skills Training. Our HRP Mediation team highly recommends this educational event. The Mediation Skills Training Institute for Church Leaders (MSTI) will be held Monday through Friday, May 7-11, 2018 at Christ Church Cathedral, 45 Church Street, Hartford, CT. It is intended for clergy and all persons in positions of church leadership who are interested in learning skills that will equip them to deal effectively with interpersonal, congregational, and other forms of church conflict. Please note that the deadline to register with a $200 tuition discount is April 6, 2018. Space is limited, so we encourage you to register early. For more information, visit the Lombard Mennonite Peace Center's website. Take Action! Brand New PC(USA) Confirmation The brand-new confirmation curriculum for the PC(USA) is Big God Big Questions: Confirmation for a Growing Faith. Based on the findings of the Confirmation Project, it is focused on preparing youth to authentically answer the traditional membership questions. Learn more and download a sample! New Older Adult Ministry Planning Guide The Presbyterian Older Adult Ministries Network has just published a new guide to Older Adult Ministries. This comprehensive and caring resource discusses topics like becoming a "dementia-friendly" congregation. You will definitely want to take a look at this FREE resource. Is Your Church Taking the OGHS Offering this Easter? A gift to One Great Hour of Sharing enables the church to share God's love with our neighbors-in-need around the world by providing relief to those affected by natural disasters, providing food to the hungry, and helping to empower the poor and oppressed. See the interactive world map of One Great Hour of Sharing recipients. Most churches receive OGHS on Easter, some on Palm/Passion Sunday, and still others during the season of Lent. More information and interpretive materials are available.
Pentecost Offering Webinar, April 12 The Pentecost Offering celebrates and advocates for youth, young adults, and children at risk. Learn more about this offering, its themes and resources, and ways to encourage your church to participate in a webinar hosted by Special Offerings on Thu. Apr. 12 at Noon (EDT). Register. I don't know about you, but next to surfing, nothing gets my mojo goin' more than Hudson Happenings! Filled with links to local, regional and national news as well as information on fabulous educational and advocacy events, HH is your "one stop" for the latest and greatest news and events in Hudson River Presbytery, the Synod of the NE and the PC(USA). So why not share it with your friends?! And with the link below, your friends can also subscribe to other e-updates from the Presbytery! Again, just invite them to use the link below or click this lovely Hang Ten pic of me (the link is embedded).
Features
- Great variety of applications
- Multi-axis solutions
- Very few moving parts
- Integrates with existing machines

A highly dynamic drive concept for modern production: the use of torque motors for NC rotary tables brings about new, innovative solutions in mechanical engineering, machine tool construction and handling technology, as well as in other industries.
TITLE: Can a relation that is NOT a function be one-to-one or onto? QUESTION [4 upvotes]: Let $A = \{1,2,3\}$ and $B = \{7,8,9\}$. The relation between $A$ and $B$ given as the subset $R = \{(1,9), (1,7), (3,8)\}$ of $A \times B$ is not a function because $1$ appears as the first element in more than one ordered pair. This made me think about the following generalized question. If a relation is not a function, then it is only a subset of $A \times B$. Can this subset be one-to-one? Can it be onto? One-to-one?: Even though $R$ is not a function, I believe it satisfies the requirement of being one-to-one because no element of $B$ appears as the second element of more than one pair. Onto?: I want to say yes also. The range of the relation could be all of $B$. If $R = \{(1,9), (1,8), (3,8)\}$, then does this satisfy the requirement of being onto? Basically what is piquing my curiosity is whether non-function relations can be one-to-one or onto, or whether being one-to-one and/or onto implies that it must be a function. REPLY [6 votes]: The answer here is yes, relations which are not functions can also be described as injective or surjective. You can read more about it on the wikipedia page here under the section titled "Special types of binary relations," but the definitions are the natural ones: A relation $R\subset A\times B$ is injective if for all $a,a'\in A$ and $b\in B$, $(a,b)\in R$ and $(a',b)\in R$ implies $a = a'$. A relation $R\subset A\times B$ is surjective if for each $b\in B$, there is some $a\in A$ such that $(a,b)\in R$. Some examples will hopefully clarify: If $A = \{\mathbf{1}\}$, a singleton with one element, and $B = \{\mathbf 1,\mathbf 2,\mathbf 3\}$, then the relation $R = \{(\mathbf 1,\mathbf 1), (\mathbf 1,\mathbf 2), (\mathbf 1,\mathbf 3)\}$ is both injective and surjective, and it is not a function. 
If $A = \{\mathbf 1,\mathbf 2,\mathbf 3\}$, $B = \{\mathbf 1,\mathbf 2\}$, then the relation $R = \{(\mathbf 1,\mathbf 1), (\mathbf 2,\mathbf 1)\}$ is not injective, not surjective, and it is not a function. If $A = B = \Bbb Z$, the set of integers, and $R = \emptyset \subset A\times B$ is the empty set, then $R$ is injective (hint: vacuous truth), it is not surjective, and it is not a function (no integer is related to anything, so totality fails). If $A = \Bbb R$, the set of real numbers, $B = \Bbb R_{\ge 0}$, the set of non-negative real numbers, and $R\subset A\times B$ is the relation $R=\{(x,x^2): x\in \Bbb R\}$, then $R$ is a surjective function, but it is not injective.
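The definitions in the reply are easy to check mechanically. Below is a small Python sketch (the helper names are my own, not from the answer) that encodes a relation as a set of pairs and tests the examples above:

```python
def is_injective(R):
    """R ⊆ A×B is injective if (a, b) and (a', b) in R imply a = a'."""
    seen = {}
    for a, b in R:
        if b in seen and seen[b] != a:
            return False
        seen[b] = a
    return True

def is_surjective(R, B):
    """R is surjective onto B if every b in B occurs as a second coordinate."""
    return set(B) <= {b for _, b in R}

def is_function(R, A):
    """R is a function on A if every a in A occurs exactly once as a first coordinate."""
    firsts = [a for a, _ in R]
    return set(firsts) == set(A) and len(firsts) == len(set(firsts))

# A = {1}, B = {1, 2, 3}: injective and surjective, but not a function.
R1 = {(1, 1), (1, 2), (1, 3)}
assert is_injective(R1) and is_surjective(R1, {1, 2, 3}) and not is_function(R1, {1})

# A = {1, 2, 3}, B = {1, 2}: not injective, not surjective, not a function.
R2 = {(1, 1), (2, 1)}
assert not is_injective(R2) and not is_surjective(R2, {1, 2}) and not is_function(R2, {1, 2, 3})
```

Note that `is_function` checks both totality (every element of $A$ appears) and single-valuedness (no element appears twice), which is why the empty relation on $\Bbb Z$ fails.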
Business, 12th edition (William M. Pride, Robert J. Hughes, Jack R. Kapoor) Shared by le kaka on 2016-07-21. 2013 | 672 Pages | ISBN: 1133595855 | PDF | 47 MB. Written by authors who have an extensive track record teaching the Introduction to Business course, the twelfth edition of this best-selling text features an up-to-date, comprehensive survey of the functional areas of business: management and organization.
Looking for the path toward a healthier you? It’s not hard to find. The journey begins with some simple tweaks to your lifestyle. The right diet, exercise, and stress-relief plan all play a big role. Women have unique nutritional needs. By eating well at every stage of life, you can control cravings, manage your weight, boost your energy, and look and feel your best. Women in the United States live 81 years on average, almost five years longer than men. Our bodies and minds are made to carry us for many productive decades—to school, to work, and to give birth to babies and raise families. But women are also prone to dangerous diseases, including heart disease, cancer, and stroke. There are so many different ways to keep your mind and body strong and healthy. Here are some streamlined tips for protecting a woman’s physical and mental health at any age. 1. Avoid Tobacco Half of all long-term smokers will die from using tobacco. Smoking has been linked to several diseases and negative health effects, including heart disease (the number one killer of women), stroke, women’s infertility, and lung cancer. Lung cancer kills more women than breast cancer. Fortunately, when you stop smoking (or never begin the habit at all), you greatly decrease your risk of developing these diseases. Learn how to quit smoking. 2. Eat Healthy Food It can be difficult to eat enough fruits and vegetables and prepare nutritious meals for yourself and your family every day. But when you develop a habit of eating too many of the wrong foods and too few of the right foods, you are more likely to develop serious diseases and conditions, including cardiovascular disease, diabetes, cancer, and depression. Scientists are starting to find that unhealthy foods, such as fast foods and commercial baked goods, seem to increase your depression risk, while healthier foods, such as omega-3 fatty acids and cruciferous vegetables (like broccoli), lower your risk of depression and cancer. 
Learn how to eat a healthy diet. Should you avoid dairy because of its saturated fat content? People say that women are strong, which is undeniably true! However, it isn’t easy to deal with the urban lifestyle, managing a hectic work and personal life. The daily stress that a woman deals with sooner or later ends up affecting both her physical and emotional wellbeing. Hence, it is important to start making healthy behaviours a habit while you’re still young, so you’re more likely to hold onto them for the rest of your life. In this blog, we will discuss some healthy diets for women and also a few lifestyle choices they should make by the time they reach the big 3–0.
Miele Canister Vacuum Buying Guide 2020-2021 | Differences & Comparison January 2, 2021 by SmartReview Filed under Vacuums Miele Canister Vacuums have received outstanding consumer and professional reviews, and are our top canisters. Miele canisters are quieter than competitor vacuums, and have variable suction control (a feature missing on most other canisters). Miele vacuums are great for those with allergies, and they filter the air with a sealed system. The pet (cat & dog) models are great for pet hair, and the bagged models have activated charcoal filters to contain pet odors. What Makes Miele Unique - Main Differences: Miele also uses rubber wheels, which will not mar or scratch hardwood floors. Many of the canisters come with a Parquet hardwood floor brush. Miele uses a sealed system and hygienically sealed multi-layer bags, so no dust goes in the air when vacuuming or when emptying the bag into the trash. Great for those with allergies. Miele tests its vacuums to last 20 years. Miele will far outlast cheaper vacuums. Related Article: Miele Vacuum Cleaners | Comparison & Reviews 2020-2021 Miele Cat & Dog Vacuums for Pet Hair: The Miele Cat & Dog vacuums are great for pet hair on hardwood floors, hard floors, soft carpeting, or deeper carpeting. They all come with 3 tools to tackle pet hair. The Cat & Dog models include the SEB 228 motorized floor tool, the handheld Turbo pet hair tool for smaller areas like upholstery, stairs, and car interiors, and a Parquet Twister with soft natural bristles that won’t scratch hard floors. The SEB 228 Motorized Floor Tool is recommended by soft carpet manufacturers for difficult-to-vacuum soft/plush and high pile carpets. Cat & Dog Pet Hair Models: C1, CX1, and C3 Best Miele Canisters Primarily for Hard Floors: If you have mostly hard, hardwood, and laminate floors, without high or medium carpeting, then you can get a Miele that has pure suction and no motorized brushroll. 
All Miele vacuums have very strong suction, with multiple settings from low suction for delicate drapes or rugs to the highest suction for hard-to-clean areas. Some of the canisters below also have an air-driven turbo brushroll, which can do low pile carpeting, and they come with a Parquet hard floor brush with soft natural bristles that won’t scratch hard floors. Alternatively, you can buy the above motorized brushroll models that can do both higher pile carpet and hard floors, and come with a separate Parquet hard floor tool. Best Miele Canisters For Soft/Plush & High Pile Carpet: To properly clean soft or dense carpet, you need a vacuum that won’t get stuck on the carpet. Miele’s SEB 228 Motorized Floor Tool is designed to vacuum dense soft and plush carpet. This tool also has height settings depending on whether your carpet is low, medium or high. This brushroll has been recommended by soft carpet manufacturers. All the models below have the SEB 228 motorized floor tool. Miele C1 vs. C2 vs. C3 vs. CX1 Canister Differences: The main difference between the C1, C2, and C3 is size. All are bagged canisters. The C1 are compact size, the C2 can be compact or full size, and the C3 are full size canisters. The C2 and C3 are fully sealed canisters, preventing more dust and particles from escaping the vacuum. The CX1 is unique in that it is a full size canister, and bagless. Within each group the main difference is the accessories that come with the vacuum. Those with motorized heads have electric hoses (wiring in the hose to power the tool). The C2 and C3 have a power cord that is 3 feet longer than the C1’s. The C2 and C3 have more sound insulation and are quieter than the C1. Related Article: Miele C1 Vs. C2 Vs. C3 Vs. CX1 Canister Vacuums | Comparison Related Article: Dyson Animal 2 Vs. Miele U1 Cat & Dog | Upright Vacuums Related Article: Miele C3 Marin Vs. 
Kona Canister Vacuums | Comparison Related Article: Miele C3 Marin Vs. Cat & Dog Canister Vacuums | Comparison
Gene Shortage and Evolutionary Psychology by Dr Beetle Gene shortage has been used to dismiss the claims of evolutionary psychology, where so much human behaviour is said to be genetically determined. It can take some 100 to 2500 genes to make or work each adaptation, yet the human genome project revealed that there are only some 30,000 functional genes in humans. This is nowhere near enough to produce all of the behavioural traits being claimed, such as the emotions, fears, disgusts, and daily decisions on levels of promiscuity, helpfulness and conduct. The problem of gene shortage has been championed by Paul Ehrlich. However, evolutionary psychology has returned with its counterclaim that there is no gene shortage. Their argument is that if gene shortage is such a problem, then how can there be so many hundreds of adaptations and features in a body? There are already so many types and locations of bone, hair, sense organ, nerves, hormones, blood vessels etc. If there is no apparent gene shortage for making such variety in a body, then why should there be a problem in making a multitude of mind modules and inherited instincts as well? If each gene can work in different combinations with other genes, then there are 10 to the power of 86 possibilities that they could control – more than the number of particles in the universe! However, there is a limit to how many combinations genes can make before nothing is determined or a mess is created. It would be a bit like going to committee, and getting no decision. Genes do need to act within set groups and in quite precise sequences. If even one gene is omitted, or a different sequence followed, strange anomalies can occur. For example, incorrect activation of the Antennapedia gene in the Drosophila fly can make a leg grow on its head rather than its antenna. An incorrect mixture of genes in humans can result in genetic diseases such as Down’s syndrome and cystic fibrosis. 
It is actually quite a difficult accomplishment to find the precise set of genes needed to produce something functional like bone or muscle. Even a small fraction of the number of combinations claimed possible by evolutionary psychology is not achievable. Genes cannot chop and change, but have to work together in a very limited number of ways, to avoid making mistakes and mismatches. Genes lack the numbers to control numerous tasks, and even if they could do more, biologically it would be folly to exclude guidance from a reliable environment. There are many kinds of organs and anatomical details within a body, but these are made from an essentially rather limited set of tissues. Genes concentrate on producing bone, muscle, nerves, collagen etc, and organs are the various combinations of these tissues. But genes do not micromanage when placing a leg or blood vessel in a certain place. They can just as reliably leave a fair chunk of those tasks to the environment of the developing body. Feedback and double-checking with the environment is a far more reliable and repairable mechanism than genes trying to insist on controlling all structural development themselves. To see through the scenario of gene micromanagement, think of stem cells. Stem cells are undifferentiated primal or embryonic-like cells that under certain conditions can differentiate into any of the full array of cell types that occur in a body. The decision on what the stem cell will turn into is made by the environment in which the stem cell is placed, not the gene. That is why stem cells are so exciting to medicine. Doctors have the prospect of being able to heal all kinds of bodily damage, when the doctor, not the genes, decides how to culture them. Similarly, the importance of the environment (within the body) in deciding how a body will grow and develop is a well studied branch of science; see morphogenesis and cell differentiation. 
The early embryo begins undifferentiated, and is composed of totipotent stem cells. For example, the morula embryonic stage is composed of 12-32 virtually identical cells. As rapid growth occurs, spatial differences arise, so that each cell receives differing environmental cues. Simply being in the inner or outer region of an embryo decides whether that stem cell will produce one of the three germ layers found in mammals, from which skin, viscera or bone will later form. The stem cells react to the diffusing or cell-bound chemicals in their adjacent environment. The cocktail of proteins and other molecules begins to vary in its proportions and types according to where the cells are located in the body. This environment determines whether a gene will be turned on to produce bone, cartilage or other tissue, and what shape it will take. One of many possible examples is the scabrous gene in the fruit fly Drosophila (genetically, the most studied animal). The scabrous gene produces a protein that influences the number and positioning of sensory organs during neural development. If not turned on, the compound eye of the fly can become enlarged and have a confusing pattern of ommatidia, the hundreds of facets needed to make the eye. Alternatively in the wing bud, a lack of scabrous causes the sensory bristles along the margin of the wing to be missing or deformed. The scabrous gene does not pre-determine that it will affect an eye or wing. It is simply turned on or off according to localised environmental conditions in those parts of the body. The scabrous gene does not determine that a bristle will be located exactly so on a wing. Nor in humans does a gene plot exactly where a capillary will grow. That is the role of morphogenesis, where placement is determined by the surrounding environment. That is why fingerprints, retina capillary patterns, and iris recognition patterns are always unique and different, even in identical twins. 
They vary in distribution not according to a map laid out by genes, but according to the logical and interactive steps experienced by cells during development. The genes cannot say where to put the capillaries. That is largely determined by the environment. After all, there are some 42,000 km of capillaries to be positioned within the average human body. These examples show the high level of reliance deferred from genes to the environment. The gene does not control to the nth degree, but leaves morphogenesis to the environment. Sure, those interactions within a body will occur and sequence in a highly predictable fashion, so that the gross anatomy of a human will replicate consistently to give two arms, an aorta and a body as we know it. But the fine detail reveals the real mechanisms and the relinquished responsibility by genes to the bodily environment for implementing growth and development. The body builds up from fine detail, and in the fine detail of a body, you can see the signs of environmental influence and the importance of spatial relationships, its ‘ecology’. Similarly, veins and arteries begin in the embryo as capillaries, and remodel due to the differential mechanical stresses that they experience due to their location in the embryo. The role of the embryonic environment is further demonstrated when cells that start to become arteries can be turned into veins, if transplanted into an opposing region; see Hirashima and Suda. Similarly, the gene can no more place and branch a neurone than it can a capillary, and the placement and arrangement of neurones is responsible for how humans think. Like veins and arms, there are gross regions of the brain that will become centres for the coordination of vision, the delivery of emotions, and musical appreciation, but again, these arise because of their location in relation to other structures, and are not pre-ordained without heavy reliance on environmental cues. 
The gene shortage argument was never about denying that genes make the start for the production of hundreds of physical versions. What is important is to understand how far genes try to impose and control their productions, and when they leave the fate of the animal to its ability to interact correctly within its environment. The objectionable part of evolutionary psychology is the claim that so much behaviour is implanted by genes directly. But genes do little more than offer 30,000 or so proteins on tap to the mix, which the bodily environment then draws upon as needed according to the interactions it has. The great variety is produced by morphogenesis in a cascade of events. This cascade uses genes as the recipe, but it is a gross recipe designed to be adaptive or responsive to environmental information. In this way, the phenotype can adapt. If a child is blind, they gain heightened hearing skills. If a right-handed person loses their arm, they can learn to become highly proficient with the left. There are enough genes to act as recipes, but not enough to micromanage their expression, nor do they ‘want’ to. Genes are in a partnership with the environment, and because the environment is so trustworthy, the consistent result obtained is an able bodied organism. Like morphogenesis, the mechanism for neuronal pathway formation in response to the environment is known, and called selective stabilisation (or neural Darwinism). This is the process by which some 10,000 trillion synapses in a child’s brain are trimmed to about 500 trillion useful synapses in the adult, based upon a ‘natural selection’ within the brain. Here, the production of neural pathways is according to the strengths and frequencies of stimulation received. 
In comparison, evolutionary psychology is a theory, arising from a limited understanding of how evolution works, and cannot prove its mechanisms (no inherited instinct has been isolated in a newborn – apart from some basic reflexes which are different to instincts). They guess mind modules must be there, deduced from their limited understanding of evolution. They see evolution as survival of the fittest (where the animal is a contained and alone individual). But a better understanding of evolution is that it is survival of the wildest (which sees the animal as being the animal plus its full complement of environmental ties). Environmental feedback in the cerebral cortex of the brain is varied and unpredictable. The brain is on the front line of external stimulation, more so than in the closeted environment of a body. As in morphogenesis, how can it be claimed that genes can build inherited instincts or fears in neurones, when they cannot even control the pathways of capillaries within their highly controlled internal environment? There are not enough genes to run that level of control. The cerebral cortex is like a blank slate that receives genetic influence through a few simple desires, such as for interaction (more beetle), food, sex and water. The structures that produce these desires can be pin-pointed, and are not theory. Biology is about making use of all inputs and not wasting energy when something else will do the job for you. Biology looks for the most parsimonious way to achieve its ends, and makes partnerships where they can be found. Cell interaction does the job of building bodies during morphogenesis. Similarly, the niche environment in combination with desire does the job of teaching what instincts should be learnt in the brain (more beetle). Biology should not fail to recognise the value of the feedback to be gained from the environment during development, in its understanding of evolution and in the role of genes. 
The importance of interacting with the environment should raise the profile of nature in humans. Instincts are not inherited, though they entrench so deeply that they can offer that illusion. Instincts arise from interactions with the environment, and the synthesis of useful patterns gleaned. Evolutionary psychology promulgates the view that the environment can be ignored, as the fate of your feelings and development is already determined. My alternative view is that by shunning the full range of nature’s feedback (its most naked feedback comes from wildness, more beetle), neural development becomes stunted. As proof, look at the despair, mental state and lack of wit occurring in society today. It is fashionable in biology to be tired of the gene vs environment debate. But it is an important understanding to get right. It will determine your outlook on the environment, and whether you see it as mother earth to be cared for and explored for its feedback and insight, or a place to be avoided or plundered (as is happening now). Article posted 14 Feb 2007.
Team Run, Then Sailing today
- If you need to, print out a copy of the run map!! Do NOT get lost.
- Change into running gear right away. Stretch. If you are there early, you can begin rigging up your boat.
- Once complete, we will run 3 distinct practices today in the boat clumps below. FJs will have a self-coached team race day. Lasers will do upwind/downwind boat handling and speed tuning, and 420s will do basic boat handling and sail trim practice.
- Lasers, before you head out, read this and "Know your 90".
- I will set up a box-style course and we will head off in waves: FJ team race first, then Lasers and 420s together in a tacking drill. Lasers stay together in the drills. I may separate the Lasers, so we will see.
- Mission for today is simply to get more time in the boat and work on your boat handling.
\begin{document} \title*{Parafermionic algebras, their modules and cohomologies} \author{Todor Popov} \institute{Todor Popov \at INRNE, Bulgarian Academy of Sciences, 72 Tsarigradsko chauss\'ee, 1784 Sofia, Bulgaria \\ \email{tpopov@inrne.bas.bg} } \maketitle \abstract{ We explore the Fock spaces of the parafermionic algebra introduced by H.S. Green. Each parafermionic Fock space allows for a free minimal resolution by graded modules of the graded 2-step nilpotent subalgebra of the parafermionic creation operators. Such a free resolution is constructed with the help of a classical theorem of Kostant computing the Lie algebra cohomologies of the nilpotent subalgebra with values in the parafermionic Fock space. The Euler-Poincar\'e characteristics of the parafermionic Fock space free resolution yield some interesting identities between Schur polynomials. Finally we briefly comment on parabosonic and general parastatistics Fock spaces.} \section{Introduction} \label{sec:1} The parafermionic and parabosonic algebras were introduced by H.S. Green as an inhomogeneous cubic algebra having as quotients the fermionic and bosonic algebras with canonical (anti)commutation relations. In an attempt to find a new paradigm for the quantization of classical fields H.S. Green introduced the parabosonic and parafermionic algebras encompassing the bosonic and fermionic algebras based on the canonical quantization scheme. Here we are dealing with the Fock spaces of the parafermionic algebra $\g$ of creation and annihilation operators. These Fock spaces are particular parafermionic algebra modules built on top of a unique vacuum state by the creation operators. The creation operators close a free graded 2-step nilpotent algebra $\n$, $\n \subset \g$. The Fock space of a parafermionic algebra $\g$ is then defined as a quotient module of the free $\n$-module, where the quotient ideal stems from the generalization of the Pauli exclusion principle. 
In this note we calculate the cohomologies $H^{\bullet}(\n, \f)$ of the nilpotent subalgebra $\n$ with coefficients in the parafermionic Fock space $\f$ (taken as a $\n$-module). The cohomology ring $H^{\bullet}(\n, \f)$ is obtained from the by now classical theorem of Kostant \cite{Kostant}. With the data of $H^{\bullet}(\n, \f)$ one is able to construct a minimal resolution of the Fock space $\f$ by free $\n$-modules. Its existence is guaranteed by Henri Cartan's results on graded algebras. It turns out that the Schur polynomial identities which have been recently put forward \cite{LSVdJ,SVdJ} by Neli Stoilova and Joris Van der Jeugt stem from the Euler-Poincar\'e characteristics of the minimal free resolutions of the parafermionic and parabosonic Fock spaces. \section{Parafermionic and parabosonic algebras} \label{sec:2} The parafermionic algebra $\g$ with a finite number $n$ of degrees of freedom is a Lie algebra with a Lie bracket $[ \bullet , \bullet ]$ generated by the creation $a_i^{\dagger}$ and annihilation $a^j$ operators ($i,j=1,\ldots, n$) having the following exchange relations \beqa \ba{rcccrcc} [[ a^{\dagger}_{i},a^{j} ], a^{\dagger}_{k}] &=& 2 \delta_{k}^j a^{\dagger}_{i} \ ,&\quad & [[ a^{\dagger}_{i},a^{j} ], a^{k}] &=& - 2 \delta_{i}^k a^{j} \ , \\[4pt] [ [a^{\dagger}_{i}, a^{\dagger}_{j}],a^{\dagger}_{k} ] &=&0 \ , & \quad &[ [a^{i}, a^{j}],a^{k} ] &=&0 \ .\ea \label{PCR} \label{1} \eeqa The parafermionic algebra $\g$ with a finite number $n$ of degrees of freedom is isomorphic to the semi-simple Lie algebra \beq \label{bn} \g= \h \oplus \bigoplus_{\alpha\in \Delta_+} \g_{\alpha} \oplus \bigoplus_{\alpha\in \Delta_-} \g_{\alpha} \ , \eeq for a root system $\Delta= \Delta_+\cup \Delta_-$ of type $B_n$ with positive roots $\Delta_+$ given by $$\Delta_+= \{ e_i \}_{1\leq i \leq n} \cup \{e_i + e_j, e_i-e_j\}_{1\leq i<j\leq n} \ , \quad \mbox{and} \quad \Delta_-=-\Delta_+ \ .$$ Here $\{ e_i \}_{i=1}^n$ stands for the orthogonal basis in the root space, $(e_i| 
e_j)=\delta_{ij}$. One concludes that the parafermionic algebra $\g$ with $n$ degrees of freedom is isomorphic to the orthogonal algebra $\g \cong\mathfrak{so}_{2n+1}$ endowed with the anti-involution $\dagger$. The physical generators correspond to the Cartan-Weyl basis $a^{\dagger}_i:=E^{e_i}$ and $a^j:=E^{-e_j}$. Similarly one defines the parabosonic algebra $\tilde{\g}$ with exchange relations (\ref{1}) as the Lie super-algebra endowed with a Lie super-bracket $[\bullet , \bullet ]$ whose generators $a^{\dagger}_i$ and $a^j$ are taken to be odd generators. The parabosonic algebra $\tilde{\g}$ with $m$ degrees of freedom is shown \cite{GP} to be isomorphic to the Lie super-algebra of type $B_{0,m}$ in the Kac table, i.e., $\mathfrak{osp}_{1|2m}$. More generally, one defines the parastatistics algebra as the Lie super-algebra with $n$ even parafermionic and $m$ odd parabosonic degrees of freedom. The parastatistics algebra is shown to be isomorphic to the super-algebra of type $B_{n,m}$, i.e., $\mathfrak{osp}_{2n+1|2m}$ \cite{Palev}. Throughout this note we will concentrate on the parafermionic algebra and its representations. \section{Parafermionic Fock space} \label{sec:3} The parafermionic relations (\ref{1}) imply that the generators $E_{i}^j=\demi [a_i^{\dagger},a^j]$ are the matrix units satisfying \[ [E_{i}^j, E_{k}^l]= \delta_{k}^j E_{i}^l - \delta_{i}^l E_{k}^j \ . \] These generators close the real form $ \mathfrak{u}$ of the linear algebra $\mathfrak{gl}_n$ with $(E_i^j)^{\dagger}= E_{j}^i $. One has a decomposition of the parafermionic Lie algebra into the reductive algebra $\mathfrak{u}$ and the nilpotent Lie algebras $\n$ and $\n^\ast$ $$\g = \n^{\ast} \rtimes \mathfrak{u} \ltimes \n \ $$ where $\mathfrak{u}$ is the real form of the linear algebra $\mathfrak{gl}_n$. 
The free 2-step nilpotent Lie subalgebra $\n\subset \g$ is generated in degree 1 by the {\it creation} operators $a_i^{\dagger}$, $V:=\bigoplus_i \C a_i^\dagger$ $$ \n=\n_1\oplus \n_2= V \oplus \wedge^2 V \ . $$ Analogously the annihilation operators $a^i$ generate the subalgebra $\n^\ast= V^\ast \oplus \wedge^2 V^\ast$. The vector space $V=\n_1$ is the fundamental representation for the left action of the algebra $\mathfrak{gl}_n$, $E_{i}^j \cdot a_k^{\dagger}= \delta^j_{k} a_i^{\dagger}$. Similarly $V^\ast=\n_1^\ast$ is the fundamental representation for the right $\mathfrak{gl}_n$-action, $ a^k \cdot E_{i}^j= \delta_i^{k} a^j$. The linear algebra $\mathfrak{gl}_n$ acts on the algebras $\n$ and $\n^\ast$ by automorphisms. \begin{definition} The parafermionic Fock space is the unitary representation $\f$ of the parafermionic algebra $\g \cong {\mathfrak{so}_{2n+1}}$ built on a unique vacuum vector $\rvac$ such that \beq a_{i}\rvac =0 \ , \qquad [ a_{i},a_{j}^{\dagger} ]\rvac =p \delta_{ij} \rvac \ . \eeq The non-negative integer $p$ is called the order of the parastatistics. \end{definition} Let us single out a particular parabolic subalgebra $\mathfrak{p}= \mathfrak{gl} \ltimes \n$. In the Fock representation the vacuum module $\C\rvac $ is the trivial module for the subalgebra $\mathfrak{p}^\ast= \n^\ast \rtimes \mathfrak{gl}$. The representation induced from $\mathfrak{p}^\ast$ acting on the vacuum module is isomorphic to the universal enveloping algebra of the creation algebra $\n$ $${\rm{Ind}}_{\p^\ast}^\g \C \rvac = U \g \otimes_{\p^\ast} \C \rvac \cong U \n \ .$$ Hence the Fock representation $\f$ which we now describe is a particular quotient of the algebra $U \n$ generated by the free action of the creation algebra $\n$.
The Fock space $\f$ of parastatistics order $p$ is a finite-dimensional $\g$-module with a unique Lowest Weight vector $\rvac$ of weight $-\frac{p}{2} \sum_{i=1}^n e_i$ and a unique Highest Weight (HW) vector \beq \label{hwv} |\Lambda\rangle = (a_1^{\dagger })^p \dots (a_n^{\dagger})^p \rvac \eeq thus the $\mathfrak{so}_{2n+1}$-module $\f$ is a highest weight module of weight $\Lambda$ $$ V^{\Lambda}= \f \qquad \qquad \Lambda=\frac{p}{2} \sum_{i=1}^n e_i\ . $$ The parafermionic Fock space of order $p=1$ coincides with the canonical fermionic Fock space, i.e., the HW representation $\mathcal{V}(1)=V^\theta$ with $\theta=\frac{1}{2} \sum_{i=1}^n e_i$. The physical meaning of the order $p$ of the parafermionic algebra is the number of particles that can occupy one and the same state, that is, we deal with a Pauli exclusion principle of order $p$. The symmetric submodule $S^{p+1} \n_1 \subset \n_1^{\otimes p+1}$ is spanned by the ``exclusion conditions'' $(a_i^{\dagger })^{p+1}=0$ and it generates an ideal $(S^{p+1} \n_1)$. The parafermionic Fock space $\f$ is a Lowest Weight module isomorphic to the factor module of $U \n$ by the ``exclusion'' ideal $(S^{p+1} \n_1)$ $$\f \cong U\n / (S^{p+1}\n_1) \ . $$ On the other hand the parafermionic Fock space $\f=V^\Lambda$ is a HW $\g$-module with HW vector $| \Lambda \rangle $ (\ref{hwv}) $$ V^{\Lambda} \cong U\n^\ast / (S^{p+1}\n_1^\ast) = \f \ .$$ \begin{theorem}[A.J. Bracken, H.S. Green \cite{BH}] The HW $\mathfrak{so}_{2n+1}$-module $V^{\Lambda}\cong \f$ of HW vector $| \Lambda \rangle=| p \theta \rangle$ splits into a sum of irreducible $\mathfrak{gl}_n$-modules $V^{\lambda}$ \beq \label{branch} V^{\Lambda}\downarrow^{\mathfrak{so}_{2n+1}}_{\mathfrak{gl}_n} =\bigoplus_{\lambda: \lambda\subseteq (p^n)} V^{\lambda -(p/2)^n} \ , \qquad \qquad \Lambda = \frac{p}{2} \sum_{i=1}^n e_i \eeq where the sum runs over all partitions which fit inside the Young diagram $(p^n)$.
\end{theorem} \begin{proof} The Weyl character formula applied to a Schur module $V^{\lambda}$ yields the Schur polynomial $$ s_{\lambda}(x_1, \ldots ,x_n)={\sum_{w\in W_1} \varepsilon(w) e^{w(\rho_1+ \lambda)}}/ \sum_{w\in W_1} \varepsilon(w) e^{w(\rho_1)} \qquad W_1:= S_n \ , $$ where the variables are $x_i := \exp({-e_i})$ and the vector $\rho_1=\demi\sum_{i=1}^n(n-2i+1)e_i$. Alternatively the Schur polynomial is written as a quotient of determinants \beq \label{schurp} s_{\lambda}(x_1, \ldots, x_n)= \frac{\det||x^{\rho_{1i}+\lambda_i}_j||}{ \det||x^{\rho_{1i}}_j ||} \ . \eeq The Weyl character formula applied to the ${\mathfrak{so}_{2n+1}}$-module $V^{\Lambda}$ reads \beq \label{charMac} \chi^{\Lambda}= D_{\rho + p \theta} / D_{\rho}= e^{ p \theta }\sum_{\lambda:\, l(\lambda')\leq p} s_{\lambda}(x_1, \ldots ,x_n) \ , \qquad e^{ p \theta }=(x_1 \ldots x_n)^{-\frac{p}{2}} \eeq where $W=S_n\ltimes \Z^n_2$ is the Weyl group of the root system of Dynkin type $B_n$ and $D_{\rho}= \sum_{w\in W} \varepsilon(w) e^{w\rho}$ with $\rho=\demi\sum_{i=1}^n(2n-2i+1)e_i$. The quotient of determinants $D_{\rho + p \theta} / D_{\rho}$ can be further expanded as a sum over the Schur polynomials with no more than $p$ columns (see p.~84 in the book of Macdonald \cite{Macdonald}). Here $\lambda'$ stands for the partition conjugated to $\lambda$ and $l(\mu)$ is the length of the partition $\mu$. The Schur polynomials $s_{\lambda}(x)$ are characters of the $\mathfrak{gl}_n$-modules, thus the expansion of the ${\mathfrak{so}_{2n+1}}$-character $\chi^{\Lambda}$ implies the branching formula (\ref{branch}). We are done. \qed \end{proof} \section{Kostant's theorem and the cohomology $H^{\bullet}(\n, \f)$} Kostant's theorem is a powerful tool for calculating cohomologies.
Let $\g$ be a semi-simple algebra and let $\mathfrak b= \h \oplus \bigoplus_{\alpha\in \Delta_+} \g_\alpha$ be its Borel subalgebra. Any parabolic subalgebra $\p$, $\g \supset \p \supseteq \mathfrak b$, has a Levi decomposition $\p = \g_1 \ltimes \n$ where $\g_1$ is a reductive algebra and $\n$ is the nilradical (largest nilpotent ideal) of $\p$. Consider the $\g$-module $V^\Lambda$ of weight $\Lambda$ and the cohomology $H^{\bullet}(\n, V^\Lambda)$ with coefficients in the restriction $\n$-module $V^\Lambda\downarrow_{\n}^\g$. Kostant's theorem gives the decomposition of $H^{\bullet}(\n, V^\Lambda)$ as a sum of irreducible $\g_1$-modules $V^{\mu}$. \begin{theorem}[Kostant] Let $W$ be the Weyl group of the algebra $\g$ and let the subset $\Phi_{\sigma}\subseteq \Delta_+$ be $$\Phi_{\sigma} :=\sigma \Delta_- \cap\Delta_+ \subseteq \Delta_+\ .$$ Let $\rho$ be the Weyl vector $\rho=\demi \sum_{\alpha \in \Delta_+} \alpha $. The roots of the nilpotent radical $\n$ are denoted by $\Delta(\n)$ and the subset $W^1=\{\sigma\in W| \Phi_\sigma\subset \Delta(\n)\}$ is a cross section of the coset $W_1\backslash W$. The cohomology $H^{\bullet}(\n, V^\Lambda)$ has a decomposition into irreducible $\g_1$-modules $V^{\mu}$ \[ H^{\bullet}(\n, V^\Lambda)= \bigoplus_{\sigma \in W^1} V^{\sigma(\rho +\Lambda) - \rho} \] where the summand labelled by $\sigma$ sits in cohomological degree $j:=\#\Phi_\sigma$. \end{theorem} J. Grassberger, A. King and P. Tirao \cite{tirao} applied Kostant's theorem to the cohomology $H^{\bullet}(\n,\C)$ with trivial coefficients. Here we extend their method to cohomologies with coefficients in the parafer\-mio\-nic Fock space $\f$, $H^{\bullet}(\n,\f)$. \begin{theorem} \label{c1} Let $\n$ be the free 2-step nilpotent Lie algebra $\n= V\oplus \wedge^2 V$ and $V^{\Lambda}$ be the parafermionic Fock space, $V^{\Lambda}=\f$.
The cohomology $H^{\bullet}(\n, V^{\Lambda})$ with values in the $\n$-module $V^{\Lambda}\downarrow^\g_\n$ has a decomposition into irreducible $\mathfrak{gl}(V)$-modules \beq \label{Hkp} H^{k}(\n, \f) \cong \bigoplus_{\mu:\mu=\mu'} V^ {\ast \mu^{(p)}-(\frac{p}{2})^n} , \qquad \qquad k = \frac{1}{2}(|\mu| + r(\mu)) \ , \eeq where the sum is over self-conjugated Young diagrams $\mu=(\alpha|\alpha)$ and the notation $\mu^{(p)}$ stands for the $p$-augmented diagram $\mu^{(p)}=(\alpha+p |\alpha)$. \end{theorem} We recall the Frobenius notation for a Young diagram $\eta$ $$\eta:=( \alpha_1 , \ldots, \alpha_r | \beta_1 , \ldots, \beta_r ) \qquad r= r(\eta) $$ where the {\it rank} $r(\eta)$ is the number of boxes on the diagonal of $\eta$, the arm-length $\alpha_i$ is the number of boxes to the right of the $i$th diagonal box, and the leg-length $\beta_i$ is the number of boxes below the $i$th diagonal box. The overall number of boxes in $\eta$ is $ |\eta| = r+ \sum_{i=1}^{r} \alpha_i + \sum_{i=1}^r \beta_i \ . $ The conjugated diagram $\eta'$ is the diagram in which the arms and legs are exchanged $$ \eta':=( \beta_1 , \ldots, \beta_r | \alpha_1 , \ldots, \alpha_r ) \ .$$ \begin{proof} The parafermionic algebra $\g\cong\mathfrak{so}_{2n+1}$ has the Cartan decomposition (\ref{bn}). Consider its parabolic subalgebra $ \p = \bigoplus_{i>j} \g_{e_i-e_j} \oplus \h \oplus \bigoplus_{\alpha\in \Delta_{+}} \g_{\alpha} \subset \g $. From the parafermionic relations (\ref{1}) it is readily seen that the Levi decomposition of the parabolic subalgebra $\p=\g_1 \ltimes \n$ has the reductive component \beqa \g_1= \h \oplus \bigoplus_{i\neq j} \g_{e_i-e_j} \ \cong \mathfrak{gl}_n \eeqa acting by automorphisms on the free 2-step nilpotent algebra $\n$ (the space $\n_1=V$ being the fundamental representation of $\g_1=\mathfrak{gl}_n$) \beqa \n&=&\bigoplus_{i}\g_{e_i} \oplus \bigoplus_{i< j} \g_{e_i+e_j} \cong V \oplus \wedge^2 V \ .
\eeqa The Weyl group $W_1$ of $\g_1=\mathfrak{gl}_n$ is the symmetric group $S_n$ operating on $\{e_1, \ldots ,e_n\}$ by permutations. The Weyl group of $\g=\mathfrak{so}_{2n+1}$ is $W=S_n \ltimes \Z^n_2$. The $\Z^n_2$ is generated by operators $\tau_i$, $i=1,\ldots,n$ such that $\tau_i^2=1$ acting by $$ \tau_i (e_j) = \left\{ \ba{rcr} -e_j && i=j \\ e_j && i\neq j \ea \right. \ . $$ The elements $\tau_I\in \Z^n_2$ are indexed by subsets $I\subseteq \{1, \ldots, n\}$, $\tau_I = \prod_{i\in I} \tau_i$. Let us describe the subset $W^1$ which has order $|W^1|=2^n$. Both $W^1$ and $\Z^n_2$ are cross sections of $W_1\backslash W$, thus for each $\tau_I\in \Z_2^n$ there exists a unique permutation $\omega_I \in S_n$ such that $\omega_I \tau_I \in W^1$. Let $\mathfrak b^{0}$ be the nilpotent part of the Borel algebra, $\mathfrak b^{0}=\mathfrak b \slash \h$, and let the complement be $\mathfrak m_1=\g_1 \cap \mathfrak b^{0}=\mathfrak b^{0} / \n$. The subset $W^1=\{ \sigma\in W| \Phi_\sigma \subseteq \Delta(\n) \}$ stabilizes also the complement of $\Delta(\n)$ $$ \sigma\Delta(\n)\subseteq \Delta_+ \qquad \Leftrightarrow \qquad \sigma^{-1} \Delta(\mathfrak b^{0} / \n)\subseteq \Delta_+ \ . $$ The root system of $\mathfrak m_1$ is $\Delta(\mathfrak m_1)=\{e_i- e_j, i<j \}$ therefore $\omega_I \tau_I \in W^1$ implies $\tau^{-1}_I\omega^{-1}_I\Delta(\mathfrak m_1)\subseteq \Delta_+$ or $ \tau_I\omega^{-1}_I (e_i- e_j)>0 $ for $ i<j \ .$ These inequalities are satisfied for $\omega_I\in S_n$ defined by $$\omega_I(a)>\omega_I(b) \quad \mbox{when} \quad \left\{ \ba{cccc} a<b& & a\in I & b\in I \\ a> b &&a\notin I &b\notin I\\ && a\in I& b\notin I\ea \right.
\ .$$ The permutation $\omega_I$ places all elements of $I=\{i_1, \ldots i_r\}$ after all the elements of its complement $\bar{I}$, preserving the order of $\bar{I}$ and reversing the order of $I$, that is, \beq\label{oI} \omega_I(1, \ldots , i_1, \ldots , i_r, \ldots, n)= (1, \ldots , \hat{i}_1, \ldots , \hat{i}_r, \ldots, n, i_r, \ldots, i_2,i_1) \ . \eeq The permutation $\omega_I$ can be represented as a product of cyclic permutations $\omega_I= \zeta_{i_r} \ldots \zeta_{{i_2}}\zeta_{i_1}$ where $\zeta_{i_k}$ is the cycle (of length ${n-i_k+1}$) from position $i_k-k+1$ to position $n-k+1$. Therefore the action of $\omega_I$ is represented by the sequence of steps \beqa \zeta_{i_1 } (1, \ldots , i_1, \ldots, i_k,\dots n) &=& (1, \ldots ,\hat{i}_1, i_1+1, \ldots,n, i_1) , \nonumber \\ \zeta_{i_2 }(1, \dots ,\! \!\!\!\!\! \!\underbrace{ i_2}_{\mbox {place } i_2-1 } \!\!\!\!\! , \ldots,n, i_1)&=& (1, \dots ,\hat{i}_2, \ldots,n,i_2, i_1), \nonumber\\ & \ldots& \nonumber \\ \zeta_{i_k }(1, \dots ,\! \!\!\!\!\! \!\underbrace{ i_k}_{\mbox {place } i_k-k+1 } \!\!\!\!\! , \ldots,n,i_{k-1},\ldots, i_1)&=& (1, \dots ,\hat{i}_k, \ldots,n,i_k, \ldots, i_1) \ . \nonumber \eeqa Note that after the $j$-th step, the last $j$ places are not touched by the subsequent cycles. The Weyl vector $\rho$ associated to $\g=\mathfrak{so}_{2n+1}$ reads $\rho=\demi\sum_{i=1}^n(2n-2i+1)e_i$. Note that the components of $\rho$ are strictly decreasing with step $1=\rho_{i} - \rho_{i+1}$. The cohomology ring $H^{\bullet}(\n,V^{\Lambda})$ decomposes into $\mathfrak{gl}(V)$-modules with HW weights $\sigma(\rho+ \Lambda) - \rho$ for $\sigma\in W^1$. We are interested in the case $\Lambda =\frac{p}{2}\sum e_i $, $V^{\Lambda}=\f$. Consider first the case $p=0$, i.e., the cohomology with trivial coefficients $H^{\bullet}(\n, \C)$, following \cite{tirao}. The highest weights $\lambda_I= \sigma(\rho)- \rho$ for $\sigma\in W^1$ are non-positive due to $\sigma(\rho)_i\leq \rho_i$.
The cyclic structure of $\omega_I$ implies $$ \lambda_I=\sum \lambda_j e_j, \qquad \lambda_j ={- (n - i_{n-j+1} +1)} \chi_{(n-r+1\leq j \leq n)}- \sum_{k=1}^{r} \chi_{(i_k-k+1\leq j \leq n-k)} \ . $$ One has an isomorphism between a HW $\mathfrak{gl}_n$-module $V^{\lambda_I}$ with non-positive weight $\lambda_I\leq 0$ and the dual representation $V^{\ast \mu_I}$ with reflected weight $\mu_I\geq 0$ $$ V^{\lambda_I} \cong V^{\ast \mu_I} \qquad \qquad \mu_I := \sum_{i=1}^n \mu_i e_i = -\sum_{i=1}^n \lambda_{n-i+1} e_i \geq 0 \ . $$ The components of $\mu_I$ are decreasing non-negative integers $\mu_1\geq \ldots\geq \mu_n\geq 0$ \beq \label{dom} \mu_j = (n - i_{j} +1) \chi_{(1\leq j \leq r)}+ \sum_{k=1}^{r} \chi_{(k+1\leq j \leq n- i_k +k)} \ , \eeq and these components encode a self-conjugated Young diagram $\mu^{\prime}_I=\mu_I$ $$ \mu_I= (\alpha_I|\alpha_I) \qquad \alpha_I=(\alpha_1, \ldots, \alpha_r), \quad \mbox{for} \quad \alpha_j=n-i_j \ .$$ Roughly speaking, the $j$-th cyclic permutation $\zeta_{i_j}$ in $\omega_I$ creates a self-conjugated hook subdiagram of $\mu_I$ with $\alpha_j =n-i_j$. By virtue of Kostant's theorem \cite{Kostant} the cohomology $H^\bullet(\n, \C)$ of the nilpotent Lie algebra $\n$ has a decomposition into Schur modules with HW vector $| \mu_I\rangle$ $$H^\bullet(\n, \C)=\bigoplus_{\mu_I: \mu'_I=\mu_I} V^{\ast \mu_I}, \qquad | \mu_I\rangle =E^{-\Phi_{\sigma}}, \quad \sigma\in W^1$$ labelled by self-conjugated Young diagrams. The self-conjugated Young diagrams $\{\mu_I: \mu'_I=\mu_I\}$ are in bijection with the elements of $W^1$ (of cardinality $|W^1| = 2^n$), and all these diagrams fit inside the maximal square diagram, $\mu_I \subseteq (n^n)$. Consider now the cohomology ring $H^{\bullet}(\n, V^{\Lambda})$ where $\Lambda =\frac{p}{2}\sum e_i$. It decomposes into $\mathfrak{gl}_n$-modules with HW weights $\lambda_I^{(p)}=\sigma(\rho+ \Lambda) - \rho$ where $\sigma=\omega_I \tau_I \in W^1$.
Given a set $ I=\{i_1, \ldots , i_r\}$ the shift by $\Lambda$ modifies the dominant weight $\nu_I= \sum \nu_i e_i$ to \[ \nu_j^{(p)}= - \lambda_{n-j+1}^{(p)} , \qquad \nu_j^{(p)} =-\frac{p}{2}+ (n - i_{j} +1+p) \chi_{(1\leq j \leq r)}+ \sum_{k=1}^{r} \chi_{(k+1\leq j \leq n- i_k +k)} \ . \] The weights $\nu_I^{(p)}= \mu_I^{(p)}- \frac{p}{2}\sum e_i$ fix the HW vectors in the $\mathfrak{gl}_n$-modules $V^{\ast\nu_I^{(p)}}$ $$V^{\ast\nu_I^{(p)}} =V^{\ast \mu^{(p)}_I} \otimes | \Lambda \rangle\quad \mbox{where} \qquad \mu^{(p)}_I=( \alpha_I +p |\alpha_I) \qquad \alpha_j=n-i_j $$ from which the decomposition (\ref{Hkp}) of $H^{\bullet}(\n, \f)$ follows, the sum over $\sigma \in W^1$ in Kostant's theorem being replaced by the sum over self-conjugated Young diagrams $\mu=\mu'$. The arm $p$-augmented diagram $\mu^{(p)}_I$ stems from the self-conjugated diagram $\mu_I=(\alpha_I|\alpha_I)$, cf. eq. (\ref{dom}), by augmenting the arm-lengths, $\mu^{(p)}_I=( \alpha_I +p |\alpha_I)$. The cohomological degree $k$ of the elements in $V^{\ast \mu_I^{(p)}}\otimes \rvac \subset H^{k}(\n, \f) $ does not depend on $p$ but only on $\sigma=\omega_I \tau_I \in W^1$ (or equivalently on $\mu_I$). In view of $ \Phi_{\sigma} = \sigma\Delta_- \cap\Delta_+ $ a root $\xi \in \Phi_{\sigma}\subseteq \Delta(\n) $ whenever $\sigma^{-1} \xi< 0$. But the set $\Delta(\n)$ is stable under permutations and $\tau_I^{-1}= \tau_I$, thus \beqa \#\Phi_\sigma &=& \# \{ \xi \in \Delta(\n), \tau_I \xi <0 \} \nonumber \\ &=&\#\{ \g_{e_i}, i\in I \} + \# \{ \g_{e_i+e_j}: i<j, i \in I\}\nonumber\\ &=& \sum_{i\in I} (1 +n - i)= r + \sum_{k=1}^r (n - i_k) = r + s=\deg{ \mu_I} \nonumber \ . \eeqa Thus the cohomological degree $k=\deg \mu_I =\#\Phi_\sigma$ is the total degree $k=(r+s)$ of the bi-complex $ \wedge^s (\wedge^2 V^{\ast}) \otimes \wedge^r V^{\ast}$. The number of boxes above the diagonal in $\mu_I$ is $s= \demi (|\mu_I|- r)$ so finally one gets $ k= \deg { \mu_I} = \demi ( r(\mu_I)+ |\mu_I|) \ .
$ We are done. \qed \end{proof} \section{Resolution of $\f$} A general result of Henri Cartan \cite{Cartan} states that every positively graded $\mathcal A$-module $M$ of a graded algebra $\mathcal A=\oplus_{n\geq 0} {\mathcal A}_n$ admits a minimal projective resolution by projective $\mathcal A$-modules. Moreover the notions of a projective and a free module coincide in the graded category. Thus for every positively graded $\mathcal A$-module $M$ there exists a minimal resolution by free $\mathcal A$-modules. The universal enveloping algebra $U \n$ is a graded associative algebra and the parafermionic Fock space $\f=V^{\Lambda}$ is a positively graded $U \n$-module. There exists \cite{Cartan} a minimal free resolution $P_\bullet=\bigoplus_{k=0}^{N} P_k$ of the right $U\n$-module $\f^{\ast}$ \beq \label{freeres} 0\rightarrow P_N\rightarrow \ldots \rightarrow P_{1} \rightarrow P_{0} \stackrel{ }{\rightarrow} \f^{\ast} \rightarrow 0 \eeq by free right $U \n$-modules $P_k= E_k\otimes U \n $. We apply the functor $- \otimes_{U\n} \C$ to the complex $P_{\bullet}$, where $\C$ is the trivial $U\n$-module. The minimality of the resolution $P_{\bullet}$ implies \cite{Cartan} that the differentials of the complex $P_\bullet\otimes_{U \n} \C$ vanish. Hence the multiplicity spaces $E_k$ coincide with the homologies $$E_k\cong {\rm{Tor}}^{U\n}_{k}(\f^{\ast},\C) =H_k(\n,\f^{\ast}) \qquad \Rightarrow \qquad E_k^\ast \cong H^k(\n, \f) \ ,$$ where we used the isomorphism $H_k(\n, M)^{\ast}=H^{k}(\n, M^{\ast})$. Theorem \ref{c1} gives us the spaces $E_k \cong H^k(\n, \f)^\ast$, so we have constructed the minimal free resolution (\ref{freeres}).
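For illustration (our own minimal example, not spelled out in the cited sources), take $n=1$: then $\wedge^2 V=0$, the creation algebra $\n=V$ is abelian and one-dimensional, $U\n=\C[a^{\dagger}]$ and $\f^{\ast}\cong \C[a^{\dagger}]/((a^{\dagger})^{p+1})$. The resolution then has length $N=1$:

```latex
% n = 1: multiplication by (a^+)^{p+1} resolves the exclusion ideal,
0 \rightarrow U\n \xrightarrow{\;\cdot\,(a^{\dagger})^{p+1}\;} U\n
  \rightarrow \f^{\ast} \rightarrow 0 \ ,
% so E_0 = \C, E_1 = \C\,(a^{\dagger})^{p+1}, and the Euler-Poincar\'e
% characteristic reproduces the p-truncated geometric series
ch\, \f^{\ast} \;=\; \frac{1-x^{p+1}}{1-x} \;=\; 1+x+\cdots+x^{p} \ .
```

This matches (\ref{Hkp}): for $n=1$ the only self-conjugated diagrams are $\mu=\emptyset$ (degree $k=0$) and $\mu=(1)$, whose $p$-augmentation is the one-row diagram $\mu^{(p)}=(p+1)$ (degree $k=1$).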
\begin{theorem} The Euler-Poincar\'e characteristic of the minimal free resolution (\ref{freeres}) of the (dual of the) parafermionic Fock space $\f$ yields the identity \beq \label{conj} \frac{\sum_{\mu: \mu=\mu'} (-1)^{\demi(|\mu|+r(\mu)) }s_{\mu^{(p)}}(x)}{\prod_{i} (1-x_{i}) \prod_{i<j}(1-x_{i}x_{j}) } =\sum_{\lambda: l(\lambda')\leq p} s_\lambda(x) \ . \eeq \end{theorem} \begin{proof} In general, the mapping of modules of an algebra into its Grothendieck ring of characters is an example of an Euler-Poincar\'e characteristic. The free resolution (\ref{freeres}) is naturally a (reducible) $\mathfrak{gl}(V)$-module and the Schur functions (\ref{schurp}) span the ring of $\mathfrak{gl}(V)$-characters. All the homology of a resolution is concentrated in degree 0, hence on the RHS of (\ref{conj}) stands the character of the self-conjugated\footnote{The self-conjugacy $\f\cong \f^\ast$ allows one to switch between $x_i:=\exp(\pm e_i)$ without a conflict.} module $\f$ (\ref{charMac}) $$ch\, \f =ch\, \f^\ast = e^{-p \theta} \sum_{\lambda \subseteq (p^n)} s_\lambda(x) \qquad x_i:=\exp(e_i) \ .$$ From the Poincar\'e-Birkhoff-Witt theorem it follows that the character of $P_k$ reads $$ ch\, P_k= ch (E_k\otimes U\n) = \frac{e^{-\Lambda} s_{\mu^{(p)}}(x)}{\prod_i (1- x_i) \prod_{i<j} (1-x_i x_j)} \ . $$ Thus the alternating sum on the LHS comes from the characters of the $\mathfrak{gl}(V)$-modules $ E_k\otimes U\n$ taken with alternating signs corresponding to the homological degree. The factor $e^{p \theta}=e^\Lambda$ accounting for the weight of the HW vector $|\Lambda \rangle$ cancels, which proves the parafermionic sign-alternating identity (\ref{conj}). \qed \end{proof} {\bf Remark.} The minimal free resolution of the trivial module $\C$ constructed by J\'ozefiak and Weyman \cite{JW} with the help of the homologies $H_{k}(\n,\C)$ corresponds to the resolution $P_{\bullet}$ (\ref{freeres}) of $\C \cong \mathcal V(p=0)$.
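As a concrete sanity check (our own computation, not taken from \cite{SVdJ}; the SymPy-based script and all names in it are our choices), identity (\ref{conj}) can be verified for $n=2$, $p=1$, where the $2^n=4$ self-conjugated diagrams inside $(2,2)$ are $\emptyset$, $(1)$, $(2,1)$ and $(2,2)$:

```python
# Our own numerical check (not from the paper): verify the sign-alternating
# identity for n = 2, p = 1.  Schur polynomials are computed from the
# bialternant formula s_lambda = det(x_j^{lambda_i + n - i}) / det(x_j^{n - i}).
from sympy import symbols, Matrix, cancel, expand

x1, x2 = symbols('x1 x2')
xs = (x1, x2)
n = len(xs)

def schur(lam):
    """Schur polynomial s_lambda(x1, x2) as a quotient of determinants."""
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition to n parts
    num = Matrix(n, n, lambda i, j: xs[j] ** (lam[i] + n - 1 - i))
    den = Matrix(n, n, lambda i, j: xs[j] ** (n - 1 - i))
    return cancel(num.det() / den.det())

# The 4 self-conjugate diagrams mu inside (2,2) give the 1-augmented
# diagrams mu^(1) (arms lengthened by p = 1) with degrees k = (|mu|+r(mu))/2:
# mu = {}    -> mu^(1) = {},     k = 0
# mu = (1)   -> mu^(1) = (2),    k = 1
# mu = (2,1) -> mu^(1) = (3,1),  k = 2
# mu = (2,2) -> mu^(1) = (3,3),  k = 3
terms = [((), 0), ((2,), 1), ((3, 1), 2), ((3, 3), 3)]

lhs_numerator = sum((-1) ** k * schur(aug) for aug, k in terms)
denominator = (1 - x1) * (1 - x2) * (1 - x1 * x2)

# RHS: Schur functions with at most p = 1 column, i.e. lambda = (1^m), m <= 2
rhs = schur(()) + schur((1,)) + schur((1, 1))

assert expand(lhs_numerator - rhs * denominator) == 0
```

The script terminates silently: the numerator $1-s_{(2)}+s_{(3,1)}-s_{(3,3)}$ factors as $(1+x_1)(1+x_2)(1-x_1)(1-x_2)(1-x_1x_2)$, in agreement with (\ref{conj}).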
The parafermionic sign-alternating identity (\ref{conj}) was proposed by Stoilova and Van der Jeugt in their study of the parafermionic Fock space \cite{SVdJ}. The parabosonic Fock space has been explored in \cite{LSVdJ}, where the ``super-symmetric partner'' of the identity (\ref{conj}) has been proposed (for a combinatorial proof see \cite{King}) \beq \label{conj'} \frac{\sum_{\mu: \mu=\mu'} (-1)^{\demi(|\mu|+r(\mu)) }s_{[\mu^{(p)}]^\prime}(x)}{\prod_{i} (1-x_{i}) \prod_{i<j}(1-x_{i}x_{j}) } =\sum_{\lambda: l(\lambda)\leq p} s_\lambda(x) \ . \eeq The parity functor $\Pi$ switches the parafermionic {\it even} generators to parabosonic {\it odd} generators, thus $\g =\mathfrak{so}_{2n+1} \stackrel{\Pi}{\rightarrow} \tilde{\g}=\mathfrak{osp}_{1|2n}$. The effect of $\Pi$ is the passage $\lambda \stackrel{\Pi}{\rightarrow} \lambda^\prime$. The identity (\ref{conj'}) is rooted in a minimal free resolution of the parabosonic Fock space $\tilde{\mathcal V}(p)=\Pi \f$ by free $U\tilde{\n}$-modules of the nilpotent Lie super-algebra $\tilde{\n}\subset \tilde{\g}$. More generally, one can consider the parastatistics Fock space ${\mathcal V}_{n|m}(p)$ of the parastatistics Lie super-algebra $\g_{n|m}:=\mathfrak{osp}_{2n+1|2m}$ with $n$ parafermionic and $m$ parabosonic modes. We conjecture that there exists a complex of free $U\n_{n|m}$-modules of the maximal nilpotent Lie super-algebra $\n_{n|m}\subset \mathfrak{osp}_{2n+1|2m}$ whose cohomology is ${\mathcal V}_{n|m}(p)$. Then the Euler-Poincar\'e characteristic of such a complex would yield one more identity (obtained by a different method in \cite{LP}) \[ \frac{ \prod_{i<j \,,\,\hat{i}\neq \hat{j}} (1+x_{i}x_{j}) \sum_{\mu: \mu=\mu'} (-1)^{\frac{1}{2}(|\mu| + r(\mu))} hs_{\mu^{(p)}}(x)} { \prod_{i}(1-x_{i}) \prod_{i<j \,,\,\hat{i}=\hat{j}}{(1-x_{i}x_{j})} } =\sum_{\lambda:\, \lambda_1\leq { p}} hs_{\lambda}(x) \ .
\] Here the $(n|m)$-hook Schur polynomial $hs_{\lambda}(x)$ is the character of the irreducible $\mathfrak{gl}_{n|m}$-module $ V^{\lambda}$, $hs_{\lambda}(x)=ch\, V^{\lambda} $. The non-trivial $\mathfrak{gl}_{n|m}$-modules $V^{\lambda}$ are labelled by diagrams $\lambda$ such that $\lambda_{n+1}\leq m$.
Did you know that grey hair is in vogue? While in the past, aging women felt the pressure to colour their hair and hide the grey hairs, they are now embracing it. And when it comes to both skin and hair care, embracing what is natural instead of fighting it is always recommended.

Why are you greying?

1. Genetics
We get grey hair when follicles stop producing melanin - a process which is determined more by genes than anything else. So if you’re wondering when your hair might start greying, find out when it happened to your parents. Generally, Caucasians tend to go grey earlier than Asians or Africans.

2. Stress?
Although stressful periods in life seem to bring on grey hair, scientists are yet to find any proof of correlation. However, stress can cause temporary hair loss - known as telogen effluvium. The hair which grows back after telogen effluvium might be less pigmented.

You might notice that your grey hair looks and behaves very differently from the rest of the hair -- it might be curlier or straighter. Generally, grey hair tends to be drier, more brittle, and more prone to breakage and yellowing. Here are some tips to help you care for your grey hair like a pro:

1. Condition
Grey hair has a tendency to be dry and brittle -- hence prone to breaking. You can’t afford to be lazy with conditioning it. To give it the moisture it needs, go for deep-conditioning natural oils like avocado, coconut, Argan and Jamaican black castor oil. Also consider honey and agave treatments, deep-conditioning protein packs, and well-formulated daily leave-in conditioners.

2. Avoid heat
Blow-drying, straightening and curling might not be such great ideas for grey hair. Heat treatments are generally discouraged for natural hair. Why? Heat zaps moisture from hair and alters the hair’s natural curl pattern. With grey hair, these concerns multiply. To keep your grey strands healthy, the best option is to avoid heat treatment at all costs.
But if you’re addicted to heat treatments, you might consider using heat protectant serums or sprays.

3. Beware UV rays
Since it has little to no melanin, grey hair is particularly susceptible to sun damage. If you haven’t done so already, it’s time to consider using hair products containing SPF. Natural oils such as avocado and coconut have natural SPF and are therefore highly recommended. If you’re going to spend a considerable amount of time outdoors, wear a pretty hat, scarf, or head-wrap...you can be both stylish and sensible.

4. Make it shine
Grey hair can look dull, yellow, and lifeless. To make it look shinier, go for specially formulated brightening grey hair shampoos. If your hair seems to be thinning as well, go for volumising shampoos. To naturally brighten your hair, massage in lemon juice mixed half-and-half with water and wait for 10 to 15 minutes before rinsing off. You can also add apple cider vinegar to your shampoo. Apply natural oil afterwards.

5. Update your make-up
Going grey is not an excuse to look frumpy and unstylish. To rock your grey tresses, you need to update your make-up and dressing accordingly. Avoid any clothes and accessories that scream ‘old-lady’. For instance, instead of thick grandma glasses, go for trendy frames which complement your face. As for makeup, make sure to add colour to your face by applying bronzer or a bit of blush to your cheeks. Use concealer to hide dark and saggy under-eye bags or general skin unevenness. Draw attention to your eyes by using cool-coloured eye shadow shades such as slate or grey. Pink or rose-coloured lipstick or gloss will naturally complement your grey hair.
TITLE: How to prove that not all recursive sets are in $\mathsf{P}$? QUESTION [4 upvotes]: I am interested in finding a simple proof (without the time hierarchy theorem) of the following fact: To show that there is a recursive set that is not in the class $\mathsf{P}$. REPLY [3 votes]: The class $\mathsf P$ is effectively enumerable: If we define $$ A_k = \{n\mid \text{Turing machine number $k$ accepts $n$ in at most $n^k+k$ steps}\}$$ then every set in $\mathsf P$ is $A_k$ for some $k$. This works because every real polynomial is pointwise smaller than something of the form $x^k+k$, and every Turing machine has (in effect) arbitrarily large indices -- namely, we can add unreachable states to it until the index is as large as we want. (We could avoid this reasoning at the cost of introducing a tupling function and corresponding projections). Now we can diagonalize to find $$ D = \{ n \mid \text{Turing machine number $n$ does not accept $n$ in at most $n^n+n$ steps}\} $$ which is clearly recursive, but by construction differs from all the $A_k$s and so is not in $\mathsf P$. Whether this proof is significantly different from just applying the time hierarchy theorem is, I suppose, in the eye of the beholder.
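As a toy illustration (entirely my own sketch: real Turing machines are replaced by functions that report a step count and a verdict, standing in for a universal simulator), the diagonalization looks like this:

```python
# Toy sketch (our own illustration, not a real TM simulator): model
# "machine k run with a step budget" by functions that report both a
# verdict and how many steps they would take on a given input.
def simulate(machine, n, budget):
    """Return True iff `machine` accepts input n within `budget` steps."""
    steps, accepts = machine(n)
    return accepts and steps <= budget

# Sample "machines": each returns (steps used on input n, accept?).
machines = [
    lambda n: (1, True),            # accepts everything in one step
    lambda n: (n + 1, n % 2 == 0),  # accepts even n, slowly
    lambda n: (2 ** n, True),       # accepts everything, exponentially slowly
]

def A(k, n):   # the set A_k: machine k accepts n within n^k + k steps
    return simulate(machines[k], n, n ** k + k)

def D(n):      # diagonal set: machine n does NOT accept n within n^n + n steps
    return not simulate(machines[n], n, n ** n + n)

# D differs from each A_k at the diagonal point n = k:
assert all(D(k) != A(k, k) for k in range(len(machines)))
```

Note that the third machine runs too slowly for its own polynomial budget at $n=2$ (it needs $2^2=4$ steps and gets $2^2+2=6$, so it actually makes it there, but falls behind for larger $n$); the point of the construction is only that $D$ disagrees with $A_k$ at $n=k$ by definition.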
News & Views from the Village of Gold River BC Canada Friday, December 9th is the next local Court day scheduled for Gold River. If you are involved in any type of proceedings as a witness or victim and would like some information as to what to expect, etc., please contact me (Kyra Fiddler) at the Nootka Sound Victim Services office at 283-7773, or email nootkasoundvs@gmail.com. Please note that I am also available for emotional support before, during and after Court. If you are nervous or worried, please call. Also, members of the public are always welcome to attend and watch court proceedings. Court begins at approximately 9:30am. Just an FYI to anyone who missed out on the previous flu clinics – flu vaccines are still available by appointment through the Gold River Public Health Unit. Please call 250-283-2626 EXT #3 to arrange an appointment or for more information. Via Julie Schimunek: REMINDER! THE RACE IS ON! 2 DAYS to November 22nd SMART METER PETITION SIGNATURE COUNTDOWN: for Premier Christy Clark and the Legislature COUNTDOWN: 2 DAYS UNTIL TUESDAY, NOVEMBER 22 The race is on to get as many signatures on our petition as possible by November 22. John Horgan, NDP Energy Critic, will take all our signatures (online and hard copy) into the Victoria Legislature on November 24 and officially demand a halt to the Smart Meter Program. Let’s give him the ammunition he needs with thousands more signatures! The deadline for submission of these signatures is TUESDAY, NOVEMBER 22nd. Please, if you haven’t already done so, SIGN OUR ONLINE PETITION (also on the sidebar on every page of this website). Also, we need to get it out on Facebook, Twitter, websites . . . Ask your friends and business associates to help with signature collection. For those who have been collecting signatures on our HARD COPY PETITIONS, NOW is the time to send them to us. FAX, or SCAN/ E-MAIL them to Una St. Clair as soon as you possibly can. 
Thank you for your efforts to learn about this issue, and be pro-active in your homes and communities. These are much appreciated, and will help all of us as the future unfolds. Bill 23 says this: “Concerned about smart meters? Join the growing list of BC communities who are calling for a Moratorium.” Click below and click on the Smart Meters Kit, which includes letters to BC Hydro and Petition forms as well as Signs to place by your meter. The Food Court at the Gold River General Store (formerly Payless) is now open! At lunch they have hot baked chicken and potato wedges, and in the deli case there are salads, sandwiches, wraps, veggies and dip, pizza pretzels and more! As well, there are also fresh pepperoni cheese sticks (made in store) and a variety of baked goods. Pizzas, including the much loved breakfast pizza are coming soon! Kleos Open House When: Wednesday, November 23, 2011, 6-8pm Where: Anne Fiddick Aquatic Centre Meeting Room Why: To introduce Kleos and our learning programs You are invited to join us for a bite to eat and food for thought as we share a little bit about Kleos Open Learning and how we support grade K-12 students. Chili, desserts, and refreshments will be served throughout the evening. A short presentation will be given at 6:30pm, followed by a question and answer session. Attendees will be given a complimentary pool pass to say thank you for coming out to meet us. We look forward to meeting you on November 23rd. There will be one more flu clinic held on November 9th from 9:30am -4:30pm. Please call 283-2626 (ext 3) for appointments. Please leave a message and we will return your call asap. White cat with grey on the top of her head was found at the Arena Monday Oct 24th. She appears to be approximately a year old and has no tattoos.
Anyone who believes this is their cat, please contact Michele 250-283-7736 or email thethomsonsATcablerocketDOTcom Cyber criminals are once again calling people in Gold River on the telephone, claiming to be from Microsoft, and offering to help solve their computer problems. Once cybercriminals have gained a victim’s trust, they can do one or more of the following: Microsoft will not make unsolicited phone calls to help you with your computer. If you receive a phone call like this, hang up. Gold River Buzz has registered and will be taking part in the drill. (well, however many of the four of us are awake at 10:20am tomorrow anyway – it is a Pro-D Day after all!) It’s not a big deal, and won’t take up much of your time – just a great opportunity to think about things and mentally prepare for an earthquake. Here’s the ‘Drop, Cover, and Hold On’ Drill (via ShakeOutBC): At 10:20am today, take a look around where you are – think about what you would do and what might happen if an earthquake struck right now. If there isn’t a table or desk near you, cover your face and head with your arms and crouch in an inside corner of the building. Do not try to run to another room just to get under a table! What NOT to do: DO NOT get in a doorway! As with anything, practice makes perfect. Always try to be aware of your surroundings and potential hazards. And of course, now is a good time to evaluate (or prepare) your emergency kit. Visit GetPrepared.ca for more information.
Cargill Announces New All-Vegetarian Diet for Good Nature Pork September 9, 2009 — Cargill Pork announced a new 100 percent vegetarian diet for hogs that supply its Good Nature pork line of products. According to Cargill, a vegetarian diet means hogs receive a mixture of natural grain -- a diet completely free of any animal byproducts. The vegetarian diet is Cargill's most recent addition to its all-natural standards for Good Nature pork. The product is already guaranteed to be antibiotic-free, and hogs selected from the program never receive antibiotics or growth stimulants. "Good Nature pork is targeted at a growing consumer audience that is very conscious of how and where their food is raised," said Joe Linot, pork marketing manager for Cargill Pork. "By using a 100 percent vegetarian diet for hogs in our program, Cargill can provide options for consumers who want to make healthy, environmentally responsible choices when they select Good Nature pork for their families." Source: MeatingPlace.com, September 9, 2009 By Ann Bagel Storck
...Animal Farm... It's true, I don't even really like dogs. But I always said I might like a dog if I was able to get it when it was a puppy...so here's to discovery! So far, the puppy makes noises like a dying cow at night when he's lonely (the night watchmen call it "crying." I call it horrendous). He's also got something wrong with his hind leg, so he's been limping all weekend. When they took him to get his rabies shot on Saturday, the vet said that the leg wasn't broken so we're hoping it will just heal. The puppy will start his worm treatment medicine today...maybe that will make him gain some weight. He's a bit scrawny...and a little worse for wear. He was living at a school downtown and the kids here tend to enjoy stoning dogs...so he's got a couple of battle scars. Oh yes, I believe he is going to be called "Frodo." Originally they wanted to call him "Hobbit" but I vetoed that. Giving him a name like that would be like naming a child "Human." It's just too impersonal. Even for a dog that makes noises like a dying cow. We also have a cat. The cat has been here since I arrived. She's given birth to something like 3 litters, but ate two of them. She managed to keep from devouring her most recent litter which was born just before I arrived. We're so proud. The kittens were given away before I moved in. Nuts. The cat's name is Beatrix - after the queen of the Netherlands or something like that. I don't know. I just call her "Puss." I'm not so original with names. When I was a child, I had a hamster named "Hampsty," and three fishes named "Goldie," "Blackie," and "Strawberry." Three guesses what colours they were! If I ever have children, I'll try to be more inventive. They make books for that, don't they?
\begin{document} \title{Using symmetry to generate solutions to the Helmholtz equation inside an equilateral triangle} \author{Nathaniel Stambaugh} \email{nstambaugh@flsouthern.edu} \affiliation{Department of Computer Science and Mathematics,\\ Florida Southern College, Lakeland, Florida. 33801, USA} \author{Mark Semon} \affiliation{Department of Physics and Astronomy,\\ Bates College, Lewiston, Maine. 04240, USA} \date{\today} \begin{abstract} We prove that every solution of the Helmholtz equation $\nabla^2 \psi + k^2 \psi = 0$ within an equilateral triangle, which obeys the Dirichlet conditions on the boundary, is a member of one of four symmetry classes. We then show how solutions with different symmetries, or different values of $k^2$, can be generated from any given solution using symmetry operators or a differential operator derived from symmetry considerations. Our method also provides a novel way of generating the ground state solution (that is, the solution with the lowest value of $k^2$). Finally, we establish a correspondence between solutions in the equilateral and $(30^{\circ},60^{\circ}, 90^{\circ})$ triangles. \end{abstract} \pacs{02.20.-a,02.30.Jr, 03.65.Ge} \maketitle \section{Introduction.}\label{sec:intro1} The solutions to many important physical problems, such as electromagnetic waves in waveguides \cite{Liboff}, lasing modes in nanostructures \cite{Chang}, the electronic structure of graphene \cite{Kaufman} and the quantum eigenvalues and eigenfunctions for various potential energies \cite{Doncheski} are obtained by solving the ubiquitous Helmholtz equation \begin{equation} \nabla^2 \psi + k^2 \psi = 0. \label{helmholtz} \end{equation} In this paper we discuss the solutions to this equation when the region of interest is an equilateral triangle ($\Delta$) and when the solutions vanish on the boundary (i.e. when they satisfy the Dirichlet condition $\psi\big|_{\partial \Delta} = 0$). 
Although the explicit solutions in this case are well-known (\cite{Chang}, \cite{Doncheski}, \cite{Itzykson}, \cite{McCartin}), we present an alternative method of obtaining them that does not involve solving the differential equation directly, but rather uses only symmetry arguments, or a differential operator derived from symmetry considerations alone, to generate new solutions from any given solution. Our method is based upon first showing that each solution within the equilateral triangle is a member of one of four symmetry classes, and then introducing symmetry operators and a differential operator which transform solutions in one symmetry class into those of another or from one value of $k^2$ to another. Obviously any method that generates one or more new solutions to the Helmholtz equation from a given solution is quite a powerful and useful tool. The fact that the new solutions are generated from symmetry transformations alone rather than by solving the Helmholtz equation directly makes the method even more attractive. The method also has the advantage of being able to produce solutions with prescribed symmetries, which can be important if the desired solution needs, or is known to possess, certain symmetries. The paper is structured as follows: in Section \ref{math} we establish our notation while reviewing the results from representation theory and linear algebra which are used in the rest of the paper. In Section \ref{sec:rep3} we show that every solution to the Helmholtz equation within an equilateral triangle, which obeys the Dirichlet conditions on the boundary, is a member of one of four symmetry classes. In Section \ref{sec:res1} we show how to take a solution from any one of the four classes and generate from it solutions in a different symmetry class and/or with different values of the scalar $k^2$. In Section \ref{sec:ground} we use our approach to obtain the explicit solution to the Helmholtz equation with the lowest value of $k^2$, i.e. 
the ``ground state solution." In Section \ref{sec:con1} we summarize our results and discuss the various ways in which they can be applied. In particular, we discuss the correspondence between solutions in the equilateral and $(30^{\circ},60^{\circ}, 90^{\circ})$ triangles. \section{Notation and Background} \label{math} \subsection{Representation Theory} \label{reps} A representation is a homomorphism $\rho$ from the group $G$ into the group of linear transformations of a vector space (in our case, the real numbers suffice), which we denote by $\rho: G \rightarrow $GL$_n(\mathbb{R})$. The representation assigns to each group element a transformation of the vector space that is consistent with the multiplication table of the group. For the dihedral group $\mathcal{D}_3$, every such representation can be decomposed into a direct sum of copies of its three irreducible representations. \begin{figure} \centering \epsfig{file=group.eps,width=.4\textwidth} \caption{The generators of the symmetry group for the equilateral triangle. We let $\sigma$ denote a counter-clockwise rotation by $120^{\circ}$ and $\mu$ the reflection in the $x$-axis.}\label{group} \end{figure} Before we can describe these homomorphisms we need to describe the elements of the group $\mathcal{D}_3$. Let $\sigma$ be a $120^{\circ}$ counter-clockwise rotation about the center of an equilateral triangle and $\mu$ be a reflection (without loss of generality) about the $x-$axis, as shown in Figure (\ref{group}). The defining relationship of the dihedral group says that $\mu \sigma = \sigma^{-1} \mu$. Since these two elements generate the whole group, we need only define each homomorphism on these generators. Listing the elements of the group, we have $$\mathcal{D}_3 = \{e, \sigma, \sigma^2, \mu, \mu\sigma, \mu \sigma^2\}.$$ The first irreducible representation is called the \textit{trivial} representation because it maps every group element to the identity map of $\mathbb{R}$. 
While this may seem somewhat, well, trivial, it actually plays an interesting role later on. Symbolically, $\rho_1(\alpha) = 1$ for every $\alpha \in \mathcal{D}_3$. The second representation is called the {\it sign} representation, though it also could be called the \textit{orientation} representation, because it shows whether a reflection has occurred. That is, $\rho_2(\sigma) = 1$ and $\rho_2(\mu) = -1$. Obviously the trivial and sign representations are one dimensional. The third representation is the only one that displays every nuance in the group, and it therefore is sometimes used to define $\mathcal{D}_3$. Unlike the two previous representations, $\rho_3$ is a two dimensional representation whose elements (in GL$_2(\mathbb{R})$) are \begin{center} \begin{tabular}{ccc} $\rho_3(\sigma) = \frac{1}{2} \begin{pmatrix} -1 & -\sqrt{3} \\ \sqrt{3} & -1 \end{pmatrix}$, & \hspace{1mm} $\rho_3(\mu) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. \end{tabular} \end{center} Using $\rho_3$ we can define in a natural way the action of each group element $\alpha$ on a solution $f(x,y)$ of the Helmholtz equation. The argument of the function is the vector $\left[\begin{tabular}{c} $x$ \\ $y$ \end{tabular} \right]$ in $\mathbb{R}^2$, so the following action is well defined: \begin{equation} (\alpha \cdot f)(x,y) = f\Big(\rho_3(\alpha)^{-1} \left[\begin{tabular}{c} $x$ \\ $y$ \end{tabular} \right]\Big). \label{action} \end{equation} The use of the inverse of the representation matrix is required to make this a homomorphism, and can be thought of as a passive transformation on the coordinates. In order to simplify the notation, we will write $\alpha f$ in place of $\alpha\cdot f$. 
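The matrices of $\rho_3$ given above are easy to sanity-check numerically. The short sketch below (our addition, using NumPy; it is not part of the paper) verifies that they satisfy $\sigma^3 = e$, $\mu^2 = e$, and the defining relation $\mu \sigma = \sigma^{-1} \mu$:

```python
import numpy as np

# rho_3 matrices from the text: sigma is a 120-degree rotation,
# mu is the reflection about the x-axis
rho_sigma = 0.5 * np.array([[-1.0, -np.sqrt(3)],
                            [np.sqrt(3), -1.0]])
rho_mu = np.array([[1.0, 0.0],
                   [0.0, -1.0]])

# sigma^3 = e: three 120-degree rotations compose to the identity
assert np.allclose(np.linalg.matrix_power(rho_sigma, 3), np.eye(2))

# mu^2 = e: reflecting twice is the identity
assert np.allclose(rho_mu @ rho_mu, np.eye(2))

# defining relation of the dihedral group: mu sigma = sigma^{-1} mu
assert np.allclose(rho_mu @ rho_sigma, np.linalg.inv(rho_sigma) @ rho_mu)
```

The same checks apply to any candidate pair of generator matrices before they are used in the action of Eq. (\ref{action}).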
\subsection{Inner Product Spaces}\label{innerprod} If $f_1$ and $f_2$ are two solutions of the Helmholtz equation then we define the inner product of $f_1$ and $f_2$ as \begin{equation} \langle f_1, f_2 \rangle = \int\int_{\Delta} f_1(x,y)f_2(x,y)dxdy, \end{equation} where the integral is taken over the domain $\Delta$. The norm (or length) of a solution $f$ is defined as \begin{equation} ||f|| = \sqrt{\langle f,f \rangle}. \end{equation} This is called the $\mathcal{L}^2$ norm, which we will use to normalize any given solution and also to establish when two solutions $f_1$ and $f_2$ are orthogonal ($\langle f_1, f_2 \rangle =0 \Longleftrightarrow f_1 \perp f_2$). If two solutions have the same value of $k$ and are orthogonal, we can form a two dimensional space spanned by these solutions. In the same way that we use the vector $\left[\begin{tabular}{c} $a$ \\ $b$ \end{tabular} \right]$ to represent $a \hat{i} + b \hat{j}$, in our context this column vector will represent $a f_1 + b f_2$. \section{\label{sec:rep3} Classifying solutions to the Helmholtz equation by their symmetries.} If $f$ is a solution of the Helmholtz equation within a region (denoted by $\Delta$) whose sides form an equilateral triangle, and which satisfies the Dirichlet conditions on the boundary, then for each element $\alpha \in \mathcal{D}_3$, $\alpha f$ (as defined by Eq. (\ref{action})) is a solution of the Helmholtz equation with the same value of $k^2$. In this section we prove that every such solution $f$ belongs to one of four sets according to its rotational and reflection symmetries. We call these sets {\it symmetry classes} and denote them by A1, A2, E1 and E2. We first consider the rotational symmetries of solutions of the Helmholtz equation. If a solution $f$ is rotated to obtain a new solution $\sigma f$ then, in general, the new solution can be rotationally symmetric ($\sigma f = f$), rotationally anti-symmetric ($\sigma f = -f$), or rotationally asymmetric ($\sigma f \neq \pm f$). 
We can eliminate the rotationally anti-symmetric case as follows: Suppose that $\sigma f = - f$. Then, since $\sigma^3$ is the identity element, \begin{equation} f = \sigma^3 f = \sigma \sigma (- f) = \sigma f = -f. \end{equation} \noindent Thus $f(x) = -f(x)$ for every $x\in \Delta$ so $f$ is identically zero on the domain. Therefore, when we rotate a (non-trivial) solution $f$ to obtain a new solution $\sigma f$, the new solution $\sigma f$ must be either rotationally symmetric or rotationally asymmetric. In what follows we first consider the effect of reflections on the rotationally symmetric solutions and then the effect of reflections on the rotationally asymmetric solutions. \subsection{Properties of rotationally symmetric solutions under reflection.} \label{rotsym1} Assume that $f$ is a rotationally symmetric solution of the Helmholtz equation, so $\sigma f = f$. In general, the solution $\mu f$ can be symmetric, anti-symmetric, or asymmetric under reflection. However, we can eliminate the asymmetric case as follows: Suppose $\mu f \neq \pm f$. Define the two functions \begin{eqnarray} f_+ & = & \frac{1}{2}(f + \mu f),\\ f_- & = & \frac{1}{2}(f - \mu f). \end{eqnarray} The functions $f_+$ and $f_-$ are solutions of the Helmholtz equation because each is constructed from a linear combination of solutions to the Helmholtz equation. Furthermore, $f_+$ is symmetric and $f_-$ is anti-symmetric under reflections about the $x-$axis, and the solution $f$ can be written as $f=f_+ + f_-$. In addition, the boundary condition obeyed by $f$ will also be obeyed by $f_+$ and $f_-$. Consequently, any asymmetric solution $f$ can be decomposed into the sum of the symmetric and anti-symmetric solutions $f_+$ and $f_-$, and we thus need only consider solutions to the Helmholtz equation which are symmetric or anti-symmetric under reflection about the x-axis. 
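The decomposition $f = f_+ + f_-$ is easy to demonstrate concretely. In the sketch below (our illustration; the test function is an arbitrary asymmetric function, not an actual Helmholtz eigenfunction) the two projections behave as claimed under $\mu: y \mapsto -y$:

```python
import numpy as np

def decompose(f):
    """Split f(x, y) into its symmetric (f_+) and anti-symmetric (f_-)
    parts under the reflection mu: y -> -y."""
    f_plus = lambda x, y: 0.5 * (f(x, y) + f(x, -y))
    f_minus = lambda x, y: 0.5 * (f(x, y) - f(x, -y))
    return f_plus, f_minus

# an arbitrary reflection-asymmetric test function (illustrative only)
f = lambda x, y: np.sin(x + 0.7 * y)
f_plus, f_minus = decompose(f)

x, y = 0.3, 0.9
assert np.isclose(f_plus(x, y), f_plus(x, -y))            # mu f_+ = f_+
assert np.isclose(f_minus(x, y), -f_minus(x, -y))         # mu f_- = -f_-
assert np.isclose(f(x, y), f_plus(x, y) + f_minus(x, y))  # f = f_+ + f_-
```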
If we denote by A1 the set of solutions that are symmetric under a rotation $\sigma$ and symmetric under a reflection $\mu$, and by A2 the set of solutions that are symmetric under a rotation $\sigma$ and anti-symmetric under reflection $\mu$, then we can summarize the results of this section with the following table: \begin{center} \begin{table}[h] \caption{Rotationally Symmetric Solutions} \begin{tabular}{c||c|c} & \; $\sigma f_i$ \; & \; $\mu f_i$ \; \\ \hline \hline $f_1\in$A1 & $+f_1$ & $+f_1$ \\ $f_2\in$A2 & $+f_2$ & $-f_2$ \end{tabular} \end{table} \end{center} Figure (\ref{sym}) shows examples of solutions in the symmetry classes A1 and A2. It's interesting to note that all of the solutions in A1 are orthogonal to all the solutions in A2: Suppose $f_1\in A1$, then $\mu f_1 = f_1 \Longleftrightarrow f_1(x,-y) = f_1(x,y)$, which means that if $f_1\in A1$ then $f_1$ is even in $y$. Similarly, if $f_2\in A2$, then $\mu f_2 = -f_2 \Longleftrightarrow f_2(x,-y) = -f_2(x,y)$, which means that if $f_2\in A2$ then $f_2$ is odd in $y$. Therefore the product $f_1 f_2$ is odd in $y \Rightarrow \langle f_1, f_2 \rangle =0 \Longleftrightarrow f_1 \perp f_2$. We end this subsection by noting that $f_1\in$ A1 is characterized in the trivial representation by \begin{equation} \alpha f_1 = \rho_1(\alpha) f_1\label{eqn:A1}, \end{equation} and $f_2\in$A2 is characterized in the sign representation by \begin{equation} \alpha f_2 = \rho_2(\alpha) f_2.\label{eqn:A2} \end{equation} \begin{figure} \centering \subfigure[\, A1\label{A1}]{\epsfig{file=A1.eps,width=.2\textwidth}}\hspace{8mm} \subfigure[\, A2\label{A2}]{\epsfig{file=A2.eps,width=.2\textwidth}} \caption{Figures (a) and (b) show solutions $f$ which both have rotational symmetry. 
The solution shown in (a) is also symmetric under reflection, while the solution shown in (b) is also anti-symmetric under reflections.} \label{sym} \end{figure} \subsection{\label{sec:rep5} Properties of rotationally asymmetric solutions under reflection.} As was shown at the beginning of Section \ref{sec:rep3}, if $f$ is a solution of the Helmholtz equation in the region $\Delta$ then the rotated solution $\sigma f$ will be either rotationally symmetric or rotationally asymmetric. Having examined the reflection properties of the rotationally symmetric solutions in the previous section we now examine the reflection properties of the rotationally asymmetric solutions. Although the lack of rotational symmetry might suggest that these solutions are of little use, on the contrary, they are not only the most common solutions to the Helmholtz equation, but they also have a number of interesting and useful properties. Let E1 be the set of solutions which are asymmetric under rotations and symmetric under reflections, and E2 be the set of solutions which are asymmetric under rotations and anti-symmetric under reflections. Consider a normalized solution $f_1$ in class E1. At this point we do not know anything about $\sigma f_1$ except that it is a solution with the same value of $k^2$ as $f_1$. For that matter, so is $\sigma^2 f_1$. So consider the function $\hat{f}_2 = \sigma f_1 - \sigma^2 f_1$. We now show that $\hat{f}_2$ is in symmetry class E2: \begin{eqnarray} \mu \hat{f}_2 & = & \mu ( \sigma f_1 - \sigma^2 f_1) \nonumber \\ & = & \mu \sigma f_1 - \mu \sigma^2 f_1 \nonumber \\ & = & \sigma^2\mu f_1 - \sigma \mu f_1 \nonumber \\ & = & \sigma^2 f_1 - \sigma f_1 \nonumber \\ & = & - \hat{f}_2 \end{eqnarray} The reason we call this new solution $\hat{f}_2$ is that it is not normalized; we will call the normalized function $f_2$. We also note that $f_1$ and $f_2$ are orthogonal since their product is odd in $y$. 
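The parity argument behind this orthogonality can be checked symbolically. In the sketch below (our addition; a $y$-symmetric rectangle stands in for the triangular domain, and the integrands are generic even/odd functions rather than actual eigenfunctions) the inner product vanishes exactly:

```python
import sympy as sp

x, y = sp.symbols('x y')

f1 = sp.cos(y) * sp.sin(x)  # even in y, as for solutions symmetric under mu
f2 = sp.sin(y) * sp.sin(x)  # odd in y, as for solutions anti-symmetric under mu

# over any domain symmetric about the x-axis, the product of an even and
# an odd function of y integrates to zero
inner = sp.integrate(f1 * f2, (y, -1, 1), (x, 0, 2))
assert sp.simplify(inner) == 0
```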
Since the action of the group introduces a second solution with the same value of $k^2$, we consider the two dimensional solution space spanned by $f_1$ and $f_2$. As in Section \ref{innerprod}, the vector $\vec{f}=\left[ \begin{tabular}{c} $a$ \\ $b$ \end{tabular} \right]$ represents the solution $f = a f_1 + b f_2$. Written in this way, we can recognize the third irreducible representation $\rho_3$ $$\alpha \vec{f} = \rho_3(\alpha) \vec{f}.$$ This equation actually contains a lot of information, and is strikingly similar to Eqs.(\ref{eqn:A1}) and (\ref{eqn:A2}). We now use it to normalize $\hat{f}_2$. Let $\vec{f_1} = \left[ \begin{tabular}{c} 1 \\ 0 \end{tabular} \right]$. Then $\sigma f_1$ can be re-expressed as a linear combination of $f_1$ and $f_2$ as follows: \begin{eqnarray} \sigma f_1 & = & \frac{1}{2} \begin{pmatrix} -1 & -\sqrt{3} \\ \sqrt{3} & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\ & = & \frac{1}{2}\begin{pmatrix} -1 \\ \sqrt{3} \end{pmatrix} \nonumber \\ & = & - \frac{1}{2} f_1 + \frac{\sqrt{3}}{2} f_2. \end{eqnarray} Similarly, $\sigma^2 f_1 = - \frac{1}{2} f_1 - \frac{\sqrt{3}}{2} f_2.$ Therefore \begin{equation} \sigma f_1 - \sigma^2 f_1 = \sqrt{3} f_2, \end{equation} and solving for $f_2$, we find the normalized solution \begin{equation}\label{f2} f_2=\frac{1}{\sqrt{3}}(\sigma f_1 - \sigma^2 f_1). \end{equation} Similarly, $f_1$ can be expressed in terms of $f_2$ as \begin{equation}\label{f1} f_1=\frac{1}{\sqrt{3}}(\sigma f_2 - \sigma^2 f_2). \end{equation} Consequently, given any solution $f_1$ in the symmetry class E1 we can generate from it an orthogonal solution $f_2$ in the symmetry class E2, and \textit{vice versa}. Figure \ref{sym2} shows two examples of solutions from E1 and E2. 
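Equations (\ref{f2}) and (\ref{f1}) amount to matrix algebra in the $(f_1, f_2)$ basis, which the following numerical sketch (our addition, not part of the paper) confirms:

```python
import numpy as np

# rho_3(sigma) from the text, acting on coordinates in the (f_1, f_2) basis
rho_sigma = 0.5 * np.array([[-1.0, -np.sqrt(3)],
                            [np.sqrt(3), -1.0]])
f1_vec = np.array([1.0, 0.0])  # the vector representing f_1

# sigma f_1 = -1/2 f_1 + sqrt(3)/2 f_2
assert np.allclose(rho_sigma @ f1_vec, [-0.5, np.sqrt(3) / 2])

# sigma f_1 - sigma^2 f_1 = sqrt(3) f_2,
# so f_2 = (sigma f_1 - sigma^2 f_1) / sqrt(3)
diff = rho_sigma @ f1_vec - rho_sigma @ rho_sigma @ f1_vec
assert np.allclose(diff, [0.0, np.sqrt(3)])
```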
\begin{figure} \centering \subfigure[\, E1\label{E1}]{\epsfig{file=E1.eps,width=.2\textwidth}}\hspace{8mm} \subfigure[\, E2\label{E2}]{\epsfig{file=E2.eps,width=.2\textwidth}} \caption{Figures (a) and (b) show two solutions $f$, neither of which has rotational symmetry. The solution $f_1$ in (a) is symmetric under reflection and in the set E1, while the solution $f_2$ in (b) is anti-symmetric under reflection and in the set E2.} \label{sym2} \end{figure} To summarize, in this section we have established the important result that every solution to the Helmholtz equation within an equilateral triangle can be placed into one of four symmetry classes. The class A1 is the set of solutions which are symmetric under rotation and under reflection, the class A2 is the set of solutions which are symmetric under rotation and anti-symmetric under reflection, the class E1 is the set of solutions which are asymmetric under rotation and symmetric under reflection, and the class E2 is the set of solutions which are asymmetric under rotation and anti-symmetric under reflection. These results are summarized in Table \ref{4sym}. \begin{table} \caption{The four symmetry classes}\label{4sym} \begin{tabular}{c|c|c} & \multicolumn{2}{c}{Rotation}\\ Reflection & Symmetric & Asymmetric \\ \hline && \\ Symmetric & A1 & E1 \\ &&\\ \hline &&\\ Anti-Symmetric & A2 & E2 \\ && \end{tabular} \end{table} \section{\label{sec:res1} Generating new solutions from a given solution.} In the previous section we showed how Eq.(\ref{f2}) can be used to generate a solution in the symmetry class E2 (i.e. a solution that is anti-symmetric when reflected about the $x$-axis) from any solution in the symmetry class E1 (i.e. from a solution that is symmetric when reflected about the $x$-axis) and \textit{vice versa} (using Eq.(\ref{f1})). 
More specifically, if we have a solution that is even in the $y$-coordinate and asymmetric under rotations $\sigma$, then we can generate from it a solution that is odd in the $y$-coordinate and asymmetric under rotations $\sigma$, and \textit{vice versa}. Furthermore, in each case, the generated solution is orthogonal to the original solution. In this section we show when it is possible to take a solution from one of the four symmetry classes and generate from it an orthogonal solution in one of the other symmetry classes and/or with a different value of $k^2$. We present three ways to generate new solutions from a given solution. First, we show how to build a many-to-one correspondence between solutions in symmetry class A2 and symmetry class E2. Next we present a way to take {\it any} solution and generate from it a solution with a larger scalar, which we will call a ``harmonic" of the original solution. The final technique introduces a differential operator which transforms a solution from symmetry class A2 into a (non-trivial) solution in symmetry class A1. Applying this same technique to a solution in symmetry class A1 yields a solution in symmetry class A2. The resulting solution in A2 may be the trivial solution, and we discuss how this fact gives new insight into the ``ground state solution." We begin by quoting a theorem usually attributed to Lam\'e, using the statement (and referring the reader to the proof) given by McCartin \cite{McCartin}: \begin{thm}\label{Lame} (Lam\'e) Suppose that $T(x,y)$ is a solution to the Helmholtz equation which can be represented by the trigonometric series \begin{align} T(x,y) = & \sum_{i} \left( A_i \sin(\lambda_i x + \mu_i y + \alpha_i) \right. \nonumber \\ & \left. \hspace{2cm} + B_i \cos(\lambda_i x + \mu_i y + \beta_i) \right), \end{align} with $\lambda_i^2 + \mu_i^2 = k^2$. Then \begin{enumerate} \item $T(x,y)$ is antisymmetric about any line along which it vanishes; \item $T(x,y)$ is symmetric about any line along which its normal derivative, $\frac{\partial T}{\partial \nu}$, vanishes. \end{enumerate} \end{thm} Lam\'e \cite{McCartin} also proves that the solutions to the Helmholtz equation in a triangular region subject to the Dirichlet conditions can be expressed in this way and that they form a complete, orthonormal set. 
Explicit expressions for these solutions are also given by Doncheski \textit{et al.} \cite{Doncheski}. \subsection{\label{sec:E2_A2} E2 $\leftrightarrow$ A2} In order to relate solutions in the symmetry classes E2 and A2 we use a method called ``tessellating the plane," which extends any solution within the triangular domain to the plane. We begin the tessellation by defining the triangular region in which we are working as the ``fundamental domain" and then we reflect this domain across each of its three boundaries. An example of this construction is shown in Figure (\ref{tile}). Tessellating the plane provides a way to smoothly extend the solution from the triangular region of interest to the plane. \begin{figure} \centering \epsfig{file=tile.eps,width=.4\textwidth} \caption{\label{tile}This shows how the fundamental domain tiles the plane via reflections. Note how the location of the point in the fundamental domain changes as the reflections are made.} \end{figure} We begin by considering a solution $f_2$ in the symmetry class E2. Since $f_2$ is antisymmetric under reflection it must be zero along the $x$-axis. Thus, any solution in the symmetry class E2 will have the form shown in Figure (\ref{E2A2}a), with possibly more nodal curves. Similarly, if a solution is in the symmetry class A2, it not only needs a nodal line along the $x$-axis, but also along the other altitudes. Thus, any solution in the symmetry class A2 will have the form shown in Figure (\ref{E2A2}b), with possibly more nodal curves. 
\begin{figure} \subfigure[\,E2]{\epsfig{file=dissect1.eps,width=.15\textwidth}} \subfigure[\,A2]{\epsfig{file=dissect2.eps,width=.15\textwidth}} \caption{A pictorial representation of solutions in the symmetry classes E2 and A2.} \label{E2A2} \end{figure} By examining the equilateral triangles in Figure (\ref{E2A2}) we see that the linear transformation \begin{equation}\label{Trans} T \left( \left[\begin{tabular}{c} $x$ \\ $y$ \end{tabular}\right] \right)= \frac{1}{6} \left[ \begin{tabular}{c} $3 x +\sqrt{3} y +3$ \\ $-\sqrt{3} x + 3 y + \sqrt{3}$ \end{tabular} \right] \end{equation} maps the dashed region in Figure (\ref{E2A2}a) into the dashed region in Figure (\ref{E2A2}b) [the origin is one unit from each vertex]. The A2 solution can then be constructed by tessellating the dashed triangle in Figure (\ref{E2A2}b), thus creating the triangle in that figure whose boundaries are shown with the solid lines. Therefore, the transformation in Eq. (\ref{Trans}), which is deduced from the figures, takes {\it any} solution in the E2 symmetry class and transforms it into a solution in the A2 symmetry class. Having determined the explicit form of the transformation from the tessellation we can rewrite the Helmholtz equation in the new coordinates and deduce the new value of the scalar term. In this way we can construct an explicit solution in A2 from any given solution in E2 and determine the scalar term associated with it. We carry out this calculation explicitly in Appendix I and show that if the original function in E2 is a solution of the Helmholtz equation with the scalar $k^2$, then the solution in A2 generated from it will be a solution of the Helmholtz equation with the scalar $3k^2$. Similarly, if we start with a solution in A2 (rather than E2) of the Helmholtz equation with the scalar $k^2$, then the same argument will produce another solution in A2 but with the scalar $3{k^2}$. 
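The factor of $3$ in the scalar can be traced to the linear part of $T$: it is a similarity transformation that scales all lengths uniformly by $1/\sqrt{3}$, so second derivatives, and with them the Helmholtz scalar, change by a factor of $3$. A quick numerical check (our addition, not from the paper):

```python
import numpy as np

s3 = np.sqrt(3)
# linear part of the transformation T in Eq. (Trans); the constant
# translation does not affect derivatives and is dropped
A = (1.0 / 6.0) * np.array([[3.0, s3],
                            [-s3, 3.0]])

# A A^T = (1/3) I, so A is a rotation combined with a uniform
# scaling of lengths by 1/sqrt(3)
assert np.allclose(A @ A.T, np.eye(2) / 3.0)

# areas scale by det(A) = 1/3, consistent with the same similarity
assert np.isclose(np.linalg.det(A), 1.0 / 3.0)
```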
In summary, if we start with any function in A2 or E2 that is a solution to the Helmholtz equation with the scalar $k^2$ then, using the method presented in Appendix I, we can generate a function in A2 that is a solution to the Helmholtz equation with the scalar $3k^2$. The proof given in Appendix I is reversible, so if we start at the end of the argument with a function in A2 which is a solution of the Helmholtz equation with scalar $k^2$, then we can run the argument backwards and generate another solution which is in either A2 or E2. In both cases the function produced will be a solution of the Helmholtz equation with the scalar $\frac{k^2}{3}$. However, since there exists a solution with the lowest allowed value of $k^2$ (the ``ground state solution") the process of generating solutions with smaller values of the scalar must stop. We discuss this situation in the next section. Note that this method establishes a many-to-one correspondence between solutions in symmetry class A2 and symmetry class E2. \subsection{\label{sec:A1_A1} Generating solutions with higher values of $k^2$} \begin{figure} \centering \subfigure[$n=2$]{\epsfig{file=t2.eps,width=.2\textwidth}}\hspace{8mm} \subfigure[$n=3$]{\epsfig{file=t3.eps,width=.2\textwidth}} \caption{The equilateral triangle can be decomposed into $n^2$ equilateral triangles} \label{boost} \end{figure} Although we can use the proof presented in Appendix I to generate from a solution in A2 with the scalar $k^2$ new solutions in A2 or E2 with three times the value of the scalar $k^2$, there is a more direct way to take a solution and generate from it solutions with higher values of the scalar for solutions from {\it any} symmetry class. To do this we carry out a different tessellation of the plane and then extract the new value of the scalar from the coordinate transformation. 
The tessellation, which is shown in Figure (\ref{boost}), decomposes the equilateral triangle into $n^2$ equilateral sub-triangles, for any $n \in \mathbb{N}$. As we show in Appendix II, the explicit coordinate transformation that creates the sub-triangles is constructed from a dilation followed by a translation along the $x$-axis. Carrying out this transformation we can start with a function in any symmetry class that is a solution of the Helmholtz equation with the scalar $k^2$ and generate from it a family of new solutions to the Helmholtz equation with scalar $n^2k^2$. Note that if the original solution is in the symmetry class E2 (i.e. is rotationally asymmetric) then this approach will produce a solution in A2 (i.e. the generated solution will become rotationally symmetric) whenever $n$ is divisible by $3$. In this case, however, we would have already constructed this solution by using the method from Section (\ref{sec:E2_A2}) twice. See Figure \ref{bst_vs} for a schematic proof of this fact. \begin{figure} \centering \subfigure[Two successive constructions from Section \ref{sec:E2_A2}]{\epsfig{file=a3.eps,width=.2\textwidth}}\hspace{8mm} \subfigure[Harmonic with $n=3$\label{a9}]{\epsfig{file=a9.eps,width=.2\textwidth}} \caption{These pictures show two redundant constructions of a higher energy state.} \label{bst_vs} \end{figure} Note that any solution with a vertical nodal line can be reduced using this method. Extending the solution to the plane, we can use Theorem \ref{Lame} and any vertical nodal line to introduce a new mirror symmetry. Combining this with the mirror symmetries about the boundaries and the rotational symmetry, there will be a fundamental domain without vertical nodal lines. This fundamental domain carries a solution, itself free of vertical nodal lines, from which the original solution can be generated. 
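The scaling $k^2 \to n^2 k^2$ under an $n$-fold dilation can be verified symbolically. In the sketch below (our illustration; the separable product is merely a convenient solution of the Helmholtz equation, not one of the triangle eigenfunctions) the check is carried out for $n = 3$:

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
n = 3  # any natural number works the same way

# a simple solution of nabla^2 f + k^2 f = 0 (illustrative choice)
f = sp.sin(k * x / sp.sqrt(2)) * sp.sin(k * y / sp.sqrt(2))
assert sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2) + k**2 * f) == 0

# the dilated function solves the Helmholtz equation with scalar n^2 k^2
g = f.subs({x: n * x, y: n * y}, simultaneous=True)
assert sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2) + n**2 * k**2 * g) == 0
```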
\subsection{\label{sec:A2_A1} Generating even solutions from odd solutions and {\it vice versa}.} In this section we prove that there exists a differential operator that transforms a solution $f_2$ in A2, i.e. a solution that is symmetric under rotations and an odd function of $y$, into a solution $f_1$ in A1, that is, into a solution which is symmetric under rotations and an even function of $y$. To show this we construct a function $\hat{f}_1$ in A1 (whose normalization is the solution $f_1$) by defining $\hat{f}_1$ in the following way: \begin{align} \hat{f}_1 &= \left(\frac{\partial^3}{\partial y^3} - 3 \frac{\partial^3}{\partial y\partial x^2}\right) f_2(x,y) \label{even_odd}. \end{align} The proof that $\hat{f}_1$ is in A1 goes as follows: First, since this constant-coefficient operator commutes with the Laplacian, it is easy to see that $\hat{f}_1$ is a solution to the Helmholtz equation with the scalar $k^2$ whenever $f_2$ is a solution with the same scalar $k^2$. To complete the proof we need to show four more things: first, that $\hat{f}_1$ is rotationally symmetric, second, that $\hat{f}_1$ satisfies the Dirichlet conditions, third, that $\hat{f}_1$ is even in the $y$-coordinate, and fourth, that $\hat{f}_1$ is not identically zero. For functions of a single variable, one common way of generating an even function from an odd function is to take the first derivative. However, the need to satisfy all of the above conditions requires a more complicated procedure. The generalization of the first derivative in one dimension to two dimensions is the directional derivative, and the directional derivative of a function $f$ in the direction $\hat{e}$ is $\nabla f\cdot \hat{e}$. We can represent the directional derivative operator in the $\hat{e}$ direction as $\nabla \cdot \hat{e}$. Higher order directional derivatives are simply powers of this operator. As can be easily verified, although the first directional derivative of $f_2$ satisfies the Helmholtz equation, it does not necessarily have the correct rotational symmetry. 
To correct this, we can {\it symmetrize} the derivative by adding to it its two rotations by $\pm 2\pi/3$. The resulting function is rotationally symmetric. It also satisfies the boundary (Dirichlet) condition, since any nodal line parallel to a side will remain a nodal line. This follows from Theorem \ref{Lame}, since every solution is antisymmetric across a nodal line: the directional derivative along the nodal line is zero, and by antisymmetry the other two directional derivatives cancel out. Unfortunately, this particular method always yields the trivial solution, so it is not very helpful. Indeed, we can picture the three directional derivatives as vectors in the tangent plane, and by symmetry they always add to zero. This leads us to try the next odd power of the directional derivative in the $y$-direction, $\frac{\partial^3}{\partial y^3}$. Once again, the required rotational symmetry leads us to symmetrize by adding to the third directional derivative in the $y$-direction the third directional derivatives in the directions parallel to the other two sides. We can describe the resulting differential operator algebraically by rotating the directional derivative by $\sigma$: \begin{align} &\left(\nabla \cdot \hat{j}\right)^3 + \left(\nabla \cdot (\sigma \cdot \hat{j})\right)^3 + \left(\nabla \cdot (\sigma^2\cdot \hat{j})\right)^3 \nonumber\\ = & \left(\frac{\partial}{\partial y}\right)^3 + \frac{1}{8}\left(\sqrt{3}\frac{\partial}{\partial x}- \frac{\partial}{\partial y}\right)^3 + \frac{1}{8}\left(-\sqrt{3}\frac{\partial}{\partial x}- \frac{\partial}{\partial y}\right)^3 \nonumber \\ =& \frac{3}{4}\left(\frac{\partial^3}{\partial y^3} -3 \frac{\partial^3}{\partial y \partial x^2}\right) \end{align} This gives us our differential operator from Eq. (\ref{even_odd}), apart from a multiplicative constant (which is unimportant since the resulting solution will still need to be normalized). 
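The operator algebra above can be verified mechanically. A sympy sketch applying the three cubed directional derivatives (along $\hat{j}$ and its two rotations by $\pm 2\pi/3$) to a generic function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)

def dir_deriv3(ex, ey):
    # third directional derivative (ex*d/dx + ey*d/dy)^3 applied to f
    g = f
    for _ in range(3):
        g = ex * sp.diff(g, x) + ey * sp.diff(g, y)
    return g

# unit vector j-hat and its two rotations by 120 degrees
dirs = [(0, 1),
        (sp.sqrt(3) / 2, -sp.Rational(1, 2)),
        (-sp.sqrt(3) / 2, -sp.Rational(1, 2))]

total = sp.expand(sum(dir_deriv3(ex, ey) for ex, ey in dirs))
target = sp.Rational(3, 4) * (sp.diff(f, y, 3) - 3 * sp.diff(f, y, x, x))
print(sp.simplify(total - target))  # 0
```

The cross terms $f_{xxx}$ and $f_{xyy}$ cancel by symmetry, leaving exactly $\frac{3}{4}(f_{yyy} - 3 f_{yxx})$.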
With this new understanding of Eq. (\ref{even_odd}), we can then argue as before that the new function $\hat{f}_1$ has the correct symmetry and satisfies the Dirichlet conditions. Using the Helmholtz relation, note that \begin{equation} \frac{\partial^3}{\partial y \partial x^2} = - \frac{\partial^3}{\partial y^3} - k^2 \frac{\partial}{\partial y}. \end{equation} \noindent Combining this with our differential operator, we can rewrite it as \begin{equation} 4 \frac{\partial^3}{\partial y^3} +3 k^2 \frac{\partial}{\partial y}. \end{equation} \noindent Written in this way, it is clear that $\hat{f}_1$ is symmetric about the $x$-axis, i.e. even in $y$. The last thing we need to do is show that the solution is non-zero. Without loss of generality we can assume there are no vertical nodal lines on the interior of the triangle. (This is possible using the last paragraph of Section \ref{sec:A1_A1}.) Since any solution is analytic on the interior of the triangle, we consider the power series of the function $f_2(x,y)$ at the origin. \begin{equation} f_2(x,y) = \sum_{i,j} c_{i,j}x^iy^j \end{equation} \noindent Let us consider the solution along the $y$-axis, $f_2(0,y)$. By symmetry, only odd terms appear. Additionally, the directional derivatives at the origin parallel to each side are all zero: by the rotational symmetry of $f_2$ these three directional derivatives are equal, and we have previously argued that their sum is the zero function, so each of them vanishes. In particular $c_{0,1} = 0$. \begin{equation} f_2(0,y) = f_2(y) = \sum_{j\geq 3,\ j \text{ odd}} c_{0,j}y^j \end{equation} By assumption, $f_2(0,y)$ is not identically zero, so there is a non-zero term with minimal index. 
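The equivalence of the two forms of the differential operator on solutions, and the evenness of the output in $y$, can be checked on a concrete separable Helmholtz solution. The function below is an arbitrary stand-in for $f_2$ (odd in $y$, with $a^2+b^2$ playing the role of $k^2$), not an actual eigenfunction of the triangle:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
k2 = a**2 + b**2                      # f solves  Lap f + k^2 f = 0
f = sp.sin(a * x) * sp.sin(b * y)     # odd in y: a stand-in for f_2

op1 = sp.diff(f, y, 3) - 3 * sp.diff(f, y, x, x)
op2 = 4 * sp.diff(f, y, 3) + 3 * k2 * sp.diff(f, y)
print(sp.simplify(op1 - op2))              # 0: the two forms agree on solutions

# the output is even in y, as claimed
print(sp.simplify(op1 - op1.subs(y, -y)))  # 0
```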
Now consider $\hat{f}_1$: \begin{align} \hat{f}_1 & = \left( 4 \frac{\partial^3}{\partial y^3} +3 k^2 \frac{\partial}{\partial y}\right) f_2(y) \nonumber \\ & = 4 f_2'''(y) +3 k^2 f_2'(y) \end{align} \noindent Using the first non-zero term in the power series for $f_2$, we note that its third derivative is non-zero and cannot be cancelled by higher order terms. Thus $\hat{f}_1$ is non-zero, so we can normalize it to get a new solution $f_1$. In addition, we note that $f_1$ is non-zero almost everywhere: if $f_1$ were zero on an open set, then by analyticity it would be zero everywhere. While we have only shown that this process works for functions in A2 without a node along the $y$-axis, the strength of our conclusion shows that this is a local property. Combining this with earlier methods, the requirement of being non-zero along the $y$-axis can be eliminated. It should be noted that the only place we used the fact that $f_2$ was anti-symmetric was to prove that $f_1$ was non-zero. Indeed, the process introduced in Eq. (\ref{even_odd}) can also be used to transform a symmetric solution into an anti-symmetric solution, though the transformed solution may be zero. In the next section we will show that if a solution in class A1 is transformed by this differential operator and becomes the zero function, then it was (a harmonic of) the ``ground state." Otherwise, the transformed solution can be normalized to a solution in class A2, giving a one-to-one correspondence between solutions in class A2 and those solutions in A1 which are not harmonics of the ground state. \section{\label{sec:ground} The Ground State Solution} In the previous section we introduced the differential operator defined in Eq. (\ref{even_odd}) that transformed anti-symmetric solutions into symmetric solutions. One natural question is what this operator does to the solution of the Helmholtz equation with the minimum value of $k^2$, that is, to the ``ground state solution''. 
Since the ground state solution is always non-degenerate, the operator in Eq. (\ref{even_odd}) must transform it into the zero function. We will now show that any solution from class A1 which transforms to zero under this differential operator is the ground state or a harmonic of the ground state. Let $f$ be a solution in symmetry class A1 which satisfies the two equations \begin{equation}\label{gs1} \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + k^2\right) f(x,y) =0 \end{equation} and \begin{equation}\label{gs2} \left(\frac{\partial^3}{\partial y^3} - 3 \frac{\partial^3}{\partial y\partial x^2}\right) f(x,y) =0. \end{equation} Using the first equation we can eliminate partial derivatives with respect to $x$ from the second equation to obtain \begin{align*} 0 & = \left(4 \frac{\partial^3}{\partial y^3} + 3 k^2 \frac{\partial}{\partial y}\right) f(x,y)\\ \implies 0& = \frac{\partial}{\partial y} \left(\frac{\partial^2}{\partial y^2} + \frac{3 k^2}{4} \right) f(x,y). \end{align*} Putting a solution of the form $f(x,y)= \sum X_i(x)Y_i(y)$ into the above equation we find that $Y_1 = 1$, $Y_2 = \sin(\frac{\sqrt{3}k}{2} y)$ and $Y_3 = \cos(\frac{\sqrt{3}k}{2} y)$. However, since solutions in class A1 are symmetric in $y$, the only allowed solutions are $Y_1$ and $Y_3$. Similarly, putting $f(x,y)= \sum X_i(x)Y_i(y)$ into Eq. (\ref{gs1}), we get that $X_1 = A_1\cos(kx) + B_1\sin(kx)$ and $X_3 = A_3\cos(kx/2) + B_3\sin(kx/2)$. Imposing the boundary condition at $x=-\ell/2$, we find that $X_1(x) = \sin\left(k (x+\ell/2)\right)$ and $X_3(x) = \sin\left(k/2 (x+\ell/2)\right)$. Putting this all together we find the solution \begin{align*} f(x,y) = &A \sin\left(k \Big(x+\frac{\ell}{2}\Big)\right) + B\sin\left(\frac{k}{2} \Big(x+\frac{\ell}{2}\Big)\right)\cos\left(\frac{\sqrt{3}k}{2} y\right). 
\end{align*} \noindent Imposing the remaining boundary condition along $y = \frac{-x+\ell}{\sqrt{3}}$, we get \begin{align*} 0 = & A \sin\left(k \Big(x+\frac{\ell}{2}\Big)\right) + B\sin\left(\frac{k}{2} \Big(x+\frac{\ell}{2}\Big)\right)\cos\left(\frac{\sqrt{3}k}{2} \left(\frac{-x+\ell}{\sqrt{3}}\right)\right) \\ = & A \sin\left(k \Big(x+\frac{\ell}{2}\Big)\right) + B\sin\left(\frac{k}{2} \Big(x+\frac{\ell}{2}\Big)\right)\cos\left(\frac{k}{2} (x-\ell)\right) \\ = & A \sin\left(k x+\frac{k\ell}{2}\right) + \frac{B}{2} \left(\sin\left(\frac{3 k \ell}{4}\right)+\sin\left(kx-\frac{k \ell}{4}\right)\right). \end{align*} \noindent This equation is satisfied for all $x$ when $B = \pm 2A$ and $\sin(3k\ell/4) = 0$, so $k = 4 \pi n/3 \ell$ for $n >0$. The remaining condition is \begin{align*} 0 & = \sin\left(k x+\frac{k\ell}{2}\right) \pm \sin\left(kx-\frac{k \ell}{4}\right) \\ & = \sin\left(k x+\frac{2\pi n }{3}\right) \pm \sin\left(kx-\frac{\pi n}{3}\right). \end{align*} Note that the arguments of these two sine waves differ by $\pi n$, so the choice of sign $B = 2(-1)^{n+1}A$ makes them cancel. Putting this all together, \begin{align}\label{gs} f_n(x,y) = &\sin\left(k_n \Big(x+\frac{\ell}{2}\Big)\right)+ 2(-1)^{n+1} \sin\left(\frac{k_n}{2} \Big(x+\frac{\ell}{2}\Big)\right)\cos\left(\frac{\sqrt{3} \, k_n}{2} y\right), \end{align} where $k_n = \frac{4 \pi n}{3 \ell}$. For $n=1$ this agrees with the accepted solution in the literature for the ground state solution with center-to-vertex length $\ell$.\cite{Doncheski} To compare Eq. (\ref{gs}) with the explicit solution given in McCartin, \cite{McCartin} note that if the inscribed circle has radius $r$ then $k= \frac{2 \pi}{3 r}$, and if the side length is $h$ then $k=\frac{4\pi \sqrt{3}}{3h}$. For larger values of $n$, we simply get the harmonics promised at the end of Section (\ref{sec:A2_A1}). 
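The ground state can be verified symbolically. The sketch below takes $n=1$ and $\ell=1$, with the sign of the second term chosen as the one that makes the slanted sides nodal for $n=1$, and checks the Helmholtz equation plus the Dirichlet condition on all three sides:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
l = sp.S(1)                 # center-to-vertex length
k = 4 * sp.pi / (3 * l)     # n = 1

# candidate ground state; the '+' sign on the second term is the one
# that makes the two slanted sides nodal when n = 1
f = (sp.sin(k * (x + l / 2))
     + 2 * sp.sin(k / 2 * (x + l / 2)) * sp.cos(sp.sqrt(3) * k / 2 * y))

# Helmholtz equation with scalar k^2
assert sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2) + k**2 * f) == 0
# Dirichlet condition on the vertical side x = -l/2 ...
assert f.subs(x, -l / 2) == 0
# ... and (numerically, at sample points) on the slanted sides
for xv in [-sp.S(1) / 2, sp.S(0), sp.S(1) / 4, sp.S(3) / 4]:
    for sgn in (1, -1):
        val = f.subs({x: xv, y: sgn * (l - xv) / sp.sqrt(3)}).evalf()
        assert abs(val) < 1e-12
print("ground state verified")
```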
\section{\label{sec:con1}Conclusion} In this paper we have examined the solutions to the Helmholtz equation $\nabla^2 \psi + k^2 \psi = 0$ within an equilateral triangle which obey the Dirichlet conditions on the boundary. We have shown that every solution is a member of one of four symmetry classes and that, from symmetry considerations alone, any given solution in one symmetry class can be used to generate solutions in another symmetry class and/or with other values of the scalar $k^2$. We also used symmetry considerations to find a novel derivation of the ``ground state" solution of the Helmholtz equation. These results have many interesting applications. For example, in some cases we are looking for solutions to the Helmholtz equation which possess certain specific reflection or rotational symmetries. Referring to the chart at the end of Section \ref{sec:rep3} we see that if we are looking for a solution with the symmetry properties of solutions in the symmetry class A2, we can generate such a solution if we already have another solution in either symmetry class A1 or symmetry class E2. Similarly, for any given value of the scalar $k^2$, we can generate the ground state solution using Eq. (\ref{gs}) and then, since the ground state solution is in symmetry class A1, use the method presented in Section \ref{sec:A1_A1} to generate solutions with ``higher harmonics," that is, solutions whose scalar values are $n^2k^2$. More generally, given any solution, we can generate many different solutions in different symmetry classes and with different values of the scalar $k^2$ from symmetry considerations alone, and keep generating solutions from each solution generated previously without ever having to solve the Helmholtz equation directly. We end by noting that, when collected together, the techniques in this article shed light on the relationship between solutions within the $(30^{\circ},60^{\circ}, 90^{\circ})$ and equilateral triangles. 
For example, solutions to the equilateral triangle which are in symmetry classes A2 and E2 vanish along the $x$-axis. If we restrict our domain $\Delta$ to quadrants I and II, these solutions become solutions to the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle (with the same value of $k^2$). See Figures \ref{A2} and \ref{E2}, respectively. Conversely, given a solution in the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle, we can reflect it across the $x$-axis to get a solution in the equilateral triangle that is in either of the symmetry classes A2 or E2. We can then take these solutions and construct from them solutions in A1 and E1 using the techniques from Section \ref{sec:A2_A1} and Section \ref{sec:rep5}, respectively. Since both of these techniques are reversible, we know that {\it all} solutions within the equilateral triangle can be found from the solutions in the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle except for the ground state solution and its harmonics. However, these solutions can be derived directly using the technique in Section \ref{sec:ground}, which means that knowing all of the solutions within the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle enables us to obtain all of the solutions within the equilateral triangle. In summary, apart from providing a way to generate new solutions from any given solution to the Helmholtz equation within a triangular region, our method can also be used to establish two interesting connections between solutions to the Helmholtz equation within the equilateral and $(30^{\circ},60^{\circ}, 90^{\circ})$ triangles. First, the set of solutions within the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle is a subset of the set of solutions within the equilateral triangle. Second, we have a two-to-one correspondence between the solutions in the equilateral triangle which are not harmonics of the ground state and the solutions of the $(30^{\circ},60^{\circ}, 90^{\circ})$ triangle. 
\section{Appendix I}\label{app1} We want to show that the solution $g(x,y) = (f \circ T^{-1})(x,y) = f(X(x,y),Y(x,y))$ is a solution to the differential equation $$\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right) g(x,y) = 3 k^2 g(x,y)$$ \noindent when $f$ is a solution to the differential equation $$\left(\frac{\partial^2}{\partial X^2}+\frac{\partial^2}{\partial Y^2}\right) f(X,Y) = k^2 f(X,Y).$$ To make this a bit cleaner, we will need to write the transformation $T^{-1}(x,y)$ explicitly. Solving the system of equations \begin{align*} x & = \frac{1}{6} \left(3 X + \sqrt{3} Y + 3\right) \\ y & = \frac{1}{6} \left(\sqrt{3} X - 3 Y + \sqrt{3}\right) \end{align*} for $X$ and $Y$, we get \begin{align*} X(x,y) & = \frac{1}{4} \left(6 x - 2\sqrt{3} y + \sqrt{3}- 3\right) \\ Y(x,y) & = \frac{1}{4} \left(2\sqrt{3} x + 6 y - \sqrt{3} - 3 \right). \end{align*} In order to evaluate $\frac{\partial^2}{\partial x^2} f(X,Y)$, we use the chain rule: \begin{align*} \frac{\partial^2}{\partial x^2} f(X,Y) = & \frac{\partial}{\partial x} \left(\frac{\partial X}{\partial x} \frac{\partial}{\partial X}f(X,Y) + \frac{\partial Y}{\partial x} \frac{\partial}{\partial Y}f(X,Y) \right)\\ = & \frac{\partial}{\partial x} \left(\frac{6}{4} \frac{\partial}{\partial X}f(X,Y) + \frac{2\sqrt{3}}{4} \frac{\partial}{\partial Y}f(X,Y) \right) \\ = & \frac{\partial}{\partial x} \left(\frac{3}{2} f_X(X,Y) + \frac{\sqrt{3}}{2} f_Y(X,Y) \right) \\ = & \frac{3}{2} \frac{\partial}{\partial x} f_X(X,Y) + \frac{\sqrt{3}}{2} \frac{\partial}{\partial x} f_Y(X,Y) \\ = & \frac{3}{2} \left(\frac{\partial X}{\partial x} \frac{\partial}{\partial X}f_X(X,Y) + \frac{\partial Y}{\partial x} \frac{\partial}{\partial Y}f_X (X,Y) \right) + \\ & \hspace{1cm} \frac{\sqrt{3}}{2} \left(\frac{\partial X}{\partial x} \frac{\partial}{\partial X}f_Y(X,Y) + \frac{\partial Y}{\partial x} \frac{\partial}{\partial Y}f_Y (X,Y) \right) \\ = & \frac{3}{2} \left(\frac{3}{2} f_{XX}(X,Y) + \frac{\sqrt{3}}{2} 
f_{XY} (X,Y) \right) + \\ & \hspace{1cm} \frac{\sqrt{3}}{2} \left(\frac{3}{2} f_{YX}(X,Y) + \frac{\sqrt{3}}{2} f_{YY} (X,Y) \right) \\ = & \frac{9}{4} f_{XX}(X,Y) + \frac{3\sqrt{3}}{2} f_{XY} (X,Y) + \frac{3}{4} f_{YY} (X,Y) \end{align*} Similarly, for $\frac{\partial^2}{\partial y^2} f(X,Y)$, we have \begin{align*} \frac{\partial^2}{\partial y^2} f(X,Y) = & \frac{\partial}{\partial y} \left(\frac{\partial X}{\partial y} \frac{\partial}{\partial X}f(X,Y) + \frac{\partial Y}{\partial y} \frac{\partial}{\partial Y}f(X,Y) \right)\\ = & \frac{\partial}{\partial y} \left(-\frac{2\sqrt{3}}{4} \frac{\partial}{\partial X}f(X,Y) + \frac{6}{4} \frac{\partial}{\partial Y}f(X,Y) \right) \\ = & \frac{\partial}{\partial y} \left(-\frac{\sqrt{3}}{2} f_X(X,Y) + \frac{3}{2} f_Y(X,Y) \right) \\ = & -\frac{\sqrt{3}}{2} \frac{\partial}{\partial y} f_X(X,Y) + \frac{3}{2} \frac{\partial}{\partial y} f_Y(X,Y) \\ = & -\frac{\sqrt{3}}{2} \left(\frac{\partial X}{\partial y} \frac{\partial}{\partial X}f_X(X,Y) + \frac{\partial Y}{\partial y} \frac{\partial}{\partial Y}f_X (X,Y) \right) + \\ & \hspace{1cm} \frac{3}{2} \left(\frac{\partial X}{\partial y} \frac{\partial}{\partial X}f_Y(X,Y) + \frac{\partial Y}{\partial y} \frac{\partial}{\partial Y}f_Y (X,Y) \right) \\ = & - \frac{\sqrt{3}}{2} \left(-\frac{\sqrt{3}}{2} f_{XX}(X,Y) + \frac{3}{2} f_{XY} (X,Y) \right) + \\ & \hspace{1cm} \frac{3}{2} \left(\frac{-\sqrt{3}}{2} f_{YX}(X,Y) + \frac{3}{2} f_{YY} (X,Y) \right) \\ = & \frac{3}{4} f_{XX}(X,Y) - \frac{3\sqrt{3}}{2} f_{XY} (X,Y) + \frac{9}{4} f_{YY} (X,Y) \end{align*} Combining these two equations, we get \begin{align*} \nabla^2 g(x,y) = & \frac{\partial^2}{\partial x^2} f(X,Y)+ \frac{\partial^2}{\partial y^2} f(X,Y) \\ = &\frac{9}{4} f_{XX}(X,Y) + \frac{3\sqrt{3}}{2} f_{XY} (X,Y) + \frac{3}{4} f_{YY} (X,Y) \\ & \hspace{1cm} + \frac{3}{4} f_{XX}(X,Y) - \frac{3\sqrt{3}}{2} f_{XY} (X,Y) + \frac{9}{4} f_{YY} (X,Y) \\ = & 3 (f_{XX}(X,Y) + f_{YY}(X,Y)) \\ = & 3 k^2 f(X,Y) \\ = & 3 k^2 g(x,y) 
\end{align*} which is the desired result. \section{Appendix II}\label{app2} In this section we examine the properties of solutions obtained by decomposing the fundamental domain into $n^2$ equilateral triangles. First we show that if we start with a solution $f$ in A2 or E2 which is a solution of the Helmholtz equation with the scalar $k^2$ then, by decomposing the triangular domain into $n^2$ equilateral triangles, we can generate a solution to the Helmholtz equation with scalar $n^2k^2$. The transformation of the plane which replaces the original triangle by $n^2$ triangles is a pure dilation by a factor of $n$, followed by a translation along the $x$-axis. The translation does not affect the scalar in Helmholtz's equation, so we will simply call it $C$. Thus our transformation is \begin{align*} X(x,y) & = n x + C \\ Y(x,y) & = n y. \end{align*} An easy way to see the effect such a transformation has on the scalar is to consider the Jacobian matrix of the transformation, \begin{equation*} \mathcal{J} = \begin{pmatrix} n & 0 \\ 0 & n \end{pmatrix}. \end{equation*} The determinant of this matrix is $n^2$, which gives us the desired result. Note that if the original solution has rotational symmetry (i.e. is in the symmetry class A2), then the transformation will not change the symmetry class. This is easy to see since, when the equilateral triangle is subdivided into $n^2$ triangles, the resulting picture is rotationally symmetric. Since the solution inside each of these smaller triangles is the same rotationally symmetric solution, the resulting solution is rotationally symmetric. However, if the original solution is rotationally asymmetric (i.e. in the symmetry class E2) then this approach will lead to a solution in A2 iff $n$ is divisible by 3. As noted above, the subdivided triangle is rotationally symmetric. However, since we are starting with a solution in E2, the resulting picture may or may not be rotationally symmetric. 
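The effect of the dilation on the scalar can be confirmed symbolically on a stand-in solution (a plane wave, with $a^2+b^2$ playing the role of $k^2$); a sympy sketch:

```python
import sympy as sp

x, y, a, b, C = sp.symbols('x y a b C', real=True)
n = 5                               # number of subdivisions per side
k2 = a**2 + b**2                    # plane wave solves  Lap f = -k^2 f

f = sp.sin(a * x + b * y)           # stand-in Helmholtz solution
g = f.subs({x: n * x + C, y: n * y}, simultaneous=True)

lap = sp.diff(g, x, 2) + sp.diff(g, y, 2)
assert sp.simplify(lap / g + n**2 * k2) == 0   # scalar is multiplied by n^2
```

The translation constant $C$ drops out of the derivatives, as the text notes.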
To test for rotational symmetry, consider Figure \ref{a9}. Looking at the solution inside the small triangle at the left of the subdivided triangle, we can sketch in the solution on the large subdivided triangle by reflecting this small triangle over the boundary and keeping track of the nodal line. Comparing the small triangles along the top edge, we see that after two reflections the solution appears rotated by $\rho^{-1}$. Restricting ourselves to the three small triangles at the tips of a subdivided triangle, note that a rotation of the large triangle induces a rotation of the small triangles. This rotation only agrees with the reflection method if $\rho = \rho^{-(n-1)}$. Equivalently, $\rho^n = \mathbf{1}$, which means that $n$ must be a multiple of $3$. Given a solution with E2 symmetry and scalar $k^2$, the harmonic with scalar $3^2k^2$ is the same as the solution resulting from using the method from Section \ref{sec:E2_A2} twice: $k^2 \rightarrow 3 k^2 \rightarrow 9 k^2$. It is easiest to see this by comparing the two pictures shown in Figure \ref{bst_vs}. On the left, we have the picture resulting from using the method from Section \ref{sec:E2_A2} twice, and on the right we have the picture using the harmonic method. Following the two nodal patterns, we see that they are the same. \begin{acknowledgments} We would like to thank Peter Wong and Matthew Cot\'{e} for many helpful discussions, and for advising the senior thesis \cite{Stambaugh} on which much of the work presented here is based. \end{acknowledgments}
TITLE: Spiral tangents form constant angle with polar lines. QUESTION [0 upvotes]: Given the logarithmic spiral $$\alpha(t) = e^{-t}(\cos(t),\sin(t))$$ I take a ray from the origin given by $\lambda(\cos \theta, \sin \theta)$ and I have to prove that in $\alpha(\mathbb{R}) \cap R_{\theta}$ the tangents form a constant angle with the vector $(\cos \theta,\sin \theta)$ (constant in the sense that it does not depend on the point nor the angle $\theta$). My approach: I compute the tangent line as having direction vector $-e^{-t}(\cos(t)+\sin(t),\sin(t)-\cos(t))$ and then the intersection of the two lines is given by the equation $\lambda_1(\cos \theta,\sin \theta) = e^{-t}((1-\lambda_2)\cos(t)-\lambda_2\sin(t),(1-\lambda_2)\sin(t)+\lambda_2\cos(t))$. Solving this gives me $$t = -\frac{1}{2}\log\left(\frac{\lambda_1^2}{(1-\lambda_2)^2+\lambda_2^2}\right)$$ But then the angle is given by $$\cos \alpha = \frac{-e^{-t_0}\left((\cos t_0+\sin t_0) \cos \theta + (\sin t_0 - \cos t_0) \sin \theta\right)}{\sqrt{2}\,e^{-t_0}}$$ which depends on $t_0$ (the point) and $\theta$. Perhaps I should change to polar coordinates? REPLY [2 votes]: It's much simpler: You have to prove that for all $t\in{\mathbb R}$ the angle between $\alpha(t)$ and $\alpha'(t)$ is the same. This angle $\beta$ can be computed through $$\cos\beta={\alpha(t)\cdot\alpha'(t)\over |\alpha(t)|\>|\alpha'(t)|}$$ and turns out to be ${3\pi\over4}$.
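A quick numpy check confirms the constant angle $3\pi/4$ along the spiral:

```python
import numpy as np

t = np.linspace(-3.0, 3.0, 61)
alpha  = np.exp(-t) * np.array([np.cos(t), np.sin(t)])           # alpha(t)
dalpha = -np.exp(-t) * np.array([np.cos(t) + np.sin(t),
                                 np.sin(t) - np.cos(t)])         # alpha'(t)

# cos of the angle between position and tangent, at every t
cosb = (alpha * dalpha).sum(axis=0) / (
    np.linalg.norm(alpha, axis=0) * np.linalg.norm(dalpha, axis=0))
angle = np.arccos(cosb)
assert np.allclose(angle, 3 * np.pi / 4)   # the same angle at every point
```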
\begin{document} \title{Pseudo-simple heteroclinic cycles in $\R^4$} \author[1]{Pascal Chossat} \author[2]{Alexander Lohse} \author[3]{Olga Podvigina} \affil[1]{\small Universit\'e C\^ote d'Azur - CNRS, Parc Valrose, 06108 Nice cedex, France} \affil[2]{\small University of Hamburg, Bundesstra\ss e 55, 20146 Hamburg, Germany} \affil[3]{\small Institute of Earthquake Prediction Theory and Mathematical Geophysics, 84/32 Profsoyuznaya St, 117997 Moscow, Russian Federation} \maketitle \begin{abstract} We study \emph{pseudo-simple} heteroclinic cycles for a $\Gamma$-equivariant system in $\R^4$ with finite $\Gamma \subset O(4)$, and their nearby dynamics. In particular, in a first step towards a full classification -- analogous to that which exists already for the class of \emph{simple} cycles -- we identify all finite subgroups of $O(4)$ admitting pseudo-simple cycles. To this end we introduce a constructive method to build equivariant dynamical systems possessing a robust heteroclinic cycle. Extending a previous study we also investigate the existence of periodic orbits close to a pseudo-simple cycle, which depends on the symmetry groups of equilibria in the cycle. Moreover, we identify subgroups $\Gamma\subset O(4)$, $\Gamma \not\subset SO(4)$, admitting fragmentarily asymptotically stable pseudo-simple heteroclinic cycles. (It has been previously shown that for $\Gamma\subset SO(4)$ pseudo-simple cycles generically are completely unstable.) Finally, we study a generalized heteroclinic cycle, which involves a pseudo-simple cycle as a subset. 
\end{abstract} \noindent {\em Keywords:} equivariant dynamics, quaternions, heteroclinic cycle, periodic orbit, stability \noindent {\em Mathematics Subject Classification:} 34C14, 34C37, 37C29, 37C75, 37C80, 37G15, 37G40 \section{Introduction}\label{sec1} A heteroclinic cycle is an invariant set of a dynamical system comprising equilibria $\xi_1, \ldots ,\xi_M$ and heteroclinic orbits $\kappa_i$ from $\xi_i$ to $\xi_{i+1}$, $i=1,\dots,M$, with the convention $\xi_{M+1}=\xi_1$. For several decades these objects have been of keen interest to the nonlinear science community. A heteroclinic cycle is associated with intermittent dynamics, where the system alternates between states of almost stationary behaviour and phases of quick change. It is well-known that a heteroclinic cycle can exist robustly in equivariant dynamical systems, i.e. persist under generic equivariant perturbations, namely when all heteroclinic orbits are saddle-sink connections in (flow-invariant) fixed-point subspaces. Robust heteroclinic cycles, their nearby dynamics and attraction properties have been thoroughly studied, especially in low dimensions. See \cite{cl2000, Kru97} for a general overview. In $\R^3$, there are comparatively few possibilities for heteroclinic dynamics and these are rather well-understood. In $\R^4$, the situation is significantly more involved. We therefore consider systems \begin{align}\label{sys1} \dot{x}=f(x), \end{align} where $f: \R^4 \to \R^4$ is a smooth map that is equivariant with respect to the action of a finite group $\Gamma \subset O(4)$, i.e.\ \begin{align}\label{equivariance} f(\gamma x)=\gamma f(x) \quad \text{for all} \ x \in \R^4,\ \gamma \in \Gamma. \end{align} In this setting, much attention has been paid to so-called \emph{simple} cycles, see e.g. 
\cite{cl14,cl16,km95a,km04}, for which (i) all connections lie in two-dimensional fixed-point spaces $P_j=\Fix(\Sigma_j)$ with $\Sigma_j \subset \Gamma$, and (ii) the cycle intersects each connected component of $P_{j-1} \cap P_j \setminus \{0\}$ at most once. This definition was introduced by \cite{km04}, who also suggested several examples of subgroups of $O(4)$ that admit such a cycle (in the sense that there is an open set of $\Gamma$-equivariant vector fields possessing such an invariant set). The classification of simple cycles was completed in \cite{sot03,sot05} (for homoclinic cycles) and finally in \cite{pc15} by finding all groups $\Gamma\subset O(4)$ admitting such a cycle. In \cite{pc15} it was also discovered that the original definition of simple cycles from \cite{km04} implicitly assumed a condition on the isotypic decomposition of $\R^4$ with respect to the isotropy subgroup of an equilibrium, see subsection \ref{sec21} for details. This prompted them to define \emph{pseudo-simple} heteroclinic cycles as those satisfying (i) and (ii) above, but not this implicit condition. It is the primary aim of the present paper to carry out a systematic study of pseudo-simple cycles in $\R^4$, by establishing a complete list of all groups $\Gamma \subset O(4)$ that admit such a cycle. This is done in a similar fashion to the classification of simple cycles in \cite{pc15}, by using a quaternionic approach to describe finite subgroups of $O(4)$. First examples for pseudo-simple cycles were investigated in \cite{pc15,pc16}. The latter of those also addressed stability issues: it was shown that a pseudo-simple cycle with $\Gamma\subset SO(4)$ is generically completely unstable, while for the case $\Gamma\not\subset SO(4)$ a cycle displaying a weak form of stability, called \emph{fragmentary asymptotic stability}, was found. 
A fragmentarily asymptotically stable (f.a.s.)\ cycle has a positive measure basin of attraction that does not necessarily include a full neighbourhood of the cycle. We extend this stability study by giving an example of a group $\Gamma \not\subset SO(4)$ which admits an asymptotically stable generalized heteroclinic cycle and pseudo-simple subcycles that are f.a.s. Moreover, we look at the dynamics near a pseudo-simple cycle and discover that asymptotically stable periodic orbits may bifurcate from it. Whether or not this happens depends on the isotropy subgroup $\D_k$, $k \geq 3$, of the equilibria comprising the cycle. The case $k=3$ was already considered in \cite{pc16}. We illustrate our more general results by numerical simulations for an example with $\Gamma=(\D_4\rl\D_2;\D_4\rl\D_2)$ in the case $k=4$. This paper is organized as follows. Section \ref{sec2} recalls background information on (pseudo-simple) heteroclinic cycles and useful properties of quaternions as a means to describe finite subgroups of $O(4)$. Then, in section \ref{sec3} we give conditions that allow us to decide whether or not such a group $\Gamma\subset O(4)$ admits pseudo-simple heteroclinic cycles. Section \ref{secth1} contains the statement and proofs of theorems \ref{th1} and \ref{th2}, which use the previous results to list all subgroups of $O(4)$ admitting pseudo-simple heteroclinic cycles. The proof of theorem \ref{th1} relies on properties of finite subgroups of $SO(4)$ that are given in appendices A-C. In section \ref{sec6n} we investigate the existence of asymptotically stable periodic orbits close to a pseudo-simple cycle, depending on the symmetry groups $\D_k$ of the equilibria. The cases $k=3,4$ and $k\geq 5$ are covered by theorems \ref{thperorb} and \ref{noorbit}, respectively. In section \ref{sec8n} we employ the ideas of the previous sections to provide a numerical example of a pseudo-simple cycle with a nearby attracting periodic orbit. 
Finally, in section \ref{sec6} for a family of subgroups $\Gamma\not\subset SO(4)$ we construct a generalized heteroclinic cycle (i.e., a cycle with multidimensional connection(s)) and prove conditions for its asymptotic stability in theorem \ref{as}. This cycle involves as a subset a pseudo-simple heteroclinic cycle, which can be fragmentarily asymptotically stable. Section \ref{sec8} concludes and identifies possible continuations of this study. The appendices contain additional information on subgroups of $SO(4)$ that is relevant for the proof of theorem \ref{th1}. \section{Background}\label{sec2} Here we briefly review basic concepts and terminology for pseudo-simple heteroclinic cycles and the quaternionic approach to describing subgroups of $SO(4)$ as needed in this paper. \subsection{Pseudo-simple heteroclinic cycles}\label{sec21} In this subsection we give the precise framework in which we investigate robust heteroclinic cycles and the associated dynamics. Given an equivariant system \eqref{sys1} with finite $\Gamma \subset O(4)$, recall that for $x \in \R^4$ the \emph{isotropy subgroup of $x$} is the subgroup of all elements in $\Gamma$ that fix $x$. On the other hand, given a subgroup $\Sigma \subset \Gamma$ we denote by $\Fix(\Sigma)$ its \emph{fixed point space}, i.e.\ the space of points in $\R^4$ that are fixed by all elements of $\Sigma$. Let $\xi_1, \ldots ,\xi_M$ be hyperbolic equilibria of a system \eqref{sys1} with stable and unstable manifolds $W^s(\xi_j)$ and $W^u(\xi_j)$, respectively. Also, let $\kappa_j \subset W^u(\xi_j) \cap W^s(\xi_{j+1}) \neq \varnothing$ for $j=1,\ldots,M$ be connections between them, where we set $\xi_{M+1}=\xi_1$. Then the union of equilibria $\{\xi_1,\ldots ,\xi_M\}$ and connecting trajectories $\{ \kappa_1, \ldots ,\kappa_M\}$ is called a \emph{heteroclinic cycle}. 
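As a concrete illustration of the equivariance condition \eqref{equivariance}, consider a toy example (not one of the systems studied in this paper): for $\Gamma = (\Z_2)^4$ acting on $\R^4$ by coordinate sign changes, any cubic vector field of the form $\dot{x}_i = x_i(\lambda + \sum_j a_{ij} x_j^2)$ is $\Gamma$-equivariant, since $(\gamma x)_j^2 = x_j^2$ for every sign change $\gamma$. A numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))     # arbitrary cubic coefficients a_ij
lam = 1.0

def f(x):
    # x_i' = x_i * (lam + sum_j a_ij * x_j**2)
    return x * (lam + A @ x**2)

x = rng.normal(size=4)
for signs in [(1, -1, 1, 1), (-1, -1, 1, -1), (-1, 1, -1, 1)]:
    g = np.array(signs, dtype=float)
    assert np.allclose(f(g * x), g * f(x))   # f(gamma x) = gamma f(x)
```

Equivariance forces the coordinate planes and axes to be flow-invariant, which is how the fixed-point subspaces hosting robust connections arise.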
Following \cite{km95a} we say it is \emph{structurally stable} or \emph{robust} if for all $j$ there are subgroups $\Sigma_j \subset \Gamma$ such that $\xi_{j+1}$ is a sink in $P_j:=\Fix(\Sigma_j)$ and $\kappa_j$ is contained in $P_j$. We also employ the established notation $L_j:=P_{j-1} \cap P_j=\Fix(\Delta_j)$, with a subgroup $\Delta_j \subset \Gamma$. As usual we divide the eigenvalues of the Jacobian $df(\xi_j)$ into \emph{radial} (eigenspace belonging to $L_j$), \emph{contracting} (belonging to $P_{j-1} \ominus L_j$), \emph{expanding} (belonging to $P_j \ominus L_j$) and \emph{transverse} (all others), where we write $X \ominus Y$ for a complementary subspace of $Y$ in $X$. In accordance with \cite{km95a} our interest lies in cycles where \begin{itemize} \item[(H1)] $\dim P_j =2$ for all $j$, \item[(H2)] the heteroclinic cycle intersects each connected component of $L_j \setminus \{0\}$ at most once. \end{itemize} Then there is one eigenvalue of each type and we denote the corresponding contracting, expanding and transverse eigenspaces of $df(\xi_j)$ by $V_j$, $W_j$ and $T_j$, respectively. In \cite{pc15} it is shown that under these conditions there are three possibilities for the unique $\Delta_j$-isotypic decomposition of $\R^4$: \begin{enumerate} \item[(1)] $\R^4=L_j \oplus V_j \oplus W_j \oplus T_j$ \item[(2)] $\R^4=L_j \oplus V_j \oplus \widetilde{W}_j$, where $\widetilde{W}_j=W_j \oplus T_j$ is two-dimensional \item[(3)] $\R^4=L_j \oplus W_j \oplus \widetilde{V}_j$, where $\widetilde{V}_j=V_j \oplus T_j$ is two-dimensional \end{enumerate} Here $\oplus$ denotes the orthogonal direct sum. This prompts the following definition. \begin{definition}[\cite{pc15}] \label{def:simcyc} We call a heteroclinic cycle satisfying conditions (H1) and (H2) above \emph{simple} if case 1 holds true for all $j$, and \emph{pseudo-simple} otherwise. 
\end{definition} \begin{remark}\label{rem:grsig} In case 1 the group $\Delta_j$ acts as $\Z_2$ on each one-dimensional component other than $L_j$ and $\Delta_j\cong\D_2$ (which is always the case if $\Gamma\subset SO(4)$) or $\Delta_j\cong(\Z_2)^3$. In cases 2 and 3 the group acts on the two-dimensional isotypic component as a dihedral group $\mathbb{D}_k$ in $\R^2$ for some $k \geq 3$ and $\Delta_j\cong\D_k=<\rho_j,\sigma_j>$ (always for $\Gamma\subset SO(4)$) or $\Delta_j\cong\D_k\times\Z_2$. For $\Gamma \subset SO(4)$ in case 2 the element $\rho_j$ acts as a $k$-fold rotation on $\widetilde{W}_j$ and trivially on $P_{j-1}=L_j \oplus V_j$, while $\sigma_j$ acts as $-I$ on $V_j \oplus T_j$ and trivially on $L_j \oplus W_j$. In case 3 the element $\rho_j$ acts as a $k$-fold rotation on $\widetilde{V}_j$ and trivially on $P_j=L_j \oplus W_j$, while $\sigma_j$ acts as $-I$ on $W_j \oplus T_j$ and trivially on $L_j\oplus V_j$. \end{remark} \begin{remark}\label{rem:eigv} The existence of a two-dimensional isotypic component implies that in case 2 the contracting and transverse eigenvalues are equal ($c_j=t_j$) and the associated eigenspace is two-dimensional, while in case 3 the expanding and transverse eigenvalues are equal ($e_j=t_j$) and the associated eigenspace is two-dimensional. Hence, we say that $df(\xi_j)$ has a multiple contracting or expanding eigenvalue in cases 2 or 3, respectively. \end{remark} We are interested in identifying all subgroups of $O(4)$ that admit pseudo-simple heteroclinic cycles in the following sense. For simple cycles this task has been achieved step by step in \cite{km04,pc15, sot03, sot05}. \begin{definition}[\cite{pc15}] \label{def:admits} We say that a subgroup $\Gamma$ of O($n$) {\em admits} (pseudo-)simple heteroclinic cycles if there exists an open subset of the set of smooth $\Gamma$-equivariant vector fields in $\R^n$, such that all vector fields in this subset possess a (pseudo-)simple heteroclinic cycle. 
\end{definition} In order to establish the existence of a heteroclinic cycle it is sufficient to find a sequence of connections $\xi_1 \to \ldots \to \xi_{m+1}=\gamma \xi_1$ with some $\gamma \in \Gamma$, that is minimal in the sense that no $i \neq j$ with $i,j \in \{1, \ldots ,m\}$ satisfy $\xi_i=\gamma'\xi_j$ for any $\gamma' \in \Gamma$. \begin{definition}[\cite{pc16}] Such a sequence $\xi_1 \to \ldots \to \xi_m$ together with the element $\gamma \in \Gamma$ is called a \emph{building block} of the heteroclinic cycle. \end{definition} \begin{remark} A heteroclinic cycle in an equivariant system can be decomposed as a union of building blocks. Usually, it is tacitly assumed that all blocks in such a decomposition can be obtained from just one building block by applying the associated symmetry $\gamma$. We also make this assumption. \end{remark} \subsection{Asymptotic stability}\label{sec22} Given a heteroclinic cycle $X$ and writing the flow of \eqref{sys1} as $\Phi_t(x)$, the $\delta$-basin of attraction of $X$ is the set $$ {\cal B}_\delta(X) = \{x\in \R^4~;~d(\Phi_t(x),X)<\delta\hbox{~for all~}t>0\mbox{~and~} \lim_{t\rightarrow+\infty}d(\Phi_t(x),X)=0 \}. $$ \begin{definition} \label{def:asstable} A heteroclinic cycle $X$ is {\em asymptotically stable} if for any $\delta>0$ there exists $\varepsilon>0$ such that $B_{\varepsilon}(X)\subset{\cal B}_\delta(X)$, where $B_{\varepsilon}(X)$ denotes the $\varepsilon$-neighbourhood of $X$. \end{definition} \begin{definition} \label{def:completelyunstable} A heteroclinic cycle $X$ is {\em completely unstable} if there exists $\delta>0$ such that $l({\cal B}_\delta(X))=0$, where $l(\cdot)$ denotes the Lebesgue measure on $\R^4$. \end{definition} \begin{definition} \label{def:fragmstable} A heteroclinic cycle $X$ is {\em fragmentarily asymptotically stable} if $l({\cal B}_\delta(X))>0$ for any $\delta>0$.
\end{definition} \subsection{Quaternions and subgroups of $SO(4)$} \label{sec:quaternions} We briefly recall some information on quaternions and their relation to subgroups of $SO(4)$, mainly following the notation and exposition in \cite[Chapter 3]{pdv}. For a more detailed background on this in general, and in the context of heteroclinic cycles, we also refer the reader to \cite{conw} and \cite{pc15}, respectively. A quaternion ${\bf q} \in \mathbb{H}$ may be described by four real numbers as ${\bf q}=(q_1,q_2,q_3,q_4)$. With the convention $1=(1,0,0,0)$, $i=(0,1,0,0)$, $j=(0,0,1,0)$ and $k=(0,0,0,1)$ any ${\bf q} \in \mathbb{H}$ can be written as ${\bf q}=q_1+q_2i+q_3j+q_4k$. We denote the conjugate of ${\bf q}$ as ${\bf \tilde{q}}:=q_1-q_2i-q_3j-q_4k$. Multiplication is defined in the standard way through the rules $i^2=j^2=k^2=-1$, $ij=-ji=k$, $jk=-kj=i$, $ki=-ik=j$, such that for ${\bf p}, {\bf q} \in \mathbb{H}$ we have $$ \renewcommand{\arraystretch}{1.2} \begin{array}{ll} {\bf p}{\bf q}=&(p_1q_1-p_2q_2-p_3q_3-p_4q_4,\ p_1q_2+p_2q_1+p_3q_4-p_4q_3,\\ &~p_1q_3-p_2q_4+p_3q_1+p_4q_2,\ p_1q_4+p_2q_3-p_3q_2+p_4q_1). \end{array} $$ By $\mH \subset \mathbb{H}$ we denote the multiplicative group of unit quaternions, with identity element $(1,0,0,0)$. There is a 2-to-1 homomorphism from $\mH$ to $SO(3)$, relating ${\bf q} \in \mH$ to the map ${\bf v} \mapsto {\bf q}{\bf v}{\bf q}^{-1}$, which is a rotation in the three-dimensional space of points ${\bf v} = (0,v_2,v_3,v_4) \in \mathbb{H}$. 
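These multiplication rules are easy to check numerically. The sketch below (illustrative only, not part of the paper) implements the componentwise product above and verifies $ij=k$, $ji=-k$, $i^2=-1$, and that conjugation ${\bf v}\mapsto{\bf q}{\bf v}{\bf q}^{-1}$ by a unit quaternion preserves the three-dimensional space of points $(0,v_2,v_3,v_4)$ and their norm, as expected for a rotation.

```python
import math

# Componentwise quaternion product for q = q1 + q2*i + q3*j + q4*k,
# following the formula in the text.
def qmul(p, q):
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 - p4*q4,
            p1*q2 + p2*q1 + p3*q4 - p4*q3,
            p1*q3 - p2*q4 + p3*q1 + p4*q2,
            p1*q4 + p2*q3 - p3*q2 + p4*q1)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k              # ij = k
assert qmul(j, i) == (0, 0, 0, -1)  # ji = -k
assert qmul(i, i) == (-1, 0, 0, 0)  # i^2 = -1

# Conjugation v -> q v q^{-1} by a unit quaternion q acts as a rotation
# on the three-dimensional space of points (0, v2, v3, v4).
q = (math.cos(0.3), math.sin(0.3), 0.0, 0.0)
v = (0.0, 0.2, -0.5, 1.0)
w = qmul(qmul(q, v), qconj(q))      # q^{-1} = conj(q) since |q| = 1
assert abs(w[0]) < 1e-12                                       # stays in the 3-space
assert abs(sum(c*c for c in w) - sum(c*c for c in v)) < 1e-12  # norm preserved
```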
Any finite subgroup of $\mH$ then falls into one of the following cases, which are pre-images of the respective subgroups of $SO(3)$ under this homomorphism: \begin{equation}\label{finsg} \renewcommand{\arraystretch}{1.5} \begin{array}{ccl} \Z_n&=&\displaystyle{\oplus_{r=0}^{n-1}}(\cos2r\pi/n,0,0,\sin2r\pi/n)\\ \D_n&=&\Z_{2n}\oplus\displaystyle{\oplus_{r=0}^{2n-1}}(0,\cos r\pi/n,\sin r\pi/n,0)\\ \V&=&((\pm1,0,0,0))\\ \T&=&\V\oplus\left(\pm{1\over2},\pm{1\over2},\pm{1\over2},\pm{1\over2}\right)\\ \mO&=&\T\oplus\sqrt{1\over2}((\pm1,\pm1,0,0))\\ \I&=&\T\oplus{1\over2}((\pm\tau,\pm1,\pm\tau^{-1},0)), \end{array}\end{equation} where $\tau=(\sqrt{5}+1)/2$. Double parentheses denote all even permutations of the quantities within the parentheses. The four numbers $(q_1,q_2,q_3,q_4)$ can be regarded as Euclidean coordinates of a point in $\R^4$. For any pair of unit quaternions $({\bf l};{\bf r})$, the transformation ${\bf q}\to{\bf lqr}^{-1}$ is a rotation in $\R^4$, i.e. an element of the group $SO(4)$. The mapping $\Phi:\mH\times\mH\to SO(4)$ that relates the pair $({\bf l};{\bf r})$ to the rotation ${\bf q}\to{\bf lqr}^{-1}$ is a surjective homomorphism whose kernel consists of the two elements $(1;1)$ and $(-1;-1)$; thus the homomorphism is two-to-one. Therefore, any finite subgroup of $SO(4)$ is the image under $\Phi$ of a subgroup of a product of two finite subgroups of $\mH$. Following \cite{pdv} we write $({\bf L}\rl{\bf L}_K;{\bf R}\rl{\bf R}_K)$ for the group $\Gamma$. The isomorphism between ${\bf L}/{\bf L}_K$ and ${\bf R}/{\bf R}_K$ may not be unique, and different isomorphisms give rise to different subgroups of $SO(4)$. The complete list of finite subgroups of $SO(4)$ is given in table \ref{listSO4}, where the subscript $s$ distinguishes subgroups obtained by different isomorphisms for $s < r/2$ coprime to $r$. This is explained in more detail in \cite[Chapter 3]{pdv} and in section 2.2 of \cite{pc15}.
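The homomorphism $\Phi$ can also be checked directly. The following sketch (illustrative only; the two unit quaternions are arbitrary assumed values) builds the $4\times4$ matrix of ${\bf q}\to{\bf lqr}^{-1}$, confirms that it lies in $SO(4)$, that $(1;1)$ and $(-1;-1)$ give the same (identity) rotation, and that its trace equals $4\cos\omega\cos\omega'$, consistent with the double rotation by angles $\omega\pm\omega'$ of lemma \ref{lem51} below.

```python
import math

def qmul(p, q):
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 - p4*q4,
            p1*q2 + p2*q1 + p3*q4 - p4*q3,
            p1*q3 - p2*q4 + p3*q1 + p4*q2,
            p1*q4 + p2*q3 - p3*q2 + p4*q1)

def rot4(l, r):
    # Matrix of q -> l q r^{-1}; for a unit quaternion r, r^{-1} = conj(r).
    rinv = (r[0], -r[1], -r[2], -r[3])
    basis = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
    cols = [qmul(qmul(l, e), rinv) for e in basis]
    return [[cols[c][row] for c in range(4)] for row in range(4)]

def det(m):
    # Laplace expansion along the first row (fine for a 4x4 matrix).
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**c * m[0][c] * det([row[:c] + row[c+1:] for row in m[1:]])
               for c in range(len(m)))

# Two arbitrarily chosen unit quaternions (assumed example values).
w, wp = 0.4, 1.1
l = (math.cos(w), math.sin(w), 0.0, 0.0)
r = (math.cos(wp), 0.0, math.sin(wp), 0.0)
M = rot4(l, r)

# M is orthogonal with determinant +1, i.e. an element of SO(4).
for i in range(4):
    for j in range(4):
        dot = sum(M[k][i] * M[k][j] for k in range(4))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
assert abs(det(M) - 1.0) < 1e-12

# The kernel of Phi: (1;1) and (-1;-1) both give the identity map.
one, mone = (1, 0, 0, 0), (-1, 0, 0, 0)
assert rot4(one, one) == rot4(mone, mone)

# Rotation by angles w +- w' in two perpendicular planes gives
# trace M = 2cos(w+w') + 2cos(w-w') = 4 cos(w) cos(w').
assert abs(sum(M[i][i] for i in range(4)) - 4 * l[0] * r[0]) < 1e-12
```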
\begin{table}[htp] \hskip -1cm\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \# & group & order && \# & group & order && \# & group & order \\ \hline 1 & $(\Z_{2nr}\rl\Z_{2n};\Z_{2kr}\rl\Z_{2k})_s$ & $2nkr$ && 15 & $(\D_n\rl\D_n;\mO\rl\mO)$ & $96n$ && 29 & $(\mO\rl\mO;\I\rl\I)$ & 2880 \\ \hline 2 & $(\Z_{2n}\rl\Z_{2n};\D_{k}\rl\D_{k})_s$ & $4nk$ && 16 & $(\D_n\rl\Z_{2n};\mO\rl\T)$ & $48n$ && 30 & $(\I\rl\I;\I\rl\I)$ & 7200 \\ \hline 3 & $(\Z_{4n}\rl\Z_{2n};\D_{k}\rl\Z_{2k})$ & $4nk$ && 17 & $(\D_{2n}\rl\D_n;\mO\rl\T)$ & $96n$ && 31 & $(\I\rl\Z_2;\I\rl\Z_2)$ & 120 \\ \hline 4 & $(\Z_{4n}\rl\Z_{2n};\D_{2k}\rl\D_{k})$ & $8nk$ && 18 & $(\D_{3n}\rl\Z_{2n};\mO\rl\V)$ & $48n$ && 32 & $(\I^{\dagger}\rl\Z_2;\I\rl\Z_2)$ & 120 \\ \hline 5 & $(\Z_{2n}\rl\Z_{2n};\T\rl\T)$ & $24n$ && 19 & $(\D_n\rl\D_n;\I\rl\I)$ & $240n$ && 33 & $(\Z_{2nr}\rl\Z_{n};\Z_{2kr}\rl\Z_{k})_s$ & nkr \\ \cline{1-7} 6 & $(\Z_{6n}\rl\Z_{2n};\T\rl\V)$ & $24n$ && 20 & $(\T\rl\T;\T\rl\T)$ & 288 && & $n\equiv k\equiv 1 (\mod 2)$ & \\ \hline 7 & $(\Z_{2n}\rl\Z_{2n};\mO\rl\mO)$ & $48n$ && 21 & $(\T\rl\Z_2;\T\rl\Z_2)$ & 24 && 34 & $(\D_{nr}\rl\Z_{n};\D_{kr}\rl\Z_{k})_s$ & 2nkr \\ \cline{1-7} 8 & $(\Z_{2n}\rl\Z_{2n};\mO\rl\T)$ & $48n$ && 22 & $(\T\rl\V;\T\rl\V)$ & 96 && & $n\equiv k\equiv 1$ (mod 2) & \\ \hline 9 & $(\Z_{2n}\rl\Z_{2n};\I\rl\I)$ & $120n$ && 23 & $(\T\rl\T;\mO\rl\mO)$ & 576 && 35 & $(\T\rl\Z_1;\T\rl\Z_1)$ & 12 \\ \hline 10 & $(\D_n\rl\D_n;\D_k\rl\D_k)$ & $8nk$ && 24 & $(\T\rl\T;\I\rl\I)$ & 1440 && 36 & $(\mO\rl\Z_1;\mO\rl\Z_1)$ & 24 \\ \hline 11 & $(\D_{nr}\rl\Z_{2n};\D_{kr}\rl\Z_{2k})_s$ & $4nkr$ && 25 & $(\mO\rl\mO;\mO\rl\mO)$ & 1152 && 37 & $(\mO\rl\Z_1;\mO\rl\Z_1)^{\dagger}$ & 24 \\ \hline 12 & $(\D_{2n}\rl\D_n;\D_{2k}\rl\D_k)$ & $16nk$ && 26 & $(\mO\rl\Z_2;\mO\rl\Z_2)$ & 48 && 38 & $(\I\rl\Z_1;\I\rl\Z_1)$ & 60 \\ \hline 13 & $(\D_{2n}\rl\D_n;\D_k\rl\Z_{2k})$ & $8nk$ && 27 & $(\mO\rl\V;\mO\rl\V)$ & 192 && 39 & $(\I^{\dagger}\rl\Z_1;\I\rl\Z_1)$ & 60 \\ \hline 14 & $(\D_n\rl\D_n;\T\rl\T)$ & 
$48n$ && 28 & $(\mO\rl\T;\mO\rl\T)$ & 576 && &&\\ \hline \end{tabular} \caption{Finite subgroups of $SO(4)$}\label{listSO4} \end{table} The superscript $\dagger$ is employed to denote subgroups of $SO(4)$ where the isomorphism between the quotient groups ${\bf L}/{\bf L}_K$ and ${\bf R}/{\bf R}_K\cong{\bf L}/{\bf L}_K$ is not the identity. The group $\I^{\dagger}$, isomorphic to $\I$, involves the elements $((\pm\tau^*,\pm1,\pm(\tau^*)^{-1},0))$, where $\tau^*=(-\sqrt{5}+1)/2$. The groups 1-32 contain the central rotation $-I$, while the groups 33-39 do not. A reflection in $\R^4$ can be expressed in the quaternionic presentation as ${\bf q}\to{\bf a\tilde qb}$, where ${\bf a}$ and ${\bf b}$ are a pair of unit quaternions. We write this reflection as $({\bf a};{\bf b})^*$. The transformations ${\bf q}\mapsto {\bf a\tilde qa}$ and ${\bf q}\mapsto -{\bf a\tilde qa}$ are respectively the axial reflection in the $\bf a$-axis (leaving unchanged all vectors parallel to the axis $\bf a$ and reversing all those perpendicular to it) and the reflection through the hyperplane orthogonal to the vector $\bf a$. \subsection{Lemmas} In this subsection we recall lemmas \ref{lem2}-\ref{lem51} from \cite{pdv,op13,pc15} and prove lemmas \ref{lem6} and \ref{lem8}. They provide basic geometric information that is used to prove theorem \ref{th1} in section \ref{secth1}. \begin{lemma}[see proof in \cite{op13}]\label{lem2} Let $N_1$ and $N_2$ be two planes in $\R^4$ and $p_j$, $j=1,2$, be the elements of $SO(4)$ which act on $N_j$ as identity, and on $N_j^{\perp}$ as $-I$, and $\Phi^{-1}p_j=({\bf l}_j,{\bf r}_j)$, where $\Phi$ is the homomorphism defined in the previous subsection. Denote by $({\bf l}_1{\bf l}_2)_1$ and $({\bf r}_1{\bf r}_2)_1$ the first components of the respective quaternion products. The planes $N_1$ and $N_2$ intersect if and only if $({\bf l}_1{\bf l}_2)_1=({\bf r}_1{\bf r}_2)_1$; in this case $({\bf l}_1{\bf l}_2)_1=\cos\alpha$, where $\alpha$ is the angle between the planes.
\end{lemma} \begin{lemma}[see proof in \cite{pc15}]\label{lem3} Let $P_1$ and $P_2$ be two planes in $\R^n$, $\dim(P_1\cap P_2)=1$, $\rho\in$\,O($n$) is a plane reflection about $P_1$ and $\sigma\in$\,O($n$) maps $P_1$ into $P_2$. Suppose that $\rho$ and $\sigma$ are elements of a finite subgroup $\Delta\subset$\,O($n$). Then $\Delta\supset\D_m$, where $m\ge 3$. \end{lemma} \begin{lemma}[see proof in \cite{pc15}]\label{lem4} Consider $g\in SO(4)$, $\Phi^{-1}g=((\cos\alpha,\sin\alpha{\bf v});(\cos\beta,\sin\beta{\bf w}))$.\\ Then $\dim\Fix<g>=2$ if and only if $\cos\alpha=\cos\beta$. \end{lemma} \begin{lemma}[see proof in \cite{pc15}]\label{lem5} Consider $g,s\in SO(4)$, where $\Phi^{-1}g=((\cos\alpha,\sin\alpha{\bf v});(\cos\alpha,\sin\alpha{\bf w}))$ and $\Phi^{-1}s=((0,{\bf v});(0,{\bf w}))$. Then $\Fix<g>=\Fix<s>$. \end{lemma} \begin{lemma}[see proof in \cite{pdv}]\label{lem51} If ${\bf l}=(\cos\omega,{\bf v}\sin\omega)$ and ${\bf r}=(\cos\omega',{\bf v}'\sin\omega')$, then the transformation ${\bf q}\to{\bf lqr}^{-1}$ is a rotation of angles $\omega\pm\omega'$ in a pair of absolutely perpendicular planes. \end{lemma} \begin{lemma}\label{lem6} If $\Gamma\subset SO(4)$, $\Phi^{-1}\Gamma=({\bf L}\rl{\bf L}_K;{\bf R}\rl{\bf R}_K)$, admits pseudo-simple heteroclinic cycles then ${\bf L}\supset\D_k$ and ${\bf R}\supset\D_k$, where $k\ge3$. \end{lemma} \proof Let $\Phi^{-1}\rho_j=(\bl^{(1)};\br^{(1)})$ and $\Phi^{-1}\sigma_j=(\bl^{(2)};\br^{(2)})$, where $\Delta_j=<\rho_j,\sigma_j>\subset\Gamma$ is the group discussed in remark \ref{rem:grsig}. Existence of at least one such $\Delta_j$ follows from definitions \ref{def:simcyc} and \ref{def:admits}. Lemma \ref{lem51} implies that the order of the elements $\bl^{(1)}$ and $\br^{(1)}$ is $k$, while the order of $\bl^{(2)}$ and $\br^{(2)}$ is 2. Since $\Phi$ is a homomorphism, ${\bf L}\supset<\bl^{(1)},\bl^{(2)}>\cong\D_k$ and ${\bf R}\supset<\br^{(1)},\br^{(2)}>\cong\D_k$. 
\qed \begin{lemma}\label{lem8} If $\Gamma\subset SO(4)$ admits pseudo-simple heteroclinic cycles, then it has a symmetry axis $L=\Fix\Sigma$, where $\Sigma\subset\Gamma$ is a maximal isotropy subgroup such that $\Sigma\cong\D_k$ with $k\ge3$. \end{lemma} The proof follows from definitions \ref{def:simcyc} and \ref{def:admits} and remark \ref{rem:grsig}. \section{Construction of a $\Gamma$-equivariant system possessing a heteroclinic cycle}\label{sec3} In this section we prove the following lemma: \begin{lemma}\label{lem1} \begin{itemize} \item[(i)] If for a given finite subgroup $\Gamma\subset O(4)$ there exist two sequences of isotropy subgroups $\Sigma_j$, $\Delta_j$, $j=1,\dots, m$, and an element $\gamma\in\Gamma$ satisfying the following conditions: \begin{itemize} \item[\bf{C1}.] Denote $P_j=\Fix(\Sigma_j)$ and $L_j=\Fix(\Delta_j)$. Then $\dim P_j=2$ and $\dim L_j=1$ for all $j$. \item[\bf{C2}.] For $i\neq j$, $\Sigma_i$ and $\Sigma_j$ are not conjugate. \item[\bf{C3}.] For $j=2,\dots,m$, $L_j=P_{j-1}\cap P_j$, and $L_1=\gamma^{-1}P_m\gamma\cap P_1$. We set $\Delta_{m+1}=\gamma\Delta_1\gamma^{-1}$. \item[\bf{C4}.] For all $j$, the subspaces $L_j$, $P_{j-1}\ominus L_j$ and $P_j\ominus L_j$ belong to different isotypic components in the isotypic decomposition of $\Delta_j$ in $\R^4$. \item[\bf{C5}.] $\Sigma_j\cong\Z_{k_j}$ with $k_j\ge3$ for at least one $j$. \end{itemize} Consider $G_j=N_{\Gamma}(\Sigma_{j})/\Sigma_{j}\cong \D_{k_{j}}$, the dihedral group of order $2k_j$, where we write $k_j=0$ for a trivial $G_j$ or $k_j=1$ for $G_j\cong\Z_2$. Let $n_j$ be the number of isotropy types of axes $\widetilde L_{sj}\subset P_j$ that are not fixed by an element of $G_j$ and $\widetilde L_{sj}=\Fix\widetilde\Delta_{sj}$, $1\le s\le n_j$. \begin{itemize} \item[\bf{C6}.]
Depending on $n_j$ one of the following takes place:\\ (a) if $n_j=0$ then either $k_j$ is even and the groups $\Delta_{j-1}$, $\Delta_j$ are not conjugate, or $k_j$ is odd;\\ (b) if $n_j=1$ then the groups $\Delta_{j-1}$ and $\Delta_j$ are not conjugate and one of $\Delta_{j-1}$ or $\Delta_j$ is conjugate to $\widetilde\Delta_{1j}$;\\ (c) if $n_j=2$ then $\Delta_{j-1}$ and $\Delta_j$ are conjugate to $\widetilde\Delta_{1j}$ and $\widetilde\Delta_{2j}$. \end{itemize} then $\Gamma$ {\em admits} pseudo-simple heteroclinic cycles. \item[(ii)] If $\Gamma\subset O(4)$ admits pseudo-simple heteroclinic cycles, then there are two sequences of isotropy subgroups $\Sigma_j$, $\Delta_j$, $j=1,\dots,m$, where $m\ge2$, and an element $\gamma$, satisfying conditions {\bf C1},{\bf C3},{\bf C4}, {\bf C5} and {\bf C6$^*$}. \begin{itemize} \item[\bf{C6$^*$}.] If $-I\in\Gamma$, then $\Delta_j$ and $\Delta_i$ are not conjugate for any $i\ne j$, $1\le i,j\le m$. \end{itemize} \end{itemize} \end{lemma} In \cite{pc15} we proved a similar lemma, stating necessary and sufficient conditions for a group $\Gamma\subset\, O(n$) to admit simple heteroclinic cycles. As noted in \cite{pc16}, with minor modifications the proof can be used to prove sufficient conditions for a group $\Gamma\subset O(4)$ to admit pseudo-simple heteroclinic cycles. Here, our proof of lemma \ref{lem1} employs a different idea. We explicitly build a $\Gamma$-equivariant dynamical system $\dot {\bf x}={\bf f}({\bf x})$ possessing a pseudo-simple heteroclinic cycle and prove that the cycle persists under small $\Gamma$-equivariant perturbations. \bigskip \begin{proof} Starting with the proof of (i), we show that for any group $\Gamma\subset O(4)$ satisfying conditions {\bf C1}-{\bf C6} there is a vector field {\bf f} such that the associated dynamics possess a heteroclinic cycle between equilibria in $\Gamma L_j$ with connections in the fixed-point planes $\Gamma P_j$. 
\medbreak As a first step, for each plane $P_j$, $j=1,\ldots,m$, that contains the axes $L_j$ and $L_{j+1}$ (in agreement with {\bf C3}, $L_{m+1}=\gamma L_1$), we define a two-dimensional vector field ${\bf h}_j$, which in the polar coordinates $(r,\theta)$ is: \begin{equation}\label{fieh} {\bf h}_j(r,\theta)=\left(r(1-r), \ \sin(\theta)\prod_{i=1}^n \sin(\theta_{ij}-\theta)\right), \end{equation} where $0\le\theta_{ij}<\pi$ are the angles of all fixed-point axes in $P_j$ other than $L_j$, and the angle of $L_j$ is $\theta=0$. For the flow of $(\dot r,\dot\theta)={\bf h}_j(r,\theta)$ each of these axes is invariant and has an equilibrium $r=1$ which is attracting along the direction of $r$. Moreover, there are heteroclinic connections between equilibria on neighbouring axes, since the sign of $\dot{\theta}$ changes when an axis is crossed. We extend the vector fields ${\bf h}_j$ to ${\bf g}_j: \R^4 \to \R^4$ as follows: Denote by $\pi_j$ and $\pi^{\perp}_j$ the projections onto the plane $P_j$ and its orthogonal complement in $\R^4$, respectively. We set \begin{equation}\label{gj} \pi_j{\bf g}_j({\bf x})= {{\bf h}_j(\pi_j{\bf x})\over 1+A|\pi^{\perp}_j{\bf x}|^2},\quad \pi^{\perp}_j{\bf g}_j({\bf x})=0, \end{equation} with a positive constant $A$ (to be chosen sufficiently large later). The vector field ${\bf f}:\R^4\to\R^4$ is then defined as \begin{equation}\label{f-equation} {\bf f}({\bf x})=\sum\limits_{j=1}^{m} \sum\limits_{\gamma_{ij}\in \mathcal{G}_j} \gamma_{ij}{\bf g}_j(\gamma_{ij}^{-1}{\bf x}), \end{equation} where $\mathcal{G}_j=\Gamma/N_{\Gamma}(\Sigma_j)$ and $N_{\Gamma}(\Sigma_j)$ is the normalizer of $\Sigma_j$ in $\Gamma$. As the second step, we show that the system \begin{equation}\label{f-system} \dot{\bf x}={\bf f}({\bf x}) \end{equation} possesses steady states $\xi_j\in L_j$, $j=1,\ldots,m$, and heteroclinic connections $\xi_j\to\xi_{j+1}'\subset P_j$, where $\xi_{j+1}'=\gamma_j'\xi_{j+1}$ and $\gamma_j'\in N_{\Gamma}(\Sigma_j)$. 
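The behaviour of the planar field \eqref{fieh} can be observed in a minimal numerical sketch (illustration only, not part of the proof; a single additional invariant axis at angle $\pi/2$ is an assumed toy configuration): a trajectory starting near the equilibrium on the axis $\theta=0$ is repelled along $\theta$ and attracted by the equilibrium $(r,\theta)=(1,\pi/2)$ on the neighbouring axis.

```python
import math

# Forward-Euler integration of the planar model field h_j from (fieh),
# with one additional invariant axis at theta = pi/2 (an assumed toy
# configuration; the construction allows any angles 0 < theta_ij < pi).
def h(r, theta, axes=(math.pi / 2,)):
    dtheta = math.sin(theta)
    for ang in axes:
        dtheta *= math.sin(ang - theta)
    return r * (1.0 - r), dtheta

r, theta, dt = 0.5, 0.01, 1e-3
for _ in range(200000):
    dr, dth = h(r, theta)
    r, theta = r + dt * dr, theta + dt * dth

# The trajectory leaves the neighbourhood of the axis theta = 0 and is
# attracted by the equilibrium (r, theta) = (1, pi/2) on the next axis.
assert abs(r - 1.0) < 1e-3
assert abs(theta - math.pi / 2) < 1e-2
```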
Note that by construction the system (\ref{f-system}) is $\Gamma$-equivariant, which implies invariance of the axes $L_j$ and planes $P_j$. The system (\ref{f-system}) restricted to $L_j$ is \begin{equation}\label{sysLj} \dot x_j=\sum\limits_{k=1}^{m}\sum\limits_{\gamma_{ik}\in \mathcal{G}_k} {x_j\cos\beta_{ik}(1-x_j\cos\beta_{ik})\over 1+Ax_j^2\sin^2\beta_{ik}}, \end{equation} where $x_j$ is the coordinate along $L_j$ and $\beta_{ik}$ is the angle between $L_j$ and $\gamma_{ik}P_k$. We split the sum in (\ref{sysLj}) into two, for $L_j\subset\gamma_{ik}P_k$ and $L_j\not\subset\gamma_{ik}P_k$, and write $$\dot x_j=s_jx_j(1-x_j)+\sum\limits_{i,k:\,L_j\not\subset\gamma_{ik}P_k} {x_j\cos\beta_{ik}(1-x_j\cos\beta_{ik})\over 1+Ax_j^2\sin^2\beta_{ik}},$$ where $s_j$ is the number of planes $\gamma_{ik}P_k$ that contain $L_j$. Hence, for sufficiently large positive $A$ there exists an equilibrium $\xi_j\in L_j$ with $x_j=c_j\approx 1$, attracting in $L_j$. To prove the existence of a heteroclinic connection $\xi_j\to\xi_{j+1}'$, we consider a sector in $P_j$ between $L_j$ and $L'_{j+1}$, where $L'_{j+1}=\gamma_j'L_{j+1}$ with $\gamma_j'\in N_{\Gamma}(\Sigma_j)$ such that there are no invariant axes between $L_j$ and $L'_{j+1}$. Existence of such a sector follows from {\bf C6}. (In case (a) the axes $L_j$ and $L'_{j+1}$ are invariant axes of $G_j$; they are the only invariant axes in $P_j$. In case (b) invariant axes of $G_j$ alternate with (symmetric copies of) $\tilde L_{1j}$. In case (c) there is one $\tilde L_{1j}$ and one $\tilde L_{2j}$ between any two neighbouring invariant axes of $G_j$.) We choose a small number $a>0$ and divide this sector into three subsets, as sketched in Figure \ref{V1V2V3}: \begin{itemize} \item $V_1$: a strip of width $a$ near $L_j$ \item $V_3$: a strip of width $a$ near $L'_{j+1}$ \item $V_2$: the rest of the sector \end{itemize} We now consider the dynamics of system (\ref{f-system}) in each of these regions.
\begin{itemize} \item For $V_1$ we distinguish three cases: (a) $\xi_j$ is a simple equilibrium, i.e. the isotypic decomposition of $\R^4$ w.r.t.\ $\Delta_j$ has only 1D components, (b) $\xi_j$ is a pseudo-simple equilibrium, i.e. the isotypic decomposition of $\R^4$ w.r.t.\ $\Delta_j$ has a 2D component, and the component is the contracting eigenspace, (c) $\xi_j$ is a pseudo-simple equilibrium with 2D expanding eigenspace. In $V_1$ we employ the coordinates $(x_j,x_{j+1})=(r\cos(\theta),r\sin(\theta))$. Choosing $a>0$ sufficiently small and $A>0$ sufficiently large, to approximate the dynamics near $\xi_j$, we take into account only leading terms in ${\bf h}_{j-1}$ and ${\bf h}_j$ and in (\ref{f-system}) we omit the terms corresponding to the planes $\gamma_{ik}P_k$ that do not contain $L_j$. The condition {\bf C4} implies that in case (a) the axis $L_j$ belongs to planes $P_{j-1}$ and $P_j$ only; in case (b) it also belongs to several symmetric copies of $P_{j-1}$; in case (c) to $P_{j-1}$, $P_j$ and several symmetric copies of $P_j$. In case (a) we have $$\dot x_j=x_j(c_j-x_j)\biggl(1+{1\over 1+Ax_{j+1}^2}\biggr),\quad \dot x_{j+1}=C_jx_{j+1},\quad\hbox{where } C_j=\prod_{i=1}^{n_j}\sin\theta_{ij},$$ and $C_j>0$ since $0<\theta_{ij}<\pi$. In case (b), $P_j$ is orthogonal to $P_{j-1}$ and its $q_j$ symmetric copies. Hence, near $\xi_j$ we have $$\dot x_j=x_j(c_j-x_j)\biggl(1+{q_j\over 1+Ax_{j+1}^2}\biggr),\quad \dot x_{j+1}=C_jx_{j+1}.$$ In case (c), assuming that there are $q_j$ symmetric copies of $P_j$ containing $L_j$, hence the angles between neighbouring planes are $\pi/q_j$, we approximate the dynamics in $P_j$ as \begin{align*} \dot{x}_j&=x_j(c_j-x_j)\biggl({1\over1+Ax_{j+1}^2}+ \sum\limits_{k=0}^{q_j-1}{1\over 1+A\sin^2(k\pi/q_j)x_{j+1}^2}\biggr)\\ \dot{x}_{j+1}&=C_jx_{j+1} \sum\limits_{k=0}^{q_j-1}{\cos^2(k\pi/q_j)\over 1+A\sin^2(k\pi/q_j)x_{j+1}^2}. 
\end{align*} Thus, in all cases (a)-(c) the equilibrium $\xi_j$ is attracting in $L_j$ and we have $\dot{x}_{j+1}>0$ in $V_1$, so trajectories leave $V_1$ and enter $V_2$. At the point of entrance, $x_j\approx 1$. \begin{figure}[ht] \centerline{ \includegraphics[width=11cm]{V1V2V3.png}} \caption{Division of the sector in $P_j$ into $V_1$, $V_2$, $V_3$.\label{V1V2V3}} \end{figure} \item The region $V_2$ is bounded away from the axes $L_j$ and $L'_{j+1}$. Then, for any given $a>0$ there exists $A_0>0$ such that for all $A>A_0$, the dynamics away from the fixed-point axes are essentially those of ${\bf h}_j$. Namely, the trajectories through $(x_j,x_{j+1})\approx(1,a)$ are attracted by $V_3$ and at the entrance point $r\approx 1$. \item In $V_3$, for sufficiently small $a>0$ and sufficiently large $A>0$ all trajectories with $r\approx 1$ are attracted by $\xi'_{j+1}$ by arguments that are analogous to those for $V_1$. \end{itemize} So we have shown that for the dynamics of (\ref{f-system}) in each $P_j$ there is a connection $\xi_j \to \xi'_{j+1}=\gamma_j'\xi_{j+1}$. Taking into account their symmetric copies we obtain a sequence of connections $\xi_1 \to \gamma_1'\xi_2 \to \gamma_1'\gamma_2'\xi_3 \to \ldots \to \gamma_1'\gamma_2'\ldots \gamma_m'\xi_{m+1}=:\gamma'\xi_1$ forming a building block of a heteroclinic cycle. The cycle is pseudo-simple because of {\bf C5}, and robust since by construction all connections lie in fixed-point subspaces that persist under equivariant perturbations. For (ii) we note that necessity of conditions {\bf C1}, {\bf C3}, {\bf C4}, {\bf C5} and {\bf C6$^*$} follows directly from the definition of pseudo-simple heteroclinic cycles. The two sequences of subgroups are found by choosing $\Sigma_j$ as the isotropy subgroups of the planes $P_j$ and $\Delta_j$ as the isotropy subgroups of the equilibria $\xi_j$. 
\qed \end{proof} \medbreak In the following lemma we state sufficient conditions, slightly different from those proven in lemma \ref{lem1}(i), for a group $\Gamma\subset O(4)$ to admit pseudo-simple cycles. The difference is that in lemma \ref{lem11} the subgroups $\Sigma_j$ can be conjugate in $\Gamma$. Since the proof of lemma \ref{lem11} is similar to that of lemma \ref{lem1}(i), it is omitted. \begin{lemma}\label{lem11} If for a given finite subgroup $\Gamma\subset O(4)$, with $-I\in\Gamma$, there exist two sequences of isotropy subgroups $\Sigma_j$, $\Delta_j$, $j=1,\dots,m$, and an element $\gamma\in\Gamma$ satisfying conditions {\bf C1}, {\bf C2'}, {\bf C3}, {\bf C4}, {\bf C5} and {\bf C6'}, where \begin{itemize} \item[\bf{C2'}.] $\Delta_i$ and $\Delta_j$ are not conjugate for any $i\neq j$. \item[\bf{C6'}.] For any $j$, there exists a sector in $P_j$, bounded by $L_j$ and $L_{j+1}$, that does not contain any other isotropy axes of $\Gamma$. \end{itemize} then $\Gamma$ {\em admits} pseudo-simple heteroclinic cycles. \end{lemma} \begin{remark}\label{rem11} Note that lemma \ref{lem11} can be generalised to $\R^n$ as follows:\\ If for a given finite subgroup $\Gamma\subset O(n)$, with $-I\in\Gamma$, there exist two sequences of isotropy subgroups $\Sigma_j$, $\Delta_j$, $j=1,\dots,m$, and an element $\gamma$ satisfying conditions {\bf C1}, {\bf C2'}, {\bf C3}, {\bf C4} and {\bf C6'}, then $\Gamma$ {\em admits} heteroclinic cycles. \end{remark} \section{List of groups}\label{secth1} \subsection{The groups $\Gamma$ in $SO(4)$} In this subsection we prove theorem \ref{th1}, which lists all finite subgroups of $SO(4)$ admitting robust pseudo-simple heteroclinic cycles. The proof employs lemmas \ref{lem1}(i) and \ref{lem11}, which give sufficient conditions for $\Gamma\subset SO(4)$ to admit pseudo-simple cycles, and lemma \ref{lem1}(ii), which gives necessary conditions.
The lemmas allow us to split subgroups of $SO(4)$ into two classes: those admitting and those not admitting pseudo-simple heteroclinic cycles. Similarly to \cite{pc15}, we use the quaternionic presentation for subgroups of $SO(4)$, see subsection \ref{sec:quaternions}. Appendices A-C contain detailed information on the geometry of various subgroups of $SO(4)$, which is used in proving the theorem. \begin{theorem}\label{th1} A group $\Gamma\subset SO(4)$ admits pseudo-simple heteroclinic cycles if and only if it is one of those listed in table \ref{table-th1}. \pagebreak \hskip -3cm\begin{table}[ht] \begin{equation*} \renewcommand{\arraystretch}{1.5} \begin{array}{l} (\D_{2K_1}\rl\D_{2K_1};\D_{2K_2}\rl\D_{2K_2}),\ \gc(K_1,K_2)\ge2\\ (\D_{K_1r}\rl\Z_{2K_1};\D_{K_2r}\rl\Z_{2K_2})_s,\ \gc(K_1,K_2)\gc(r,K_1-sK_2)\ge3\\ (\D_{2K_1}\rl\D_{K_1};\D_{2K_2}\rl\D_{K_2}),\ \gc(K_1,K_2)\ge2\\ (\D_{2K_1}\rl\D_{K_1};\D_{K_2}\rl\Z_{2K_2}),\ \gc(K_1,K_2)\ge3\\ (\D_K\rl\D_K;\mO\rl\mO),\ K=3m_1\hbox{ and/or }K=4m_2\\ (\D_K\rl\Z_{2K};\mO\rl\T),\ K=3m\\ (\D_{2K}\rl\D_K;\mO\rl\T),\ K=3m_1\hbox{ and/or }K=2(2m_2+1)\\ (\D_{3K}\rl\Z_{2K};\mO\rl\V)\\ (\D_K\rl\D_K;\I\rl\I),\ K=3m_1\hbox{ and/or }K=5m_2\\ (\D_{K_1r}\rl\Z_{K_1};\D_{K_2r}\rl\Z_{K_2})_s,\ K_1\equiv K_2\equiv1(\hbox{mod } 2),\ \gc(K_1,K_2)\gc(r,K_1-sK_2)\ge3\\ \end{array} \end{equation*} \caption{Groups $\Gamma\subset SO(4)$ admitting pseudo-simple heteroclinic cycles}\label{table-th1} \end{table} \end{theorem} \bigskip\noindent To prove the theorem, we proceed in four steps: \medskip In step [i], using lemmas \ref{lem6} and \ref{lem8} we identify the subgroups of $SO(4)$ that do not satisfy the necessary conditions, stated in these lemmas, for the existence of pseudo-simple heteroclinic cycles. The groups 1-9 and 33 (see table \ref{listSO4}) do not satisfy the conditions of lemma \ref{lem6}. The groups 14, 20-32 and 35-39 do not satisfy the conditions of lemma \ref{lem8}.
The groups 10-13, 15-19 and 34 should satisfy extra conditions on $n$, $k$, $r$ and $s$. \medskip In step [ii], using lemmas \ref{lem2}-\ref{lem51} and the correspondence between ${\bf L}$ and ${\bf R}$ (see section \ref{sec:quaternions}\,), we identify all subgroups $\Sigma$ of the groups found in step [i] such that $\dim\Fix\Sigma=2$. The results are listed in appendix A. \medskip In step [iii], using the results obtained in step [ii], we determine the (maximal) conjugacy classes of subgroups of $\Gamma$ isomorphic to $\Z_k$ which have two-dimensional fixed-point subspaces, and the (maximal) conjugacy classes of $\Delta\cong\D_k$ such that $\dim\Fix(\Delta)=1$. The results are listed in appendix B. \medskip Finally, in step [iv], using the list in appendix B, we identify all groups that possess sequences of subgroups $\Sigma_j$ and $\Delta_j$ satisfying conditions {\bf C1}-{\bf C6} of lemmas \ref{lem1}(i) or \ref{lem11} (they are presented in appendix C). All the other groups do not have sequences satisfying conditions {\bf C1}, {\bf C3}, {\bf C4}, {\bf C5} and {\bf C6$^*$}. In fact, the only groups satisfying the conditions of lemma \ref{lem11}, but not those of lemma \ref{lem1}(i), are $(\D_{15K}\rl\D_{15K};\I\rl\I)$ and $(\D_{3K}\rl\Z_{6K};\mO\rl\T)$ with odd $K$.
\bigskip\noindent {\bf Proof of the theorem}\\ Step [i]\\ The groups that satisfy the conditions of lemmas \ref{lem6} and \ref{lem8} are: \vskip 3mm \begin{tabular}{c|c} \# & group \\ \hline 10 & $(\D_n\rl\D_n;\D_k\rl\D_k)$, $\gc(n,k)\ge3$\\ 11 & $(\D_{nr}\rl\Z_{2n};\D_{kr}\rl\Z_{2k})_s$, $\gc(n,k)\gc(r,k-sn)\ge3$\\ 12 & $(\D_{2n}\rl\D_n;\D_{2k}\rl\D_k)$, $\gc(n,k)\ge2$\\ 13 & $(\D_{2n}\rl\D_n;\D_k\rl\Z_{2k})$, $\gc(n,k)\ge2$\\ 15 & $(\D_n\rl\D_n;\mO\rl\mO)$, $n=3m_1$ and/or $n=4m_2$\\ 16 & $(\D_n\rl\Z_{2n};\mO\rl\T)$, $n=3m$\\ 17 & $(\D_{2n}\rl\D_n;\mO\rl\T)$, $n=3m_1$ and/or $n=2(2m_2+1)$\\ 18 & $(\D_{3n}\rl\Z_{2n};\mO\rl\V)$\\ 19 & $(\D_n\rl\D_n;\I\rl\I)$, $n=3m_1$ and/or $n=5m_2$\\ 34 & $(\D_{nr}\rl\Z_{n};\D_{kr}\rl\Z_{k})_s$, $n\equiv k\equiv1(\mod 2)$, $\gc(n,k)\gc(r,k-sn)\ge 3$\\ \end{tabular} \bigskip Below we show that the groups $(\D_{3K}\rl\Z_{6K};\mO\rl\T)$ and $(\D_{15K}\rl\D_{15K};\I\rl\I)$ admit heteroclinic cycles, while the groups $(\D_{K_1}\rl\D_{K_1};\D_{K_2}\rl\D_{K_2})$, where at least one of $K_1$ or $K_2$ is odd, do not. For the other groups the proofs are similar and we omit them. \bigskip {\bf The group $\Gamma=(\D_{K_1}\rl\D_{K_1};\D_{K_2}\rl\D_{K_2})$.} \begin{itemize} \item [[ii]] The group $\D_n$ (see (\ref{finsg})\,) consists of the elements \begin{equation}\label{dn} \rho_n(t)=(\cos t\pi/n,0,0,\sin t\pi/n),\ \sigma_n(t)=(0,\cos t\pi/n,\sin t\pi/n,0),\ 0\le t<2n. \end{equation} The pairs $({\bf l};{\bf r})\in(\D_{K_1}\rl\D_{K_1};\D_{K_2}\rl\D_{K_2})$ satisfy ${\bf l}\in\D_{K_1}$, ${\bf r}\in\D_{K_2}$, where all possible combinations are elements of the group.
If both $K_1$ and $K_2$ are odd, then the elements $\gamma\in\Gamma$ satisfying $\dim\Fix\gamma=2$ are \begin{equation}\label{pref1} \begin{array}{l} \kappa_1(\pm,n)=((\cos(n\theta),0,0,\pm\sin(n\theta)); (\cos(n\theta),0,0,\sin(n\theta)))\\ \kappa_2(n_1,n_2)=((0,\cos(n_1\theta_1),\sin(n_1\theta_1),0); (0,\cos(n_2\theta_2),\sin(n_2\theta_2),0)), \end{array}\end{equation} where $\theta_1=\pi/K_1$, $\theta_2=\pi/K_2$, $\theta=\pi/m$, $m=\gc(K_1,K_2)\ge3$, $0\le n_1<2K_1$, $0\le n_2<K_2$ and $0\le n<m$. The elements $\kappa_2(n_1,n_2)$ are plane reflections, while $\kappa_1(\pm,n)$ is a rotation by $2n\theta$ in the plane orthogonal to $\Fix\kappa_1(\pm,n)$. For even $K_2$ the group possesses an additional set of plane reflections $$\kappa_3(n_1)=((0,\cos(n_1\theta_1),\sin(n_1\theta_1),0);(0,0,0,1)).$$ \item [[iii]] In the group $\D_n$ the elements $(0,\cos(t\pi/n),\sin(t\pi/n),0)$ split into two conjugacy classes, corresponding to odd and even $t$. Since $\kappa_2(n_1,n_2)=\kappa_2(n_1+K_1,n_2+K_2)$, in the case when both $K_1$ and $K_2$ are odd the group $\Gamma$ has three maximal isotropy types of subgroups satisfying $\dim\Fix\Sigma=2$. The subgroups are $$\Sigma^{(1)}(\pm)=<\kappa_1(\pm,1)>,\ \Sigma^{(2)}(n_1,n_2)=<\kappa_2(n_1,n_2)>,\ n_1+n_2\hbox{ even or odd}.$$ The subgroups $\Sigma^{(1)}(+)$ and $\Sigma^{(1)}(-)$ are conjugate, e.g. by\\ $\sigma(n_1)=((0,\cos(n_1\theta_1),\sin(n_1\theta_1),0);(1,0,0,0))$. For any plane $P=\Fix\Sigma^{(2)}(n_1,n_2)$ the only symmetry axes $L\subset P$ are the intersections with $\Fix\Sigma^{(1)}(\pm)$. The axes are conjugate by $\sigma(n_1)\in N_{\Gamma}(\Sigma^{(2)}(n_1,n_2))$.
Therefore, the group has two maximal isotropy types of subgroups satisfying $\dim\Fix(\Delta)=1$: $$\Delta(\pm,n_1,n_2)= <\kappa_1(\pm,1),\kappa_2(n_1,n_2)>,\ n_1+n_2\hbox{ even or odd}.$$ Since the planes $\Fix\Sigma^{(2)}(n_1,n_2)$ do not satisfy condition {\bf C4$^*$} and the remaining planes $\Fix\Sigma^{(1)}(+)$ and $\Fix\Sigma^{(1)}(-)$ do not intersect, the group $(\D_{K_1}\rl\D_{K_1};\D_{K_2}\rl\D_{K_2})$ with odd $K_1K_2$ does not admit heteroclinic cycles. In the case when $K_1$ is odd and $K_2$ is even, a plane fixed by the reflection $\kappa_3(n_1)$ does not intersect with any of $\Fix\Sigma^{(1)}(\pm)$ and $\Fix\Sigma^{(2)}(n_1,n_2)$ (see lemma \ref{lem2}). Moreover, $\Fix\kappa_3(n_1)$ does not intersect with $\Fix\kappa_3(n_1')$ for any $n_1\ne n_1'$. Similar arguments apply when $K_1$ is even and $K_2$ is odd. Therefore, the group $(\D_{K_1}\rl\D_{K_1};\D_{K_2}\rl\D_{K_2})$ does not admit heteroclinic cycles when at least one of $K_1$ or $K_2$ is odd. \end{itemize} \bigskip {\bf The group $\Gamma=(\D_{3K}\rl\Z_{6K};\mO\rl\T)$.} \begin{itemize} \item [[ii]] The group $\mO$ can be decomposed as $\mO=\T\oplus\sqrt{1\over2}((\pm1,\pm1,0,0))$, see (\ref{finsg}). Therefore, the group $(\D_{3K}\rl\Z_{6K};\mO\rl\T)$ is comprised of the following elements: \begin{equation}\label{elex2} \begin{array}{l} ((\cos(n\theta),0,0,\sin(n\theta));\T)\\ ((0,\cos(n\theta),\sin(n\theta),0);\sqrt{1\over2}((\pm1,\pm1,0,0))) \end{array}\end{equation} where $\theta=\pi/3K$ and $0\le n<3K$. For odd $K$ the elements $\gamma\in\Gamma$ satisfying $\dim\Fix\gamma=2$ are \begin{equation}\label{pref2} \begin{array}{l} \kappa_1(\pm,\pm,\pm,\pm)=((1,0,0,\pm\sqrt{3})/2;(1,\pm1,\pm1,\pm1)/2)\\ \kappa_2(n,r,\pm)=((0,\cos(n\theta),\sin(n\theta),0);\rho^r(0,1,\pm1,0)), \end{array}\end{equation} where $\rho(a,b,c,d)=(a,c,d,b)$. Here $\kappa_2$ are plane reflections and $\kappa_1$ are rotations by $2\pi/3$ in the planes orthogonal to $\Fix\kappa_1$.
For even $K$ the group possesses an additional set of plane reflections $$\kappa_3(r,\pm)=((0,0,0,1);\rho^r(0,0,0,\pm1)).$$ \item [[iii]] Since $\kappa_2(n,r,\pm)=-\kappa_2(n+3K,r,\pm)$ and in $\T$ the elements $(0,1,\pm1,0)$ and $-(0,1,\pm1,0)$ are conjugate, for odd $K$ all $\kappa_2(n,r,\pm)$ are conjugate in $\Gamma$. The elements\\ $\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})$ split into two conjugacy classes, depending on whether $s_1+s_2+s_3$ is even or odd. Hence, for odd $K$ the group has three maximal isotropy types of subgroups satisfying $\dim\Fix\Sigma=2$: \begin{equation}\label{iii2is} \begin{array}{l} \Sigma^{(1)}((-1)^{s_1},(-1)^{s_2},(-1)^{s_3})= <\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})>,\ s_1+s_2+s_3\hbox{ even or odd}\\ \Sigma^{(2)}(n,r,\pm)=<\kappa_2(n,r,\pm)>. \end{array}\end{equation} Each $\Fix\Sigma^{(1)}$ contains $3K$ isotropy axes, each of them being the intersection with three planes $\Fix<\kappa_2(n,r,\pm)>$, where $r=0,1,2$. Hence, the isotropy groups of symmetry axes can be written as \begin{equation}\label{iii1is} \Delta(n,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})= <\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3}),\kappa_2(n,0,(-1)^{s_1+s_2+1})>. \end{equation} They split into two isotropy types, depending on whether $s_1+n$ is even or odd. Any plane $\Fix\Sigma^{(2)}$ contains four isotropy axes, which are intersections with $\Fix\Sigma^{(1)}$. Since $N_{\Gamma}(\Sigma^{(2)})=<\Sigma^{(2)},-\Sigma^{(2)}>$ (this can be checked directly using the list (\ref{elex2})\,), all four isotropy axes are of different types. Therefore, the group has four types of isotropy subgroups (\ref{iii1is}) satisfying $\dim\Fix\Delta=1$, corresponding to odd and even $s_1+n$ and $s_1+s_2+s_3$.
In the case when $K$ is even there exist five isotropy types of subgroups satisfying $\dim\Fix\Sigma=2$: \begin{equation}\label{pref22} \begin{array}{l} \Sigma^{(1)}((-1)^{s_1},(-1)^{s_2},(-1)^{s_3})= <\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})>,\ s_1+s_2+s_3\hbox{ even or odd}\\ \Sigma^{(2)}(n,r,\pm)=<\kappa_2(n,r,\pm)>,\ n\hbox{ even or odd}\\ \Sigma^{(3)}(r,\pm)=<\kappa_3(r,\pm)>. \end{array}\end{equation} A plane $\Fix\Sigma^{(2)}(n,r,\pm)$ orthogonally intersects the planes $\Fix\Sigma^{(2)}(n-(-1)^s3K/2,r,\mp)$ (and also $\Fix\Sigma^{(3)}(r,(-1)^s)$), hence for odd and even $K/2$ the isotropy axes are different. Namely, for odd $K/2$ they are \begin{equation}\label{pref22o} \begin{array}{l} \Delta^{(1)}(n,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})=\\ ~~~<\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3}),\kappa_2(n,0,(-1)^{s_1+s_2+1})>,\ s_1+s_2+s_3,\ n\hbox{ even or odd}\\ \Delta^{(2)}(n,r,\pm,(-1)^s)= <\kappa_2(n,r,\pm),\kappa_3(r,(-1)^s)>,\ n+s\hbox{ even or odd}, \end{array}\end{equation} while for even $K/2$ the second set of isotropy axes is \begin{equation}\label{pref22e} \Delta^{(2)}(n,r,\pm,\pm)= <\kappa_2(n,r,\pm),\kappa_3(r,\pm)>,\ n\hbox{ even or odd}. \end{equation} \item [[iv]] According to [iii], for odd $K$ the group $(\D_{3K}\rl\Z_{6K};\mO\rl\T)$ does not have isotropy subgroups satisfying conditions {\bf C1}-{\bf C6} of lemma \ref{lem1}(i). Let us show that we can find subgroups satisfying the conditions of lemma \ref{lem11}. Set \begin{equation} \begin{array}{l} \Sigma_1=<\kappa_1(+,+,+,+)>,\ \Sigma_2=<\kappa_2(0,0,-)>,\ \Sigma_3=<\kappa_1(+,+,+,-)>,\\ \Sigma_4=<\kappa_2(1,0,-)>,\ \Delta_j=<\Sigma_{j-1},\Sigma_j>,\ j=2,3,4,\ \Delta_1=<\Sigma_4,\Sigma_1>\hbox{ and }\gamma=e. \end{array}\end{equation} By construction and due to (\ref{iii1is}) and (\ref{iii2is}), the subgroups satisfy conditions {\bf C1},{\bf C2'},{\bf C3},{\bf C4} and {\bf C5}.
To show that $\Fix<\kappa_2(0,0,-)>$ satisfies condition {\bf C6'}, we recall (see [iii]) that this plane contains four isotropy axes, mutually non-conjugate, that are intersections with $\Fix\kappa_1(+,+,+,+)$, $\Fix\kappa_1(+,+,+,-)$, $\Fix\kappa_1(+,-,-,+)$ and $\Fix\kappa_1(+,-,-,-)$. To determine the angles between the axes, we use lemmas \ref{lem5} and \ref{lem51}. By lemma \ref{lem5},\\ $\Fix\kappa_1(+,\pm,\pm,\pm)=\Fix\kappa'$, where $\kappa'(\pm,\pm,\pm)=((0,0,0,1);(0,\pm1,\pm1,\pm1)/\sqrt{3})$ is a reflection through a plane that intersects $\Fix<\kappa_2(0,0,-)>$ orthogonally. Therefore, $\Fix<\kappa_2(0,0,-)>$ is $\kappa'$-invariant. The composition of two reflections about axes intersecting at an angle $\alpha$ is a rotation by $2\alpha$. Since $\kappa'(+,+,+)\kappa'(+,+,-)=((1,0,0,0);(1,2,-2,0)/3)$, by lemma \ref{lem51} the angle in $\Fix<\kappa_2(0,0,-)>$ between the lines of intersection with $\Fix\kappa'(+,+,+)$ and $\Fix\kappa'(+,+,-)$ is $\arccos(1/3)/2$, while the lines of intersection with $\Fix\kappa'(+,+,+)$ and $\Fix\kappa'(-,-,-)$ are orthogonal. Hence, in $\Fix<\kappa_2(0,0,-)>$ no other isotropy axes belong to the smaller sector bounded by $\Fix\kappa_1(+,+,+,+)$ and $\Fix\kappa_1(+,+,+,-)$. Similarly, it can be shown that condition {\bf C6'} holds true for $j=1,3,4$ as well. For even $K$ we apply lemma \ref{lem1}. We choose $$\Sigma_1=<\kappa_1(+,+,+,+)>,\ \Sigma_2=<\kappa_2(0,0,-)>,\ \Sigma_3=<\kappa_3(0,+)>,\ \Sigma_4=<\kappa_2(1,0,-)>,$$ $$\Delta_1=<\kappa_2(1,0,-),\kappa_1(+,+,+,+)>,\ \Delta_2=<\kappa_1(+,+,+,+),\kappa_2(0,0,-)>,$$ $$\Delta_3=<\kappa_2(0,0,-),\kappa_3(0,+)>,\ \Delta_4=<\kappa_3(0,+),\kappa_2(1,0,-)>,$$ which together with $\gamma=e$ satisfy conditions {\bf C1}-{\bf C6}, as follows from (\ref{pref22}), (\ref{pref22o}) and (\ref{pref22e}).
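The key quaternion product used above, $\kappa'(+,+,+)\kappa'(+,+,-)=((1,0,0,0);(1,2,-2,0)/3)$ modulo the identification $({\bf l};{\bf r})\sim(-{\bf l};-{\bf r})$, can be confirmed by direct computation; a minimal sketch (the helper `qmul` is a plain quaternion product, and pairs are multiplied componentwise):

```python
import math

def qmul(p, q):
    """Product of quaternions (a, b, c, d) ~ a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

s3 = 1.0 / math.sqrt(3.0)
# kappa'(+,+,+) and kappa'(+,+,-) as pairs (l; r) of unit quaternions
k_ppp = ((0, 0, 0, 1), (0,  s3, s3,  s3))
k_ppm = ((0, 0, 0, 1), (0,  s3, s3, -s3))

# pairs multiply componentwise: (l1; r1)(l2; r2) = (l1 l2; r1 r2)
l = qmul(k_ppp[0], k_ppm[0])
r = qmul(k_ppp[1], k_ppm[1])
# (l; r) ~ (-l; -r): flip the overall sign so the scalar part of l is +1
l, r = tuple(-x for x in l), tuple(-x for x in r)
```

The result is $((1,0,0,0);(1,2,-2,0)/3)$, a rotation by $\arccos(1/3)$, consistent with the angle $\arccos(1/3)/2$ between the two intersection lines.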
\end{itemize} \bigskip {\bf The group $\Gamma=(\D_{15K}\rl\D_{15K};\I\rl\I)$.} \begin{itemize} \item [[ii]] The group is comprised of the pairs $({\bf l};{\bf r})$, where ${\bf l}\in\D_{15K}$ and ${\bf r}\in\I$. Since for odd $K$ all elements $((0,\cos(n\theta),\sin(n\theta),0);{\bf r})$ are conjugate, the group has the following elements satisfying $\dim\Fix\gamma=2$: \begin{equation}\label{pref31} \begin{array}{l} \kappa_1(\pm,\pm,\pm,\pm)=((1,0,0,\pm\sqrt{3})/2;(1,\pm1,\pm1,\pm1)/2)\\ \kappa_1'(\pm,r,\pm,\pm)= ((1,0,0,\pm\sqrt{3})/2;\rho^r(1,\pm\tau^{-1},\pm\tau,0)/2)\\ \kappa_2(\pm,r,\pm,\pm)= ((\tau,0,0,\pm\tau^*)/2;\rho^r(\tau,\pm1,\pm\tau^{-1},0)/2)\\ \kappa_2'(\pm,r,\pm,\pm)= ((\tau^{-1},0,0,\pm\tau^{**})/2;\rho^r(\tau^{-1},\pm\tau,\pm1,0)/2)\\ \kappa_3(n,r,\pm)=((0,\cos(n\theta),\sin(n\theta),0);\rho^r(0,0,0,\pm1))\\ \kappa_3'(n,r,\pm,\pm)= ((0,\cos(n\theta),\sin(n\theta),0);\rho^r(0,1,\pm\tau,\pm\tau^{-1})/2), \end{array}\end{equation} where $\theta=\pi/15K$, $0\le n<30K$, $\tau^*=2\sin(\pi/5)=\sqrt{\sqrt{5}\,\tau^{-1}}$ and $\tau^{**}=2\sin(2\pi/5)=\sqrt{\sqrt{5}\,\tau}$. By $\kappa_i$ and $\kappa_i'$ we denote elements that are conjugate in $\Gamma$. Here $\kappa_3$ and $\kappa_3'$ are plane reflections, $\kappa_1$ and $\kappa_1'$ are rotations by $2\pi/3$, $\kappa_2$ is a rotation by $2\pi/5$ and $\kappa_2'$ is a rotation by $4\pi/5$. For even $K$ the group possesses an additional set of plane reflections: $$\kappa_4(r,\pm)=((0,0,0,1);\rho^r(0,0,0,\pm1)),\ \kappa_4'(r,\pm,\pm)=((0,0,0,1);\rho^r(0,1,\pm\tau^{-1},\pm\tau)/2).$$ \item [[iii]] For odd $K$ all plane reflections are conjugate in $\Gamma$. The rotations by $2\pi/3$ are all conjugate, and the rotations by $2\pi/5$ and $4\pi/5$ are conjugate as well.
Hence, the group has three maximal isotropy types of subgroups satisfying $\dim\Fix\Sigma=2$: \begin{equation}\label{15K2is} \begin{array}{l} \Sigma^{(1)}(\pm,\pm,\pm)=<\kappa_1(+,\pm,\pm,\pm)>,\\ \Sigma^{(1')}(r,\pm,\pm)=<\kappa_1'(+,r,\pm,\pm)>,\\ \Sigma^{(2)}(r,\pm,\pm)=<\kappa_2(+,r,\pm,\pm)>,\\ \Sigma^{(3)}(n,r,\pm)=<\kappa_3(n,r,\pm)>,\\ \Sigma^{(3')}(n,r,\pm,\pm)=<\kappa_3'(n,r,\pm,\pm)>. \end{array}\end{equation} Each $\Fix\Sigma^{(1)}$ contains $30K$ isotropy axes, each of which is an (orthogonal) intersection with three reflection planes ($\Fix<\kappa_3>$ or $\Fix<\kappa_3'>$). Each $\Fix\Sigma^{(2)}$ contains $30K$ isotropy axes, each of which is an (orthogonal) intersection with five reflection planes. Hence, the isotropy groups of symmetry axes can be written as \begin{equation}\label{15K1is} \begin{array}{l} \Delta^{(1)}(n,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3})=\\ ~~~~<\kappa_1(+,(-1)^{s_1},(-1)^{s_2},(-1)^{s_3}), \kappa_3'(n,0,(-1)^{s_1+s_2+1},(-1)^{s_1+s_3+1})>,\ s_1+n\hbox{ even or odd},\\ \Delta^{(1')}(n,(-1)^{s_1},(-1)^{s_2})= <\kappa_1'(+,r,(-1)^{s_1},(-1)^{s_2}),\kappa_3(n,0,\pm)>,\ s_1+s_2+n\hbox{ even or odd},\\ \Delta^{(2)}(n,(-1)^{s_1},(-1)^{s_2})= <\kappa_2(+,(-1)^{s_1},(-1)^{s_2}),\kappa_3(n,0,\pm)>,\ s_1+s_2+n\hbox{ even or odd},\\ \Delta^{(2')}(n,(-1)^{s_1},(-1)^{s_2})=\\ ~~~~<\kappa_2(+,(-1)^{s_1},(-1)^{s_2}),\kappa_3'(n,0,(-1)^{s_1+s_2+1},\pm)>,\ s_1+n\hbox{ even or odd}. \end{array}\end{equation} In the case when $K$ is even there exist five isotropy types of subgroups satisfying $\dim\Fix\Sigma=2$: \begin{equation}\label{15Kev} \begin{array}{l} \Sigma^{(1)},\ \Sigma^{(1')},\ \Sigma^{(2)},\\ \Sigma^{(3)}(n,r,\pm)=<\kappa_3(n,r,\pm)>,\ n\hbox{ even or odd},\\ \Sigma^{(3')}(n,r,\pm,\pm)=<\kappa_3'(n,r,\pm,\pm)>,\ n\hbox{ even or odd},\\ \Sigma^{(4)}(r,\pm)=<\kappa_4(r,\pm)>,\\ \Sigma^{(4')}(r,\pm,\pm)=<\kappa_4'(r,\pm,\pm)>. \end{array}\end{equation} Each of the planes $\Fix\Sigma^{(3)}$ contains twelve isotropy axes.
Four of them (of two isotropy types) are orthogonal intersections with $\Fix\Sigma^{(4)}$, therefore $N_{\Gamma}(\Sigma^{(3)})/\Sigma^{(3)}\cong\D_4$. The other eight axes (of two isotropy types) are intersections with $\Fix\Sigma^{(1)}$ and $\Fix\Sigma^{(2)}$. The respective isotropy subgroups are different for odd or even $K/2$, as stated in appendix B. A plane $\Fix\Sigma^{(1)}$ or $\Fix\Sigma^{(2)}$ involves two isotropy types (with odd or even $n$) of symmetry axes, which are intersections with $\Fix\Sigma^{(3)}$. \item [[iv]] For odd $K$ we show that the groups \begin{equation}\label{15Kgrps} \begin{array}{l} \Sigma_1=<\kappa_1'(+,0,+,+)>,\ \Sigma_2=<\kappa_3(0,0,+)>,\ \Sigma_3=<\kappa_2(+,0,+,+)>,\\ \Sigma_4=<\kappa_3(1,0,+)>,\ \Delta_j=<\Sigma_{j-1},\Sigma_j>,\ j=2,3,4,\ \Delta_1=<\Sigma_4,\Sigma_1> \end{array}\end{equation} and $\gamma=e$ satisfy the conditions of lemma \ref{lem11}. By construction and due to (\ref{15K2is}) and (\ref{15K1is}), the subgroups satisfy conditions {\bf C1},{\bf C2'},{\bf C3},{\bf C4} and {\bf C5}. Consider $\Fix\kappa_3(0,0,+)$. Denote by $\alpha_1$, $\alpha_2$ and $\alpha_3$ the angles between the intersection with $\Fix\kappa_1'(+,0,+,+)$ and the following three axes: intersections with $\Fix\kappa_2(+,0,+,+)$, $\Fix\kappa_1'(+,0,+,-)$ and $\Fix\kappa_2(+,0,+,-)$, respectively. By lemmas \ref{lem5} and \ref{lem51}, $$\cos2\alpha_1=(3+\sqrt{5})/(2\sqrt{15\tau}),\ \cos2\alpha_2=\sqrt{5}/3,\ \cos2\alpha_3=(\sqrt{5}-1)/(2\sqrt{15\tau}),$$ which implies that $\alpha_1<\alpha_2<\alpha_3$. Since $N_{\Gamma}(\Sigma^{(3)})/\Sigma^{(3)}\cong\Z_4$ for odd $K$ and due to (\ref{15K1is}), in $\Fix\kappa_3(0,0,+)$ the angle between the intersection with $\Fix\kappa_1'(+,0,+,+)$ and any other isotropy axis is not smaller than $\alpha_1$. Similar arguments imply that this condition is satisfied for $j=1,3,4$ as well.
Since for even $K$ the elements $\kappa_3(0,0,+)$ and $\kappa_3(1,0,+)$ are not conjugate, the set (\ref{15Kgrps}) satisfies conditions {\bf C1}-{\bf C6} of lemma \ref{lem1}. \end{itemize} \qed \subsection{The groups $\Gamma$ in $O(4)$ but not in $SO(4)$}\label{secth2} In this subsection we prove theorem \ref{th2}, which completes the list of finite subgroups of $O(4)$ admitting pseudo-simple heteroclinic cycles. A reflection in $\R^4$ can be expressed in the quaternionic representation as ${\bf q}\to{\bf a\tilde qb}$, where ${\bf a}$ and ${\bf b}$ are a pair of unit quaternions (see \cite{pdv,pc15}). We write this reflection as $({\bf a};{\bf b})^*$. The transformations ${\bf q}\mapsto {\bf a\tilde qa}$ and ${\bf q}\mapsto -{\bf a\tilde qa}$ are respectively the reflections about the axis $\bf a$ and through the hyperplane orthogonal to the vector $\bf a$. A group $\Gamma^*\subset O(4)$, $\Gamma^*\not\subset SO(4)$, can be decomposed as $\Gamma^*=\Gamma\oplus\sigma\Gamma$, where $\Gamma\subset SO(4)$ and $\sigma=({\bf a};{\bf b})^*\notin SO(4)$. If $\Gamma^*$ is finite, then in the quaternionic form of $\Gamma$ \begin{equation}\label{gO4} \Gamma=({\bf L}\rl{\bf L}_K;{\bf R}\rl{\bf R}_K),\hbox{ where } {\bf L}\cong{\bf R}\hbox{ and }{\bf L}_K\cong{\bf R}_K. \end{equation} \begin{theorem}\label{th2} A group $\Gamma^*\subset O(4)$, \begin{equation}\label{decomp} \Gamma^*=\Gamma\oplus\sigma\Gamma, \hbox{ where $\Gamma\subset SO(4)$ and $\sigma\notin SO(4)$}, \end{equation} admits pseudo-simple heteroclinic cycles if and only if ${\Gamma}$ and $\sigma$ are listed in table \ref{table-th2}.
\begin{table}[h] \begin{equation*} \renewcommand{\arraystretch}{1.5} \begin{array}{l|l} \hline {\Gamma} & \sigma \\ \hline (\D_{Kr}\rl\Z_K;\D_{Kr}\rl\Z_K)_s,\ K\gc(r,K(1-s))\ge3 & -((0,1,0,0);(0,1,0,0))^* \\ \hline (\D_{Kr}\rl\Z_{2K};\D_{Kr}\rl\Z_{2K})_s,\ K\gc(r,K(1-s))\ge3 & ((\cos\theta_0,0,0,\sin\theta_0);(1,0,0,0))^*,\\ & \theta_0=\pi/(2K) \\ \hline \end{array} \end{equation*} \caption{Groups $\Gamma\oplus\sigma\Gamma \subset O(4)$ admitting pseudo-simple heteroclinic cycles}\label{table-th2} \end{table} \end{theorem} \proof Lemma 8 in \cite{pc15} states that if a group $\Gamma^*$ admits simple heteroclinic cycles, then so does $\Gamma$. By similar arguments the same holds true for pseudo-simple heteroclinic cycles. Therefore (see lemma \ref{lem1}), the group $\Gamma$ has two sequences of isotropy subgroups $\Sigma_j$, $\Delta_j$, $j=1,\dots,m$, satisfying conditions {\bf C1},{\bf C3},{\bf C4},{\bf C5} and {\bf C6$^*$}. Let $\Sigma_1$ be the subgroup satisfying {\bf C5}, i.e. $\Sigma_1\cong\Z_{k_1}$ with $k_1\ge3$. An element $\sigma'\in\Gamma^*$, $\sigma'\notin SO(4)$, maps $P_1=\Fix\Sigma_1$ either to itself, or to another $P'=\Fix\Sigma'$ with $\Sigma'\cong\Z_{k_1}$. First, we assume the existence of $\sigma'$, such that $\sigma'P_1=P_1$. Hence, there exists $\sigma\in\Gamma^*$ which is a reflection through a hyperplane that contains $P_1$. Let the hyperplane be spanned by ${\bf e}_1$, ${\bf e}_3$ and ${\bf e}_4$ and $P_1=<{\bf e}_1,{\bf e}_4>$. The hyperplane is mapped by elements of $\Sigma_1$ to \begin{equation}\label{hplanes} <{\bf e}_1,{\bf e}_4,\cos\theta_n{\bf e}_2+\sin\theta_n{\bf e}_3>,\ 0\le n<k_1/2,\ \theta_n=2\pi n/k_1. \end{equation} Any isotropy plane of $\Gamma$, that intersects with $P_1$, is $P(\theta',\theta_n)= <\cos\theta'{\bf e}_1+\sin\theta'{\bf e}_4,\cos\theta_n{\bf e}_2+\sin\theta_n{\bf e}_3>$. An isotropy plane $P'=\Fix\Sigma'\ne P_1$, such that $\Sigma'\cong\Z_{k'}$ with $k'\ge3$, is orthogonal to all hyperplanes (\ref{hplanes}). 
Therefore (if such an isotropy plane exists), it is $<{\bf e}_2,{\bf e}_3>$. Any other isotropy plane of $\Gamma$ (different from $P_1$, $P'$ and $P(\theta',\theta_n)$) either intersects all hyperplanes (\ref{hplanes}) orthogonally, or the line of intersection belongs to $P_1$ or $P'$. Since there is no isotropy plane that satisfies these conditions, we conclude that the only isotropy planes of $\Gamma$ are $P_1$, $P'$ and $P(\theta',\theta_n)$. The groups listed in table \ref{table-th1} satisfying these conditions and (\ref{gO4}) are $(\D_{Kr}\rl\Z_K;\D_{Kr}\rl\Z_K)_s$, $K\gc(r,K(1-s))\ge3$. The element $\sigma$ acting as the reflection through $<{\bf e}_1,{\bf e}_3,{\bf e}_4>$ is $-((0,1,0,0);(0,1,0,0))^*$. Second, we assume that there is no $\sigma\in\Gamma^*$, $\sigma\notin SO(4)$, such that $\sigma P_1=P_1$. Then any $\sigma\in\Gamma^*$ with $\sigma\notin SO(4)$ satisfies $\sigma P_1=P'$, where $P'=\Fix\Sigma'$ with $\Sigma'\cong\Z_{k_1}$, and the subgroups $\Sigma_1$ and $\Sigma'$ are not conjugate in $\Gamma$. The only groups in table \ref{table-th1} that contain such $\Sigma_1$ and $\Sigma'$ are $(\D_{Kr}\rl\Z_{2K};\D_{Kr}\rl\Z_{2K})_s$, $K\gc(r,K(1-s))\ge3$. Moreover, $\Sigma'=\Sigma_3$ (see appendix C). The element $\sigma$ maps a symmetry axis in $P_1$ to a symmetry axis in $\Fix\Sigma_3$. For definiteness, we assume that $\sigma$ maps $\Fix\Delta_1$ to $\Fix\Delta_3$, where according to the appendices\\ $\Delta_1=<\kappa_2(1,0,0),\kappa_1(+,1)>$, $\Delta_3=<\kappa_2(0,0,0),\kappa_1(-,1)>$,\\ $\kappa_2(n,0,0)=((0,\cos n\theta_1,\sin n\theta_1,0);(0,1,0,0))$, $\kappa_1(\pm,1)=((\cos\theta,0,0,\pm\sin\theta);(\cos\theta,0,0,\sin\theta))$, $\theta_1=\pi/K$, $\theta=\pi/m$ and $m=K\gc(r,K(1-s))$. Such $\sigma$ is $((\cos\theta_0,0,0,\sin\theta_0);(1,0,0,0))^*$.
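The action of $\sigma=-((0,1,0,0);(0,1,0,0))^*$, i.e. ${\bf q}\mapsto-{\bf a\tilde qb}$ with ${\bf a}={\bf b}=(0,1,0,0)$, can be checked directly: it fixes ${\bf e}_1$, ${\bf e}_3$, ${\bf e}_4$ and reverses ${\bf e}_2$, i.e. it is the reflection through $<{\bf e}_1,{\bf e}_3,{\bf e}_4>$. A small numerical sketch (helper names are ours):

```python
def qmul(p, q):
    """Product of quaternions (a, b, c, d) ~ a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    """Quaternion conjugate q -> tilde q."""
    a, b, c, d = q
    return (a, -b, -c, -d)

def sigma(q):
    """The map q -> -a (tilde q) b with a = b = (0, 1, 0, 0)."""
    i = (0, 1, 0, 0)
    r = qmul(qmul(i, qconj(q)), i)
    return tuple(-x for x in r)

e1, e2, e3, e4 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

The computation is exact in integer arithmetic, so the fixed hyperplane can be read off directly from the images of the basis vectors.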
\qed \begin{remark}\label{O4_1} A heteroclinic cycle in a $\Gamma^*$-equivariant system, where in the decomposition (\ref{decomp}) $\Gamma=(\D_{Kr}\rl\Z_{2K};\D_{Kr}\rl\Z_{2K})_s,\ K\gc(r,K(1-s))\ge3$ and $\sigma=((\cos\theta_0,0,0,\sin\theta_0);(1,0,0,0))^*$, is, in general, completely unstable. The proof follows the same arguments as the proof of theorem 1 in \cite{pc16}. Similarly, the conditions for existence of a nearby periodic orbit are the ones given in theorems \ref{thperorb} and \ref{noorbit} in section \ref{sec6n} below. \end{remark} \begin{remark}\label{O4_2} A heteroclinic cycle in a $\Gamma^*$-equivariant system, where in the decomposition (\ref{decomp}) $\Gamma=(\D_{Kr}\rl\Z_K;\D_{Kr}\rl\Z_K)_s,\ K\gc(r,K(1-s))\ge3$ and $\sigma=-((0,1,0,0);(0,1,0,0))^*$, can be fragmentarily asymptotically stable. The conditions for stability can be obtained by standard (see, e.g., theorem 3 in \cite{pc16}) but tedious algebra; we do not present them here. \end{remark} \section{Existence of nearby periodic orbits when $\Gamma\subset SO(4)$} \label{sec6n} As shown in \cite{pc16}, despite the complete instability of a pseudo-simple heteroclinic cycle in a $\Gamma$-equivariant system for $\Gamma\subset SO(4)$, trajectories that stay in a small neighbourhood of the cycle for all $t>0$ may exist. Namely, it was proven {\it ibid.} that in a one-parameter dynamical system an asymptotically stable periodic orbit can bifurcate from such a cycle. More specifically, in their example an asymptotically stable periodic orbit exists as long as a double positive eigenvalue is sufficiently small. Building blocks of the considered cycles were comprised of two equilibria whose isotropy groups were isomorphic to $\D_3$. One of these equilibria had a multiple expanding eigenvalue, while the other had a multiple contracting one.
In this section we prove that similar periodic orbits can bifurcate in a more general setup: we do not restrict the number of equilibria in a building block (note that a building block of a pseudo-simple cycle in $\R^4$ is comprised of at least two equilibria) and assume that their isotropy groups are isomorphic to $\D_k$ with $k\le4$. However, we assume that the building block of a heteroclinic cycle involves only one equilibrium with a multiple expanding eigenvalue. In the case of several such equilibria, the bifurcation of a periodic orbit has codimension two or higher, which is beyond the scope of this paper. By contrast, no such periodic orbits bifurcate in a codimension one bifurcation if a building block involves an equilibrium with the isotropy group $\D_k$, $k\ge5$. \subsection{The case $\D_3$ and $\D_4$}\label{d3d4} Consider the $\Gamma$-equivariant system \begin{equation}\label{eqal_ode} \dot{\bf x}=f({\bf x},\mu),\hbox{ where } f(\gamma{\bf x},\mu)=\gamma f({\bf x},\mu) \mbox{ for all }\gamma\in\Gamma\subset SO(4), \end{equation} and $f:\R^4\times\R\to\R^4$ is a smooth map. We assume that the system possesses a pseudo-simple heteroclinic cycle with a building block $\{\xi_1\to\ldots\xi_m;\ \gamma\}$. By $-c_j$, $e_j$ and $t_j$ we denote the non-radial eigenvalues of $df(\xi_j)$, $1\le j\le m$. Let $\xi_2$ be an equilibrium with a two-dimensional expanding eigenspace (hence, $e_2=t_2$) and a symmetry group $\Delta_2=\D_k$, $k=3$ or 4, acting naturally on the expanding eigenspace, and let all other equilibria have one-dimensional expanding eigenspaces. In the leading order, a general $\D_k$-equivariant dynamical system in $\C$ is $\dot z=\alpha z+\beta \bar{z}^2$ (for $k=3$) and $\dot z=\alpha z+\beta_1z^2\bar z+\beta_2\bar z^3$ (for $k=4$). A necessary condition for the existence of a heteroclinic trajectory $\xi_2\to\xi_3$ along the direction of real $z$ is that $e_2=\alpha>0$ and $\beta>0$ or $\beta_1+\beta_2>0$ (for $k=3$ or $k=4$, respectively).
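The $\D_k$-equivariance of these leading-order systems can be verified directly: for real coefficients both vector fields commute with the rotation $z\mapsto{\rm e}^{2\pi\ri/k}z$ and the reflection $z\mapsto\bar z$. A short numerical sketch (the coefficient values are arbitrary illustrations):

```python
import cmath
import math

def f3(z, alpha=1.0, beta=0.7):
    """Leading-order D_3-equivariant field: z' = alpha z + beta conj(z)^2."""
    return alpha*z + beta*z.conjugate()**2

def f4(z, alpha=1.0, b1=0.4, b2=0.9):
    """Leading-order D_4-equivariant field: z' = alpha z + b1 z^2 conj(z) + b2 conj(z)^3."""
    return alpha*z + b1*z**2*z.conjugate() + b2*z.conjugate()**3

z = 0.3 + 0.8j
for k, f in ((3, f3), (4, f4)):
    w = cmath.exp(2j * math.pi / k)       # rotation generator of D_k
    assert abs(f(w*z) - w*f(z)) < 1e-12   # equivariance under the rotation
    assert abs(f(z.conjugate()) - f(z).conjugate()) < 1e-12  # and under the reflection
```

On the real axis $f_4$ reduces to $\dot r=\alpha r+(\beta_1+\beta_2)r^3$, which is why the sign of $\beta_1+\beta_2$ controls the existence of the connection along real $z$.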
Suppose that there exists $\mu_0>0$ such that \begin{itemize} \item[(i)] $e_2<0$ for $-\mu_0<\mu<0$ and $e_2>0$ for $0<\mu<\mu_0$; \item[(ii)] for any $0<\mu<\mu_0$ there exist heteroclinic connections $\kappa_j=(W_u(\xi_j)\cap P_j)\cap W_s(\xi_{j+1})\ne\varnothing$, for all $1\le j\le m$, where $\xi_{m+1}=\gamma\xi_1$. \end{itemize} Denote by $X$ the group orbit of heteroclinic connections $\kappa_j$: $$X=\cup_{\sigma\in\Gamma}\sigma\biggl(\bigcup_{1\le j\le m}\kappa_j\biggr),$$ by $\eta$ the product $\eta=\prod_{3\le j\le m}\min(c_j/e_j,1-t_j/e_j)$, where we set $\eta=1$ if $m=2$, and let $\zeta=3$ (for $k=3$) or $\zeta=2\beta_2/(\beta_1+\beta_2)$ (for $k=4$). \begin{theorem}\label{thperorb} \begin{itemize} \item[(a)] If $\eta\zeta c_1<e_1$ then there exist $\mu'>0$ and $\delta>0$, such that for any $0<\mu<\mu'$ almost all trajectories escape from $B_{\delta}(X)$ as $t\to\infty$. \item[(b)] If $\eta\zeta c_1>e_1$ then generically there exists a periodic orbit bifurcating from $X$ at $\mu=0$. To be more precise, for any $\delta>0$ we can find $\mu(\delta)>0$ such that for all $0<\mu<\mu(\delta)$ the system (\ref{eqal_ode}) possesses an asymptotically stable periodic orbit that belongs to $B_{\delta}(X)$. \end{itemize} \end{theorem} We give the proof only for $k=4$; for $k=3$ it can be obtained by a simple modification combined with results of \cite{pc16}. Since it follows closely the proof of theorem 2 {\it ibid.}, some details are omitted and the reader is referred to that paper. We first formulate lemma \ref{d3_30} below, describing properties of trajectories of a generic $\D_4$-equivariant system in $\C$, which in the leading order is \begin{equation}\label{sysA} \dot z=\alpha z+\beta_1z^2\bar z+\beta_2\bar z^3. \end{equation} In polar coordinates, $z=r\re^{\ri\theta}$, it takes the form \begin{equation}\label{pcor1} \begin{array}{rcl} \dot r&=&\alpha r+r^3(\beta_1+\beta_2\cos4\theta),\\ \dot\theta&=&-\beta_2 r^2\sin4\theta.
\end{array} \end{equation} We assume that \begin{equation}\label{pcoraux} \alpha>0,\ \beta_2>0\hbox{ and }\beta_1+\beta_2>0. \end{equation} The system has four invariant axes with $\theta=K\pi/4$, $K=0,1,2,3$. The two axes with even $K$ are symmetric images of one another, as are the two axes with odd $K$. If $\beta_1-\beta_2<0$, there are four equilibria away from the origin, with $r^2=\alpha/(\beta_2-\beta_1)$ and $\theta=(2k+1)\pi/4$, $k=0,1,2,3$. We consider the system in the sector $0\le\theta<\pi/4$; the rest of $\C$ is related to this sector by the symmetries of the group $\D_4$. Trajectories of the system satisfy \begin{equation}\label{pcor2} {\rd\tilde r\over\rd\theta}=-{2\alpha+2\tilde r(\beta_1+\beta_2\cos4\theta) \over \beta_2\sin4\theta}, \end{equation} where we have denoted $\tilde r=r^2$. Re-writing this equation as $${\rd\tilde r\over\rd\theta}+\tilde r{2(\beta_1+\beta_2\cos4\theta) \over\beta_2\sin4\theta}=-{2\alpha\over\beta_2\sin4\theta},$$ multiplying it by $s(\theta)=(\sin4\theta)^{(\beta_1+\beta_2)/2\beta_2}(1+\cos4\theta)^{-\beta_1/2\beta_2}$ and integrating, we obtain \begin{equation}\label{trajr} r^2s(\theta)=-{2\alpha\over\beta_2}S(\theta)+C,\hbox{ where } S(\theta)=\int_0^{\theta}{s(\theta')\over\sin4\theta'}\rd\theta', \end{equation} which implies that \begin{equation}\label{trajr1} r^2s(\theta)+{2\alpha\over\beta_2}S(\theta)= r^2_0s(\theta_0)+{2\alpha\over\beta_2}S(\theta_0) \end{equation} for the trajectory through the point $(r_0,\theta_0)$. \begin{lemma}\label{d3_30} Let $\tau(r_0,\theta_0)$ denote the time it takes the trajectory of the system (\ref{pcor1}),(\ref{pcoraux}) starting at $(r_0,\theta_0)$ to reach $r=1$, and let $\vartheta(r_0,\theta_0)$ denote the value of $\theta$ at $r=1$.
Then \begin{itemize} \item[(i)] $\tau(r_0,0)$ satisfies $${\rm e}^{2\alpha\tau(r_0,0)}={r_0^2+\alpha/(\beta_1+\beta_2)\over r_0^2(1+\alpha/(\beta_1+\beta_2))}.$$ \item[(ii)] $\tau(r_0,\theta_0)$ satisfies \begin{equation}\label{estr} \tau(r_0,\theta_0)>\tau(r_0,0)\hbox{ for any }0<\theta_0<\pi/4. \end{equation} \item[(iii)] $\vartheta(r_0,\theta_0)$ satisfies \begin{equation}\label{estt} s(\vartheta(r_0,\theta_0))+ {2\alpha\over\beta_2}S(\vartheta(r_0,\theta_0))= r_0^2s(\theta_0)+{2\alpha\over\beta_2}S(\theta_0). \end{equation} \item[(iv)] Given $C>0$, $\beta_1+\beta_2>0$ and $0<\theta_0<\pi/4$, for sufficiently small $\alpha$ and $r_0$ $${\rm e}^{-C\tau(r_0,\theta_0)}\ll\vartheta(r_0,\theta_0).$$ \end{itemize} \end{lemma} The proof is similar to the proof of lemma 3(i-iv) in \cite{pc16} and is omitted. \bigskip\noindent {\bf Proof of the theorem}\\ As usual, we approximate trajectories in the vicinity of the cycle by a superposition of local and global maps, $\phi_j:\ H^{(in)}_j\to H^{(out)}_j$ and $\psi_j:\ H^{(out)}_j\to H^{(in)}_{j+1}$, respectively, where $H^{(in)}_j$ and $H^{(out)}_j$ are cross sections transversal to the incoming and outgoing connections at an equilibrium $\xi_j$. We consider $g=\gamma\phi_1\psi_2\phi_2...\psi_m\phi_m\psi_1:\ H^{(out)}_1\to H^{(out)}_1$, where $\gamma$ is the symmetry in the definition of a building block. Since the expanding eigenspace of $\xi_2$ is two-dimensional, the contracting eigenspace of $\xi_1$ is two-dimensional as well. By the assumption of the theorem, the other equilibria in the cycle have one-dimensional expanding and contracting eigenspaces. We employ the coordinates $(w_j,q_j)$ in $H^{(in)}_j$ and $(v_j,q_j)$ in $H^{(out)}_j$, similarly to \cite{pc16}. We also employ the coordinates $(\rho_1,\theta_1)$ and $(\rho_2,\theta_2)$ in $H^{(out)}_1$ and $H^{(in)}_2$, respectively, such that $v_1=\rho_1\cos\theta_1$, $q_1=\rho_1\sin\theta_1$, $w_2=\rho_2\cos\theta_2$ and $q_2=\rho_2\sin\theta_2$.
In the leading order the map $\phi_1$ is $$(v_1^{(out)},q_1^{(out)})=\phi_1(w_1^{(in)},q_1^{(in)})= (v_{0,1}(w_1^{(in)})^{c_1/e_1},q_1^{(in)}(w_1^{(in)})^{c_1/e_1}),$$ which in polar coordinates takes the form \begin{equation}\label{phmap1} (\rho_1^{(out)},\theta_1^{(out)})=\phi_1(w_1^{(in)},q_1^{(in)})= (v_{0,1}(w_1^{(in)})^{c_1/e_1},\arctan(q_1^{(in)}/v_{0,1})). \end{equation} The maps $\phi_j$, $j=3,\ldots,m$, are \begin{equation}\label{phmapj} (v_j^{(out)},q_j^{(out)})=\phi_j(w_j^{(in)},q_j^{(in)})= (v_{0,j}(w_j^{(in)})^{c_j/e_j},q_j(w_j^{(in)})^{-t_j/e_j}). \end{equation} (Here superscripts indicate coordinates in $H^{(in)}_1$ or $H^{(out)}_1$. Below, where it does not create ambiguity, we omit the superscripts.) In the leading order the map $\psi_1$ is \begin{equation}\label{glmap1} (\rho_2,\theta_2)=\psi_1(\rho_1,\theta_1)=(A\rho_1,\theta_1+\Theta), \end{equation} where generically $\Theta\ne N\pi/4$ for $N=1,...,8$. The maps $\psi_j$, $j=2,...,m$, are \begin{equation}\label{psi2} (w_1,q_1)=\psi_j(v_2,q_2)=(B_{j,11}v_2+B_{j,12}q_2,B_{j,21}v_2+B_{j,22}q_2). \end{equation} Because of (i), for small $\mu$ the expanding eigenvalue of $\xi_2$ depends linearly on $\mu$; therefore, without loss of generality we can assume that $e_2=\mu$. Generically, all other eigenvalues and coefficients in the expressions for the local and global maps do not vanish for sufficiently small $\mu$ and are of the order of one. We assume them to be constants independent of $\mu$. From (ii), the eigenvalues satisfy $e_1>0$, $-c_1<0$ and $-c_2<0$. For small enough $\tilde\delta$, in the scaled neighbourhood $B_{\tilde\delta}(\xi_2)$ the restriction of the system to the unstable manifold of $\xi_2$ in the leading order is $\dot z=\mu z+\beta_1 z^2\bar z+\beta_2\bar z^3$, where we have denoted $z=w_2+\ri q_2$.
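The power-law form of the local maps (\ref{phmap1}) and (\ref{phmapj}) comes from integrating the linearization at $\xi_j$ until the expanding coordinate reaches the cross-section; a minimal sketch (cross-section placed at $w=1$, eigenvalues are illustrative):

```python
import math

def local_map(w0, v0, q0, e, c, t, h=1.0):
    """Flow dw/ds = e w, dv/ds = -c v, dq/ds = t q from (w0, v0, q0)
    until the expanding coordinate w reaches the cross-section w = h."""
    T = math.log(h / w0) / e                      # time of flight
    return v0 * math.exp(-c * T), q0 * math.exp(t * T)

# the exit values obey the power laws v0 * w0**(c/e) and q0 * w0**(-t/e)
w0, v0, q0, e, c, t = 1e-4, 0.3, 0.2, 1.0, 1.5, 0.4
v_out, q_out = local_map(w0, v0, q0, e, c, t)
```

For $h=1$ this reproduces the exponents $c_j/e_j$ and $-t_j/e_j$ entering $\phi_j$, and hence the quantities $\min(c_j/e_j,1-t_j/e_j)$ appearing in $\eta$.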
We assume that the local bases near $\xi_1$ and $\xi_2$ are chosen in such a way that the heteroclinic connections $\gamma^{-1}\xi_m\to\xi_1$ and $\xi_2\to\xi_3$ go along the directions $\theta_j=0$ for both $j=1,2$. In the complement subspace the system is approximated by the contractions $\dot u=-r_2 u$ and $\dot v=-c_2 v$. In terms of the functions $\tau(r,\theta)$ and $\vartheta(r,\theta)$ introduced in lemma \ref{d3_30}, the map $\phi_2$ is $$(v_2,q_2)=\phi_2(\rho_2,\theta_2)= (v_{0,2}\re^{-c_2\tau(\rho_2,\theta_2)},\sin\vartheta(\rho_2,\theta_2)).$$ According to lemma \ref{d3_30}(iv), for small $\rho_2$ and $\mu$ $$\re^{-c_2\tau(\rho_2,\theta_2)}\ll\sin\vartheta(\rho_2,\theta_2),$$ which implies that the superposition $\psi^*=\psi_3...\psi_m\phi_m\psi_1$ can be approximated as $\psi^*(v_2,q_2)\approx(B_{1,*}q_2^\eta,B_{2,*}q_2^\eta)$, where $\eta=\prod_{3\le j\le m}\min(c_j/e_j,1-t_j/e_j)$ and the constants $B_{1,*}$ and $B_{2,*}$ depend on $B_{j,kl}$, $2\le j\le m$, and on the eigenvalues of $\rd f(\xi_j)$, $3\le j\le m$. For small $\theta_1$ we have $\sin\theta_1\approx\tan\theta_1\approx\theta_1$. Taking into account (\ref{phmap1}), (\ref{glmap1}) and lemma \ref{d3_30}(iii), we obtain that \begin{equation}\label{mapg} g(\rho_1,\theta_1)\approx \biggl(C_1(\rho_1^2A\beta_2 s(\Theta)+\mu S(\Theta))^{\eta\zeta c_1/2e_1}, C_2(\rho_1^2A\beta_2 s(\Theta)+\mu S(\Theta))^{\eta\zeta/2}\biggr), \end{equation} where we have denoted $\zeta=2\beta_2/(\beta_1+\beta_2)$, $C_1=v_{0,1}\beta_2^{-\eta\zeta c_1/2e_1}|B_{1,*}|^{c_1/e_1}$ and $C_2=v_{0,1}^{-1}4^{-1}\beta_2^{-\eta\zeta/2}B_{2,*}$.
\medskip (a) From (\ref{mapg}), the $\rho$-component of $g$ satisfies $$g_{\rho}(\rho_1,\theta_1)> C_3\rho_1^{\eta\zeta c_1/e_1}, \hbox{ where }C_3=C_1(A\beta_2 s(\Theta))^{\eta\zeta c_1/2e_1},$$ hence if $\eta\zeta c_1<e_1$ then for any $0<\delta<C_3^{e_1/(e_1-c_1\eta\zeta)}$ the iterates $g^n(\rho_1,\theta_1)$ with initial $0<\rho_1<\delta$ satisfy $g_{\rho}^n(\rho_1,\theta_1)>\delta$ for sufficiently large $n$. \medskip (b) Assume that $\eta\zeta c_1>e_1$. Existence and stability of a fixed point of the map $g$ (\ref{mapg}) for small $\mu$ can be proven by the same arguments as employed to prove theorem 2(b) in \cite{pc16}. We omit the proof. The fixed point can be approximated by $(\rho_p,\theta_p)=(C_1(\mu S(\Theta))^{\eta\zeta c_1/2e_1}, C_2(\mu S(\Theta))^{\eta\zeta/2})$. This fixed point is an intersection of a periodic orbit with $H^{(out)}_1$. The distance from $(\rho_p,\theta_p)$ to $X$ depends on $\mu$ as $\mu^{c_1\eta\zeta/2e_1}$, therefore the periodic orbit approaches $X$ as $\mu\to0$. \qed \subsection{The case $\D_k$, $k\ge5$} In this subsection we prove that the bifurcation of a periodic orbit discussed in the previous subsection does not take place for $k\ge5$: \begin{theorem}\label{noorbit} Suppose that for $0<\mu<\mu_0$ the system (\ref{eqal_ode}) possesses a pseudo-simple heteroclinic cycle $X=\xi_1\to\ldots\to\xi_M$, where $\xi_2$ has a two-dimensional expanding eigenspace with the associated eigenvalue $e_2=\mu$ and the symmetry group $\Delta_2=\D_k$, $k\ge5$, acting naturally on the expanding eigenspace. Then there exist $\varepsilon>0$ and $\mu'>0$ such that for any $0<\mu<\mu'$ almost all trajectories $\Phi(x_0,t)$ of the system (\ref{eqal_ode}) with $d(\Phi(x_0,0),X)<\varepsilon$ satisfy $d(\Phi(x_0,t_0),X)>\varepsilon$ for some $t_0>0$. By $d(\cdot,\cdot)$ we denote the distance between a point and a set.
\end{theorem} \proof Similarly to the proof of theorem 1 in \cite{pc16}, we consider the map $\phi_2\psi_1\phi_1:\,H^{(in)}_1\to H^{(out)}_2$ and prove the existence of $\varepsilon>0$ such that \begin{equation}\label{nonincl} \phi_2\psi_1\phi_1(H^{(in)}_1(\varepsilon))\cap H^{(out)}_2(\varepsilon)=\varnothing, \end{equation} where $$H^{(in)}_1(\varepsilon)=\{(w,q)\in H^{(in)}_1~:~|(w,q)|<\varepsilon\} \hbox{ and }H^{(out)}_2(\varepsilon)= \{(v,q)\in H^{(out)}_2~:~|(v,q)|<\varepsilon\}.$$ Equation (\ref{nonincl}) shows that all points in $H^{(in)}_1(\varepsilon)$ are mapped outside $H^{(out)}_2(\varepsilon)$, which implies the statement of the theorem. The maps $\phi_1$ and $\psi_1$ are the same as for the $\D_4$ system; they are given by (\ref{phmap1}) and (\ref{glmap1}), respectively. In (\ref{glmap1}) generically $\Theta\ne N\pi/k$ for $N=1,2,...,2k$. Moreover, there exist $\Theta'>0$ and $\mu'>0$ such that $\min_{1\le N\le 2k} |\Theta- N\pi/k|>\Theta'$ for all sufficiently small $\delta$ and $0<\mu<\mu'$ (recall that $\delta$ is the distance from $H^{(in)}_2$ and $H^{(out)}_2$ to $\xi_2$). For small enough $\delta$, in a $\delta$-neighbourhood of $\xi_2$ the restriction of the system to the unstable manifold of $\xi_2$ in the leading order is $\dot z=\mu z+\beta_1 z^3+\beta_2\bar z^{k-1}$, where $z=w_2+\ri q_2$. In polar coordinates the system takes, in the leading order, the form $\dot r=\mu r+\beta_1 r^3$, $\dot\theta=-\beta_2 r^{k-2}\sin k\theta$, which implies that the map $\phi_2(\rho_2^{in},\theta_2^{in})=(v_2^{out},q_2^{out})$ satisfies $q_2^{out}=\delta\tan\theta_2^{out}$ and \begin{equation}\label{phmap2n} |\theta_2^{out}-\theta_2^{in}|<\int_{\rho_2^{in}}^{\delta} {|\beta_2|\over|\beta_1|}r^{k-5}\rd r={|\beta_2|\over|\beta_1|(k-4)} (\delta^{k-4}-(\rho_2^{in})^{k-4}). \end{equation} We choose $0<\delta<\Theta'/4$ and set \begin{equation}\label{setde} 0<\varepsilon<\min\biggl(\delta\tan{\Theta'\over4},~v_{0,1}\tan{\Theta'\over4}\biggr).
\end{equation} Any $(w_1,q_1)\in H^{(in)}_1(\varepsilon)$ satisfies $q_1<\varepsilon$, therefore (\ref{phmap1}) and (\ref{setde}) imply that $\theta_1<\Theta'/4$. Hence, due to (\ref{glmap1}), (\ref{phmap2n}) and (\ref{setde}), $|\theta_2- N\pi/k|>\Theta'/4$ for any $N$. The steady state $\xi_2$ has $k$ symmetric copies (under the action of symmetries $\sigma\in\Sigma_2$) of the heteroclinic connection $\kappa_2:\xi_2\to\xi_3$, which belong to the hyperplanes $\theta_2=N\pi/k$ with integer $N$. Due to (\ref{phmap2n}) and (\ref{setde}), the distance of $(v_2,q_2)$ to any of these hyperplanes is larger than $\delta\tan(\Theta'/4)$, which implies (\ref{nonincl}). \qed \section{Example: periodic orbit near a heteroclinic cycle in a $(\D_4\rl\D_2;\D_4\rl\D_2)$-equivariant system}\label{sec8n} We proved (see section \ref{sec6n}) that an attracting periodic orbit can exist near a pseudo-simple heteroclinic cycle if the isotropy subgroup of one of its equilibria is $\D_3$ or $\D_4$. For the case when the isotropy subgroup is $\D_3$, examples of $\Gamma$-equivariant systems possessing a periodic orbit in a neighbourhood of a heteroclinic cycle were given in \cite{pc16} for $\Gamma=(\D_3\rl\Z_1;\D_3\rl\Z_1)$ and $\Gamma=(\D_3\rl\Z_2;\mO\rl\V)$. The vector fields considered {\it ibid.} were third order normal forms commuting with the considered actions of $\Gamma$. Here we present a numerical example of a heteroclinic cycle with a nearby attracting periodic orbit, where the isotropy subgroup of an equilibrium is $\D_4$ and the $\Gamma$-equivariant vector field is constructed using ideas employed in the proofs of lemma \ref{lem1} and theorem \ref{thperorb}. We consider a $\Gamma$-equivariant dynamical system where $\Gamma=(\D_4\rl\D_2;\D_4\rl\D_2)$ (recall that the quaternionic group $\D_2$ is usually denoted by $\V$).
The elements of $\Gamma$ are: \begin{eqnarray*}\label{listex1} (\V&;&\V),\\ ((1,0,0,\pm1)/\sqrt{2}&;&(\pm1,0,0,\pm1)/\sqrt{2}),\\ ((1,0,0,\pm1)/\sqrt{2}&;&(0,\pm1,\pm1,0)/\sqrt{2}),\\ ((0,1,\pm1,0)/\sqrt{2}&;&(\pm1,0,0,\pm1)/\sqrt{2}),\\ ((0,1,\pm1,0)/\sqrt{2}&;&(0,\pm1,\pm1,0)/\sqrt{2}). \end{eqnarray*} The group has five isotropy types of subgroups $\Sigma$ satisfying $\dim\Fix\Sigma=2$ (see appendix A). In agreement with appendix C, we take $\Sigma_1=<\kappa_1>\cong\Z_4$ and $\Sigma_2=<\kappa_5>\cong\Z_2$. For convenience, we use different notation for the generating elements. Namely, we write $\Sigma_1=<\gamma_1>$ and $\Sigma_2=<\gamma_2>$, where $$\gamma_1(s)=((1,0,0,1)/\sqrt{2};(1,0,0,(-1)^s)/\sqrt{2})\hbox{ and }$$ $$\gamma_2(q,r,t)=((0,1,(-1)^q,0)/\sqrt{2};(0,(-1)^r,(-1)^t,0)/\sqrt{2}).$$ The action $(\bl;\br):\bx\to\bl\bx\br^{-1}$ on $\R^4$ of (some) elements of $\Gamma$ is \begin{equation}\label{action} \begin{array}{r|l} (\bl;\br)&\bx\to\bl\bx\br^{-1}\\ \hline\\ ((0,0,0,1);(0,0,0,1)) & \bx\to(x_1,-x_2,-x_3,x_4)\\ ((0,0,1,0);(0,0,1,0)) & \bx\to(x_3,x_4,-x_1,-x_2)\\ ((1,0,0,1)/\sqrt{2};(1,0,0,1)/\sqrt{2}) & \bx\to(x_1,x_3,-x_2,x_4)\\ ((0,1,1,0)/\sqrt{2};(0,1,1,0)/\sqrt{2}) & \bx\to(x_1,x_3,x_2,-x_4). \end{array}\end{equation} The isotropy planes can be labelled as follows: \begin{eqnarray*}\label{listex12} P_1(s)=\Fix\Sigma_1(s),&\hbox{ where }\Sigma_1(s)=<\gamma_1(s)>,\\ P_2(q,r,t)=\Fix\Sigma_2(q,r,t),& \hbox{ where }\Sigma_2(q,r,t)=<\gamma_2(q,r,t)>, \end{eqnarray*} hence there exist two different planes $P_1$ with $s=0,1$ and eight different planes $P_2$ corresponding to $q,r,t=0,1$. A plane $P_1$ contains four symmetry axes of two isotropy types, with the isotropy groups of the axes isomorphic to $\D_4$. An axis is an intersection of $P_1$ with two planes $P_2$ (and also with two other planes, fixed by $\kappa_4$, which are irrelevant here); namely, $P_1(s)$ intersects with $P_2(0,r,t)$ and $P_2(1,r+s,t+s+1)$.
The axes split into two isotropy classes, with odd or even $s+r+t$. A plane $P_2$ contains two isotropy axes, which are its intersections with $P_1(0)$ and $P_1(1)$. We choose $P_1(0)=(x_1,0,0,x_4)$, $P_2(0,0,0)=(x_1,x_2,x_2,0)$ and, in agreement with (\ref{fieh}), set \begin{equation}\label{fieh12} {\bf h}_1(r_1,\theta_1)=(r_1(1-r_1),\sin(4\theta_1)),\qquad {\bf h}_2(r_2,\theta_2)=(r_2(1-r_2),-\sin(2\theta_2)), \end{equation} where $x_1=r_1\cos\theta_1$ and $x_4=r_1\sin\theta_1$ in $P_1$, and $x_1=r_2\cos\theta_2$ and $x_2=(r_2\sin\theta_2)/\sqrt{2}$ in $P_2$. Hence, $\xi_1\approx(1/\sqrt{2},0,0,-1/\sqrt{2})\in P_1(0)\cap P_2(0,1,0)$ is unstable in $P_1$ and stable in $P_2$; $\xi_2\approx(1,0,0,0)\in P_1(0)\cap P_2(0,0,0)$ is stable in $P_1$ and unstable in $P_2$. Following the proof of lemma \ref{lem1}, we construct the system (\ref{gj})-(\ref{f-system}) that possesses a heteroclinic cycle with a building block $\xi_1\to\xi_2\to\gamma\xi_1$, where $\gamma=((1,0,0,0);(0,0,1,0))$. In agreement with theorem 1 in \cite{pc16}, the cycle is not asymptotically stable, hence trajectories starting near the cycle escape from it (see fig. \ref{figs9}(a)). Theorem \ref{thperorb} states that a periodic orbit exists near a heteroclinic cycle with $\Delta_2\cong\D_4$ if the multiple expanding eigenvalue $e_2$ is sufficiently small and $2c_1\beta_2/(\beta_1+\beta_2)>e_1$ (recall that $\alpha$, $\beta_1$ and $\beta_2$ are the coefficients of the system (\ref{sysA})\,). To be more precise, in the proof we use the fact that the ratio $\alpha/\beta_2$ is small.
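The dichotomy of theorem \ref{thperorb} can be illustrated by iterating the $\rho$-component of the approximate return map (\ref{mapg}). In the sketch below every constant ($C_1$, the combination $A\beta_2 s(\Theta)$, the term $\mu S(\Theta)$) and the exponent are hypothetical placeholders chosen for illustration, not values derived from the paper.

```python
# Iterate the rho-component of the return map (mapg):
#   rho' = C1 * (K * rho**2 + mu * S)**p,   p = eta*zeta*c1/(2*e1).
# All constants are hypothetical placeholders.

def rho_map(rho, mu, C1=1.0, K=1.0, S=1.0, p=0.75):
    # p > 1/2 corresponds to the condition eta*zeta*c1 > e1 of case (b)
    return C1 * (K * rho**2 + mu * S)**p

mu = 1e-4
rho = 1e-2
for _ in range(100):
    rho = rho_map(rho, mu)

# The iterates settle on a fixed point of size of order mu**p, so the
# bifurcating periodic orbit approaches the cycle as mu -> 0.
```

For $p<1/2$ (case (a)) the same iteration instead pushes any small $\rho$ above a fixed threshold, reproducing the escape of trajectories from a neighbourhood of the cycle.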
Therefore, we introduce a modified system \begin{equation}\label{fm-system} \dot{\bf x}={\bf f}^*({\bf x}),\hbox{ where } {\bf f}^*({\bf x})={\bf f}({\bf x})+ \sum\limits_{\gamma^*\in G_2} \gamma^*{\bf g}^*((\gamma^*)^{-1}{\bf x}), \end{equation} \begin{equation}\label{g*} {\bf g}^*({\bf y})=(0,0,{c y_3^3\over 1+B|\pi^{\perp}{\bf y}|^2}, {-b y_4y_3^2\over 1+B|\pi^{\perp}{\bf y}|^2}), \end{equation} $G_2=\Gamma/N_{\Gamma}(\Sigma_2(0,0,0))$ and $\by=(x_1,x_2,(x_3+x_4)/\sqrt{2},(x_3-x_4)/\sqrt{2})$. In a small neighbourhood of $\xi_2$ the projection of the local field (\ref{fm-system}) onto the plane $x_1=x_2=0$ is $$\dot y_3=ay_3+cy_3^3-by_3y_4^2,\quad \dot y_4=ay_4+cy_4^3-by_4y_3^2.$$ Comparing the above expression with (\ref{sysA}), we obtain that $$\beta_1=(c-3b)/2\hbox{ and }\beta_2=(c+b)/4.$$ If the coefficient $B$ in (\ref{g*}) is sufficiently large, then by the same arguments as applied in the proof of lemma \ref{lem1}, the system (\ref{fm-system}) possesses the heteroclinic cycle $\xi_1\to\xi_2\to\gamma\xi_1$. Theorem \ref{thperorb} indicates that for $b\gg c>0$ there exists a stable periodic orbit close to the cycle. Therefore, we set \begin{equation}\label{parr} B=100,\ b=1000,\ c=0.1\ . \end{equation} In agreement with our arguments, the system (\ref{fm-system})-(\ref{parr}) has an attracting periodic orbit near the heteroclinic cycle, as shown in fig. \ref{figs9}(b).
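As a quick sanity check of the parameter choice (\ref{parr}), the stated relations between $(b,c)$ and $(\beta_1,\beta_2)$ can be evaluated directly:

```python
# Coefficients beta1 = (c - 3b)/2 and beta2 = (c + b)/4 of the local field
# at xi_2, evaluated for the parameter values (parr): b = 1000, c = 0.1.
b, c = 1000.0, 0.1
beta1 = (c - 3 * b) / 2   # = -1499.95
beta2 = (c + b) / 4       # = 250.025
```

For $b\gg c$ both coefficients are large in absolute value, so any fixed $\alpha$ makes the ratio $\alpha/\beta_2$ small, which is the regime used in the proof.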
\begin{figure}[h] \vspace*{-62mm} \hspace*{-10mm}\includegraphics[width=10cm]{fig1.pdf}\hspace*{-20mm}\includegraphics[width=112mm]{fig2.pdf} \vspace*{-17mm} \hspace*{60mm}{\large (a)}\hspace*{70mm}{\large (b)} \vspace*{3mm} \noindent \caption{Projection into the plane $<{\bf v}_1,{\bf v}_2>$, where ${\bf v}_1=(4,2,4,1.5)$ and ${\bf v}_2=(2,4,-1.5,4)$, of the heteroclinic connections $\xi_2\to \xi_1$ (dashed lines) and $\xi_1\to \xi_2$ (dotted lines), together with (solid lines) a trajectory of the system $\dot\bx={\bf f}(\bx)$ (a) and a periodic orbit of the system $\dot\bx={\bf f}^*(\bx)$ (b). The steady state $\xi_1$ is denoted by a hollow circle and $\xi_2$ by a filled one.} \label{figs9}\end{figure} \section{An example of stability when $\Gamma\not\subset SO(4)$}\label{sec6} In this section we show that a family of subgroups $\Gamma\subset O(4)$, $\Gamma\not\subset SO(4)$, admits heteroclinic cycles involving multidimensional heteroclinic orbits. Following \cite{cgl1999}, we call such heteroclinic cycles {\it generalized}. We derive conditions for the asymptotic stability of such a generalized cycle and show that it contains as a subset a pseudo-simple heteroclinic cycle, which can be fragmentarily asymptotically stable. Numerical studies indicate that the addition of a small perturbation breaking one of the symmetries can result in the emergence of an asymptotically stable periodic orbit or of chaotic dynamics in the vicinity of a pseudo-simple heteroclinic cycle. We shall in fact consider a class of subgroups of $O(4)$ defined as follows. \\ Let $(x_1,y_1,x_2,y_2)\in\R^4$ and $z_j=x_j+iy_j$.
Fix an integer $n\geq 3$ and let $\Gamma$ be the group generated by the transformations \begin{equation} \label{gen:Gamma} \rho:~(z_1,z_2)\mapsto (z_1,e^{\frac{2\pi i}{n}}z_2),~~\kappa:~(z_1,z_2)\mapsto (\bar z_1,z_2),~~\sigma:~(z_1,z_2)\mapsto (z_1,\bar z_2). \end{equation} (Choosing coordinates $z_1=q_1+iq_4$ and $z_2=q_2+iq_3$, we obtain that in the quaternionic presentation the $SO(4)$ subgroup of $\Gamma$ is $(\D_n\rl\Z_1;\D_n\rl\Z_1)$, in agreement with theorem \ref{th2}.) This group action decomposes $\R^4$ into the direct sum of three irreducible representations of the group $\Gamma=\D_n\times\Z_2$: \\ (i) the trivial representation acting on the component $x_1$, \\ (ii) the one-dimensional representation acting on $y_1$ by $\kappa y_1=-y_1$, \\ (iii) the two-dimensional natural representation of $\D_n$ acting on $z_2=(x_2,y_2)$. \\ There are five types of fixed-point subspaces for this action: \begin{itemize} \item $L=P_1\cap P_2=Fix(\Gamma)$, \item $P_1=\{(x_1,y_1,0,0)\}=Fix(\rho,\sigma)$, \item $P_2=\{(x_1,0,x_2,0)\}=Fix(\kappa,\sigma)$, \item $V=\{(x_1,y_1,x_2,0)\}=Fix(\sigma)$, \item $W=\{(x_1,0,x_2,y_2)\}=Fix(\kappa)$. \end{itemize} When $n$ is even there are two more types of invariant subspaces: \begin{itemize} \item $P'_2=\{(x_1,0,x_2\cos(\pi/n),x_2\sin(\pi/n))\}=Fix(\kappa,\rho\sigma)$, \item $V'=\{(x_1,y_1,x_2\cos(\pi/n),x_2\sin(\pi/n))\}=Fix(\rho\sigma)$. \end{itemize} Note that $P_1$ is invariant under the whole group $\Gamma$. When $n$ is odd $P_2$ and $V$ have $n-1$ symmetric copies $\rho^j P_2$, $\rho^j V$, $j=1,\dots,n-1$. When $n$ is even each of $P_2$, $P'_2$, $V$, $V'$ has $n/2-1$ symmetric copies. \\ It can be shown that for an open set of $\Gamma$-equivariant vector fields, there exists an equilibrium $\xi_1$ on the negative semi-axis in $L$, an equilibrium $\xi_2$ on the positive semi-axis, and heteroclinic orbits lying in the planes $P_1$ and $P_2$ and realizing a cycle between $\xi_1$ and $\xi_2$.
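The fixed-point subspaces listed above can be checked mechanically from the generators (\ref{gen:Gamma}). The sketch below uses the notation of the text; the sample points and the value of $n$ are arbitrary.

```python
import cmath

# Generators of Gamma acting on (z1, z2), as in (gen:Gamma); n >= 3 is arbitrary.
n = 5
rho   = lambda z1, z2: (z1, cmath.exp(2j * cmath.pi / n) * z2)
kappa = lambda z1, z2: (z1.conjugate(), z2)
sigma = lambda z1, z2: (z1, z2.conjugate())

def fixed_by(point, *gens):
    """True if every generator in gens fixes the point (z1, z2)."""
    return all(abs(g(*point)[0] - point[0]) + abs(g(*point)[1] - point[1]) < 1e-12
               for g in gens)

# Representative points, written as (z1, z2) = (x1 + i*y1, x2 + i*y2).
P1 = (0.3 + 0.7j, 0.0 + 0.0j)   # {(x1, y1, 0, 0)}
P2 = (0.3 + 0.0j, 0.5 + 0.0j)   # {(x1, 0, x2, 0)}
V  = (0.3 + 0.7j, 0.5 + 0.0j)   # {(x1, y1, x2, 0)}
W  = (0.3 + 0.0j, 0.5 + 0.2j)   # {(x1, 0, x2, y2)}
```

Checking, for instance, that the representative of $P_2$ is fixed by $\kappa$ and $\sigma$ but the representative of $W$ is fixed by $\kappa$ only reproduces the table of isotropy subgroups above.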
Moreover this cycle is pseudo-simple due to the action of the rotation $\rho$ on the plane $P_2$, which forces the eigenvalues along the $x_2$ direction in $P_2$ to be double. To fix ideas, we assume that the double eigenvalue is stable at $\xi_1$ and unstable at $\xi_2$. In order to study the stability of this pseudo-simple cycle we shall exploit a property that was observed in the case $n=3$ in \cite{pc16} and appears to also occur when $n>3$. First, the two-dimensional unstable manifold at $\xi_2$ lies entirely in the invariant subspace $W$, which also contains the axis $L$. Second, for an open set of vector fields any orbit on this unstable manifold lies in the stable manifold of $\xi_1$, hence realizing a two-dimensional manifold of saddle-sink connections in $W$. Therefore the pseudo-simple heteroclinic cycle is part of a cycle involving multidimensional heteroclinic orbits, which was called a generalized heteroclinic cycle in \cite{cgl1999}. Let us prove this claim. \begin{proposition}\label{prop:existence gen cycle} There exists an open set ${\cal V}$ of $\Gamma$-equivariant smooth vector fields which possess a generalized heteroclinic cycle. This cycle, which we denote by ${\cal X}$, connects two equilibria $\xi_1$ and $\xi_2$ which lie on the negative, resp.\ positive, semi-axis in $L$. It is composed of a single heteroclinic orbit in $P_1$ and a two-dimensional manifold of heteroclinic orbits in the space $W$. This manifold in $W$ contains heteroclinic orbits in $P_2$ and in $P'_2$ (when $n$ is even), which realize two isotropy types of pseudo-simple heteroclinic cycles. \end{proposition} \proof Let us consider the group $\Gamma_\infty$ defined by the relations \eqref{gen:Gamma} where we replace the transformation $\rho$ by $\rho_\varphi:~(z_1,z_2)\mapsto (z_1,e^{i\varphi}z_2)$, $\varphi\in S^1$.
This group has the same invariant subspaces as $\Gamma$, but in addition any copy $\rho_\varphi P_2$ of the plane $P_2$ is also invariant; moreover, $W$ is swept out by letting $\rho_\varphi$ rotate $P_2$ over all $\varphi\in S^1$. Therefore if a saddle-sink connection between equilibria $\xi_1$, $\xi_2$ lying on $L$ exists in $P_2$, then a two-dimensional manifold of connections exists in $W$. The fact that such equilibria and connections exist for an open set of smooth vector fields follows from a slight adaptation of lemma \ref{lem1}, which shows that the group $\Gamma_\infty$ {\em admits} robust heteroclinic cycles with connections in $P_1$ and $P_2$. Since any $\Gamma$-equivariant perturbation of this vector field leaves $W$, $P_1$ and $P_2$ invariant, we conclude by structural stability that generalized heteroclinic cycles persist for an open set of $\Gamma$-equivariant smooth vector fields. The same argument applies when $P_2$ is replaced by $P'_2$ for even $n$. \qed We denote by $e_j>0$ and $-c_j<0$ the non-radial eigenvalues at $\xi_j$, $j=1,2$, and further assume that $-c_1$ and $e_2$ are the double eigenvalues. Hence $\xi_2$ is a source while $\xi_1$ is a sink in $W$, along the eigendirections $(x_2,y_2)$. \begin{theorem}\label{as} The generalized heteroclinic cycle ${\cal X}$ defined in Proposition \ref{prop:existence gen cycle} is asymptotically stable if $c_1c_2>e_1e_2$ and is completely unstable if $c_1c_2<e_1e_2$. Moreover there exists an open subset of ${\cal V}$ such that for any vector field in this subset, a pseudo-simple heteroclinic subcycle of ${\cal X}$ is fragmentarily asymptotically stable. \end{theorem} \proof As usual we want to define a first return map in the vicinity of the heteroclinic cycle, and to do so we decompose the dynamics close to ${\cal X}$ into local maps and global transition maps between suitably chosen cross-sections to the heteroclinic orbits near the equilibria.
Possibly after a smooth $\Gamma$-equivariant change of coordinates, we can always assume that in a neighborhood of the equilibria their stable and unstable manifolds are linear. Let $v_j$, resp.\ $r_je^{i\theta_j}$, denote the local coordinates near $\xi_j$ along $y_1$, resp.\ $z_2$. The ``radial'' direction (along the axis $L$, coordinate $x_1$) can be neglected. We define the cross-sections along the (single) heteroclinic orbit from $\xi_1$ to $\xi_2$ (Fig. \ref{fig:crosssec1}) by \begin{equation}\label{cross-sections1} \begin{array}{l} H_1^{out}=\{(v_1=\varepsilon, r_1>0,\theta_1<\pi/n)\} \\ H_2^{in}=\{(v_2=\varepsilon, r_2>0,\theta_2<\pi/n)\} \end{array} \end{equation} where $\varepsilon>0$ is a small constant value. \begin{figure}[ht!] \centerline{\includegraphics[width=12cm]{crosssec1.pdf}} \caption{Cross-sections to the heteroclinic orbit $\xi_1\to\xi_2$.} \label{fig:crosssec1} \end{figure} Similarly we define the cross-sections along the two-dimensional manifold of connections from $\xi_2$ to $\xi_1$ by (see Fig. \ref{fig:crosssec2}): \begin{equation}\label{cross-sections2} \begin{array}{l} H_1^{in}=\{(v_1>0, r_1=\varepsilon,\theta_1<\pi/n)\} \\ H_2^{out}=\{(v_2>0, r_2=\varepsilon,\theta_2<\pi/n)\} \end{array} \end{equation} \begin{figure}[ht!] \centerline{\includegraphics[width=8cm]{crosssec2.pdf}} \caption{Cross-sections to the heteroclinic manifold $\xi_2\to\xi_1$.} \label{fig:crosssec2} \end{figure} The boundaries of the cross-sections at the limit values $\theta_j=0$ ($j=1,2$) lie in the space $V$, while at $\theta_j=\pi/n$ they lie in the space $\rho V$ (when $n$ is odd) or $V'$ (when $n$ is even). Since these spaces are flow-invariant, the sections defined above are mapped to each other by the flow in the order $H_1^{in}\to H_1^{out}\to H_2^{in}\to H_2^{out}\to H_1^{in}$. We can therefore define the local first hit maps $\Phi_j~:~H_j^{(in)}\rightarrow H_j^{(out)}$ and the global maps $\Psi_j~:~H_j^{(out)}\rightarrow H_j^{(in)}$, $j=1,2$.
\\ By choosing $\varepsilon$ small enough, and if non-resonance conditions are satisfied between the eigenvalues at each equilibrium, we can approximate the local vector fields by their linear parts. Therefore near $\xi_1$ the flow is defined by the equations $$ \frac{d(r_1e^{i\theta_1})}{dt}=-c_1r_1e^{i\theta_1}~\mbox{ and }~\frac{dv_1}{dt}=e_1v_1~, $$ which gives \begin{equation}\label{eq:phi1} (r'_1,\theta'_1)=\Phi_1(v_1,\theta_1)=(v_1^{c_1/e_1},\theta_1), \end{equation} and near $\xi_2$ the flow is defined by $$ \frac{dv_2}{dt}=-c_2v_2~\mbox{ and }~\frac{d(r_2e^{i\theta_2})}{dt}=e_2r_2e^{i\theta_2}~, $$ which gives \begin{equation}\label{eq:phi2} (v'_2,\theta'_2)=\Phi_2(r_2,\theta_2)=(r_2^{c_2/e_2},\theta_2). \end{equation} The far map $\Psi_1$ is a $\Gamma$-equivariant near-identity diffeomorphism which can be linearized under generic conditions. We therefore have \begin{equation}\label{eq:psi1} \Psi_1(r_1,\theta_1)=(ar_1,\theta_1), \end{equation} where $a$ is a positive constant. \\ The far map $\Psi_2$ is also $\Gamma$-equivariant; however, it is not near-identity and it cannot be expressed as simply as $\Psi_1$. Let us set \begin{equation}\label{eq:psi2} \Psi_2(v_2,\theta_2)=(v(v_2,\theta_2),\theta(v_2,\theta_2)). \end{equation} The component $v(v_2,\theta_2)$ vanishes when $v_2=0$, hence there exists a smooth function $h$ such that $v(v_2,\theta_2)=v_2h(v_2,\theta_2)$. Moreover, because $\Psi_2$ is a diffeomorphism, $h(0,\theta_2)\neq 0$, which allows us, by a smooth change of variables, to set $v(v_2,\theta_2)=v_2b(\theta_2)$, where $b$ is a bounded function. The map $k(\theta_2):=\theta(0,\theta_2)$ is well defined and differentiable on the interval $[0,\pi/n]$ and has fixed points at $0$ and $\pi/n$. Now we can define the first return map in $H_1^{in}$ by $g=\Psi_2\circ\Phi_2\circ\Psi_1\circ\Phi_1$ and we write $(v'_1,\theta'_1)=g(v_1,\theta_1)$.
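The exponent $c_1/e_1$ in \eqref{eq:phi1} comes from the passage time of the linear flow; a small numeric check (with $\varepsilon$ normalized to $1$ and hypothetical eigenvalues, not values from the text) confirms the formula:

```python
import math

# Flow (v1, r1) under dv1/dt = e1*v1, dr1/dt = -c1*r1 from (v0, 1)
# until v1 = 1; the exit value of r1 must equal v0**(c1/e1), as in (eq:phi1).
e1, c1 = 0.7, 1.3        # hypothetical eigenvalues chosen for illustration
v0 = 1e-3
t_exit = -math.log(v0) / e1       # time at which v1 reaches the section v1 = 1
r_exit = math.exp(-c1 * t_exit)   # contraction accumulated over that time
```

The angular coordinate is untouched by the linear flow, which is why $\theta_1$ passes through $\Phi_1$ unchanged; the same bookkeeping with the roles of expansion and contraction exchanged yields \eqref{eq:phi2}.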
\\ Applying the above expressions for $\Phi_j$ and $\Psi_j$ one obtains \begin{equation}\label{eq:v'1} v'_1 = b(\theta_1)a^{\frac{c_2}{e_2}}v_1^{\frac{c_1c_2}{e_1e_2}}. \end{equation} Since $b$ is a bounded function, the iterates of the first component of $g$ tend to 0 if and only if $c_1c_2>e_1e_2$. This proves the first part of the theorem. \\ The second component of $g$ has the form \begin{equation}\label{eq:theta'1} \theta'_1 = \theta(a^{\frac{c_2}{e_2}}v_1^{\frac{c_1c_2}{e_1e_2}},\theta_1). \end{equation} Assume $c_1c_2>e_1e_2$; then by iteration the first argument of the function $\theta$ tends to 0. Therefore the dynamics of $\theta$ converges to the dynamics of the map $k$. By an argument similar to Prop. 4.9 of \cite{km95a}, $k$ generically has hyperbolic fixed points at $0$ and $\pi/n$. Moreover there exists an open subset of ${\cal V}$ such that for vector fields in this subset, $k$ has no fixed point inside $(0,\pi/n)$. In this case we can conclude that the iterates of $g$ converge to a pseudo-simple heteroclinic cycle. \qed In order to illustrate this result we built a $\D_n$-equivariant polynomial system with $n>2$ satisfying the hypotheses of the theorem and performed numerical simulations. We use bifurcation methods to find the equilibria and the corresponding heteroclinic orbits. Applying classical methods for computing equivariant bifurcation systems \cite{GSS}, we construct \begin{equation}\label{eq:Dnequivariant} \begin{array}{l} \dot z_1= a_1z_1+a_2\bar z_1+a_3z_1^2+a_4\bar z_1^2+a_5z_1\bar z_1+ a_6z_2\bar z_2+a_7z_1^2\bar z_1+a_8z_1z_2\bar z_2+a_9(z_2^n +\bar z_2^n) \\ \dot z_2=z_2[b_1+b_2(z_1+\bar z_1)+b_3z_1\bar z_1+b_4z_2\bar z_2]+b_5\bar z_2^{n-1}, \end{array} \end{equation} where $a_1$, $a_2$ and $b_1$ are small parameters. Suitable coefficient values for the system to possess generalized heteroclinic cycles can be found as was done in \cite{pc16} in the $n=3$ case.
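The stability criterion $c_1c_2>e_1e_2$ can be read off \eqref{eq:v'1} by iterating the one-dimensional $v$-map with the bounded prefactor frozen; the constants below are illustrative placeholders, not quantities computed from the vector field.

```python
# Iterate v' = C * v**s with s = c1*c2/(e1*e2); C stands in for the bounded
# factor b(theta1) * a**(c2/e2). Both values are hypothetical placeholders.

def return_v(v, C=2.0, s=1.5):
    return C * v ** s

v = 1e-2                    # s > 1 (c1*c2 > e1*e2): iterates collapse to 0
for _ in range(30):
    v = return_v(v)

w = 1e-2                    # s < 1 (c1*c2 < e1*e2): iterates escape any
for _ in range(30):         # small neighbourhood of the cycle
    w = return_v(w, s=0.7)
```

The super-exponential collapse in the first loop and the escape to an order-one fixed point in the second mirror the asymptotic stability and complete instability alternatives of theorem \ref{as}.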
We additionally assume that $a_3+a_4+a_5$ is close to 0 in order to ensure a supercritical bifurcation of two equilibria on the $x_1$ axis. There is no loss of generality in taking this sum equal to 0, so that the bifurcation is a pitchfork. Moreover, in this bifurcation context it is suitable to take negative cubic coefficients in both equations, in order to keep the dynamics bounded. We normalize these coefficients to $-1$. Then the bifurcated equilibria are $\xi_1=-\sqrt{a_1+a_2}$ and $\xi_2=+\sqrt{a_1+a_2}$ on the $x_1$ axis. The non-radial eigenvalues at $\xi_1$ and $\xi_2$ are \begin{equation}\label{evs} \begin{array}{l} e_1=2(a_1-(a_3-a_4)\sqrt{a_1+a_2}),~ -c_1=b_1-2b_2\sqrt{a_1+a_2}-a_1-a_2 \\ e_2=b_1+2b_2\sqrt{a_1+a_2}-a_1-a_2,~ -c_2=2(a_1+(a_3-a_4)\sqrt{a_1+a_2}) \end{array} \end{equation} The heteroclinic cycles exist for a range of coefficient values which includes the following: \begin{equation}\label{eq:coefficients} \begin{array}{l} a_1=0.2,~ a_2=0,~ a_3=-0.3,~ a_4=0.05,~ a_5=0.25,~ a_6=-0.6,~ a_7=a_8=-1 \\ b_1=0.05,~ b_2=0.4,~ b_3=b_4=-1,~ b_5=-0.1 \end{array} \end{equation} The eigenvalues are \begin{equation*} \begin{array}{l} e_1 = 0.283406,~ -c_1 = -0.428634 \\ e_2 = 0.228634,~ -c_2 = -0.483406 \end{array} \end{equation*} so that $c_1c_2>e_1e_2$ and the generalized heteroclinic cycle is an attractor. The numerical simulations (with Matlab) were done with $n=3$ and $n=5$. The two pictures in Figure \ref{fig:simus} show the dynamics of the $z_2$ variable in polar coordinates: $z_2=r_2e^{i\phi_2}$. The horizontal axis is the radial variable $r_2$ while the vertical axis is the angle $\phi_2$ (in degrees). Observe that in both cases, taking an initial condition close to $\xi_2$ even with a small angle $\phi_2$ (hence close to the plane $P_2$), the trajectory comes back to the vertical axis sequentially (as expected, since this corresponds to passing close to $\xi_2$), but with an increasing value of the angle.
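The equilibria can be checked directly against the vector field \eqref{eq:Dnequivariant}. The sketch below evaluates the right-hand side at $\xi_2=(\sqrt{a_1+a_2},0)$ with the coefficients \eqref{eq:coefficients}; the value of $a_9$, which is not listed there, is set arbitrarily, since it does not affect points with $z_2=0$.

```python
import math

# Right-hand side of (eq:Dnequivariant) with the coefficients (eq:coefficients).
a1, a2, a3, a4, a5 = 0.2, 0.0, -0.3, 0.05, 0.25
a6, a7, a8 = -0.6, -1.0, -1.0
a9 = 1.0                                 # arbitrary: irrelevant when z2 = 0
b1, b2, b3, b4, b5 = 0.05, 0.4, -1.0, -1.0, -0.1
n = 5

def field(z1, z2):
    zc1, zc2 = z1.conjugate(), z2.conjugate()
    dz1 = (a1*z1 + a2*zc1 + a3*z1**2 + a4*zc1**2 + a5*z1*zc1 + a6*z2*zc2
           + a7*z1**2*zc1 + a8*z1*z2*zc2 + a9*(z2**n + zc2**n))
    dz2 = z2*(b1 + b2*(z1 + zc1) + b3*z1*zc1 + b4*z2*zc2) + b5*zc2**(n - 1)
    return dz1, dz2

# With a3 + a4 + a5 = 0 the equilibria on the x1 axis sit at z1 = +-sqrt(a1 + a2).
xi2 = (complex(math.sqrt(a1 + a2)), 0j)
dz1, dz2 = field(*xi2)
```

Both components of the field vanish at $\xi_2$ (and, by the reflection $z_1\to-\bar z_1$ of the computation, at $\xi_1$), confirming the pitchfork picture described above.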
In the $n=3$ case the angle converges to $60^\circ$ while in the case $n=5$ it converges to $36^\circ$. In both cases this corresponds to convergence to a pseudo-simple cycle with a connection in $\rho P_2$. \begin{figure}[ht!] \centering \mbox{\subfigure[Case $n=3$]{\includegraphics[width=8cm]{D3z2.jpg}}\quad \subfigure[Case $n=5$]{\includegraphics[width=8cm]{D5z2.jpg}}} \caption{\small{Dynamics of the $z_2=r_2e^{i\phi_2}$ variable in polar coordinates. Horizontal axis: $r_2$, vertical axis: $\phi_2$ (in degrees).}} \label{fig:simus} \end{figure} It is clear from this figure that when $n=3$ the convergence to the pseudo-simple cycle is faster; in particular, the trajectory near the equilibria (near the vertical coordinate axis in the figure) is oblique, while it is nearly horizontal in the $n=5$ case. This is consistent with the results of \cite{pc16}, where the case $n=3$ was studied using a different approach in which the double unstable eigenvalue $e_2$ is small enough for nonlinear effects to be felt by the flow near $\xi_2$. This argument does not work, however, when $n>4$, because one essential property of the case $n=3$ is that on the center manifold which exists at $\xi_2$ when $e_2$ is small enough, an unstable equilibrium point always exists near $\xi_2$ in $P_2$, which obliges the flow to ``bend'' back to $P_2$ or to $\rho P_2$ in the vicinity of $\xi_2$. A similar idea holds when $n=4$. The advantage of the method of \cite{pc16} is that it does not require the existence of a generalized heteroclinic cycle; however, only fragmentary asymptotic stability can be proved in that case. Let us assume now that a perturbation is added to the vector field which breaks the symmetry $\kappa$. The symmetry group is therefore reduced to the action of $\D_n$ generated by the transformations $\rho$ and $\kappa\sigma$. The invariant planes $P_1$, $P_2$ (and their copies $\rho^j P_2$) and $V$ are preserved, but not the invariant space $W$.
If the perturbation is not too large, the equilibria in $L=P_1\cap P_2$ and their heteroclinic connections in the invariant planes persist, hence a pseudo-simple heteroclinic cycle exists; however, we know it is completely unstable. The question is what happens to the dynamics when this perturbation is switched on. Some preliminary numerical experiments have been performed on the system \eqref{eq:Dnequivariant}, where $n=5$ and the perturbation consists of replacing the terms $a_9(z_2^5 +\bar z_2^5)$ by $a_9z_2^5 + a'_9\bar z_2^5$ and $b_2(z_1 + \bar z_1)z_2$ by $(b_2z_1 + b'_2\bar z_1)z_2$, where $|a_9- a'_9|$ and $|b_2-b'_2|$ are small but non-zero. The other coefficients are the same as in \eqref{eq:coefficients}, except $a_1=0.25, a_2=0.05, b_1=0.2$. It has been observed that the dynamics remains in a neighborhood of the cycle and converges in certain cases to a periodic orbit (Fig. \ref{fig:simus-D5-1}), while in other cases it exhibits a clearly aperiodic, possibly chaotic behavior (Fig. \ref{fig:simus-D5-2}). The mathematical analysis of this behavior will be a subject for future study. \begin{figure}[ht] \centering \mbox{\subfigure[Coordinates $x_1,~y_1$]{\includegraphics[width=8cm]{type1-x1y1.jpg}}\quad \subfigure[Coordinates $x_1,~x_2,~y_2$]{\includegraphics[width=8cm]{type1-x1x2y2.jpg}}} \caption{\small{Asymptotic dynamics with $a_9=0.9$, $a'_9=1.05$, $b_2=0.292$, $b'_2=0.31$.}} \label{fig:simus-D5-1} \end{figure} \begin{figure}[ht] \centering \mbox{\subfigure[Coordinates $x_1,~y_1$]{\includegraphics[width=8cm]{type2-x1y1.jpg}}\quad \subfigure[Coordinates $x_1,~x_2,~y_2$]{\includegraphics[width=8cm]{type2-x1x2y2.jpg}}} \caption{\small{Asymptotic dynamics with $a_9=1.1$, $a'_9=0.85$, $b_2=0.28$, $b'_2=0.32$.}} \label{fig:simus-D5-2} \end{figure} \section{Conclusion}\label{sec8} In this paper we completed the study of pseudo-simple heteroclinic cycles in $\R^4$, which were discovered and distinguished from simple cycles only recently \cite{pc15,pc16}.
Our primary contribution is a complete list of finite subgroups of $O(4)$ admitting pseudo-simple heteroclinic cycles. Similarly to the completion of the classification of simple cycles in \cite{pc15}, and as projected {\it ibid.}, this was achieved using the quaternionic presentation of such groups. Up to now the stability of pseudo-simple cycles had only been addressed in \cite{pc16}, where generic complete instability for the case $\Gamma\subset SO(4)$ was shown, and an example of a \emph{fragmentarily asymptotically stable} cycle, an intermediate weak form of stability, with $\Gamma \not\subset SO(4)$ was given. We extended the stability analysis of pseudo-simple cycles in subsection \ref{secth2} by identifying all subgroups of $O(4)$ admitting f.a.s.\ pseudo-simple heteroclinic cycles. A more comprehensive study, e.g.\ the derivation of conditions for fragmentary asymptotic stability or the calculation of stability indices along the heteroclinic connections as defined in \cite{pa11}, is beyond the scope of this work. We have also studied the behaviour of trajectories close to pseudo-simple cycles. Namely, we proved that asymptotically stable periodic orbits can bifurcate from the cycle in a codimension one bifurcation at a point where a multiple expanding eigenvalue vanishes. Necessary and sufficient conditions for such a bifurcation are given in theorems \ref{thperorb} and \ref{noorbit}. In section \ref{sec8n} we illustrated this through a numerical example of a heteroclinic cycle with a nearby attracting periodic orbit with symmetry group $\Gamma =(\D_4\rl\D_2;\D_4\rl\D_2)$. In contrast with \cite{pc15}, the proof of lemma \ref{lem1}, which characterizes conditions for a group to be admissible, relies upon an explicit construction of corresponding equivariant systems. This allows us to build examples of pseudo-simple heteroclinic cycles for any admissible group.
As we noted (see remark \ref{rem11}), lemma \ref{lem11} can be generalized to $\R^n$ with $n>4$ to provide sufficient conditions for a subgroup of $O(n)$ to admit heteroclinic cycles. Moreover, the explicit construction of an equivariant system in $\R^n$ applies to such a subgroup. In addition to simple and pseudo-simple heteroclinic cycles, other types of structurally stable heteroclinic cycles can exist in $\R^4$. One example is the generalized heteroclinic cycle that we studied in section \ref{sec6}. Another example is the cycle considered in \cite{mrwp}. To describe all robust heteroclinic cycles existing in $\R^4$ is an open problem which is beyond the scope of this paper. Other possible continuations of our work include the full classification of pseudo-simple cycles in $\R^5$, similar to the full classification of homoclinic cycles in \cite{op13}, as well as the study of networks, which are connected unions of more than one cycle. In principle we think this can be achieved by the same means as we used here. Even though a complete classification of networks has not yet been done even for simple cycles, partial results to this end can be found in \cite{cl16}.
Explanatory text for LaTeX .bib Database: Chebyshev & Fourier Spectral Methods The link below is to a very large text (ASCII) file which is a LaTeX .bib file with over 1700 references. You may freely copy this database in its entirety for the purpose of using selected items in the bibliography of your papers without the bother of retyping the information in your own papers or LaTeX .bib files. However, this database is copyright by the author, and reproduction of a large portion of this database in a single work of your own (say, over 100 references) should not be done without asking permission of the author. This database is updated on a regular basis (without warning!). You are invited to send information about relevant papers on spectral methods, not yet in the database, to the author at jpboyd@engin.umich.edu. Thanks! .bib file (0.6 megabytes)
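A minimal sketch of the standard BibTeX workflow for using such a database in a paper (the file name spectral.bib and the citation key Boyd1989 are placeholders, not names given above):

```latex
% Minimal sketch of citing from the database with BibTeX.
% The file name (spectral.bib) and the key (Boyd1989) are hypothetical.
\documentclass{article}
\begin{document}
Spectral methods are surveyed in~\cite{Boyd1989}.
\bibliographystyle{plain}   % or whatever style your journal requires
\bibliography{spectral}     % reads entries from spectral.bib
\end{document}
```

Running latex, then bibtex, then latex twice resolves the citations; only the entries actually cited are copied into the paper's bibliography.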
It’s that time of year. HAIR FAIR!! I have a passion-weakness for hair in SL as I used to be a cosmetologist in RL and WAAaaAAaa 4 whole sims of hair! Hair Fair starts at midnight SLT tonight and you don’t want to miss a thing.
TITLE: Is $\int\frac{\textrm{d}y}{\textrm{d}x}\,\textrm{d}z=\int\frac{\textrm{d}z}{\textrm{d}x}\,\textrm{d}y$? QUESTION [2 upvotes]: $$\int\frac{\textrm{d}y}{\textrm{d}x}\,\textrm{d}z=\int\frac{\textrm{d}z}{\textrm{d}x}\,\textrm{d}y$$ Is the above statement true? (I think it is because I often see some such manipulation.) What does it even mean (precisely)? If it's true, how do we prove it? One way I've tried to read it is: The antiderivatives of $\frac{\textrm dy} {\textrm dx}$ with respect to $z$ are the same as the antiderivatives of $\frac{\textrm dz} {\textrm dx}$ with respect to $y$. But I can't quite make sense of this last sentence, especially the "with respect to" bits. REPLY [3 votes]: If $y$ and $z$ are functions of $x$, then both sides are equal to $\int \frac {dy} {dx} \frac {dz} {dx} dx$.
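A quick sanity check on the reply with a worked example: take $y=x^2$ and $z=x^3$ (both functions of $x$), so $\frac{\textrm{d}y}{\textrm{d}x}=2x$, $\frac{\textrm{d}z}{\textrm{d}x}=3x^2$, $\textrm{d}y=2x\,\textrm{d}x$ and $\textrm{d}z=3x^2\,\textrm{d}x$. Then $$\int\frac{\textrm{d}y}{\textrm{d}x}\,\textrm{d}z=\int 2x\cdot 3x^2\,\textrm{d}x=\tfrac{3}{2}x^4+C \qquad\text{and}\qquad \int\frac{\textrm{d}z}{\textrm{d}x}\,\textrm{d}y=\int 3x^2\cdot 2x\,\textrm{d}x=\tfrac{3}{2}x^4+C,$$ and indeed both sides reduce to the same integral $\int \frac{\textrm{d}y}{\textrm{d}x}\frac{\textrm{d}z}{\textrm{d}x}\,\textrm{d}x$.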
The smart Trick of taser gun for sale walmart That No One is Discussing Opponents say Tasers can be used for torture; supporters say the devices are safe when used properly. SABRE's regular newsletter is full of material to help you live a safe and healthy life. Enter your email to subscribe! The new shark-tooth prong technology is so effective that it can easily stop any attacker and bring him down. The ideal choice for a college student or a single mom! Protect yourself and your family with the simple convenience and affordability of a stun gun from one of the industry's most trusted brands. The TASER X26C is a software-upgradable Electronic Control Device (ECD). These devices use wires with barbs on the ends which shoot outward toward a subject. When they hit the subject, the barbs penetrate through up to 1" of clothing and deliver the current or shock. A stun gun works by attacking the nervous system, delivering high-voltage electricity to an attacker's body. Unlike tasers, stun guns must make direct contact with the assailant to work, but they are less regulated and typically smaller and easier to conceal. Stun guns are currently some of the most effective and inexpensive self-defense weapons available. A stun gun, or taser, as it is often called, is a handheld device that puts out a high-voltage shock and stuns the attacker. All a person has to do is touch the assailant with the stun gun to immobilize them for several minutes. Stun devices are small, more affordable than tasers, low-cost nonlethal self-defense products powered by high voltage and low amps, with close to 90% effectiveness.
It is one of the most comprehensive and reliable online destinations for law enforcement agencies and police departments worldwide. News & Video: A man believed to be around sixty years old was pinned down by two police officers while another shot him with a stun gun near Denver Tuesday. The man was accused of ... + SAFE ESCAPE PRODUCT REPLACEMENT GUARANTEE: If you ever find yourself in a situation where you must use your TASER to protect yourself or your loved ones and have to leave your TASER at the scene, we will replace your TASER free of charge. Your life is worth far more to us than the cost of a TASER. The new shark-tooth prong technology is so powerful that it can easily deter any attacker and bring him down. We offer a wide range of stun gun accessories, such as holsters and extra charge packs, available in many different colors. There's something for everyone!
2015 Title: Life after life / Raymond A. Moody Jr., M.D. ; with a new afterword by the author ; new foreword by Eben Alexander, M.D. Author: Moody, Raymond A., Jr., author. Current holds: 0 Summary: Subjects: Future life. Near-death experiences. Bereavement -- Psychological aspects. Loss (Psychology) Grief. Format: Book Publisher, Date: New York, N.Y. : HarperOne, an imprint of HarperCollinsPublishers, [2015] ©2015 Description: xx, 182 pages ; 21 cm Notes: "The bestselling original investigation that revealed 'near-death experiences.'" Originally published in 1975 by MBB. Inc. and later published by Bantam Books. Contents: The phenomenon of death -- The experience of dying -- Parallels -- Questions -- Explanations -- Impressions. ISBN: 9780062428905 006242890X
\begin{document} \title[On quasi-contractivity of $C_0$-semigroups on Banach spaces]{On quasi-contractivity of $C_0$-semigroups on Banach spaces} \author{M\'at\'e~Matolcsi} \email{matomate@renyi.hu} \address{ Alfr\'ed R\'enyi Institute of Mathematics, Hungarian Academy of Sciences POB 127 H-1364 Budapest, Hungary Tel: (+361) 483-8302 Fax: (+361) 483-8333} \date{\today} \maketitle \begin{abstract} A basic result in semigroup theory states that every $C_0$-semigroup is quasi-contractive with respect to some appropriately chosen equivalent norm. This paper contains a counterpart of this well-known fact. Namely, by examining the convergence of the Trotter-type formula $(e^{\frac{t}{n}A}P)^n$ (where $P$ denotes a bounded projection), we prove that whenever the generator $A$ is unbounded it is possible to introduce an equivalent norm on the space with respect to which the semigroup is {\it{not}} quasi-contractive. {\it{Mathematics subject classification (2000)}}: 47A05, 47D06 \end{abstract} \section{Introduction} Many important results in semigroup theory rely on the simple fact that for a given $C_0$-semigroup $e^{tA}$ on a Banach space $X$ it is always possible to introduce an equivalent norm on $X$ with respect to which $e^{tA}$ is quasi-contractive. While examining the convergence of the Trotter-type formula \begin{equation}\label{pro} (e^{\frac{t}{n}A}P)^n \end{equation} (see \cite{mat}), the author was led to the natural question of whether it is always possible to introduce an equivalent norm on $X$ with respect to which $e^{tA}$ is {\it{not}} quasi-contractive. This is clearly not possible if the generator $A$ is bounded. However, if $A$ is unbounded then it is natural to expect that such a norm does exist. Indeed, in \cite{mat}, Theorem 2, the Hilbert space version of this question was settled: assuming that $A$ is unbounded an equivalent {\it{scalar product}} (not merely a norm) was constructed with respect to which $e^{tA}$ is not quasi-contractive.
This result was then used to show that whenever $A$ is unbounded it is possible to find a bounded (but not necessarily orthogonal) projection $P$ such that \eqref{pro} does not converge strongly (cf. \cite{mat}, first part of Theorem 3). The proofs of these results, however, relied heavily on the notion of orthogonality. The aim of this paper is to prove the Banach space analogue of these two results (see Theorem \ref{thm1} and Corollary \ref{cor1} below). We are aware that the existence of a 'non-quasi-contractive' norm will probably not have as much use as that of a 'quasi-contractive' one. Nevertheless, it gives an affirmative answer to a natural question, and shows that whenever $A$ is unbounded it is up to our choice whether to regard $e^{tA}$ as being quasi-contractive or non-quasi-contractive. As the motivation to tackle the questions above came from the investigations of the convergence of formula \eqref{pro}, we mention that the history of this formula goes back to \cite{valaki}, \cite{dav}, \cite{kato} and \cite{ab}. The interested reader can also find a brief overview and some recent results in \cite{ms} and \cite{mat}. Thus far, most of the attention concerning formula \eqref{pro} has been devoted to the Hilbert space case, but Theorem \ref{thm1} below shows that some of the results remain true in the most general setting. \section{Main results} In the Banach space setting we reverse the steps of \cite{mat}. First we characterize the convergence of \eqref{pro} in terms of the generator $A$, and then we use this result to construct an equivalent norm with respect to which the semigroup $e^{tA}$ is not quasi-contractive. \begin{theorem}\label{thm1} Let $A$ generate a $C_0$-semigroup $e^{tA}$ on a complex Banach space $X$. The following are equivalent: (i) $A$ is bounded (ii) $\lim_{n\to \infty}(e^{\frac{t}{n}A}P)^nx$ exists for every bounded projection $P$ and all $x\in X$, $t\ge 0$.
\begin{proof} The implication $(i)\rightarrow (ii)$ is fairly standard and contained in \cite{ms}, Theorem 1. For the implication $(ii)\rightarrow (i)$ assume that $A$ is unbounded. Then, there exists an element $\phi\in X^\ast$ such that $\phi\notin \dom A^\ast$. $\kr \phi$, the kernel of $\phi$, is a 1-codimensional subspace of $X$. As $\dom A$ is dense in $X$, $\dom A \not\subset \kr\phi$. Therefore, we can choose a vector $x\in \dom A$ such that $\phi (x)=1$. Let $P_x$ denote the projection along $\kr \phi$ onto the 1-dimensional subspace spanned by $x$; i.e. we decompose each element $z\in X$ as $z=\f (z)x +(z-\f (z)x)$ and we let $ P_xz=\f (z)x$. Note that $\f (P_x z)=\f (z)$. Therefore, \begin{equation}\label{eq2} \f ((e^{\frac{1}{n}A}P_x)^nx)= \f ((P_xe^{\frac{1}{n}A}P_x)^nx) \end{equation} Now, observe that $(P_xe^{\frac{1}{n}A}P_x)x=c_nx$ where $c_n=\f (e^{\frac{1}{n}A}x)$. Therefore, \begin{equation}\label{eq3} \f ((P_xe^{\frac{1}{n}A}P_x)^nx)= \f (c_n^nx)=c_n^n=(\f (e^{\frac{1}{n}A}x))^n \end{equation} Furthermore, $$ n(c_n-1)= \f\left ( \frac{e^{\frac{1}{n}A}x-x}{1/n}\right )= \f\left ( \frac{(e^{\frac{1}{n}A}-I)x}{1/n}\right )$$ therefore \begin{equation}\label{eq4} \lim_{n\to \infty} n(c_n-1)=\f (Ax) \end{equation} Combining \eqref{eq2}, \eqref{eq3} and \eqref{eq4} we get \begin{equation}\label{eq5} \lim_{n\to\infty}\f ((e^{\frac{1}{n}A}P_x)^nx)= \lim_{n\to \infty}c_n^n=e^{\f (Ax)} \end{equation} The rest of the proof is similar to the Hilbert space version (see \cite{mat}). We are going to construct an element $y\in X$ such that $\f (y)=1$ and $\lim_{n\to\infty}\f ((e^{\frac{1}{n}A}P_y)^ny)$ does not exist. The vector $y$ will be given as $\lim_{k\to \infty}x_k$ where the sequence $(x_k)$ is to be constructed in the sequel. We would like to choose $x_0$ so that $x_0\in \dom A$, $\f (x_0)=1$ and $\re \f (Ax_0)\ge 0$ holds.
To do this we take a vector $z$ such that $z\in \dom (A)$, $\f (z)=1$, and we are looking for $x_0$ in a small neighbourhood of $z$. As $\f \notin \dom A^\ast$ we can find a vector $v\in \dom A$ such that $\|v\| \le \frac{1}{2\|\f\|}$ and $|\f (Av)|\ge 4|\f (Az)|$. Now, let $v':=e^{i\alpha}v$ with suitable $\alpha$ such that $\f (Av')$ is nonnegative real. Let $x_0:=\frac{z+v'}{\f (z+v')}$. Then $x_0\in \dom A$, $\f (x_0)=1$ and $\re \f (Ax_0)=\re (\frac{1}{1+\f (v')}\f (Az))+ \re (\frac{1}{1+\f (v')}\f (Av'))\ge -2|\f (Az)|+ \frac{1}{2}\f (Av')\ge 0$ as desired. We know that $\lim_{n\to\infty}\f\ ((e^{\frac{1}{n}A}P_{x_0})^nx_0)=e^{\f (Ax_0)}$. Let $\epsilon >0$ be fixed. Take an index $n_0$ so large that $|\f ((e^{\frac{1}{n_0}A}P_{x_0})^{n_0}x_0)-e^{\f (Ax_0)}|<\epsilon $. It is clear from standard continuity arguments that there exists a radius $\delta_0 >0$ such that the conditions $h\in B(x_0,\delta_0)$ and $\f (h)=1$ together imply that $|\f ((e^{\frac{1}{n_0}A}P_{h})^{n_0}h)-e^{\f (Ax_0)}|<2\epsilon $. Without loss of generality we can assume that $\delta_0< \frac{\|x_0\|}{2}$. We are going to construct the sequence $(x_k)$ inductively. Assume, therefore, that vectors $x_0, \ x_1, \dots , x_k$, positive numbers $\d_0, \ \d_1, \dots , \d_k$ and indices $n_0, \ n_1, \dots , n_k$ are already given such that for all $0\le j\le k$ the following hold: $x_j\in \dom A$, $\f (x_j)=1$, $\re \f (Ax_j)\ge j$ and $|\f ((e^{\frac{1}{n_j}A}P_{h})^{n_j}h)-e^{\f (Ax_j)}|<2\epsilon $ for all vectors $h$ satisfying $h\in B(x_j,\delta_j)$ and $\f (h)=1$. Assume, furthermore, that $\|x_{j+1}-x_j\|<\min \{\frac{\d_0}{2^{j+1}}, \ \frac{\d_1}{2^j}, \dots \frac{\d_j}{2}\}$ for all $0\le j\le k-1$. Clearly, there exists a (sufficiently small) radius $\gamma_k >0$ such that for all $g\in B(0,\gamma_k)$ we have $\frac{x_k+g}{\f (x_k+g)}\in B(x_k, \d )$, where $\d:= \min \{\frac{\d_0}{2^{k+1}}, \ \frac{\d_1}{2^k}, \dots \frac{\d_k}{2}\}$. 
Now, the construction of $x_{k+1}$ from the given vector $x_k$ goes the same way as the construction above of $x_0$ from the given vector $z$; using that $\f \notin \dom A^\ast$ we find an appropriate vector $g\in \dom A$, $\|g\|<\gamma_k$ such that $\f (Ag)$ is positive and 'large', assuring that the definition $x_{k+1}:= \frac{x_k+g}{\f (x_k+g)}$ gives $\re \f (Ax_{k+1})\ge k+1$. Note, also, that $x_{k+1}\in B(x_k,\d)$ because $\|g\|<\gamma_k$. Finally, the index $n_{k+1}$ and the radius $\d_{k+1}$ are chosen to correspond to the vector $x_{k+1}$, so that $|\f ((e^{\frac{1}{n_{k+1}}A}P_{h})^{n_{k+1}}h)-e^{\f (Ax_{k+1})}|<2\epsilon$ holds for all vectors $h$ satisfying $h\in B(x_{k+1},\delta_{k+1})$ and $\f (h)=1$. It is clear, by construction, that the sequence $(x_k)$ converges in $X$. For $y:=\lim x_k$ we have $\f (y)=1$ (and $\|y\|\ge \frac{\|x_0\|}{2}$ by the choice of $\d_0$). It is also clear, by construction, that $y\in B(x_k, \d_k)$ for all $k\ge 0$. Hence, for all $k\ge 0$ we have $$|\f ((e^{\frac{1}{n_{k}}A}P_{y})^{n_{k}}y)-e^{\f (Ax_{k})}|<2\epsilon $$ Notice that $|e^{\f (Ax_{k})}|=e^{\re \f (Ax_{k})}\ge e^k$. This means that the sequence $(e^{\frac{1}{n}A}P_{y})^{n}y$ does not converge (even weakly). \end{proof} \end{theorem} From this result the existence of a 'non-quasi-contractive' norm follows easily. \begin{corollary}\label{cor1} Let $A$ generate a $C_0$-semigroup $e^{tA}$ on a complex Banach space $X$. The following are equivalent: (i) A is bounded (ii) the semigroup $e^{tA}$ is quasi-contractive with respect to every equivalent norm on $X$. \begin{proof} The implication $(i)\rightarrow (ii)$ is obvious. For the implication $(ii)\rightarrow (i)$ assume that $A$ is not bounded. By the proof of the preceding theorem we can find vectors $\f\in X^\ast$, $y\in X$ such that $\f ((e^{\frac{1}{n}A}P_{y})^{n}y)$ does not converge. Moreover, for the subsequence $n_k$ above we have $|\f ((e^{\frac{1}{n_k}A}P_{y})^{n_k}y)|\ge e^k-2\epsilon$. 
Introduce a new norm $\| \ \cdot \ \|_0$ on $X$ by $\|z\|_0:=\|P_yz\|+\|(I-P_y)z\|$. It is clear that the norms $\| \ \cdot \ \|_0$ and $\| \ \cdot \ \|$ are equivalent, and $P_y$ is contractive with respect to $\| \ \cdot \ \|_0$. We claim that $e^{tA}$ is not quasi-contractive with respect to $\| \ \cdot \ \|_0$. Indeed, assume, by contradiction, that there exists a $\l \in \R$ such that $\|e^{tA}\|_0\le e^{\l t}$ for all $t\ge 0$. Then $|\f ((e^{\frac{1}{n_k}A}P_{y})^{n_k}y)|\le \|\f\|\cdot \|y\|\cdot e^\l$, a contradiction. \end{proof} \end{corollary}
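The limit in \eqref{eq5} is easy to check numerically when $A$ is bounded. The sketch below (an illustration, not part of the paper; the matrix $A$, the vector $x$ and the functional $\phi$ are hypothetical choices) iterates $v\mapsto e^{A/n}P_xv$ on $\mathbb{R}^2$ and compares $\phi$ of the result with $e^{\phi(Ax)}$:

```python
# Numerical sketch of phi((e^{A/n} P_x)^n x) -> e^{phi(Ax)} for a
# *bounded* generator A on R^2; A, x and phi are illustrative choices.
import math

A = [[1.0, 2.0],
     [3.0, 0.0]]          # bounded generator (a 2x2 matrix)
x = [1.0, 0.0]            # chosen so that phi(x) = 1
phi = lambda z: z[0]      # bounded functional: first coordinate

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm(M, terms=25):
    """e^M via truncated Taylor series (adequate for the tiny matrix A/n)."""
    E = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at the identity
    T = [[1.0, 0.0], [0.0, 1.0]]   # current term M^k / k!
    for k in range(1, terms):
        T = [[sum(T[i][m] * M[m][j] for m in range(2)) / k
              for j in range(2)] for i in range(2)]
        E = [[E[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return E

def iterate(n):
    """phi((e^{A/n} P_x)^n x), where P_x z = phi(z) x."""
    E = expm([[A[i][j] / n for j in range(2)] for i in range(2)])
    v = x
    for _ in range(n):
        v = matvec(E, [phi(v) * xi for xi in x])   # apply P_x, then e^{A/n}
    return phi(v)

# Here phi(Ax) = 1, so the value should approach e = 2.71828... as n grows.
print(iterate(2000))
```

The iteration reproduces the scalar recursion $\f(v_k)=c_n\,\f(v_{k-1})$ from the proof, so the printed value is exactly $c_n^n$; the convergence rate is $O(1/n)$, consistent with \eqref{eq4}.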
15kVA Power Frequency Online UPS for Generator Basic Info - Model NO.: SNC-150 - Type: On-line - Application: Industry - Standby Time: Standard Machine - Output Capacity: Large Type - Brand: Snat - Waveform: Pure Sine Wave - Size: 650*700*1600 - OEM/ODM Service: Provide - Specification: CE ISO9001 - HS Code: 8504402000 - Phase: Single Phase - Protection: Sealed Maintenance Free - Classification: Lighting/Power - Standby UPS: Sine Wave Output UPS - Equipment Mode: Decentralized - Mode: Double Conversion - Weight: 130kg - Noise: <60dB (Distance 1m) - Trademark: SNAT - Origin: Foshan City Guangdong Province Product Description: The SP-11 series power-frequency on-line intelligent UPS is designed by SNAT with high stability and reliability, targeting the Chinese power grid environment and network systems with strict power-supply reliability requirements. Features: Its excellent quality provides safe and reliable comprehensive protection for loads such as user data centers and the industrial control and precision equipment of medical systems. Characteristics: 1. Uses DSP, MCU and DDC real-time processing with fully digital vector control technology, offering complete protection functions and high reliability. 2. Double conversion with pure sine wave on-line output: whether on mains power or battery, it outputs a low-distortion pure sine wave, providing the best power supply guarantee for the user's load equipment. Specification parameters: Special specifications can be designed in accordance with your requirements!
For any requirements, you are welcome to contact Sabrina Liu: Tel: +86 757 81816131 Mobile: +86 18039293535 You will get our reply within 12 hours!
\begin{document} \FirstPageHeading{Bihlo&Popovych} \ShortArticleName{Point symmetry group of the barotropic vorticity equation} \ArticleName{Point Symmetry Group\\ of the Barotropic Vorticity Equation} \Author{Alexander BIHLO~$^\dag$ and Roman O.~POPOVYCH~$^{\dag\ddag}$} \AuthorNameForHeading{A.~Bihlo and R.O.~Popovych} \AuthorNameForContents{Bihlo A.\ and POPOVYCH R.O.} \ArticleNameForContents{Point symmetry group of the barotropic vorticity equation} \Address{$^\dag$~Fakult\"at f\"ur Mathematik, Universit\"at Wien,\\ \hphantom{$^\ddag$}~Nordbergstra{\ss}e 15, A-1090 Wien, Austria} \EmailD{alexander.bihlo@univie.ac.at, rop@imath.kiev.ua} \Address{$^\ddag$~Institute of Mathematics of NAS of Ukraine,\\ \hphantom{$^\dag$}~3 Tereshchenkivska Str., Kyiv-4, Ukraine} \Abstract{The complete point symmetry group of the barotropic vorticity equation on the $\beta$-plane is computed using the direct method supplemented with two different techniques. The first technique is based on the preservation of any megaideal of the maximal Lie invariance algebra of a differential equation by the push-forwards of point symmetries of the same equation. The second technique involves a priori knowledge on normalization properties of a class of differential equations containing the equation under consideration. Both of these techniques are briefly outlined.} \section{Introduction} It is well known that it is much easier to determine the continuous part of the complete point symmetry group of a differential equation than the entire group including discrete symmetries. The computation of continuous (Lie) symmetries is possible using infinitesimal techniques, which amounts to solving an overdetermined system of linear partial differential equations (referred to as \emph{determining equations}) for coefficients of vector fields generating one-parameter Lie symmetry groups. 
Owing to the algorithmic nature of this problem, the automatic computation of Lie symmetries is already implemented in a number of symbolic calculation packages; see, e.g., the papers~\cite{Bihlo&Popovych:Carminati&Khai2000,Bihlo&Popovych:Head1993,Bihlo&Popovych:RochaFilho&Figueiredo2010} for detailed descriptions of certain packages and the reviews~\cite{Bihlo&Popovych:Hereman1997,Bihlo&Popovych:Butcher&Carminati&Vu2003}. The relative simplicity of finding Lie symmetries of differential equations is also a primary reason why the overwhelming part of research on symmetries is devoted to symmetries of this kind. See, e.g., the textbooks \cite{Bihlo&Popovych:Bluman&Cheviakov&Anco2010,Bihlo&Popovych:Bluman&Kumei1989, Bihlo&Popovych:Meleshko2005,Bihlo&Popovych:Olver2000,Bihlo&Popovych:Ovsiannikov1982} for general theory and numerous examples and additionally the works \cite{Bihlo&Popovych:Andreev&Kaptsov&Pukhnachov&Rodionov1998,Bihlo&Popovych:Bihlo&Popovych2009a, Bihlo&Popovych:Bihlo&Popovych2009b,Bihlo&Popovych:Fushchych&Popovych1994,Bihlo&Popovych:Meleshko2004} for several applications of Lie methods in hydrodynamics and meteorology. Like continuous symmetries, discrete symmetries are of practical relevance in a number of fields such as dynamical systems theory, quantum mechanics, crystallography and solid state physics. They can also be helpful in some issues related to Lie symmetries, e.g.\ by allowing for a simplification of optimal lists of inequivalent subalgebras and by enabling the construction of new solutions of differential equations from known ones. It is not possible, in general, to determine the whole point symmetry group in terms of finite transformations by means of infinitesimal techniques. On the other hand, the direct computation of point symmetries based on their definition boils down to solving a cumbersome nonlinear system of determining equations, which is difficult to integrate.
Similar determining equations also arise under calculations of equivalence groups and sets of admissible transformations of classes of differential equations by means of the direct method. In order to simplify the derivation of the determining equations, different special techniques have been developed involving, in particular, the implicit representation of unknown functions, the combined splitting with respect to old and new variables and the inverse expression of old derivatives via new ones~\cite{Bihlo&Popovych:Popovych&Bihlo2010,Bihlo&Popovych:Popovych&Kunzinger&Eshraghi2010,Bihlo&Popovych:Prokhorova2005}. There exist two particular techniques that can be applied for \emph{a priori} simplification of calculations concerning the point symmetry groups of differential equations. The first technique was presented in~\cite{Bihlo&Popovych:Hydon2000} for equations whose maximal Lie invariance algebras are finite dimensional. It is based on the fact that the push-forwards of point symmetries of a given system of differential equations to vector fields on the space of dependent and independent variables are automorphisms of the maximal Lie invariance algebra of the same system. This condition yields restrictions on those point transformations that can qualify as symmetries of the system of differential equations under consideration. We will adapt this technique to the infinite dimensional case using the notion of megaideals of Lie algebras, which are the most invariant algebraic structures. The second technique involves available information on the set of admissible transformations of a class of differential equations~\cite{Bihlo&Popovych:Popovych&Kunzinger&Eshraghi2010} which contains the investigated equation. In the present paper, we will demonstrate both of these techniques by computing the complete point symmetry group of the barotropic vorticity equation on the $\beta$-plane.
This is one of the most classical models used in geophysical fluid dynamics. The techniques to be employed are briefly described in Section~\ref{Bihlo&Popovych:sec:Techniques}. The actual computations using the method based on the corresponding Lie invariance algebra and that involving a priori knowledge on admissible transformations of a class of generalized vorticity equations are presented in Sections~\ref{Bihlo&Popovych:sec:CalculationsInvarianceAlgebra} and~\ref{Bihlo&Popovych:sec:DirectMethodForBVE}, respectively. A short summary concludes the paper. \section{Techniques of calculation \\ of complete point symmetry groups}\label{Bihlo&Popovych:sec:Techniques} Both of the techniques described in this section should be considered merely as tools for deriving preliminary restrictions on point symmetries. In either case, calculations must finally be carried out within the framework of the direct approach. \subsection{Using megaideals of the Lie invariance algebra}\label{Bihlo&Popovych:sec:TechniquesLieInvarianceAlgebra} The most refined version of the technique involving Lie symmetries in the calculation of complete point symmetry groups was applied in~\cite{Bihlo&Popovych:Hydon2000}. It is outlined as follows: Given a system of differential equations~$\mathcal L$ whose maximal Lie invariance algebra $\mathfrak g$ is $n$-dimensional with a basis $\{e_1,\dots,e_n\}$, $n<\infty$, one has to compute the entire automorphism group of $\mathfrak g$, $\mathrm{Aut}(\mathfrak g)$. Supposing that $\mathcal T$ is a transformation from the complete point symmetry group~$G$ of~$\mathcal L$, one has the condition $\mathcal T_* e_j=\sum_{i=1}^n e_ia_{ij}$ for $j=1,\dots,n$, where $\mathcal T_*$ denotes the push-forward of vector fields induced by $\mathcal T$ and $(a_{ij})$ is the matrix of an automorphism of $\mathfrak g$ in the chosen basis.
This condition implies constraints on the transformation $\mathcal T$ which are then taken into account in further calculations with the direct method. The method we propose here is different from the one described in the previous paragraph. In fact, it uses only minimal information on the automorphism group $\mathrm{Aut}(\mathfrak g)$, in the form of a set of megaideals of $\mathfrak g$. Due to this, it is also applicable in the case when the maximal Lie invariance algebra is infinite dimensional. The notion of megaideals was introduced in~\cite{Bihlo&Popovych:Popovych&Boyko&Nesterenko&Lutfullin2005}. \begin{definition} A \emph{megaideal} $\mathfrak i$ is a vector subspace of $\mathfrak g$ that is invariant under any transformation from the automorphism group $\mathrm{Aut}(\mathfrak g)$ of $\mathfrak g$. \end{definition} That is, we have $\mathfrak T \mathfrak i=\mathfrak i$ for a megaideal~$\mathfrak i$ of~$\mathfrak g$ whenever $\mathfrak T$ is a transformation from $\mathrm{Aut}(\mathfrak g)$. Any megaideal of~$\mathfrak g$ is an ideal and a characteristic ideal of~$\mathfrak g$. Both of the improper subalgebras of~$\mathfrak g$ (the zero subspace and $\mathfrak g$ itself) are megaideals of~$\mathfrak g$. The following assertions are obvious. \begin{proposition}\label{prop:OnMegaIdeals1} If $\mathfrak i_1$ and $\mathfrak i_2$ are megaideals of~$\mathfrak g$ then so are $\mathfrak i_1+\mathfrak i_2,$ $\mathfrak i_1\cap \mathfrak i_2$ and $[\mathfrak i_1,\mathfrak i_2]$, i.e., sums, intersections and Lie products of megaideals are again megaideals. \end{proposition} \begin{proposition}\label{prop:OnMegaIdeals2} If $\mathfrak i_2$ is a megaideal of $\mathfrak i_1$ and $\mathfrak i_1$ is a megaideal of $\mathfrak g$ then $\mathfrak i_2$ is a megaideal of $\mathfrak g$, i.e., megaideals of megaideals are also megaideals.
\end{proposition} \begin{corollary}\label{cor:OnMegaIdeals3} All elements of the derived, upper and lower central series of a Lie algebra are its megaideals. In particular, the center and the derived algebra of a Lie algebra are its megaideals. \end{corollary} \begin{corollary}\label{cor:OnMegaIdeals4}\looseness=-1 The radical~$\mathfrak r$ and nil-radical~$\mathfrak n$ (i.e., the maximal solvable and nilpotent ideals, respectively) of~$\mathfrak g$ as well as different Lie products, sums and intersections involving~$\mathfrak g$, $\mathfrak r$ and~$\mathfrak n$ ($[\mathfrak g,\mathfrak r]$, $[\mathfrak r,\mathfrak r]$, $[\mathfrak g,\mathfrak n]$, $[\mathfrak r,\mathfrak n]$, $[\mathfrak n,\mathfrak n]$, etc.) are megaideals of~$\mathfrak g$. \end{corollary} Suppose that $\mathfrak g$ is finite dimensional and possesses a megaideal $\mathfrak i$ which, without loss of generality, can be assumed to be spanned by the first $k$ basis elements, $\mathfrak i=\langle e_1,\dots,e_k\rangle$. Then the matrix $(a_{ij})$ of any automorphism of $\mathfrak g$ has block structure, namely, $a_{ij}=0$ for $i>k$, $j\le k$. In other words, in the finite dimensional case we take into account only the block structure of automorphism matrices. This is reasonable as the entire automorphism group $\mathrm{Aut}(\mathfrak g)$ (which should be computed within the method from~\cite{Bihlo&Popovych:Hydon2000}) may be much wider than the group of automorphisms of $\mathfrak g$ induced by elements of the point symmetry group~$G$ of~$\mathcal L$. Moreover, it seems difficult to find the entire group $\mathrm{Aut}(\mathfrak g)$ if the algebra~$\mathfrak g$ is infinite dimensional. At the same time, in view of the above assertions it is easy to determine a set of megaideals for any Lie algebra.
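The block-structure constraint can be made concrete on a toy example (an illustration of the technique, not a computation from the paper). For the two-dimensional Lie algebra with $[e_1,e_2]=e_1$, the derived algebra $\langle e_1\rangle$ is a megaideal, so every automorphism matrix must satisfy $a_{21}=0$; the sketch below checks the automorphism condition $\mathcal T[e_i,e_j]=[\mathcal Te_i,\mathcal Te_j]$ directly from the structure constants:

```python
# Toy example: 2-dimensional Lie algebra with [e1, e2] = e1.
# Its derived algebra span{e1} is a megaideal, so any automorphism
# matrix (a_ij) must have a_21 = 0 (the block structure above).

# Structure constants: c[(i, j)] = coordinates of [e_i, e_j] in (e1, e2)
c = {(0, 0): [0, 0], (0, 1): [1, 0],
     (1, 0): [-1, 0], (1, 1): [0, 0]}

def bracket(u, v):
    """Bracket of two vectors given in coordinates w.r.t. (e1, e2)."""
    w = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                w[k] += u[i] * v[j] * c[(i, j)][k]
    return w

def apply(T, v):          # T acts via T e_j = sum_i a_ij e_i (columns)
    return [sum(T[i][j] * v[j] for j in range(2)) for i in range(2)]

def is_automorphism(T):
    """Check T[e_i, e_j] = [T e_i, T e_j] on all basis pairs."""
    for i in range(2):
        for j in range(2):
            ei = [1.0 if m == i else 0.0 for m in range(2)]
            ej = [1.0 if m == j else 0.0 for m in range(2)]
            lhs = apply(T, bracket(ei, ej))
            rhs = bracket(apply(T, ei), apply(T, ej))
            if any(abs(lhs[k] - rhs[k]) > 1e-12 for k in range(2)):
                return False
    return True

T_ok  = [[2.0, 5.0], [0.0, 1.0]]   # a_21 = 0: preserves span{e1}
T_bad = [[1.0, 0.0], [1.0, 1.0]]   # a_21 != 0: moves e1 out of span{e1}
print(is_automorphism(T_ok), is_automorphism(T_bad))   # -> True False
```

Every candidate with $a_{21}\ne0$ fails the bracket condition, exactly because it does not preserve the megaideal $\langle e_1\rangle$; this is the kind of preliminary restriction the technique feeds into the direct method.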
\subsection{Direct method and admissible transformations}\label{Bihlo&Popovych:sec:DirectMethodAndAdmTrans} The initial point of the second technique is to consider a given $p$th order system~$\mathcal L^0$ of $l$~differential equations for $m$~unknown functions $u=(u^1,\ldots,u^m)$ of $n$~independent variables $x=(x_1,\ldots,x_n)$ as an element of a class~$\mathcal L|_{\mathcal S}$ of similar systems~$\mathcal L_\theta$: $\smash{L(x,u_{(p)},\theta(x,u_{(p)}))=0}$ parameterized by a tuple of $p$th order differential functions (arbitrary elements)~$\theta=(\theta^1(x,u_{(p)}),\ldots,\theta^k(x,u_{(p)}))$. Here $u_{(p)}$ denotes the set of all the derivatives of~$u$ with respect to $x$ of order not greater than~$p$, including $u$ as the derivatives of order zero. The class~$\mathcal L|_{\mathcal S}$ is determined by two objects: the tuple $L=(L^1,\ldots,L^l)$ of $l$ fixed functions depending on~$x$, $u_{(p)}$ and~$\theta$ and~$\theta$ running through the set~$\mathcal S$. Within the framework of symmetry analysis of differential equations, the set~$\mathcal S$ is defined as the set of solutions of an auxiliary system consisting of a subsystem $S(x,u_{(p)},\theta_{(q)}(x,u_{(p)}))=0$ of differential equations with respect to $\theta$ and a non-vanishing condition $\Sigma(x,u_{(p)},\theta_{(q)}(x,u_{(p)}))\ne0$ with another differential function $\Sigma$ of~$\theta$. In the auxiliary system, $x$ and $u_{(p)}$ play the role of independent variables and $\theta_{(q)}$ stands for the set of all the partial derivatives of $\theta$ of order not greater than $q$ with respect to the variables $x$ and $u_{(p)}$. In view of the purpose of our consideration we should have that $\mathcal L^0=\mathcal L_{\theta_0}$ for some $\theta_0\in\mathcal S$.
Following~\cite{Bihlo&Popovych:Popovych&Kunzinger&Eshraghi2010}, for $\theta,\tilde\theta\in\mathcal S$ we denote by $\mathrm T(\theta,\tilde\theta)$ the set of point transformations which map the system~$\mathcal L_\theta$ to the system~$\mathcal L_{\tilde\theta}$. The maximal point symmetry group~$G_\theta$ of the system~$\mathcal L_\theta$ coincides with~$\mathrm T(\theta,\theta)$. \begin{definition}\label{DefOfSetOfAdmTrans} $\mathrm T(\mathcal L|_{\mathcal S})=\{(\theta,\tilde\theta,\varphi)\mid \theta,\tilde\theta\in\mathcal S,\, \varphi\in\mathrm T(\theta,\tilde\theta)\}$ is called the {\em set of admissible transformations in~$\mathcal L|_{\mathcal S}$}. \end{definition} Sets of admissible transformations were first systematically described by King\-ston and Sophocleous for a class of generalized Burgers equations~\cite{Bihlo&Popovych:Kingston&Sophocleous1991} and by Winternitz and Gazeau for a class of variable coefficient Korteweg--de Vries equations~\cite{Bihlo&Popovych:Winternitz&Gazeau1992}, in terms of {\em form-preserving} \cite{Bihlo&Popovych:Kingston&Sophocleous1991,Bihlo&Popovych:Kingston&Sophocleous1998,Bihlo&Popovych:Kingston&Sophocleous2001} and {\em allowed}~\cite{Bihlo&Popovych:Winternitz&Gazeau1992} transformations, respectively. The notion of admissible transformations can be considered as a formalization of their approaches. Any point symmetry transformation of an equation~$\mathcal L_\theta$ from the class~$\mathcal L|_{\mathcal S}$ generates an admissible transformation in this class. Therefore, it obviously satisfies all restrictions which hold for admissible transformations \cite{Bihlo&Popovych:Kingston&Sophocleous1998}. For example, it has long been known that for any point (and even contact) transformation connecting a pair of $(1+1)$-dimensional evolution equations, its component corresponding to~$t$ depends only on~$t$, cf.~\cite{Bihlo&Popovych:Magadeev1993}. The equations in the pair can also coincide.
As a result, the same restriction is satisfied by any point or contact symmetry transformation of every $(1+1)$-dimensional evolution equation. The simplest description of admissible transformations is obtained for normalized classes of differential equations. Roughly speaking, a class of (systems of) differential equations is called \emph{normalized} if any admissible transformation in this class is induced by a transformation from its equivalence group. Different kinds of normalization can be defined depending on what kind of equivalence group (point, contact, usual, generalized, extended, etc.) is considered. Thus, the \emph{usual equivalence group}~$G^{\sim}$ of the class~$\mathcal L|_{\mathcal S}$ consists of those point transformations in the space of variables and arbitrary elements that are projectable to the variable space and preserve the whole class~$\mathcal L|_{\mathcal S}$. The class~$\mathcal L|_{\mathcal S}$ is called normalized in the usual sense if the set $\mathrm T(\mathcal L|_{\mathcal S})$ is generated by the usual equivalence group~$G^{\sim}$. As a consequence, all generalizations of the equivalence group within the framework of point transformations are trivial for this class. See~\cite{Bihlo&Popovych:Popovych&Kunzinger&Eshraghi2010} for precise definitions and further explanations. If the class~$\mathcal L|_{\mathcal S}$ is normalized in a certain sense with respect to point transformations, the point symmetry group~$G_{\theta_0}$ of any equation~$\mathcal L_{\theta_0}$ from this class is contained in the projection of the corresponding equivalence group of~$\mathcal L|_{\mathcal S}$ to the space of independent and dependent variables (taken for the value $\theta=\theta_0$ in the case when the generalized equivalence group is considered). 
As a rule, calculations of certain common restrictions on admissible transformations of the entire normalized class or its normalized subclasses or point symmetry transformations of a single equation from this class have the same level of complexity. For example, in order to derive the restriction that the transformation component corresponding to~$t$ depends only on~$t$, we should carry out approximately the same operations, independently of considering the whole class of $(1+1)$-dimensional evolution equations, any well-defined subclass from this class or any single evolution equation. This is why it is worthwhile to first construct nested series of normalized classes of differential equations by starting from a quite general, obviously normalized class, imposing on each step additional auxiliary conditions on the arbitrary elements and then studying the complete point symmetries of a single equation from the narrowest class of the constructed series. In the way outlined above we have already investigated hierarchies of normalized classes of generalized nonlinear Schr\"odinger equations~\cite{Bihlo&Popovych:Popovych&Kunzinger&Eshraghi2010}, $(1+1)$-dimensional linear evolution equations~\cite{Bihlo&Popovych:Popovych&Kunzinger&Ivanova2008}, $(1+1)$-dimensional third-order evolution equations including variable-coefficient Korteweg--de Vries and modified Korteweg--de Vries equations~\cite{Bihlo&Popovych:Popovych&Vaneeva2010} and generalized vorticity equations arising in the study of local parameterization schemes for the barotropic vorticity equation~\cite{Bihlo&Popovych:Popovych&Bihlo2010}. If an equation does not belong to a class whose admissible transformations have been studied earlier, one can try to map this equation using a point transformation to an equation from a class for which constraints on its admissible transformations are known a priori. 
Then one can either map the known constraints on admissible transformations back and complete the calculation of the point symmetries of the initial equation using the direct method, or calculate the point symmetry group of the mapped equation using the direct method and then map this group back. An example of the application of this trick to the barotropic vorticity equation is presented in Section~\ref{Bihlo&Popovych:sec:DirectMethodForBVE}. \section{Calculations based on Lie invariance algebra\\ of the barotropic vorticity equation}\label{Bihlo&Popovych:sec:CalculationsInvarianceAlgebra} The barotropic vorticity equation on the $\beta$-plane reads \begin{align}\label{Bihlo&Popovych:eq:vortbeta} \zeta_t+\psi_x\zeta_y-\psi_y\zeta_x+\beta\psi_x=0, \end{align} where $\psi=\psi(t,x,y)$ is the stream function and $\zeta:=\psi_{xx}+\psi_{yy}$ is the relative vorticity, which is the vertical component of the vorticity vector. The barotropic vorticity equation in the formulation~\eqref{Bihlo&Popovych:eq:vortbeta} is valid in situations where the two-dimensional wind field can be regarded as almost non-divergent and the motion in the North--South direction is confined to a relatively small region. It is then convenient to use a local Cartesian coordinate system. In such a coordinate system, the effect of the sphericity of the Earth is conveniently taken into account by approximating the normal component of the vorticity due to the rotation of the Earth, $2\Omega\sin\varphi$, by its linear Taylor series expansion, where $\Omega$ is the angular velocity of the Earth's rotation and $\varphi$ is the geographic latitude. This linear approximation at some reference latitude $\varphi_0$ is given by $2\Omega\sin\varphi_0+\beta y$, where $\beta=2\Omega\cos\varphi_0/a$ and $a$ is the radius of the Earth. This is the traditional $\beta$-plane approximation, see~\cite{Bihlo&Popovych:Pedlosky1987} for further details. 
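As a quick numerical illustration of the formula for the $\beta$-parameter (the values of $\Omega$ and $a$ below are standard reference numbers, not taken from the text), a mid-latitude reference of $\varphi_0=45^\circ$ gives $\beta$ on the order of $10^{-11}\,\mathrm{m^{-1}\,s^{-1}}$:

```python
from math import cos, radians

# Standard reference values (assumed, not quoted from the text above):
OMEGA = 7.2921e-5   # angular velocity of the Earth's rotation, rad/s
A = 6.371e6         # mean radius of the Earth, m

def beta_parameter(phi0_deg):
    """Rossby parameter beta = 2*Omega*cos(phi0)/a at reference latitude phi0."""
    return 2 * OMEGA * cos(radians(phi0_deg)) / A

print(beta_parameter(45.0))  # roughly 1.6e-11 m^-1 s^-1
```

The parameter shrinks toward the poles and is largest at the equator, reflecting the $\cos\varphi_0$ factor.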
Then, taking the vertical component of the curl of the two-dimensional ideal Euler equations and using the $\beta$-plane approximation leads to Eq.~\eqref{Bihlo&Popovych:eq:vortbeta}. It is straightforward to determine the maximal Lie invariance algebra~$\mathfrak g$ of Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} using infinitesimal techniques: \[ \mathfrak g=\langle\mathcal{D},\partial_t,\partial_y,\mathcal{X}(f),\mathcal{Z}(g)\rangle, \] where $\mathcal{D}=t\partial_t-x\partial_x-y\partial_y-3\psi\partial_\psi$, $\mathcal{X}(f)=f(t)\partial_x-f_t(t)y\partial_\psi$ and \mbox{$\mathcal{Z}(g)=g(t)\partial_\psi$}, and $f$ and $g$ run through the space of smooth functions of $t$. (In fact, the precise interpretation of~$\mathfrak g$ as a Lie algebra strongly depends on what space of smooth functions is chosen for~$f$ and~$g$, cf.\ Note~A.1 in \cite[p.~178]{Bihlo&Popovych:Fushchych&Popovych1994}.) This result was first obtained in~\cite{Bihlo&Popovych:Katkov1965} and is now easily accessible in the handbook \cite[p.~223]{Bihlo&Popovych:Ibragimov1995}. See also~\cite{Bihlo&Popovych:Bihlo&Popovych2009a} for related discussions and the exhaustive study of the classical Lie reductions of Eq.~\eqref{Bihlo&Popovych:eq:vortbeta}. The nonzero commutation relations of the algebra $\mathfrak g$ in the above basis are exhausted by the following ones: \begin{gather*} [\partial_t,\mathcal{D}]=\partial_t,\quad [\partial_y,\mathcal{D}]=-\partial_y,\\ [\mathcal{D},\mathcal{X}(f)]=\mathcal{X}(tf_t+f),\quad [\mathcal{D},\mathcal{Z}(g)]=\mathcal{Z}(tg_t+3g),\\ [\partial_t,\mathcal{X}(f)]=\mathcal{X}(f_t),\quad [\partial_t,\mathcal{Z}(g)]=\mathcal{Z}(g_t),\quad [\partial_y,\mathcal{X}(f)]=-\mathcal{Z}(f_t). 
\end{gather*} It is easy to see from the commutation relations that the Lie algebra $\mathfrak g$ is solvable since \begin{gather*} \mathfrak g'=[\mathfrak g,\mathfrak g]=\langle\partial_t,\partial_y,\mathcal{X}(f),\mathcal{Z}(g)\rangle, \\ \mathfrak g''=[\mathfrak g',\mathfrak g']=\langle\mathcal{X}(f),\mathcal{Z}(g)\rangle,\\ \mathfrak g'''=[\mathfrak g'',\mathfrak g'']=0. \end{gather*} Therefore, the radical~$\mathfrak r$ of~$\mathfrak g$ coincides with the entire algebra~$\mathfrak g$. The nil-radical of~$\mathfrak g$ is the ideal \[ \mathfrak n=\langle\partial_y,\mathcal{X}(f),\mathcal{Z}(g)\rangle. \] Indeed, this ideal is a nilpotent subalgebra of~$\mathfrak g$ since \[ \mathfrak n^{(2)}=\mathfrak n'=[\mathfrak n,\mathfrak n]=\langle\mathcal{Z}(g)\rangle, \quad \mathfrak n^{(3)}=[\mathfrak n,\mathfrak n']=0. \] It can be extended to a larger ideal of~$\mathfrak g$ only by two sets of elements, $\{\partial_t\}$ and $\{\mathcal{D},\partial_t\}$. Neither of the resulting ideals is nilpotent; in other words, $\mathfrak n$ is the maximal nilpotent ideal. Continuous point symmetries of Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} are determined from the elements of $\mathfrak g$ by integration of the associated Cauchy problems. It is obvious that Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} also possesses two discrete symmetries, $(t,x,y,\psi)\mapsto (-t,-x,y,\psi)$ and $(t,x,y,\psi)\mapsto (t,x,-y,-\psi)$, which are independent up to their composition and their compositions with continuous symmetries. A proof that the above symmetries generate the entire point symmetry group had, however, been missing so far. 
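The commutation relations listed above are mechanical to verify, which makes them a good target for a symbolic sanity check. The following sketch (a check of ours, using sympy; the bookkeeping of basis fields as first-order differential operators on functions of $(t,x,y,\psi)$ is standard) recomputes a few of the commutators:

```python
import sympy as sp

t, x, y, psi = sp.symbols('t x y psi')
f = sp.Function('f')(t)

def vf(coeffs):
    """Vector field sum_i c_i * d/dz_i, acting as an operator on scalar expressions."""
    zs = (t, x, y, psi)
    return lambda h: sum(c * sp.diff(h, z) for c, z in zip(coeffs, zs))

def commutator_coeffs(V, W):
    """Coefficients of [V, W], read off by acting on the coordinate functions."""
    return [sp.simplify(V(W(z)) - W(V(z))) for z in (t, x, y, psi)]

# Basis fields of the algebra g
D  = vf([t, -x, -y, -3*psi])            # scaling field D
dt = vf([1, 0, 0, 0])                   # time translation
dy = vf([0, 0, 1, 0])                   # y-translation
X  = vf([0, f, 0, -sp.diff(f, t)*y])    # X(f)

print(commutator_coeffs(dt, D))   # [1, 0, 0, 0], i.e. [d_t, D] = d_t
print(commutator_coeffs(dy, X))   # [0, 0, 0, -f'], i.e. [d_y, X(f)] = -Z(f_t)
print(commutator_coeffs(D, X))    # coefficients of X(t*f_t + f)
```

The last line reproduces $[\mathcal D,\mathcal X(f)]=\mathcal X(tf_t+f)$, whose $\psi$-component is $-(tf_{tt}+2f_t)y$, in agreement with the prolongation rule built into $\mathcal X$.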
\begin{theorem}\label{Bihlo&Popovych:TheoremOnPointSymGroupOfBVE} The complete point symmetry group of the barotropic vorticity equation on the $\beta$-plane~\eqref{Bihlo&Popovych:eq:vortbeta} is formed by the transformations \begin{align*} \mathcal T\colon &\quad \tilde t=T_1t+T_0, \quad \tilde x=\frac{1}{T_1}x+f(t), \quad \tilde y=\frac{\varepsilon}{T_1}y+Y_0, \\ &\quad \tilde\psi=\frac{\varepsilon}{(T_1)^3}\psi-\frac{\varepsilon}{(T_1)^2}f_t(t)y+g(t), \end{align*} where $T_1\ne0$, $\varepsilon=\pm1$ and $f$ and $g$ are arbitrary functions of $t$. \end{theorem} \begin{proof} The discrete symmetries of the barotropic vorticity equation on the $\beta$-plane are computed as described in Section~\ref{Bihlo&Popovych:sec:TechniquesLieInvarianceAlgebra}. The general form of a point transformation of the vorticity equation is \[ \mathcal T\colon\quad (\tilde t, \tilde x,\tilde y, \tilde \psi)=(T, X, Y, \Psi), \] where $T$, $X$, $Y$ and $\Psi$ are regarded as functions of $t$, $x$, $y$ and $\psi$ whose joint Jacobian does not vanish. To obtain the constrained form of $\mathcal T$, we use the above four proper nested megaideals of~$\mathfrak g$, namely $\mathfrak n'$, $\mathfrak g''$, $\mathfrak n$ and $\mathfrak g'$, and~$\mathfrak g$ itself. Recall once more that the transformation $\mathcal T$ must satisfy the conditions $\mathcal T_* \mathfrak n'=\mathfrak n'$, $\mathcal T_* \mathfrak g''=\mathfrak g''$, $\mathcal T_* \mathfrak n=\mathfrak n$, $\mathcal T_* \mathfrak g'=\mathfrak g'$ and $\mathcal T_* \mathfrak g=\mathfrak g$ in order to qualify as a point symmetry of the vorticity equation, where $\mathcal T_*$ denotes the push-forward of $\mathcal T$ to vector fields. 
In other words, we have \begin{gather} \mathcal T_* \mathcal{Z}(g)=g(T_\psi\partial_{\tilde t}+X_\psi\partial_{\tilde x}+Y_\psi \partial_{\tilde y}+\Psi_\psi \partial_{\tilde \psi})=\mathcal{\tilde Z} (\tilde g^g), \label{Bihlo&Popovych:eq:MegaidealConstraintForT1}\\ \mathcal T_* \mathcal{X}(f)=\mathcal{\tilde X}(\tilde f^f)+\mathcal{\tilde Z}(\tilde g^f), \label{Bihlo&Popovych:eq:MegaidealConstraintForT2}\\ \mathcal T_* \partial_t=T_t\partial_{\tilde t}+X_t\partial_{\tilde x}+Y_t\partial_{\tilde y}+\Psi_t\partial_{\tilde \psi}=a_1\partial_{\tilde t}+a_2\partial_{\tilde y}+\mathcal{\tilde X}(\tilde f)+\mathcal{\tilde Z}(\tilde g), \label{Bihlo&Popovych:eq:MegaidealConstraintForT3}\\ \mathcal T_* \partial_y=T_y\partial_{\tilde t}+X_y\partial_{\tilde x}+Y_y\partial_{\tilde y}+\Psi_y\partial_{\tilde \psi}=b_1\partial_{\tilde y}+\mathcal{\tilde X}(\tilde f^y)+\mathcal{\tilde Z}(\tilde g^y), \label{Bihlo&Popovych:eq:MegaidealConstraintForT4}\\ \mathcal T_* \mathcal{D}=c_1\mathcal{\tilde D}+c_2\partial_{\tilde t}+c_3\partial_{\tilde y}+\mathcal{\tilde X}(\tilde f^D)+\mathcal{\tilde Z}(\tilde g^D), \label{Bihlo&Popovych:eq:MegaidealConstraintForT5} \end{gather} where all $\tilde f$'s and $\tilde g$'s are smooth functions of~$\tilde t$ which, like the constant parameters $a_1$, $a_2$, $b_1$, $c_1$, $c_2$ and~$c_3$, are determined by~$\mathcal T_*$ and the operator from the corresponding left-hand side. We will derive constraints on~$\mathcal T_*$ by successively equating coefficients of vector fields in conditions \eqref{Bihlo&Popovych:eq:MegaidealConstraintForT1}--\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT5} and taking into account the constraints obtained in previous steps. Thus Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT1} immediately implies $T_\psi=X_\psi=Y_\psi=0$ (hence $\Psi_\psi\ne0$) and $g\Psi_\psi=\tilde g^g$. 
Evaluating the last equation for $g=1$ and $g=t$ and combining the results gives $t=\tilde g^t(T)/\tilde g^1(T)$, where $\tilde g^t=\tilde g^g|_{g=t}$ and $\tilde g^1=\tilde g^g|_{g=1}$. As the derivative with respect to $T$ on the right-hand side of this equality does not vanish, the condition $T=T(t)$ must hold. This implies that $\Psi_\psi$ depends only on~$t$. As then $\mathcal T_*\mathcal{X}(f)=fX_x\partial_{\tilde x}+fY_x\partial_{\tilde y}+(f\Psi_x-f_ty\Psi_\psi)\partial_{\tilde \psi}$, it follows from Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT2} that $Y_x=0$ and \[ fX_x=\tilde f^f,\quad f\Psi_x-f_ty\Psi_\psi=-\tilde f^f_{\tilde t}Y+\tilde g^f. \] \looseness=-1 Evaluating the first of the displayed equalities for $f=1$, we derive that $X_x=\tilde f^1(T)=: X_1(t)$. Therefore, $\tilde f^f(T)=f(t)X_1(t)$. The second equality then reads \[ f\Psi_x-f_ty\Psi_\psi=-\frac{(fX_1)_t}{T_t}Y+\tilde g^f. \] Setting $f=1$ and $f=t$ in the last equality and combining the resulting equalities yields $y\Psi_\psi=(T_t)^{-1}X_1Y+t\tilde g^1-\tilde g^t$, where $\tilde g^t=\tilde g^f|_{f=t}$ and $\tilde g^1=\tilde g^f|_{f=1}$. As $X_1\ne0$, this equation implies that $Y=Y_1(t)y+Y_0(t)$. After analyzing Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT3}, we find $T_t=\mathop{\rm const}\nolimits$ and $Y_t=\mathop{\rm const}\nolimits$, which leads to $Y_1=\mathop{\rm const}\nolimits$, $X_t=\tilde f(T)$ and thus $X_{tx}=0$, i.e., $X_1=\mathop{\rm const}\nolimits$. Finally, Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT3} also implies $\Psi_t=-\tilde f_{\tilde t}Y+\tilde g$. In a similar manner, taking into account the restrictions derived so far, collecting coefficients in Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT4} gives the constraint $X_y=\tilde f^y=:X_2=\mathop{\rm const}\nolimits$ since $X_{yt}=0$. Moreover, $\Psi_y=\tilde g^y$, as $\tilde f^y_{\tilde t}=0$. 
The final restrictions on $\mathcal T$ based on the preservation of~$\mathfrak g$ are derivable from Eq.~\eqref{Bihlo&Popovych:eq:MegaidealConstraintForT5}, where \begin{align*} \mathcal T_* \mathcal{D}={}& tT_t\partial_{\tilde t}+(tX_t-xX_x-yX_y)\partial_{\tilde x}+(tY_t-yY_y)\partial_{\tilde y}\\ &\!\!+(t\Psi_t-x\Psi_x-y\Psi_y-3\psi\Psi_\psi)\partial_{\tilde \psi}. \end{align*} Collecting the coefficients of $\partial_{\tilde t}$ and $\partial_{\tilde y}$, we obtain that $c_1=1$ and $Y_t=0$. Similarly, equating the coefficients of $\partial_{\tilde\psi}$ and further splitting with respect to $x$ implies that $\Psi_x=0$. The results obtained so far lead to the following constrained form of the general point symmetry transformation of the vorticity equation~\eqref{Bihlo&Popovych:eq:vortbeta} \begin{gather}\label{Bihlo&Popovych:eq:RestrictedFormOfPointTransForBVE} \begin{split} &T=T_1t+T_0, \quad X=X_1x+X_2y+f(t), \quad Y=Y_1y+Y_0, \\ &\Psi=\Psi_1\psi+\Psi_2(t)y+\Psi_4(t), \end{split} \end{gather} where $T_0$, $T_1$, $X_1$, $X_2$, $Y_0$, $Y_1$ and $\Psi_1$ are arbitrary constants, $T_1X_1Y_1\Psi_1\ne0$, and $f(t)$, $\Psi_2(t)$ and $\Psi_4(t)$ are arbitrary time-dependent functions. The form~\eqref{Bihlo&Popovych:eq:RestrictedFormOfPointTransForBVE} takes into account all constraints on point symmetries of~\eqref{Bihlo&Popovych:eq:vortbeta}, which follow from the preservation of the maximal Lie invariance algebra~$\mathfrak g$ by the associated push-forward of vector fields. Now the direct method should be applied. We carry out a transformation of the form~\eqref{Bihlo&Popovych:eq:RestrictedFormOfPointTransForBVE} in the vorticity equation. For this aim, we calculate the transformation rules for the partial derivative operators: \[ \partial_{\tilde t}=\frac{1}{T_1}\left(\partial_t-\frac{f_t}{X_1}\partial_x\right), \quad\partial_{\tilde x}=\frac{1}{X_1}\partial_x, \quad \partial_{\tilde y}= \frac{1}{Y_1}\left(\partial_y-\frac{X_2}{X_1}\partial_x\right). 
\] Further restrictions on $\mathcal T$ can be imposed upon noting that the term $\psi_{txy}$ can only arise in the expression for $\tilde \psi_{\tilde t \tilde y\tilde y}$, which is \[ \tilde \psi_{\tilde t\tilde y\tilde y}=-\frac{2\Psi_1}{T_1Y_1}\frac{X_2}{X_1}\psi_{txy}+\cdots. \] This obviously implies that $X_2=0$. In a similar fashion, the expression for $\tilde \zeta_{\tilde t}$ is \[ \tilde \zeta_{\tilde t}=\frac{\Psi_1}{T_1}\left(\frac{1}{(X_1)^2}\zeta_t+\left(\frac{1}{(Y_1)^2}-\frac{1}{(X_1)^2}\right)\psi_{yyt}\right)+\cdots, \] upon using $\psi_{xxt}=\zeta_t-\psi_{yyt}$. Hence $(X_1)^2=(Y_1)^2$, as there are no other terms with $\psi_{yyt}$ in the invariance condition. After taking into account these two additional restrictions on $\mathcal T$, it is straightforward to expand the transformed version of the vorticity equation. This yields \begin{align*} &\frac{\Psi_1}{T_1(X_1)^2}\zeta_t-\frac{f_t\Psi_1}{T_1(X_1)^3}\zeta_x+\frac{(\Psi_1)^2}{(X_1)^3Y_1}\psi_x\zeta_y-\left(\frac{\Psi_1}{Y_1}\psi_y+ \frac{\Psi_2}{Y_1}\right)\frac{\Psi_1}{(X_1)^3}\zeta_x {} \\ & {}+\beta\frac{\Psi_1}{X_1}\psi_x=\frac{\Psi_1}{T_1(X_1)^2}\left(\zeta_t+\psi_x\zeta_y-\psi_y\zeta_x+\beta \psi_x\right). \end{align*} The invariance condition is fulfilled provided that the constraints \[ \Psi_2=-\frac{Y_1}{T_1}f_t, \quad X_1=T_1(X_1)^2, \quad \frac{(\Psi_1)^2}{(X_1)^3Y_1}=\frac{\Psi_1}{T_1(X_1)^2} \] hold. This completes the proof of the theorem. \end{proof} \begin{corollary} The barotropic vorticity equation on the $\beta$-plane possesses only two independent discrete point symmetries, which are given by \[ \Gamma_1\colon (t,x,y,\psi)\mapsto (-t,-x,y,\psi), \quad \Gamma_2\colon (t,x,y,\psi)\mapsto (t,x,-y,-\psi). \] They generate the group of discrete symmetry transformations of the barotropic vorticity equation on the $\beta$-plane, which is isomorphic to $\mathbb Z_2\times\mathbb Z_2$, where $\mathbb Z_2$ denotes the cyclic group of two elements. 
\end{corollary} \section{Direct method and admissible transformations\\ of classes of generalized vorticity equations}\label{Bihlo&Popovych:sec:DirectMethodForBVE} The construction of the complete point symmetry group~$G$ of the barotropic vorticity equation~\eqref{Bihlo&Popovych:eq:vortbeta} by means of the direct method alone involves cumbersome and sophisticated calculations. As Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} is a third-order PDE in three independent variables, the system of determining equations for transformations from~$G$ is an overdetermined nonlinear system of PDEs in four independent variables, which should be solved by taking into account the nonsingularity condition of the point transformations. This is an extremely challenging task. Fortunately, a hierarchy of normalized classes of generalized vorticity equations was recently constructed~\cite{Bihlo&Popovych:Popovych&Bihlo2010} that allows us to strongly simplify the whole investigation. Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} belongs to each class of this hierarchy. The widest of these classes consists of equations of the general form \begin{gather}\label{Bihlo&Popovych:eq:class1} \zeta_t=F(t,x,y,\psi,\psi_x,\psi_y,\zeta,\zeta_x,\zeta_y,\zeta_{xx},\zeta_{xy}, \zeta_{yy}), \quad \zeta:=\psi_{xx}+\psi_{yy}, \end{gather} where $(F_{\zeta_x},F_{\zeta_y},F_{\zeta_{xx}},F_{\zeta_{xy}},F_{\zeta_{yy}})\ne(0,0,0,0,0)$. 
The equivalence group~$G^\sim_1$ of this class is formed by the transformations \begin{gather*} \tilde t=T(t), \quad \tilde x=Z^1(t,x,y), \quad \tilde y=Z^2(t,x,y), \quad \tilde\psi=\Upsilon(t)\psi+\Phi(t,x,y), \\ \tilde F=\frac1{T_t}\left( \frac\Upsilon LF+\Bigl(\frac\Upsilon L\Bigr)_0\zeta+\Bigl(\frac{\Phi_{ii}}L\Bigr)_0 -\frac{Z^i_tZ^i_j}L\left(\frac\Upsilon L\zeta_j+\Bigl(\frac\Upsilon L\Bigr)_j\zeta+\Bigl(\frac{\Phi_{ii}}L\Bigr)_j\right) \right), \end{gather*} where $T$, $Z^i$, $\Upsilon$ and $\Phi$ are arbitrary smooth functions of their arguments satisfying the conditions $Z^1_kZ^2_k=0$, $Z^1_kZ^1_k=Z^2_kZ^2_k:=L$ and $T_t\Upsilon L\ne0$. The subscripts~1 and~2 denote differentiation with respect to~$x$ and~$y$, respectively, the indices~$i$ and~$j$ run through the set $\{1,2\}$, and summation over repeated indices is understood. As Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} is an element of the class~\eqref{Bihlo&Popovych:eq:class1} and this class is normalized, the point symmetry group~$G$ of Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} is contained in the projection~$\hat G^\sim_1$ of the equivalence group~$G^\sim_1$ of the class~\eqref{Bihlo&Popovych:eq:class1} to the variable space $(t,x,y,\psi)$. At the same time, the group~$G$ is much narrower than the group~$\hat G^\sim_1$, and in order to single out~$G$ from~$\hat G^\sim_1$ we would still have to derive and solve a quite cumbersome system of additional constraints. Instead, we use the trick described at the end of Section~\ref{Bihlo&Popovych:sec:DirectMethodAndAdmTrans}. 
Namely, by the transformation \begin{equation}\label{Bihlo&Popovych:eq:TrickTransOfPsi} \check\psi=\psi+\frac\beta6y^3, \end{equation} which acts as the identity on the independent variables and which is prolonged to the vorticity according to the formula~$\check\zeta=\zeta+\beta y$, we map Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} to the equation \begin{align}\label{Bihlo&Popovych:eq:vortbetaMod} \check\zeta_t+\check\psi_x\check\zeta_y-\check\psi_y\check\zeta_x=-\frac\beta2y^2\check\zeta_x. \end{align} Eq.~\eqref{Bihlo&Popovych:eq:vortbetaMod} belongs to the subclass of class~\eqref{Bihlo&Popovych:eq:class1} that is singled out by the constraints $F_\psi=0$, $F_\zeta=0$, $F_{\psi_x}=-\zeta_y$ and $F_{\psi_y}=\zeta_x$, i.e., the class consisting of the equations of the form \begin{equation}\label{Bihlo&Popovych:eq:class2} \zeta_t+\psi_x\zeta_y-\psi_y\zeta_x=H(t,x,y,\zeta_x,\zeta_y,\zeta_{xx},\zeta_{xy}, \zeta_{yy}), \quad \zeta :=\psi_{xx}+\psi_{yy}, \end{equation} where $H$ is an arbitrary smooth function of its arguments, which is taken as the arbitrary element instead of $F=H-\psi_x\zeta_y+\psi_y\zeta_x$. The class~\eqref{Bihlo&Popovych:eq:class2} is also a member of the above hierarchy of normalized classes. 
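That the substitution $\check\psi=\psi+\frac\beta6y^3$ indeed maps Eq.~\eqref{Bihlo&Popovych:eq:vortbeta} to Eq.~\eqref{Bihlo&Popovych:eq:vortbetaMod} is a purely mechanical computation that can be delegated to a computer algebra system. The following sympy sketch (our own check, not part of the original derivation) substitutes $\psi=\check\psi-\frac\beta6y^3$ into the original equation and verifies that the result coincides with the mapped equation:

```python
import sympy as sp

t, x, y, beta = sp.symbols('t x y beta')
psic = sp.Function('psic')(t, x, y)   # the transformed stream function \check\psi

# Invert the transformation: psi = \check\psi - (beta/6) y^3
psi = psic - beta * y**3 / 6
zeta = sp.diff(psi, x, 2) + sp.diff(psi, y, 2)

# Left-hand side of the original vorticity equation
lhs = (sp.diff(zeta, t) + sp.diff(psi, x) * sp.diff(zeta, y)
       - sp.diff(psi, y) * sp.diff(zeta, x) + beta * sp.diff(psi, x))

# The mapped equation, written as (left-hand side) - (right-hand side)
zetac = sp.diff(psic, x, 2) + sp.diff(psic, y, 2)
mapped = (sp.diff(zetac, t) + sp.diff(psic, x) * sp.diff(zetac, y)
          - sp.diff(psic, y) * sp.diff(zetac, x) + beta * y**2 / 2 * sp.diff(zetac, x))

print(sp.expand(lhs - mapped))  # 0: the two equations coincide identically
```

The cancellation works because $\psi_y$ picks up the term $-\frac\beta2y^2$ and $\zeta_y$ picks up $-\beta$, which together absorb the $\beta\psi_x$ term of the original equation.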
Its equivalence group~$G^\sim_2$ is much narrower than~$G^\sim_1$ and is formed by the transformations \[ \begin{split} &\tilde t=\tau, \quad \tilde x=\lambda(x\mathfrak c-y\mathfrak s)+\gamma^1, \quad \varepsilon\tilde y=\lambda(x\mathfrak s+y\mathfrak c)+\gamma^2, \\ &\tilde\psi=\varepsilon\frac{\lambda}{\tau_t}\left(\lambda\psi+\frac\lambda2\theta_t(x^2{+}y^2) -\gamma^1_t(x\mathfrak s{+}y\mathfrak c)+\gamma^2_t(x\mathfrak c{-}y\mathfrak s)\right)+\delta+\frac\sigma2(x^2{+}y^2),\\ &\tilde H=\frac\varepsilon{\tau_t{}^2} \left(H-\frac{\lambda_t}\lambda(x\zeta_x+y\zeta_y)+2\theta_{tt}\right) -\frac{\delta_y{+}\sigma y}{\tau_t\lambda^2}\zeta_x+\frac{\delta_x{+}\sigma x}{\tau_t\lambda^2}\zeta_y +\frac2{\tau_t}\left(\frac\sigma{\lambda^2}\right)_t, \end{split} \] where $\varepsilon=\pm1$, $\mathfrak c=\cos\theta$, $\mathfrak s=\sin\theta$; $\tau$, $\lambda$, $\theta$, $\gamma^i$ and $\sigma$ are arbitrary smooth functions of~$t$ satisfying the conditions $\lambda>0$, $\tau_{tt}=0$ and $\tau_t\ne0$ and $\delta=\delta(t,x,y)$ runs through the set of solutions of the Laplace equation $\delta_{xx}+\delta_{yy}=0$. In order to derive the additional constraints that are satisfied by the group parameters of transformations from the point symmetry group~$G_2$ of Eq.~\eqref{Bihlo&Popovych:eq:vortbetaMod}, we substitute the values $H=-\beta y^2\zeta_x/2$ and~$\tilde H=-\beta\tilde y^2\tilde \zeta_{\tilde x}/2$ as well as expressions for the transformed variables and derivatives via the initial ones into the transformation component for~$H$ and then make all possible splitting in the obtained equality. As a result, we derive the additional constraints \[ \theta=\gamma^2_t=0, \quad \lambda=\frac1{\tau_t}, \quad \sigma=\frac{\varepsilon\beta\gamma^2}{2\tau_t{}^2}, \quad \delta_x=-\sigma x, \quad \delta_y=\sigma y+\frac{\varepsilon\beta(\gamma^2)^2}{2\tau_t}. 
\] After projecting transformations from~$G^\sim_2$ on the variable space $(t,x,y,\psi)$, constraining the group parameters using the above conditions and taking the adjoint action of the inverse of the transformation~\eqref{Bihlo&Popovych:eq:TrickTransOfPsi}, we obtain, up to re-denoting the parameters, the transformations from Theorem~\ref{Bihlo&Popovych:TheoremOnPointSymGroupOfBVE}. \section{Conclusion}\label{Bihlo&Popovych:sec:Conclusion} In this paper, we have computed the complete point symmetry group of the barotropic vorticity equation on the $\beta$-plane. It is obvious that both of the techniques presented in this paper are applicable to general systems of differential equations. Despite the apparent simplicity of the techniques employed above, there are a number of features that should be discussed properly. In particular, the relation between discrete symmetries of a differential equation and discrete automorphisms of the corresponding maximal Lie invariance algebra is neither injective nor surjective. This is why it can be misleading to restrict the consideration to discrete automorphisms when trying to find discrete symmetries. This and related issues will be investigated and discussed more thoroughly in a forthcoming work. \subsection*{Acknowledgements} AB is a recipient of a DOC-fellowship of the Austrian Academy of Sciences. The research of ROP was supported by the project P20632 of the Austrian Science Fund.
TITLE: Using symmetry in Gauss' Law QUESTION [3 upvotes]: I have to find the electric field at any inside point due to a uniformly charged solid sphere. I do it in the following steps: $\to$ First I choose a spherical Gaussian surface passing through the required point, concentric with the charged sphere. $\to$ Next, by symmetry I say that the electric field will have the same magnitude at all points, as all the points are equivalent with respect to the given charge distribution. $\to$ By symmetry I say that the electric field will be directed radially. But the problem that arises is: how can I show that this field will be directed radially outwards, i.e., what is the argument proving that the electric field will not be radially inwards? REPLY [3 votes]: Your arguments are not in the right order: First, you prove by symmetry that the field is radial and depends only on the distance to the center O: $\vec{E}=E(r)\vec{e_r}$. Note that $E(r)$ is an algebraic quantity: you don't need to know its sign. Then, you choose a Gaussian surface adapted to this symmetry: here, a sphere centered on O. With this choice of Gaussian surface, the electric flux is written very simply as $\Phi=4\pi r^2E(r)$ and the field is determined completely: $E(r)=Q_{int}/4\pi \varepsilon r^2$. Gauss's theorem gives you the field with its sign: the radial component is positive if $Q_{int}$ is positive.
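A small numerical companion to this answer (the charge and radius values below are made up for illustration): inside the sphere the enclosed charge scales as $(r/R)^3$, so the field grows linearly with $r$, and its sign simply follows the sign of $Q_{int}$:

```python
from math import pi

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def E_radial(r, Q, R):
    """Radial field component of a uniformly charged solid sphere of total
    charge Q and radius R, at distance r from the center.

    A positive value means the field points radially outward; the sign is
    inherited from the enclosed charge, exactly as Gauss's theorem dictates.
    """
    Q_int = Q * (r / R) ** 3 if r < R else Q  # charge inside the Gaussian sphere
    return Q_int / (4 * pi * EPS0 * r ** 2)

# Made-up example: Q = 1 nC on a sphere of radius 10 cm.
Q, R = 1e-9, 0.1
print(E_radial(0.05, Q, R))  # inside: about 449 V/m, linear in r
print(E_radial(0.20, Q, R))  # outside: about 225 V/m, falls off as 1/r^2
```

Running it with a negative `Q` returns negative values, i.e. a radially inward field, which is the answer to the question: the direction is not assumed, it comes out of the sign of the enclosed charge.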
If it stood alone, 12,276-foot Mt. Adams would be a prime recreation site, silhouetted on license plates and key chains. But from a Seattle viewpoint, Adams is geographically behind and below its limelight-hogging neighbors. The distance from main towns and roads makes Mt. Adams an ideal spot to escape civilization. Like its more active neighbors, Mt. Adams is of volcanic origin; unlike its neighbors, the mountain is believed to have been formed by a congregation of volcanic cones instead of a single large one. The mountain has been relatively quiescent for 10,000 years, and large glaciers crown its summit, including the Klickitat Glacier, second biggest of all Cascadian glaciers. There are two ways to approach Mt. Adams: from Seattle, take I-5 south to I-205 near Vancouver, then follow I-205 to Highway 14 and head east. At Underwood, take Highway 141 north to Trout Lake. An alternative is to take I-5 south past Chehalis, then east on Highway 12 to Randle and take the Randle Road (Forest Service Road 23) south for 56 miles to Trout Lake. This isolated road is definitely the scenic route, and the entire length is paved. Approaching from the east, you can drive down Highway 97 to Goldendale and west on Highway 142 to Klickitat and take the Glenwood-Trout Lake Road, or follow Highway 14 from Maryhill to Underwood and drive north. The roads into the Mt. Adams area are closed each winter due to heavy snowfall.
Find the RAM Trucks Your Life Demands at Drive Motors You're in the market for a great truck. The first thing that came to mind? The RAM lineup. There's a reason for that. Over the years the RAM brand has worked hard to provide drivers of all stripes with high-quality trucks that deliver comfort, valuable features, and top-notch performance. When a RAM truck is what you need, Drive Motors is the place to go.
Freelance/Project Based – Programmer/Web Developer Seeking mobile-friendly website front-end developers for remote project-based work or on-site ongoing freelance. On-site has potential for a full-time developer position. Related experience working with senior management and key production staff, as well as direct communication with clients, is mandatory. Proven expertise in customer-focused medium to large scale production jobs or programs is a plus. Requirements: Must be extremely willing to learn and grow, staying current with internet trends and technology. Expected to work in tandem with designers with varying code knowledge. Desired code knowledge skillset: - Responsive Design for mobile-friendly sites - WordPress Custom Development (Plugins, Themes, Menus) - WordPress backend user interface customization a huge plus - HTML (HTML5 a plus) - Full understanding of CSS2 and adequate understanding of CSS3 features - JavaScript (OO a plus) - JQuery (Other MVCs a plus) - API integration (using cURL, XML, etc.) - E-Commerce - SQL/MySQL - PHP - SASS - Grunt - SVN (We use Github) - Open Source Development a plus - Drupal Custom Development knowledge a plus Be specific about your strengths and weaknesses regarding the list above and your ability/desire to learn. Additional info: We are focused on service, quality, process and work environment to deliver only the best products to our customers. The ideal candidate will be a very detail-oriented, high-energy relationship builder with the drive to bring effective solutions and succeed within their teams. Minimum 5 years experience is required.
TITLE: Do neutrinos change speed in neutrino oscillations? QUESTION [6 upvotes]: The process of neutrino oscillations is not very intuitive, hence a question. If an electron neutrino on its way from the sun turns into a tau neutrino, then for energy to be conserved, the speed would have to become slower. This however would change the momentum. For a single particle in empty space, it seems impossible to change its rest mass without violating either energy or momentum conservation. Does this mean the rest mass of all neutrino flavors is the same, or is there a better explanation from the standpoint of the physical sense? REPLY [2 votes]: At the core of quantum mechanics is a most counter-intuitive truth: although we might presume conservation is "disobeyed" over an unobserved part of an experiment, we find it really was obeyed when we observe it. In the case of neutrinos, no matter what flavor is detected we will see that both momentum and energy are conserved. If it's a heavier flavor that's detected, then we'll also see that the time from emission to absorption is longer (the neutrino went more slowly). This QM principle is possible because observation of an experiment and alteration of an experiment are inseparable. Observation and alteration are synonymous. We are welcome to imagine any crazy behavior for something that is not observed as long as it obeys consistent laws when it is observed. For example, I could say that the emission event 'knew' where the neutrino would be absorbed, and accordingly chose the neutrino flavor to emit at emission time. Notice that the recoil information from an emission event only tells you the momentum of the neutrino, not its velocity. So you can't tell the flavor at emission time.
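As a back-of-the-envelope companion to this answer (the momentum and mass values below are invented for illustration; real neutrino mass states are not known this precisely), one can check that at fixed momentum a heavier mass state is slower, since $v/c = pc/E$ with $E=\sqrt{(pc)^2+(mc^2)^2}$:

```python
from math import sqrt

def beta_of(p_MeV, m_eV):
    """v/c for a particle of momentum p (in MeV/c) and rest mass m (in eV/c^2)."""
    pc = p_MeV * 1e6            # momentum in eV/c
    E = sqrt(pc**2 + m_eV**2)   # total energy in eV
    return pc / E

# Invented numbers: the same 1 MeV/c momentum carried by two different mass states.
light, heavy = 0.1, 0.2         # masses in eV/c^2, for illustration only
print(beta_of(1.0, light) > beta_of(1.0, heavy))  # True: heavier state is slower
```

The deficit from the speed of light, roughly $m^2/(2p^2)$ here, is of order $10^{-14}$ for these numbers, which is why the speed difference only shows up over astronomical flight times.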
Don’t Be a Greenwasher; Prove That Your Data Center Has Reduced Its Power Usage

Posted by RJ Tee on August 13, 2015. Tags: data center power usage

Your company works hard to maintain its esteemed reputation. Customers know that they can trust your company to deliver on its promises and maintain its integrity. Your company, in other words, has a lot to lose. Going green is a serious endeavor, one that takes a big commitment. Don’t promise to make this change in your organization without a proper plan, or you’ll risk losing credibility with your customers and business partners.

Here’s why: in business, there is an ugly term floating around called “greenwashing.” Basically, it’s a label given to companies that claim to be embracing sustainability initiatives but can’t back up their claims. Usually, greenwashing is done to try to earn the trust of consumers and boost sales. It’s a nefarious practice, and it’s hard to reverse public opinion once the term is applied to your business. After all, it’s basically a fancy name for lying. Unfortunately, many companies are labeled as greenwashers despite having actually taken active measures to become more sustainable. This is because they lack the ability to benchmark and prove their progress.

So, if you’re looking to reduce power in your data center, and rightfully market your company as a green organization committed to helping the environment, it’s important that you invest in a data center power-monitoring solution that will provide real-time and historical information about your business’s daily power usage. One solution that can help your business actively track and prove its daily data center power consumption is Server Technology’s Sentry Power System. Equipped with intelligent power distribution units and real-time tracking software, it will give you all of the pertinent data you need to transform your data center into a lean, green machine.
does anyone here collect teacups? I have a few (three to be exact, rofl) and I would like to get some more. I love the teacups with matching saucers; the daintier, the better. I found some at eBay. And though I do have certain criteria, I don't know what else to look for. I would love to buy, but again, I don't know what to look for. Also on the lookout for teapots – anyone who collects these who can give me a point in the right direction for a bargain?

~~ Missy ~~ Planting and raising an urban homestead in the middle of Downtown big city right at the foot of the Rocky Mountains! Zone 5 Colorado Springs, CO USA

I do, but I only collect blue & white, so I don't have much advice for you. Just buy what you love, unless you're looking for some kind of investment. I buy (when I have funds!) what catches my eye without a thought to their "worth". Glad to hear of another cup collector!

I love tea cups & tea pots. I only have 3 cups & saucers that were my Mom's & 1 teapot she bought me when she went to England, but sadly it went through the fire we had & is cracked & smoke stained, but I keep it anyway. I do have one other teapot given to me by our son & DIL. I love um!!!!!

I love when others give me teacups!! I have 3 – one from my step mom, one from my stepgrandma and one from my sister. The one from my stepmom has a really cool story. It was her great-aunt's, and she used to read tea leaves. This was one of her teacups that she read from. I have the whole story written out and taped to the bottom of the saucer.
You're Sure To Grow Your Business!

Network your business by meeting, exchanging business cards, and sharing products & services with other professionals in the mortgage field. Meet mortgage lenders, builders, real estate agents, title agents, home inspectors, appraisers, insurance agents & more.

Become A Sponsor

Gold ($1000) – Host your personal combined hitting bays (2 lanes) for 3 hours. Advertise your company with your own material, share a product or message with your group, etc. You run the show in your bays! Logo presence on all “The Mixer” material produced and on the themixer.net home page. Dedicated business page on themixer.net devoted to the message or product of your choice, or a link to your home web page.

Silver ($500) – Share & host your personal hitting bay (1 lane) for 3 hours. Advertise your company within your own bay, share a product or message with your group. Logo presence on all “The Mixer” material produced and on the themixer.net home page.

Bronze ($250) – Sponsor table: a dedicated table for your company, with advertising on tables in all hitting bays; share pamphlets, business cards and any other material you wish. Company name printed on all “The Mixer” material produced and in the sponsor section of themixer.net.

Current Sponsors: Local Title Agency – ST Law Offices – BeeBold Real Estate
A man who stabbed his housemate to death after an April Fool’s Day altercation told his adopted brother – who had earlier fought with the deceased – “this is all your fault”.

The background to George Christopher Cross’s death in Taupō can be revealed after William James Henare Capper pleaded guilty to manslaughter last week. Capper, 35, had originally been charged with, and pleaded not guilty to, murder over the 2021 death, and court documents state he wielded both a hammer and a knife.

The killing happened at Capper and Cross’s Taupō residence, the summary of facts revealed. Two other associates, one of them Capper’s adopted brother, were visiting and the men had been drinking. “Shortly after 11pm an altercation has broken out between the deceased and Capper’s adopted brother,” the summary said.

Capper then approached with a large kitchen knife, despite one witness grabbing his shoulder and telling him “Bro, drop the knife”. “The defendant ignored [the witness] and began striking the deceased in the right leg with the knife,” the summary said. “The deceased started to scream.” Capper’s adopted brother then heard him tell Cross “that’s what you get”.

Capper was then asked to get towels while an ambulance was called. “The defendant was standing by the sliding door, [his adopted brother] yelled at the defendant and told him to leave the address,” the summary said. “The defendant yelled ‘this is all your fault’ to his brother and left.”

The summary said Cross suffered four sharp force injuries to his right thigh, with one stab wound penetrating 12cm. The stab wounds severed two arteries and Cross died from severe blood loss. Capper later “admitted to stabbing the deceased”. He will be sentenced at the High Court in Rotorua in June.
\begin{document} \title{Functional Gaussian approximations in Hilbert spaces: the non-diffusive case} \thanks{S. Bourguin was supported in part by the Simons Foundation grant 635136} \author{Solesne Bourguin$^1$} \address{$^1$Boston University, Department of Mathematics and Statistics, 111 Cummington Mall, Boston, MA 02215, USA} \email{bourguin@math.bu.edu} \author{Simon Campese$^2$} \address{$^2$Hamburg University of Technology, Institute of Mathematics, Am Schwarzenberg-Campus 3, 21073 Hamburg, Germany} \email{simon.campese@tuhh.de} \author{Thanh Dang$^1$} \email{ycloud77@bu.edu} \begin{abstract} We develop a functional Stein-Malliavin method in a non-diffusive Poissonian setting, thus obtaining a) quantitative central limit theorems for approximation of arbitrary non-degenerate Gaussian random elements taking values in a separable Hilbert space and b) fourth moment bounds for approximating sequences with finite chaos expansion. Our results rely on an infinite-dimensional version of Stein's method of exchangeable pairs combined with the so-called Gamma calculus. Two applications are included: Brownian approximation of Poisson processes in Besov-Liouville spaces and a functional limit theorem for an edge-counting statistic of a random geometric graph. \end{abstract} \subjclass[2010]{46G12, 46N30, 60B12, 60F17} \keywords{Poisson space; Gaussian measures on Hilbert spaces; Dirichlet structures; Stein's method on Banach spaces; Gaussian approximations; probabilistic metrics; functional limit theorems; fourth moment conditions} \bibliographystyle{amsalpha} \maketitle \section{Introduction} The now classical Stein-Malliavin method, a combination of Stein's method with Malliavin calculus, has been very successful in deriving quantitative central limit theorems for non-linear approximation.
Since its inception by Nourdin and Peccati in 2009 (see~\cite{nourdin-peccati:2009:steins-method-wiener}), it has formed a vivid community which developed the theory further and applied it to numerous situations. An excellent exposition of the basic method is available in the monograph~\cite{nourdin-peccati:2012:normal-approximations-malliavin}, while I. Nourdin keeps a rather exhaustive and continuously updated list of references on the webpage \texttt{https://sites.google.com/site/malliavinstein}. From a theoretical point of view, one of the main remaining challenges is an adaptation of the method to the infinite-dimensional setting, with quantitative approximation of Gaussian processes as the main application. For random elements taking values in a Hilbert space, and in a diffusive context, this has recently been achieved in~\cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians}. In this work, we provide the natural analogue in the non-diffusive context of Poisson spaces. More specifically, let $X$ be a square-integrable measurable transformation of a Poisson process and $Z$ be a Gaussian process, both taking values in some separable Hilbert space $K$. Informally, our main results (Theorems~\ref{theorem_fourmomentHilbert} and \ref{theorem_contractionestimate} on page \pageref{theorem_fourmomentHilbert}) provide bounds on a probabilistic distance between $X$ and $Z$ (metrizing convergence in law) in terms of the first four strong moments of $X$ or alternatively in terms of so-called contractions. From these bounds, one can directly deduce quantitative and functional central limit theorems for convergence towards a Gaussian process, as well as an infinite-dimensional version of the Fourth Moment Theorem, which says that for a sequence of $K$-valued multiple Poisson integrals, convergence of the second and fourth moments implies convergence towards a Gaussian process.
It is noteworthy to observe that while the analogous diffusive statements in~\cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians} look similar to our non-diffusive ones, their proofs are rather different, for the same reason as in the finite-dimensional case: no chain rule is available in the non-diffusive case, which renders the usual integration by parts argument unfeasible. Instead, one can construct an appropriate exchangeable pair and then apply a Taylor argument in order to control the term resulting from an application of Stein's method. Compared to the finite-dimensional setting, several technical issues arise which require the use of Hilbert-space techniques. A commonality with the diffusive statements is, however, that our main results subsume all known finite-dimensional Malliavin-Stein bounds in a Poissonian context as special cases (see Remark~\ref{rmk:1} on page~\pageref{rmk:1} for details). In order to illustrate our results, we provide two applications. The first one concerns the classical approximation of a Brownian motion by a normalized Poisson process with growing intensity $\lambda$. A natural class of Hilbert spaces accommodating the sample paths of both processes are the so-called Besov-Liouville spaces. In~\cite{coutin-decreusefond:2013:steins-method-brownian}, the authors showed that convergence takes place at rate $\lambda^{-1/2}$ (as in the classical one-dimensional case). To prove this, they first transferred both processes isometrically to $\ell^2(\mathbb{N})$ and then had to go through rather tedious calculations. In contrast to this, our bounds yield the same result in just a few lines, and no isometry is necessary. As a second application we illustrate, using an edge counting statistic of a random graph, how known one-dimensional central limit theorems can be made functional with very little additional effort.
Besides the already mentioned reference~\cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians}, the work~\cite{coutin-decreusefond:2013:steins-method-brownian} is concerned with quantitative functional approximation in a Malliavin-Stein context as well. As already mentioned, the authors use a different approach which crucially depends on isometrically mapping all random elements to $\ell^2(\mathbb{N})$. In applications, the need to explicitly evaluate such an isometry can be seen as a drawback. Also, our setting seems to be more general and does not rely on ad-hoc arguments depending on the Gaussian process at hand. Other related references proving functional central limit theorems using Malliavin-Stein techniques are~\cite{kasprzak:2017:multivariate-functional-approximations, kasprzak:2020:functional-approximations-via, dobler-kasprzak:2021:steins-method-exchangeable, dobler-kasprzak-peccati:2019:functional-convergence-u-processes}. The rest of this paper is organized as follows. In Section~\ref{Section_prelim} we introduce the necessary preliminaries, followed by the main results in Section~\ref{sec:stat-main-results}. The proofs are given in Section~\ref{sec:proof-main-results}, which is followed by the two aforementioned applications in Section~\ref{sec:applications}. An appendix contains several technical lemmas required for the proofs. \section{Preliminaries} \label{Section_prelim} \subsection{Probability on Hilbert spaces} \hfill\\ \indent Let $K$ be a real separable Hilbert space, $\mathcal{B}(K)$ the Borel $\sigma$-algebra of $K$ and $\brac{\Omega,\mathcal{F},P}$ a complete probability space. A $K$-valued random variable $X$ is a measurable map from $\brac{\Omega,\mathcal{F}}$ to $\brac{K,\mathcal{B}(K)}$. Such random variables are characterized by the property that for any continuous linear functional $\phi\in K^*$, the function $\phi(X):\Omega\to\R$ is a real-valued random variable.
As usual, the distribution or law of $X$ is the push-forward probability measure $P\circ X^{-1}$ on $\brac{K,\mathcal{B}(K)}$. The set of all $K$-valued random variables is a vector space over the field of real numbers. If the Lebesgue integral $\E{\norm{X}_K}=\int_\Omega \norm{X}_KdP$ exists and is finite, then the Bochner integral $\int_\Omega XdP$ exists in $K$ and is called the expectation of $X$. Slightly abusing notation, we denote this integral by $\E{X}$ as well, and it can always be inferred from the context whether $\E{\cdot}$ refers to Lebesgue or Bochner integration with respect to $P$. For $p\geq 1$, $L^p\brac{\Omega,P}$ denotes the Banach space of all equivalence classes (under almost sure equality) of $K$-valued random variables $X$ with finite $p$-th moment, i.e., such that \begin{align*} \norm{X}_{L^p\brac{\Omega,P}}=\E{\norm{X}^p_K}^{1/p}<\infty. \end{align*} Note that for all $X\in L^p\brac{\Omega,P}$, the Bochner integral $\E{X}$ exists. In the case $X\in L^2\brac{\Omega,P}$, the covariance operator $S:K\to K$ of $X$ is defined by \begin{align*} Su=\E{\inner{X,u}_K X}. \end{align*} $S$ is a positive, self-adjoint trace-class operator that verifies the identity \begin{align*} \Tr S= \E{\norm{X}^2_K}. \end{align*} We denote by $\mathcal{S}_1(K)$ the Banach space of all trace-class operators on $K$, equipped with the norm $\norm{T}_{\mathcal{S}_1(K)}=\Tr \abs{T}$, where $\abs{T}=\sqrt{T T^{\ast}}$ and $T^{\ast}$ denotes the adjoint of $T$. The space of Hilbert-Schmidt operators on $K$ is denoted by $\operatorname{HS}(K)$, its inner product and norm by $\inner{\cdot,\cdot}_{\operatorname{HS}(K)}$ and $\norm{\cdot}_{\operatorname{HS}(K)}$, respectively. Recall that \begin{align*} \norm{\cdot}_{\operatorname{op}}\leq \norm{\cdot}_{\operatorname{HS}(K)}\leq \norm{\cdot}_{\mathcal{S}_1(K)}, \end{align*} where $\norm{\cdot}_{\operatorname{op}}$ denotes the operator norm.
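For an elementary illustration of the trace identity (a two-dimensional toy computation, not taken from the cited references), let $\{k_i\}_{i\in\N}$ be an orthonormal basis of $K$ and set $X=\xi_1 k_1+\xi_2 k_2$, where $\xi_1,\xi_2$ are centered, uncorrelated, square-integrable real random variables. Then \begin{align*} Su=\E{\inner{X,u}_K X}=\E{\xi_1^2}\inner{u,k_1}_K k_1+\E{\xi_2^2}\inner{u,k_2}_K k_2, \end{align*} so that $\Tr S=\E{\xi_1^2}+\E{\xi_2^2}=\E{\norm{X}^2_K}$, in accordance with the identity above.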
\subsection{Gaussian measures and Stein's method} \hfill\\ \indent In this section, we introduce Gaussian measures, the associated abstract Wiener spaces and the Stein characterization of Gaussian measures. The theory will be presented within a general Banach space setting. Standard references for Gaussian measures and abstract Wiener spaces are the monographs \cite{bogachev:1998:gaussian-measures,kuo:1975:gaussian-measures-banach}, while Stein's method for Gaussian measures has been developed by Shih in \cite{shih:2011:steins-method-infinite-dimensional} (see also Barbour's earlier work~\cite{barbour:1990:steins-method-diffusion} for the special case of Brownian motion). \subsubsection{Abstract Wiener spaces} Let $H$ be a real separable Hilbert space equipped with inner product $\inner{\cdot,\cdot}_H$ and let $\norm{\cdot}$ be a norm on $H$ weaker than $\norm{\cdot}_H$. Denote by $B$ the Banach space obtained via completion of $H$ with respect to $\norm{\cdot}$ and by $i$ the canonical embedding of $H$ into $B$. The triple $(i,H,B)$ defines an abstract Wiener space and was first introduced by Gross in~\cite{gross:1967:abstract-wiener-spaces}. We identify $B^*$ as a dense subspace of $H^{\ast}$ under the adjoint $i^*$ of $i$, so that we have the continuous embeddings $B^*\subseteq H\subseteq B$, where, as usual, $H$ is identified with its dual $H^{\ast}$. All of this can be summarized via the diagram \begin{align*} B^*\xrightarrow{i^*} H^*=H \xrightarrow{i} B. \end{align*} The abstract Wiener measure $p$ on $B$ is characterized as the Borel measure on $B$ satisfying \begin{align*} \int_B \exp\brac{{i\inner{x,\eta}}_{B,B^*}} p(dx)=\exp\brac{-\frac{\norm{\eta}^2_H}{2}}, \end{align*} for any $\eta\in B^*$. \subsubsection{Gaussian measures} Let $B$ be a separable Banach space, with $\mathcal{B}(B)$ its Borel $\sigma$-algebra.
A Gaussian measure $\mu$ is a probability measure on $(B,\mathcal{B}(B))$ such that every linear functional $x\in B^*$, considered as a (real-valued) random variable on $(B,\mathcal{B}(B),\mu)$, has a Gaussian distribution on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. Such a Gaussian measure is called centered (respectively non-degenerate) if the distribution of every $x\in B^*$ is centered (respectively non-degenerate). Every abstract Wiener measure is a Gaussian measure, and conversely, for every Gaussian measure $\mu$ on $B$, there exists a Hilbert space $H$ such that $(i,H,B)$ forms an abstract Wiener space. The space $H$ is known as the Cameron-Martin space. \subsubsection{Stein characterization of Gaussian measures} Let $B$ be a real separable Banach space with norm $\norm{\cdot}$. Let $Z$ be a $B$-valued random variable which induces a centered Gaussian measure $\mu_Z$ on $B$ and let $(i,H,B)$ be the associated abstract Wiener space. By $\{P_t:t\geq 0\}$ we denote the Ornstein-Uhlenbeck semi-group of $Z$. It has the Mehler representation \begin{align*} P_tf(x)=\int_B f\brac{e^{-t}x+\sqrt{1-e^{-2t}}y}\mu_Z(dy), \end{align*} provided such an integral exists. In \cite[Theorem 3.1]{shih:2011:steins-method-infinite-dimensional}, Shih proved the following Stein lemma for abstract Wiener measures. \begin{theorem} \label{theorem_steinlemma} Let $X$ be a $B$-valued random variable with distribution $\mu_X$. \begin{enumerate}[label=\roman*)] \item If $B$ is finite-dimensional, then $\mu_X=\mu_Z$ if and only if \begin{align} \label{Stein_characterization} \E{\inner{X,\nabla f(X)}_{B,B^*}-\Delta_G f(X)}=0 \end{align} for any twice-differentiable function $f$ on $B$ such that $\E{\norm{\nabla^2 f(Z)}_{\mathcal{S}_1{(H)}}}<\infty$.
\item If $B$ is infinite-dimensional, then $\mu_X=\mu_Z$ if and only if \eqref{Stein_characterization} holds for every twice $H$-differentiable function $f$ on $B$ such that $\nabla f(x)\in B^*$ for every $x\in B$, \\$\E{\norm{\nabla^2 f(Z)}_{\mathcal{S}_1(H)}}<\infty$ and $\E{\norm{\nabla f(Z)}_{B^*}^2}<\infty$. \end{enumerate} \end{theorem} The notion of $H$-derivative, also known as the Fréchet derivative along $H$, which appears in Theorem \ref{theorem_steinlemma}, was introduced by Gross in \cite{gross:1967:potential-theory-hilbert}, and we briefly recall it here for the sake of self-containedness. A function $f:U\to W$ from an open set $U$ of $B$ into a Banach space $W$ is said to be $H$-differentiable at $x\in U$ if the map $\phi(h)=f(x+h)$, $h\in H$, regarded as a function defined in a neighborhood of the origin of $H$, is Fréchet-differentiable at $0$. The $H$-derivative of $f$ at $x$ in the direction $h\in H$ is denoted by $\inner{\nabla f(x), h}_H$. The $k$-th order $H$-derivatives of $f$ at $x$ can then be constructed inductively and are denoted by $\nabla^k f(x)$, provided they exist. If $f$ is scalar-valued, $\nabla f(x)\in H^* \simeq H$ and $\nabla^2 f(x)$ is a bounded linear operator from $H$ to $H^*$ for every $x\in U$. The notation $\inner{\nabla^2 f(x)h,k}_H$ or $\nabla^2 f(x)(h,k)$ will stand for the action of the linear form $\nabla^2 f(x)(h,\cdot)$ on $k$. \noindent If $\nabla^2 f(x)$ is a trace-class operator on $H$, the Gross Laplacian $\Delta_{G}f(x)$ of $f$ at $x$ is defined as $\Delta_G f(x)=\Tr_H(\nabla^2 f(x))$. \subsubsection{Stein's equation} In view of Theorem \ref{theorem_steinlemma}, the associated Stein equation is given by \begin{align*} \inner{x,\nabla g(x)}_{B,B^*}-\Delta_G g(x)=h(x)-\E{h(Z)} \end{align*} for $x\in B$, where $h$ belongs to a suitable class of test functions.
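To fix ideas, note how this framework reduces to the classical one-dimensional Stein lemma (an elementary consistency check): for $B=H=\R$ and $Z$ standard Gaussian, one has $\nabla f=f'$, $\nabla^2 f=f''$ and $\Delta_G f=f''$, so that \eqref{Stein_characterization} reads \begin{align*} \E{Xf'(X)-f''(X)}=0. \end{align*} Substituting $g=f'$ recovers the familiar identity $\E{Xg(X)}=\E{g'(X)}$, which is well known to characterize the standard Gaussian distribution.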
In this paper, we will assume our test functions belong to $C^3_b(K)$, the class of real-valued functions on $K$ that have bounded Fréchet derivatives up to order three. This space is equipped with the norm \begin{align*}\norm{h}_{C^3_b(K)}=\sup_{j=1,2,3}\sup_{x\in K}\norm{D^jh(x)}_{K^{\otimes j}}.\end{align*} Using standard semigroup techniques, the first two authors of this work showed in \cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians} that there is a solution $g_h(x)$ for every test function $h(x)$ and that $g_h\in C^3_b(K)$ when $h\in C^3_b(K)$. Specifically, \cite[Lemma 2.4]{bourguin-campese:2020:approximation-hilbert-valued-gaussians} provides the estimates \begin{align*} \sup_{x\in K}\norm{D^jg_h(x)}_{K^{\otimes j}}\leq \frac{1}{j}\norm{h}_{C^j_b(K)} \end{align*} and \begin{align*} \norm{g_h}_{C^3_b(K)}\leq \norm{h}_{C^3_b(K)}. \end{align*} Thus, using the probability distance \begin{align*} d_3(X_1,X_2)=\sup_{\substack{h\in C^3_b(K)\\\norm{h}_{C^3_b(K)}\leq 1}}\abs{\E{h(X_1)-h(X_2)}}, \end{align*} Stein's equation implies that \begin{align*} d_3(X,Z)= \sup_{\substack{h\in C^3_b(K)\\\norm{h}_{C^3_b(K)}\leq 1}}\abs{\E{\Delta_G g_h(X)-\inner{X,Dg_h(X)}_K}}. \end{align*} \subsection{Dirichlet structure} \hfill\\ \indent This section contains an overview of Dirichlet structures, which is the framework we will be working within alongside Stein's method. We start by recalling the definition and properties of a Dirichlet structure on $L^2(\Omega;\R)$ (full details can be found in the monographs \cite{bakry-gentil-ledoux:2014:analysis-geometry-markov, bouleau-hirsch:1991:dirichlet-forms-analysis}) before focusing on an extension to $L^2(\Omega;K)$. 
Given a probability space $\brac{\Omega,\mathcal{F},P}$, a Dirichlet structure $\brac{\mathbb{D},\mathcal{E}}$ on $L^2(\Omega;\R)$ with associated carré du champ operator $\Gamma$ consists of a Dirichlet domain $\mathbb{D}$, which is a dense subset of $L^2(\Omega;\R)$, and a carré du champ operator $\Gamma:\mathbb{D}\times \mathbb{D}\to L^1(\Omega,\R)$ characterized by the following properties. \begin{itemize} \item[-] $\Gamma$ is bilinear, symmetric ($\Gamma(F,G)=\Gamma(G,F)$) and positive ($\Gamma(F,F)\geq 0$). \item[-] the induced positive linear form $F\to \mathcal{E}(F,F)$, where $\mathcal{E}(F,G)=\frac{1}{2}\E{\Gamma(F,G)}$, is closed in $L^2(\Omega;\R)$, i.e., $\mathbb{D}$ is complete when equipped with the norm \begin{align*} \norm{\cdot}^2_{\mathbb{D}}=\norm{\cdot}^2_{L^2(\Omega;\R)}+\mathcal{E}(\cdot). \end{align*} \end{itemize} \begin{remark} We do not assume that $\Gamma$ satisfies the so-called diffusion property -- see \cite[Definition 3.1.3]{bakry-gentil-ledoux:2014:analysis-geometry-markov} -- as opposed to what is being done in \cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians}. \end{remark} Here and in the following, $\E{\cdot}$ denotes the expectation on $\brac{\Omega,\mathcal{F}}$ with respect to $P$. The linear form $\mathcal{E}$ is known as a Dirichlet form and for brevity we write $\mathcal{E}(F)$ for $\mathcal{E}(F,F)$. Every Dirichlet form gives rise to a strongly continuous semigroup $\left\{P_t \right\}_{t\geq 0}$ on $L^2(\Omega;\R)$ and an associated symmetric Markov generator $-L$, defined on a dense subset $\operatorname{dom}(-L)\subseteq \mathbb{D}$. There are two important relations between $\Gamma$ and $L$, the first being the integration by parts formula \begin{align*} \E{\Gamma(F,G)}=-\E{FLG}=-\E{GLF}, \end{align*} which is valid for $F,G\in\mathbb{D}$.
The second relation is \begin{align*} \Gamma(F,G)=\frac{1}{2}\brac{L(FG)-GLF-FLG}, \end{align*} which holds for all $F,G\in \operatorname{dom}(L)$ such that $FG\in \operatorname{dom}(L)$. If $-L$ is diagonalizable with spectrum $\N_{0}$ (the set of natural numbers including $0$) and $F_q$ is an eigenfunction corresponding to the eigenvalue $q$, then $-L F_q=qF_q$. We can define a pseudo-inverse $-L^{-1}$ by $-L^{-1}F_q=\frac{1}{q}F_q$ when $q\neq 0$ and $0$ otherwise. The definition of $-L$ and $-L^{-1}$ for a general $F=\sum_{q\in \N_{0}}F_q$ follows naturally via linearity. Alternatively, $L$ can be defined as the generator of the heat semigroup $\left\{P_t \right\}_{t\geq 0}$ (on $\operatorname{dom}(L)$), which satisfies \begin{align*} \partial_t P_t=LP_t=P_tL. \end{align*} \noindent Next we present what is meant by a Dirichlet structure on $L^2(\Omega;K)$. Let us adopt the notations $\widetilde{\mathbb{D}}, \widetilde{\mathcal{E}}, \widetilde{\Gamma}, \widetilde{L}, \widetilde{P_t}$ for the Dirichlet domain, Dirichlet form, carré du champ operator, generator and semigroup associated with elements in $L^2(\Omega;\R)$. Meanwhile, ${\mathbb{D}}, \mathcal{E}, {\Gamma}, {L}, {P_t}$ are reserved for the counterpart objects associated with elements in $L^2(\Omega;K)$. Given a separable Hilbert space $K$, one has that $L^2(\Omega;K)$ is isomorphic to $L^2(\Omega;\R)\otimes K$. The Dirichlet structure on $L^2(\Omega;\R)$ can therefore be extended to $L^2(\Omega;K)$ via a tensorization procedure. Let $\N_{0}$ be the spectrum of $-\widetilde{L}$ and $\{ k_i\}_{i\in\N}$ an orthonormal basis of $K$. $\mathcal{A}$ will be the set of all functions $X$ taking the form \begin{align*} X=\sum_{q,i\in I}F_{q,i}\otimes k_i \end{align*} such that $I\subseteq \N^2$ is a finite set and $F_{q,i}\in \operatorname{ker}\brac{\widetilde{L}+qI}$.
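On the scalar level, the spectral decomposition makes the pseudo-inverse completely explicit; we record the short computation (a routine verification, included for convenience): if $F=\sum_{q\in \N_{0}}F_q$ with $-\widetilde{L}F_q=qF_q$, then $\widetilde{L}^{-1}F=-\sum_{q\geq 1}\frac{1}{q}F_q$ and hence \begin{align*} \widetilde{L}\widetilde{L}^{-1}F=\sum_{q\geq 1}F_q=F-F_0, \end{align*} where $F_0$ denotes the component of $F$ in the kernel of $\widetilde{L}$; when this kernel consists of the constants, as on the Poisson space considered below, $F_0=\E{F}$.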
Given another element $Y=\sum_{p,j\in J}G_{p,j}\otimes k_j$ in $\mathcal{A}$, we can define $L,\Gamma,P_t,\mathcal{E}$ for $t\geq 0$ via \begin{equation*} \begin{cases} \displaystyle LX=L\sum_{q,i\in I}F_{q,i}\otimes k_i=\sum_{q,i\in I}\brac{\widetilde{L}F_{q,i}}\otimes k_i\\ \displaystyle P_tX=P_t\sum_{q,i\in I}F_{q,i}\otimes k_i=\sum_{q,i\in I}\brac{\widetilde{P_t}F_{q,i}}\otimes k_i\\ \displaystyle \Gamma(X,Y)=\frac{1}{2}\sum_{q,i\in I}\sum_{p,j\in J}\widetilde{\Gamma}(F_{q,i},G_{p,j})\otimes\brac{k_i\otimes k_j+k_j\otimes k_i} \end{cases} \end{equation*} and \begin{align*} \mathcal{E}(X,Y)=\E{\Tr \Gamma(X,Y)}. \end{align*} In the last line, we identify $\Gamma(X,Y)$ as an element of $L^2(\Omega;\R)\otimes K\otimes K\simeq L^2(\Omega,\mathcal{L}(K,K))$ via the action \begin{align*} \Gamma(X,Y)u=\frac{1}{2}\sum_{q,i\in I}\sum_{p,j\in J}\widetilde{\Gamma}(F_{q,i},G_{p,j})\otimes\brac{\inner{k_i,u}_K\otimes k_j+\inner{k_j,u}_K\otimes k_i}. \end{align*} Since $\mathcal{A}$ is clearly dense in $L^2(\Omega;K)$, these operators can be extended to appropriate domains in $L^2(\Omega;K)$. This has been verified in \cite[Proposition 2.5 and Theorem 2.6]{bourguin-campese:2020:approximation-hilbert-valued-gaussians} (excluding the diffusion identity), which we restate below for the reader's convenience.
\begin{prop}[Proposition 2.5 in \cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians}] The operators $L$, $L^{-1}$, $\mathcal{E}$ and $\Gamma$ can be extended to $\operatorname{dom}(L)$, $\operatorname{dom}(L^{-1})$ and $\operatorname{dom}(\Gamma)=\operatorname{dom}(\mathcal{E})=\mathbb{D}\times\mathbb{D}$, respectively, given by \begin{equation*} \operatorname{dom}(L)=\Big\{X\in L^2(\Omega;K):\sum_{q\in \N_{0}} q^2 \widetilde{J}_{q}\brac{\norm{X}^2_K}<\infty \Big\}, \end{equation*} $\operatorname{dom}(L^{-1})=L^2(\Omega;K)$ and \begin{align*} \mathbb{D}&=\Big\{X\in L^2(\Omega;K):\sum_{q\in \N_{0}} q \widetilde{J}_{q}\brac{\norm{X}^2_K}<\infty \Big\}, \end{align*} where $\widetilde{J}_{q}(\cdot)$ denotes the projection onto $\operatorname{ker}\brac{\widetilde{L}+qI}\subseteq L^2(\Omega;\R)$. In particular, one has \begin{align*} \mathcal{A}\subseteq \operatorname{dom}(L)\subseteq \mathbb{D}\subseteq \operatorname{dom}(L^{-1})=L^2(\Omega;K),\end{align*} and all inclusions are dense. \end{prop} \begin{theorem}[Theorem 2.6 in \cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians}] For a Dirichlet structure $(\mathbb{D},\Gamma)$ on $L^2(\Omega;K)$, the following is true. \begin{enumerate}[label=(\roman*)] \item $\Gamma$ is bilinear, almost surely positive, symmetric and self-adjoint with respect to $\inner{\cdot,\cdot}_K$. \item The Dirichlet domain $\mathbb{D}$ equipped with the norm \begin{align*} \norm{X}_{\mathbb{D}}^2=\norm{X}^2_{L^2(\Omega;K)}+\norm{\Gamma(X,X)}_{L^1(\Omega;\mathcal{S}_1)} \end{align*} is complete, so that $\Gamma$ is closed. \item The generator $-L$ acting on $L^2(\Omega;K)$ is positive, symmetric, densely defined and has the same spectrum as $-\widetilde{L}$. \item There is a compact pseudo-inverse $L^{-1}$ of $L$ such that \begin{align*} LL^{-1}X=X-\E{X} \end{align*} for all $X\in L^2(\Omega;K)$, where the expression on the right is a Bochner integral.
\item The integration by parts formula \begin{align*} \E{\Tr\Gamma(X,Y)}=-\E{\inner{LX,Y}_K}=-\E{\inner{X,LY}_K} \end{align*} is satisfied for all $X,Y\in \operatorname{dom}(-L)$. \item The operators $\Gamma,L,\widetilde{L}$ are related via \begin{align} \Tr\Gamma(X,Y)=\frac{1}{2}\brac{\widetilde{L}\inner{X,Y}_K-\inner{LX,Y}_K-\inner{X,LY}_K} \end{align} for all $X,Y\in \operatorname{dom}(-L)$. \item The identity \begin{align*} \inner{\Gamma(X,Y)u,v}_K=\frac{1}{2}\brac{\widetilde{\Gamma}\brac{\inner{X,u}_K,\inner{Y,v}_K}+\widetilde{\Gamma}\brac{\inner{Y,u}_K,\inner{X,v}_K}}, \end{align*} is valid for all $X,Y\in\mathbb{D}$ and $u,v\in K$. \end{enumerate} \end{theorem} \subsection{Analysis on Poisson space}\label{subsection_Poissonspace} \hfill\\ \indent So far we have been working with a general probability space. In this section we will get more specific and describe the Poisson space on which most of our objects of interest are defined. We direct the reader to the references \cite{last-penrose:2018:lectures-poisson-process,nualart-nualart:2018:introduction-malliavin-calculus} for an extensive treatment of this topic. Let $(\mathcal{Z},\mathscr{L},\mu)$ be a measure space such that $\mu$ is $\sigma$-finite. A Poisson random measure $\eta$ on $(\mathcal{Z},\mathscr{L})$ with control measure $\mu$ is a family of random variables defined on some probability space $(\Omega,\mathcal{F},P)$ that satisfies \begin{itemize} \item[-] $\eta(B)$ is Poisson distributed with mean $\mu(B)$, \item[-] $\eta(B_1), \eta(B_2)$ are independent when $B_1\cap B_2=\emptyset$. \end{itemize} If such a Poisson random measure exists, the associated probability space $(\Omega,\mathcal{F},P)$ is called a Poisson space. Next, let $\widehat{\eta}$ be the compensated Poisson random measure, that is, $\widehat{\eta}(B)=\eta(B)-\mu(B)$ whenever $\mu(B)$ is finite. Denote by $L^2_s(\mu^q)$ the set of all symmetric functions in $L^2(\mu^q)$.
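A guiding example (the classical special case behind the Brownian approximation discussed in the introduction): take $\mathcal{Z}=[0,\infty)$ equipped with its Borel $\sigma$-algebra and $\mu(dt)=\lambda\, dt$ for some intensity $\lambda>0$. Then \begin{align*} N_t=\eta([0,t]),\qquad \widehat{\eta}([0,t])=N_t-\lambda t, \end{align*} so that $\brac{N_t}_{t\geq 0}$ is a standard Poisson process with intensity $\lambda$ and the compensated measure subtracts its mean $\lambda t$.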
For $f\in L^2_s(\mu^q)$, $I_q^\eta(f)$ denotes the multiple (Wiener-It\^o) integral of $f$ of order $q$. Unless we are simultaneously dealing with two different Poisson random measures, $I_q(\cdot)$ will be understood as an integral with respect to $\widehat{\eta}$. Multiple integrals have the following isometry property: for any integers $q,p \geq 1$, \begin{align*} \E{I_{q}(f) I_{p}(g)} = \mathds{1}_{\left\{ q=p \right\}} q! \langle \tilde{f},\tilde{g}\rangle _{L^2(\mu^q)}, \end{align*} where $\tilde{f} $ denotes the symmetrization of $f$, and we recall that $I_q(f) = I_q ( \tilde{f})$. The contraction of two kernels $f\in L^2_s(\mu^q)$ and $g\in L^2_s(\mu^p)$, denoted by $f\star^l_r g$ for $0\leq l\leq r\leq q\wedge p$, is obtained by identifying $r$ variables of $f$ and $g$ and then integrating out $l$ of them: \begin{align*} &f\star^l_r g\brac{y_{1},\ldots,y_{r-l},y_{r-l+1},\ldots,y_{q-l},z_1,\ldots,z_{p-r}}\\ &=\int_{\mathcal{Z}^l}f(x_1,\ldots,x_l,y_{1},\ldots,y_{r-l},y_{r-l+1},\ldots,y_{q-l})g(x_1,\ldots,x_l,y_{1},\ldots,y_{r-l},z_1,\ldots,z_{p-r})\\&\hspace*{31em} d\mu\brac{x_1,\ldots,x_l} \end{align*} provided the integral exists in $L^2(\mu^{q+p-r-l})$. Contractions are central objects for analysis on Poisson space as they appear in the product formula for multiple integrals. There are two ways of stating this product formula on Poisson space, namely \cite[Proposition 6.1]{last:2016:stochastic-analysis-poisson} and \cite[Lemma 2.4]{dobler-peccati:2018:fourth-moment-theorem}, each with different assumptions. We state both below. \begin{lemma}[Proposition 6.1 in \cite{last:2016:stochastic-analysis-poisson}] Let $f\in L^2_s(\mu^q),g\in L^2_s(\mu^p)$ and assume that $f\star^l_r g\in L^2(\mu^{q+p-r-l})$ for all $0\leq l\leq r\leq q\wedge p$. Then, \begin{align} \label{productformulaPoisson} I_q(f)I_p(g)=\sum_{r=0}^{q\wedge p} r!{q\choose r}{p\choose r}\sum_{l=0}^r{r\choose l}I_{q+p-r-l}(f\star^l_r g). 
\end{align} \end{lemma} \begin{lemma}[Lemma 2.4 in \cite{dobler-peccati:2018:fourth-moment-theorem}] Let $f\in L^2_s(\mu^q),g\in L^2_s(\mu^p)$ and assume that $F=I_q(f),G=I_p(g)\in L^4(P)$. Then \begin{align*} FG=\sum_{k=0}^{q+p-1}\widetilde{J}_k(FG)+I_{q+p}(f\widetilde{\otimes}g). \end{align*} \end{lemma} The collection of all multiple integrals of order $q$ forms the so-called Poisson chaos of order $q$ in $L^2(\Omega;\R)$, which is denoted by $\mathcal{H}_q$. Since $\E{I_q(f)I_p(g)}=0$ for $q\neq p$, we have the orthogonal decomposition \begin{align*} L^2(\Omega,\mathcal{F},P)=\bigoplus_{q=0}^\infty \mathcal{H}_q, \end{align*} where $\mathcal{H}_0$ denotes the set of constant random variables. Similarly to what we did for Dirichlet structures, we define $\mathcal{H}_q(K)$ (the $K$-valued Poisson chaos of order $q$) as the closure of $\mathcal{H}_q\otimes K$ in $L^2(\Omega;K)$. Then, \begin{align*} L^2(\Omega;K)=\bigoplus_{q=0}^\infty \mathcal{H}_q(K). \end{align*} Consequently, every $X\in L^2(\Omega;K)$ can be decomposed as \begin{align*} X=\sum_{q\in \N_{0}}F_q=\sum_{\substack{i\in\N,\\q\in \N_{0}}}\inner{F_q,k_i}_K k_i=\sum_{\substack{i\in\N,\\q\in \N_{0}}}F_{q,i}k_i, \end{align*} where $F_q\in\mathcal{H}_q(K)$, $F_{q,i}\in\mathcal{H}_q$ with $F_{q,i}=I_q(f_{q,i})$ for some $f_{q,i}\in L^2_s(\mu^q)$. \subsection{An exchangeable pair on Poisson space} \label{constructionofthepair} Another tool that we will use alongside Stein's method is the method of exchangeable pairs, which we describe here. \noindent By \cite[Corollary 3.7]{last-penrose:2018:lectures-poisson-process}, since $\eta$ is a Poisson random measure on $(\mathcal{Z},\mathscr{L},\mu)$, we can represent $\eta$ as a proper Poisson point process \begin{align*} \eta=\sum_{n=1}^\kappa \delta_{X_n}, \end{align*} where $X_n$ and $\kappa$ are random elements of $\mathcal{Z}$ and $\N\cup \{0,\infty\}$, respectively. 
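As a sanity check of the product formula \eqref{productformulaPoisson} in the simplest case $q=p=1$, one can verify pathwise, on a finite space, the identity $I_1(f)I_1(g)=I_2(f\widetilde{\otimes}g)+I_1(fg)+\langle f,g\rangle_{L^2(\mu)}$. The sketch below uses the standard discrete representations $I_1(f)=\sum_i f_i(N_i-\mu_i)$ and $I_2(h)=\sum_{i,j}h_{ij}(N_i-\mu_i)(N_j-\mu_j)-\sum_i h_{ii}N_i$, where $N_i$ is the number of points at atom $i$; the space and the numerical values are illustrative assumptions, not part of the construction above.

```python
import random

mu = [0.5, 1.2, 0.3]          # hypothetical control measure on Z = {0, 1, 2}
f = [1.0, -2.0, 0.5]          # two kernels in L^2(mu)
g = [0.7, 0.4, -1.0]

def I1(h, N):
    # first-order integral: sum_i h_i (N_i - mu_i)
    return sum(h[i] * (N[i] - mu[i]) for i in range(len(mu)))

def I2(h, N):
    # second-order integral of a symmetric kernel h (discrete representation)
    s = sum(h[i][j] * (N[i] - mu[i]) * (N[j] - mu[j])
            for i in range(len(mu)) for j in range(len(mu)))
    return s - sum(h[i][i] * N[i] for i in range(len(mu)))

rng = random.Random(0)
max_gap = 0.0
for _ in range(100):
    N = [rng.randint(0, 6) for _ in mu]   # any point configuration whatsoever
    tensor = [[(f[i] * g[j] + f[j] * g[i]) / 2 for j in range(3)]
              for i in range(3)]          # symmetrized tensor product
    lhs = I1(f, N) * I1(g, N)
    rhs = (I2(tensor, N) + I1([f[i] * g[i] for i in range(3)], N)
           + sum(f[i] * g[i] * mu[i] for i in range(3)))
    max_gap = max(max_gap, abs(lhs - rhs))
```

The identity holds exactly for every configuration, not only in expectation, which is what the vanishing gap illustrates.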
It is well known that any $F\in L^2\brac{\Omega;\R}$ has the representation $F=f(\eta)$ for some measurable function $f\colon\mathbf{N}\to\R$, where $\mathbf{N}$ denotes the space of $\sigma$-finite point measures on $\mathcal{Z}$, and $f$ is uniquely defined up to null sets (see \cite{last-peccati-schulte:2016:normal-approximation-poisson}). In \cite[Section 3.1]{dobler-vidotto-zheng:2018:fourth-moment-theorems}, via continuous thinning of $\eta$, the authors construct a family of new Poisson point processes $(\eta^t)_{t\geq 0}$ and from there derive a path-wise representation for the semigroup $\widetilde{P}_t$ associated with $\eta$. Specifically, the action of $\widetilde{P}_t$ can be described via the Mehler formula \begin{align*} \widetilde{P}_t f(\eta)=\E{f(\eta^t)|\eta}. \end{align*} Building on this result, they made the key observation that for every $t\geq 0$, $(\eta,\eta^t)$ is an exchangeable pair (i.e., $(\eta,\eta^t)$ and $(\eta^t,\eta)$ have the same distribution) and that, as a result, for any kernel $g\in L^2_s(\mu^p)$, the pair $\brac{I^\eta_p(g),I^{\eta^t}_p(g)}$ is also exchangeable. \section{Statement of main results} \label{sec:stat-main-results} In what follows, let $K$ be a separable Hilbert space with orthonormal basis $\{k_i \}_{i\in\N}$, and let $X$ denote a $K$-valued centered random variable in $L^2 \left( \Omega;K \right)$ with finite chaos decomposition \begin{equation} \label{chaosdecompofX} X = \sum_{q=1}^N F_q, \end{equation} where each $F_q$ belongs to the $q$-th $K$-valued Poisson chaos. Furthermore, assume that $X$ has covariance operator $S$, which in turn decomposes as \begin{equation*} S = \sum_{q=1}^N S_q, \end{equation*} where, for each $1 \leq q \leq N$, $S_q$ is the covariance operator of $F_q$. Finally, we will denote by $f_{q,i} \in \mathfrak{H}^{\otimes q}$ the kernel of $F_{q,i} = \left\langle F_q,k_i \right\rangle_K = I_q \left( f_{q,i} \right)$. 
Our first main result provides a quantitative bound on the distance between the law of $X$ and that of a centered $K$-valued Gaussian random variable $Z$ in terms of the first four moments of $X$. \begin{theorem} \label{theorem_fourmomentHilbert} Assume $X$ is a $K$-valued random variable as described above with finite fourth moment, i.e., $\E{\norm{X}_K^4}<\infty$. Then, letting $Z$ be a centered Gaussian random variable on $K$ with covariance operator $S'$, the following estimate holds: \begin{align*} d_3(X,Z) &\leq\frac{1}{2}\norm{S-S'}_{\operatorname{HS}} \\ &\quad+\sum_{1\leq q\leq N}\frac{2q-1}{4q}\sqrt{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}\\ &\quad +\sum_{1\leq p\neq q\leq N}\frac{p+q-1}{4p}\sqrt{\E{\norm{F_p}^2_K\norm{F_q}^2_K}-\E{\norm{F_p}^2_K}\E{\norm{F_q}^2_K}}\\ & \quad +\sqrt{N\E{\norm{X}^2_K}}\sqrt{\sum_{1\leq q\leq N} 2^{3q-1}(4q-3)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}} \\ &\leq \frac{1}{2}\norm{S-S'}_{\operatorname{HS}} \\ & \quad +\brac{\frac{N(2N-1)}{4}+\sqrt{2^{3N-1}N(4N-3)\E{\norm{X}^2_K}}} \\ &\qquad \qquad \qquad \sqrt{\E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}}. \end{align*} \end{theorem} \begin{remark} \label{rmk:1} Note that Theorem \ref{theorem_fourmomentHilbert} is an infinite-dimensional version of the fourth moment theorems on the Poisson space obtained in \cite[Theorem 1.2, Theorem 1.7]{dobler-vidotto-zheng:2018:fourth-moment-theorems} and \cite[Theorem 1.3]{dobler-peccati:2018:fourth-moment-theorem}. In particular, the aforementioned results are special cases of Theorem \ref{theorem_fourmomentHilbert} obtained by setting $K=\R^d$ for a positive integer $d$. \end{remark} \begin{remark} Observe that Theorem \ref{theorem_fourmomentHilbert} can be viewed as a Poissonian counterpart of \cite[Theorem 3.10]{bourguin-campese:2020:approximation-hilbert-valued-gaussians} in the context of a non-diffusive chaos structure. 
The fact that we are working with a non-diffusive structure (where no chain rule is available for the Gamma calculus introduced in Section \ref{Section_prelim}) forces us to use techniques different from the ones used in \cite{bourguin-campese:2020:approximation-hilbert-valued-gaussians} in order to obtain the above quantitative bounds, making these results comparable in nature, but very different in their methodologies of proof. \end{remark} Whenever $X$ belongs to a single chaos, we can reformulate Theorem \ref{theorem_fourmomentHilbert} in a more compact form: \begin{corollary}[Quantitative Fourth Moment Theorem] \label{corollary_fourmomentHilbert} Let the notation of Theorem \ref{theorem_fourmomentHilbert} prevail. When $X$ belongs to a single chaos, i.e., $X\in \mathcal{H}_q(K)$ for some $q \geq 1$, one has \begin{align*} d_3(X,Z)\leq &\frac{1}{2}\norm{S-S'}_{\operatorname{HS}}\\&+\brac{\frac{2q-1}{4q}+\sqrt{2^{3q-1}(4q-3)q\E{\norm{X}^2_K}} }\sqrt{ \E{\norm{X}_K^4}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}}. \end{align*} \end{corollary} As $d_3$ metrizes convergence in law, the above corollary in particular shows that within a single non-diffusive chaos, convergence of the second and fourth strong moments implies convergence towards a (Hilbert-valued) Gaussian. A particularly useful formulation of the above moment bounds for applications uses contraction operators acting on the kernels of the multiple integrals appearing in the chaos decomposition \eqref{chaosdecompofX} of $X$. Contractions, the analytic quantities defined in Section \ref{Section_prelim}, typically allow for much simpler computations than dealing directly with the first four moments. 
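On a finite space the contraction operation itself is elementary to implement, which is one reason contraction norms are convenient in practice. The following sketch implements $f\star^l_r g$ by direct summation, following the definition in Subsection \ref{subsection_Poissonspace}, and checks the sanity identity $f\star^{q}_{q} f=\norm{f}^2_{L^2(\mu^q)}$ (all variables identified and integrated out). The three-point space, the weights and the kernel are illustrative assumptions.

```python
import itertools

Z = range(3)                         # a hypothetical three-point "space"
mu = [0.7, 1.1, 0.4]                 # control measure weights

def w(xs):
    # mu^{|xs|} weight of a tuple of atoms
    prod = 1.0
    for x in xs:
        prod *= mu[x]
    return prod

def contract(f, g, q, p, r, l):
    """Discrete f star_r^l g: identify r variables, integrate out l of them.
    f, g map index tuples (length q resp. p) to reals.  The result is
    indexed by (y_1..y_{q-l}, z_1..z_{p-r}), with y_1..y_{r-l} shared."""
    out = {}
    for y in itertools.product(Z, repeat=q - l):
        for z in itertools.product(Z, repeat=p - r):
            s = 0.0
            for x in itertools.product(Z, repeat=l):
                s += w(x) * f[x + y] * g[x + y[:r - l] + z]
            out[y + z] = s
    return out

def norm2(f, q):
    # squared L^2(mu^q) norm of a discrete kernel
    return sum(w(idx) * v * v for idx, v in f.items())

q = 2
f = {idx: (idx[0] + 1) * (idx[1] + 1) for idx in itertools.product(Z, repeat=q)}
full = contract(f, f, q, q, q, q)[()]   # r = l = q: a single scalar remains
```

With $r=l=q$ the contraction collapses to the inner product of the kernel with itself, matching the squared norm computed directly.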
Some examples of previous works that use contraction norms to obtain quantitative limit theorems for Poisson random variables include \cite{lachieze-rey-peccati:2013:fine-gaussian-fluctuations,lachieze-rey-peccati:2013:fine-gaussian-fluctuations1,reitzner-schulte:2013:central-limit-theorems}. Our second main result is the following contraction bound. \begin{theorem} \label{theorem_contractionestimate} Let the notation and setup of Theorem \ref{theorem_fourmomentHilbert} prevail. Moreover, let $\mathfrak{H}=L^2(\mathcal{Z},\mu)$, where $(\mathcal{Z},\mathscr{L},\mu)$ is the $\sigma$-finite measure space described in Subsection \ref{subsection_Poissonspace}. Then it holds that \begin{align*} d_3(X,Z)\leq \brac{\frac{N(2N-1)}{4}+\sqrt{2^{3N-2}N(4N-3)\E{\norm{X}^2_K}}}\sqrt{\beta}+\frac{1}{2}\norm{S-S'}_{\operatorname{HS}}, \end{align*} where the quantity $\beta$ is given (in terms of contraction norms) by \begin{align*} \beta&=\sum_{\substack{1 \leq p,q \leq N\\q\neq p}}a_{p,q}(p\wedge q)\norm{f_q \star^{q\wedge p}_{q\wedge p} f_p}^2_{\mathfrak{H}^{\otimes\abs{q-p}} \otimes K^{\otimes 2}}\\ &\quad +\sum_{1 \leq p,q \leq N}\sum_{r=1}^{q\wedge p -1}b_{p,q}(r) \norm{f_{q} \star^r_r f_{p}}^2_{\mathfrak{H}^{\otimes (q+p-2r)}\otimes K^{\otimes 2}}\\ & \quad +\sum_{1 \leq p,q \leq N}\sum_{(r,s,l,m)\in I}c_{p,q,l,m}(r,s)\norm{{f_{q}{\star}^l_rf_{p}}}_{\mathfrak{H}^{\otimes (q+p-r-l)}\otimes K^{\otimes 2}}\norm{{f_{q} {\star}^m_sf_{p}}}_{\mathfrak{H}^{\otimes (q+p-r-l)}\otimes K^{\otimes 2}}. \end{align*} Here, the combinatorial coefficients are given by \begin{equation*} \begin{cases} \displaystyle a_{p,q}(r) = p!q! \binom{q}{r}\binom{p}{r} + r!^2 \binom{q}{r}^2 \binom{p}{r}^2 \abs{p-q}! \\ \displaystyle b_{p,q}(r) = p!q! \binom{q}{r}\binom{p}{r} \\ \displaystyle c_{p,q,l,m}(r,s) = r!s! \binom{q}{r}\binom{q}{s}\binom{p}{r}\binom{p}{s}\binom{r}{l}\binom{s}{m}(p+q-r-l)! 
\end{cases}, \end{equation*} and the index set $I$ is defined by \begin{align*}I=\{(r,s,l,m)\in \N^4\colon & 0\leq r,s\leq q\wedge p,\ 0\leq l\leq r,\ 0\leq m\leq s,\\ & r+l=s+m,\ (r,s,l,m)\notin \{(0,0,0,0),(q\wedge p,q\wedge p,q\wedge p,q\wedge p) \}\}.\end{align*} \end{theorem} \begin{example} \label{remark_contraction_order2} If $X$ is a sum of elements of the first two chaoses, i.e., $X=I_1(f_1)+I_2(f_2)$, Theorem \ref{theorem_contractionestimate} requires the contraction norms $\norm{f_{1}\star^1_1 f_{2}}_{\mathfrak{H}\otimes K^{\otimes 2}}$, $\norm{f_{2}\star^1_1 f_2}_{\mathfrak{H}^{\otimes 2}\otimes K^{\otimes 2}}$, $\norm{f_{1}\star^0_1 f_{2}}_{\mathfrak{H}^{\otimes 2}\otimes K^{\otimes 2}}$, $\norm{f_{2}\star^0_2 f_{2}}_{\mathfrak{H}^{\otimes 2}\otimes K^{\otimes 2}}$, $\norm{f_{2}\star^1_2 f_{2}}_{\mathfrak{H}\otimes K^{\otimes 2}}$ and $\norm{f_{1}\star^0_1 f_{1}}_{\mathfrak{H}\otimes K^{\otimes 2}}$ to converge to $0$ in order to obtain convergence towards a Gaussian law. \end{example} \begin{example} Let $\mu$ be a $\sigma$-finite measure on some measure space. By setting $K=\R, \mathfrak{H}=L^2(\mu)$ and $X=I_p(f)$ for some $p\geq 2$ in Theorem \ref{theorem_contractionestimate}, we get a result comparable to \cite[Theorem 5.1]{peccati-sole-taqqu-ea:2010:steins-method-normal} and \cite[Theorem 2]{peccati-taqqu:2008:central-limit-theorems}. For instance, whenever $X=I_2(f)$, Theorem \ref{theorem_contractionestimate} and \cite[Example 5.2]{peccati-sole-taqqu-ea:2010:steins-method-normal} both state that normal convergence happens if $\norm{f\star^1_1 f}_{L^2(\mu^2)}$, $\norm{f}_{L^4(\mu^2)}$ and $\norm{f\star^1_2 f}_{L^2(\mu)}$ converge to $0$, keeping in mind that $\norm{f}^2_{L^4(\mu^2)}=\norm{f\star^0_2 f}_{L^2(\mu^2)}$, and $\norm{f\star^0_1 f}_{L^2(\mu^3)}=\norm{f\star^1_2 f}_{L^2(\mu)}$. 
Another example is \cite[Example 5.3]{peccati-sole-taqqu-ea:2010:steins-method-normal}, which states that $X=I_3(g)$ converges to a Gaussian distribution if $\norm{g}^2_{L^4(\mu^3)}$, $\norm{g\star^1_1 g}_{L^2(\mu^4)}$, $\norm{g\star^1_2 g}_{L^2(\mu^3)}$, $\norm{g\star^1_3 g}_{L^2(\mu^2)}$ and $\norm{g\star^2_3 g}_{L^2(\mu)}$ all converge to $0$, which is the same condition as the one suggested by Theorem \ref{theorem_contractionestimate}. Further, we would like to mention \cite{eichelsbacher-thale:2014:new-berry-esseen-bounds,lachieze-rey-peccati:2013:fine-gaussian-fluctuations,lachieze-rey-peccati:2013:fine-gaussian-fluctuations1}, which also offer contraction bounds for normal approximation on the Poisson space. \end{example} \section{Proof of main results} \label{sec:proof-main-results} We begin with the proof of Theorem \ref{theorem_fourmomentHilbert}, which uses the method of exchangeable pairs developed in Section \ref{Section_prelim}. \subsection{Proof of Theorem \ref{theorem_fourmomentHilbert}} \label{Section_prooffourmomentHilbert} Let $G$ be a Gaussian random variable on $K$ with the same covariance operator as $X$, i.e., $G$ has covariance operator $S$. Similarly to \cite[Corollary 3.3]{bourguin-campese:2020:approximation-hilbert-valued-gaussians}, it holds that \begin{align*} d_3\brac{G,Z}\leq \frac{1}{2}\norm{S-S'}_{\operatorname{HS}}. \end{align*} Therefore, it suffices to derive the desired moment bound for $d_3\brac{X,G}$, which yields the first estimate in Theorem \ref{theorem_fourmomentHilbert} via the triangle inequality \begin{align*} d_3\brac{X,Z}\leq d_3\brac{X,G}+d_3\brac{G,Z}. \end{align*} In Subsection \ref{constructionofthepair}, we constructed an exchangeable pair of the form $(F_q,F_q^t)$ based on an element $F_q$ of a fixed $K$-valued chaos, where $q$ denotes the order of the Poisson chaos. Recall that $X$ has the chaos decomposition \eqref{chaosdecompofX}. 
It follows that, for any $t \geq 0$, if we define $X^t$ as \begin{equation*} X^t=\sum_{q=1}^{N}F_q^t, \end{equation*} then the pair $(X,X^t)$ is also exchangeable, and we can apply Taylor's theorem to get \begin{align*} 0&=\lim_{t\to 0}\frac{1}{2t}\E{\inner{-L^{-1}(X^t-X),Dg(X^t)+Dg(X)}_K}\\ &= \lim_{t\to 0}\E{\frac{1}{2t}\inner{-L^{-1}(X^t-X),Dg(X^t)-Dg(X)}_K+\frac{1}{t}\inner{-L^{-1}(X^t-X),Dg(X)}_K}\\ &=\lim_{t\to 0}\E{\frac{1}{2t}\inner{-L^{-1}(X^t-X),D^2g(X)(X^t-X)+r}_K+\frac{1}{t}\inner{-L^{-1}(X^t-X),Dg(X)}_K}, \end{align*} where $r$ denotes the remainder term. Let $R(t)=\E{\frac{1}{2t}\inner{-L^{-1}(X^t-X),r}_K}$. Note that $\E{\Delta_G g(X)}=\sum_{1\leq q\leq N}\E{\Tr_K\brac{D^2g(X)S_q}}$. Combined with parts (a) and (b) of Lemma \ref{lemmaexchangeablepair}, and keeping in mind that $F_q=\sum_{i\in \N}F_{q,i}k_i$, this leads to \begin{align*} 0 &=\sum_{1\leq q\leq N}\E{\Tr_K\brac{D^2g(X)\Gamma \brac{F_q,-L^{-1}F_q}}}\\ & \quad +\sum_{1\leq p\neq q\leq N}\sum_{i,j\in\N}\E{\inner{k_i,D^2g(X)\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}k_j }_K} -\E{\inner{X,Dg(X)}_K}+\lim_{t\to 0}R(t)\\ &=\E{\Delta_G g(X)}-\E{\inner{X,Dg(X)}_K} +\sum_{1\leq q\leq N}\E{\Tr_K\brac{D^2g(X)\brac{\Gamma \brac{F_q,-L^{-1}F_q}-S_q}}}\\ & \quad +\sum_{1\leq p\neq q\leq N}\sum_{i,j\in\N}\E{\inner{k_i,D^2g(X)\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}k_j }_K}+\lim_{t\to 0}R(t). \end{align*} The above equation and the Stein equation introduced in Section \ref{Section_prelim} imply \begin{align} \label{estimate_stein} d_3(X,G)&= \sup_{h\in C^3_b(K)}\abs{\E{\Delta_G g(X)-\inner{X,Dg(X)}_K}}\nonumber\\ & \leq \sup_{h\in C^3_b(K)}\left\{ \sum_{1\leq q\leq N}\abs{\E{\Tr_K\brac{D^2g(X)\brac{\Gamma \brac{F_q,-L^{-1}F_q}-S_q}}}}\right.\nonumber\\ & \quad \left. +\sum_{1\leq p\neq q\leq N}\abs{\sum_{i,j\in\N}\E{\inner{k_i,D^2g(X)\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}k_j }_K}}+\abs{\lim_{t\to 0}R(t)} \right\}. 
\end{align} For the first term on the right side of \eqref{estimate_stein}, it holds that \begin{align*} &\sum_{1\leq q\leq N}\abs{\E{\Tr_K\brac{D^2g(X)\brac{\Gamma \brac{F_q,-L^{-1}F_q}-S_q}}}} \\ & \qquad\qquad\qquad\qquad\qquad \leq \sum_{1\leq q\leq N}\norm{D^2g(X)}_{L^2(\Omega;\operatorname{HS}(K))} \norm{\frac{1}{q}\Gamma(F_q,F_q)-S_q}_{L^2(\Omega;\operatorname{HS}(K))}\nonumber\\ & \qquad\qquad\qquad\qquad\qquad \leq \sum_{1\leq q\leq N}\frac{1}{2q}\sqrt{\sum_{i,j\in \N} \V{\Gamma\brac{F_{q,i},F_{q,j}}}}\nonumber\\ & \qquad\qquad\qquad\qquad\qquad\leq \sum_{1\leq q\leq N}\frac{2q-1}{4q}\sqrt{\sum_{i,j\in \N}\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}\nonumber\\ & \qquad\qquad\qquad\qquad\qquad = \sum_{1\leq q\leq N}\frac{2q-1}{4q}\sqrt{\E{\norm{F_q}^4_K}-(\E{\norm{F_q}^2_K})^2-2\norm{S_q}^2_{\operatorname{HS}}}. \end{align*} In particular, we have used the fact that $\norm{D^2g(x)}_{K^{\otimes 2}}=\norm{D^2g(x)}_{\operatorname{HS}(K)}$ and \cite[Lemma 2.4]{bourguin-campese:2020:approximation-hilbert-valued-gaussians} to get the third line above. The fourth line is a consequence of \cite[Lemma 2.2]{dobler-vidotto-zheng:2018:fourth-moment-theorems}. Finally, the identity $\inner{Su,v}_K=\E{\inner{X,u}_K\inner{X,v}_K}$, valid for all $u,v\in K$, allows us to get the term $\norm{S_q}_{\operatorname{HS}}$ in the last line. Now we study the second term on the right side of \eqref{estimate_stein}. 
An application of \cite[Lemma 2.4]{bourguin-campese:2020:approximation-hilbert-valued-gaussians} and \cite[Lemma 2.2]{dobler-vidotto-zheng:2018:fourth-moment-theorems} gives \begin{align*} &\sum_{1\leq p\neq q\leq N}\abs{\sum_{i,j\in\N}\E{\inner{k_i,D^2g(X)\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}k_j }_K}}\\ &\qquad\qquad\qquad\leq \sum_{1\leq p\neq q\leq N}\E{\sqrt{\sum_{i,j\in\N}\inner{k_i,D^2g(X)k_j }_K } \sqrt{\sum_{i,j\in\N}\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}^2}}\\ &\qquad\qquad\qquad\leq \sum_{1\leq p\neq q\leq N}\sqrt{\sum_{i,j\in\N}\E{\inner{k_i,D^2g(X)k_j }^2_K }} \sqrt{\sum_{i,j\in\N}\E{\widetilde{\Gamma}\brac{-\widetilde{L}^{-1}F_{p,i},F_{q,j}}^2}} \\ &\qquad\qquad\qquad \leq \sum_{1\leq p\neq q\leq N}\frac{p+q-1}{2p}\norm{D^2g(X)}_{L^2(\Omega;\operatorname{HS}(K))}\sqrt{\sum_{i,j\in \N}\E{F_{p,i}^2F_{q,j}^2}-\E{F_{p,i}^2}\E{F_{q,j}^2}}\\ &\qquad\qquad\qquad\leq \sum_{1\leq p\neq q\leq N}\frac{p+q-1}{4p}\sqrt{\E{\norm{F_p}^2_K\norm{F_q}^2_K}-\E{\norm{F_p}^2_K}\E{\norm{F_q}^2_K}}. \end{align*} As the last step, we evaluate the remainder term in \eqref{estimate_stein}: \begin{align*} \lim_{t\to 0}R(t)&\leq \norm{D^3g}_{\infty}\lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^3_K}\\ &\leq \brac{\sqrt{\lim_{t\to 0}\E{\frac{1}{t}\norm{X^t-X}^2_K}} \sqrt{\lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^4_K}}}\\ & \leq\sqrt{2N\E{\norm{X}^2_K}}\sqrt{\sum_{1\leq q\leq N} 2^{3q-2}(4q-3)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}}. \end{align*} The second line is a consequence of H\"older's inequality and \cite[Lemma 2.4]{bourguin-campese:2020:approximation-hilbert-valued-gaussians}. The third line uses Lemma \ref{lemma_remaindertermbound} (which is stated in the appendix). 
We can hence deduce from \eqref{estimate_stein} that \begin{align} \label{estimate_first_G} d_3(X,G)&\leq\sum_{1\leq q\leq N}\frac{2q-1}{4q}\sqrt{\E{\norm{F_q}^4_K}-(\E{\norm{F_q}^2_K})^2-2\norm{S_q}^2_{\operatorname{HS}}}\nonumber\\ &\quad +\sum_{1\leq p\neq q\leq N}\frac{p+q-1}{4p}\sqrt{\E{\norm{F_p}^2_K\norm{F_q}^2_K}-\E{\norm{F_p}^2_K}\E{\norm{F_q}^2_K}}\nonumber\\ &\quad +\sqrt{N\E{\norm{X}^2_K}}\sqrt{\sum_{1\leq q\leq N} 2^{3q-1}(4q-3)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}}. \end{align} In order to obtain the second estimate in Theorem \ref{theorem_fourmomentHilbert}, observe that \begin{align*} \E{\norm{X}^4_K}-(\E{\norm{X}^2_K})^2-2\norm{S}^2_{\operatorname{HS}} =&\sum_{1\leq q\leq N} \E{\norm{F_q}^4_K}-(\E{\norm{F_q}^2_K})^2-2\norm{S_q}^2_{\operatorname{HS}}\\ &+\sum_{1\leq p\neq q\leq N}\E{\norm{F_p}^2_K\norm{F_q}^2_K}-\E{\norm{F_p}^2_K}\E{\norm{F_q}^2_K}. \end{align*} This, combined with Lemma \ref{lemmapositive4thmoment}, the bound \eqref{estimate_first_G} and the facts that \begin{align*} \sum_{1\leq q,p\leq N}\sqrt{y_{q,p}}&\leq \sqrt{N^2 \sum_{1\leq p,q\leq N}y_{q,p}}&&\text{ for $y_{q,p}\geq 0$},\\ 2^{3q-1}q(4q-3)&\leq 2^{3N-1}N(4N-3)&&\text{ for $1\leq q\leq N$},\\ \frac{2q-1}{4q}\vee \frac{p+q-1}{4p}&\leq \frac{N(2N-1)}{4}&&\text{ for $1\leq p,q\leq N$}, \end{align*} yields \begin{equation*} d_3(X,G)\leq \brac{\frac{N(2N-1)}{4}+\sqrt{2^{3N-1}N(4N-3)\E{\norm{X}^2_K}}}\sqrt{\E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}}. \end{equation*} \qed We now turn to the proof of Theorem \ref{theorem_contractionestimate}, which makes use of the second estimate in Theorem \ref{theorem_fourmomentHilbert}. 
\subsection{Proof of Theorem \ref{theorem_contractionestimate}} The strategy here consists of using the product formula \eqref{productformulaPoisson} for Poisson multiple integrals in order to express the quantity $\E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}$, which appears in the second estimate of Theorem \ref{theorem_fourmomentHilbert}, in terms of contraction norms. We begin by noting that this quantity can be written as \begin{align*} \E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}=&\sum_{\substack{i,j\in \N\\1\leq p,q\leq N}}\left( \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}-2\E{F_{q,i}F_{p,j}}^2 \right)\\ =& \sum_{\substack{i,j\in \N\\1\leq p,q\leq N}}\left( \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}\right)\\ &-2\sum_{\substack{i,j\in \N\\1\leq q\leq N}}\E{F_{q,i}F_{q,j}}^2, \end{align*} where we have used that $\E{F_{q,i}F_{p,j}}=0$ whenever $p\neq q$. An application of the product formula \eqref{productformulaPoisson} for Poisson multiple integrals yields \begin{equation*} F_{q,i}F_{p,j}=\sum_{r=0}^{q\wedge p} r!{q \choose r}{p\choose r}\sum_{l=0}^r{r\choose l}I_{q+p-r-l}\brac{{f_{q,i} {\widetilde{\star}}^l_rf_{p,j}}}. \end{equation*} Now, by the orthogonality of Poisson chaoses of different orders, one has \begin{align} \label{fourthmoment} \E{F_{q,i}^2F_{p,j}^2}&=\sum_{r,s=0}^{q\wedge p} \sum_{\substack{0\leq l\leq r\\0\leq m\leq s\\r+l=s+m}}c_{p,q,l,m}(r,s) \inner{{f_{q,i} \widetilde{\star}^l_rf_{p,j}},{f_{q,i} \widetilde{\star}^m_sf_{p,j}}}_{\mathfrak{H}^{\otimes (q+p-r-l)}}, \end{align} where the coefficient $c_{p,q,l,m}(r,s)$ is given by \begin{equation*} c_{p,q,l,m}(r,s) = r!s! \binom{q}{r}\binom{q}{s}\binom{p}{r}\binom{p}{s}\binom{r}{l}\binom{s}{m}(p+q-r-l)!. 
\end{equation*} Let us define the index set $I$ as \begin{align*}I=\big\{(r,s,l,m)\in \N^4\colon & 0\leq r,s\leq q\wedge p,\ 0\leq l\leq r,\ 0\leq m\leq s,\\ & r+l=s+m,\ (r,s,l,m)\notin \{(0,0,0,0),(q\wedge p,q\wedge p,q\wedge p,q\wedge p) \}\big\}.\end{align*} Then, using Lemma \ref{lemma_contraction_00}, Equation \eqref{fourthmoment} can be rewritten as \begin{align*} \E{F_{q,i}^2F_{p,j}^2}=&q!p!\norm{f_{q,i}}^2_{\mathfrak{H}^{\otimes q}}\norm{f_{p,j}}^2_{\mathfrak{H}^{\otimes p}}+\mathds{1}_{\left\{ q=p \right\}}2q!^2\inner{f_{q,i},f_{p,j}}^2_{\mathfrak{H}^{\otimes q}}\\ &+a_{p,q}\brac{p\wedge q}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}}\mathds{1}_{\left\{ q\neq p \right\}}+\sum_{r=1}^{q\wedge p -1}b_{p,q}\brac{r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{\otimes (q+p-2r)}}\\ &+\sum_{(r,s,l,m)\in I}c_{p,q,l,m}(r,s)\inner{{f_{q,i} \widetilde{\star}^l_rf_{p,j}},{f_{q,i} \widetilde{\star}^m_sf_{p,j}}}_{\mathfrak{H}^{\otimes (q+p-r-l)}}, \end{align*} where the combinatorial coefficients $a_{p,q}(r)$ and $b_{p,q}(r)$ are given by \begin{equation*} \begin{cases} \displaystyle a_{p,q}(r) = p!q! \binom{q}{r}\binom{p}{r} + r!^2 \binom{q}{r}^2 \binom{p}{r}^2 \abs{p-q}! \\ \displaystyle b_{p,q}(r) = p!q! \binom{q}{r}\binom{p}{r} \end{cases}. 
\end{equation*} Consequently, we obtain \begin{align*} \E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}}=&\sum_{\substack{i,j\in \N\\1\leq p,q\leq N}}\left( \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}-2\E{F_{q,i}F_{p,j}}^2\right)\\ =&\sum_{\substack{i,j\in \N\\1\leq p\neq q\leq N}}a_{p,q}\brac{p\wedge q}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}}\\ &+\sum_{\substack{i,j\in \N\\1\leq p,q\leq N}}\sum_{r=1}^{q\wedge p -1}b_{p,q}\brac{r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{\otimes (q+p-2r)}}\\ &+\sum_{\substack{i,j\in \N\\1\leq p,q\leq N\\(r,s,l,m)\in I}}c_{p,q,l,m}(r,s)\inner{{f_{q,i} \widetilde{\star}^l_rf_{p,j}},{f_{q,i} \widetilde{\star}^m_sf_{p,j}}}_{\mathfrak{H}^{\otimes (q+p-r-l)}}. \end{align*} Since we have \begin{align*} \norm{{f_{q}{\star}^l_rf_{p}}}^2_{\mathfrak{H}^{\otimes (q+p-r-l)}\otimes K^{\otimes 2}}=\sum_{i,j\in\N}\norm{{\inner{f_{q},k_i}_K{\star}^l_r \inner{f_{p},k_j}_K}}^2_{\mathfrak{H}^{\otimes (q+p-r-l)}}=\sum_{i,j\in\N}\norm{{f_{q,i}{\star}^l_rf_{p,j}}}^2_{\mathfrak{H}^{\otimes (q+p-r-l)}}, \end{align*} we can sum over $i,j \in \N$ and apply H\"older's inequality to get \begin{align*} &\E{\norm{X}^4_K}-\E{\norm{X}^2_K}^2-2\norm{S}^2_{\operatorname{HS}} \leq \sum_{\substack{i,j\in \N\\1\leq p\neq q\leq N}}a_{p,q}\brac{p\wedge q}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\quad+\sum_{\substack{i,j\in \N\\1\leq p,q\leq N}}\sum_{r=1}^{q\wedge p -1}b_{p,q}\brac{r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{\otimes (q+p-2r)}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{\substack{1\leq p,q\leq N\\(r,s,l,m)\in I}}c_{p,q,l,m}(r,s)\norm{f_{q} \star^l_r f_{p}}_{\mathfrak{H}^{\otimes (q+p-r-l)}}\norm{f_{q} \star^m_s f_{p}}_{\mathfrak{H}^{\otimes (q+p-r-l)}}, \end{align*} which concludes the proof. 
\qed \section{Applications} \label{sec:applications} \subsection{Brownian approximation of a Poisson process in Besov-Liouville spaces} \label{subsection_Besov} \subsubsection{A brief overview of Besov-Liouville spaces} For an extensive account of this topic, we refer the reader to \cite{samko-kilbas-marichev:1993:fractional-integrals-derivatives}. For $f\in L^p([0,1],ds)$ and $\beta>0$, we define the left and right fractional integrals respectively as \begin{align*} \brac{I^\beta_{0^{+}}f}(s)=\frac{1}{\Gamma(\beta)}\int_0^s (s-r)^{\beta-1}f(r)dr \end{align*} and \begin{align*} \brac{I^\beta_{1^{-}}f}(s)=\frac{1}{\Gamma(\beta)}\int_s^1 (r-s)^{\beta-1}f(r)dr. \end{align*} This allows us to define the Besov-Liouville spaces \begin{align*} \mathcal{I}^+_{\beta,p}=\left\{ I^\beta_{0^{+}}\widehat{f},\ \widehat{f}\in L^p([0,1])\right\}, \end{align*} which are Banach spaces when equipped with the norm $\norm{f}_{\mathcal{I}^+_{\beta,p}}=\norm{\widehat{f}}_{L^p([0,1])}$. The Besov-Liouville spaces $\mathcal{I}^-_{\beta,p}$ are defined analogously using the right fractional integrals. When $\beta p<1$, the spaces $\mathcal{I}^+_{\beta,p}$ and $\mathcal{I}^-_{\beta,p}$ are canonically isomorphic and therefore will both be denoted by $\mathcal{I}_{\beta,p}$. \begin{remark} As pointed out in \cite{coutin-decreusefond:2013:steins-method-brownian}, $\mathcal{I}_{\beta,2}$ for $\beta<1/2$ is an appropriate class of Besov-Liouville spaces for the functional approximation of a Poisson process by a Brownian motion, since these are Hilbert spaces containing the sample paths of both the Poisson process and the Brownian motion. 
\end{remark} Similarly to the left and right fractional integrals, one can define left and right fractional derivatives as \begin{align*} \brac{D^\beta_{0^{+}}f}(s)=\frac{1}{\Gamma(1-\beta)}\frac{d}{ds}\int_0^s (s-r)^{-\beta}f(r)dr,\\ \brac{D^\beta_{1^{-}}f}(s)=\frac{1}{\Gamma(1-\beta)}\frac{d}{ds}\int_s^1 (r-s)^{-\beta}f(r)dr. \end{align*} As the name suggests, $D^\beta_{0^{+}}$ is the inverse of $I^\beta_{0^{+}}$ (see \cite[Theorem 2.4]{samko-kilbas-marichev:1993:fractional-integrals-derivatives}). Two examples of the action of this operator that will be useful later are \begin{align} \label{example_fracder} \brac{D^\beta_{0^{+}}\operatorname{Id}} (r)=\frac{r^{-\beta+1}}{(-\beta+1)\Gamma(-\beta+1)}\quad \mbox{and}\quad \brac{D^\beta_{0^{+}}1_{[a,\infty)}} (r)=\frac{\brac{r-a}^{-\beta}_{+}}{\Gamma(-\beta+1)}, \end{align} where $\operatorname{Id}$ denotes the identity function. Let us also mention a few important facts about fractional integrals and derivatives. Given $0<\beta<1$ and $1<p<1/\beta$, $I^\beta_{0^{+}}$ is a bounded operator from $L^p([0,1])$ to $L^q([0,1])$ with $q=p(1-\beta p)^{-1}$. Moreover, for $\beta>0$ and $p\geq 1$, $I^\beta_{0^{+}}$ is bounded from $L^p([0,1])$ into itself (see for instance \cite[Equation (2.72)]{samko-kilbas-marichev:1993:fractional-integrals-derivatives}). As mentioned above, fractional derivatives invert fractional integrals, in the sense that \begin{align*}\brac{D^\beta_{0^{+}} I^\beta_{0^{+}}f} (s)=f(s) \end{align*} for $f\in L^1([0,1])$. Furthermore, fractional integrals enjoy the semigroup property (see \cite[Theorem 2.5]{samko-kilbas-marichev:1993:fractional-integrals-derivatives}), that is, \begin{align*} \brac{I^\alpha_{0^{+}}I^\beta_{0^{+}}f}(s)=\brac{I^{\alpha+\beta}_{0^{+}}f}(s) \end{align*} as long as $\beta>0$, $\alpha+\beta>0$ and $f\in L^1([0,1])$. \subsubsection{A functional central limit theorem} \label{subsubsection_theorem_Besov} We consider a Poisson process $N_\lambda (t)$ with intensity $\lambda$. 
It is well known (see for instance \cite[Example 9.1.3]{nualart-nualart:2018:introduction-malliavin-calculus}) that it can be represented as \begin{align} \label{def_poiprocess_Besov} N_\lambda (t)&=\sum_{n\in \N} 1_{[T_n,\infty)}(t), \end{align} where $T_n=\sum_{i=1}^n \alpha_i$ and $\left\{ \alpha_i \colon i \in \N\right\}$ are independent exponentially distributed random variables with parameter $\lambda$, i.e., $\alpha_i \sim \operatorname{Exp}(\lambda)$ for all $i \in \N$. This implies that $T_n$ is Gamma distributed with shape $n$ and rate $\lambda$, i.e., $T_n\sim \operatorname{Gamma}(n,\lambda)$. As pointed out in \cite{coutin-decreusefond:2013:steins-method-brownian}, the sample paths of $N_\lambda$ belong to $\mathcal{I}_{\beta,2}$ for $\beta<1/2$. For any $t \in [0,1]$, define \begin{align*} X_\lambda (t)&=\frac{N_\lambda (t)-\lambda t}{\sqrt{\lambda}} \end{align*} and let $Z$ be a Brownian motion on $\mathcal{I}_{\beta,2}$, that is, an $\mathcal{I}_{\beta,2}$-valued Gaussian random variable with covariance operator \begin{align} \label{covariance_BM_Besov} S'=I^\beta_{0^{+}} I^{1-\beta}_{0^{+}} I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}, \end{align} where the expression of the covariance operator was derived in \cite{coutin-decreusefond:2013:steins-method-brownian}. We are now ready to state the main result of this application, namely the Brownian approximation of a Poisson process in $\mathcal{I}_{\beta,2}$. \begin{theorem} \label{theorem_Besov} On a Besov-Liouville space $\mathcal{I}_{\beta,2}$ with $\beta<1/2$, the distributions of $X_\lambda$ and $Z$ are asymptotically close as $\lambda\to\infty$. Their closeness can be quantified by \begin{align*} d_3(X_\lambda,Z)\lesssim \frac{1}{\sqrt{\lambda}}. \end{align*} \end{theorem} \begin{proof} $X_\lambda (t)$ can be represented as a Poisson multiple integral of order one. Let $\mathfrak{H}=L^2(\R^+,\lambda dx)$ be the Hilbert space underlying the compensated Poisson process $N_\lambda(t)-\lambda t$. 
Furthermore, let $f(t)=\frac{1}{\sqrt{\lambda}}1_{[0,t]}\in \mathfrak{H}$. We can hence write \begin{align*} X_\lambda (t)=I_1(f(t)). \end{align*} Theorem \ref{theorem_contractionestimate} then provides us with the estimate \begin{align} \label{bound_stein_Besov} d_3(X_\lambda,Z)\lesssim \norm{f \star^0_1 f}^2_{\mathfrak{H}\otimes K^{\otimes 2}}+\norm{S_\lambda-S'}_{\operatorname{HS}(K)}, \end{align} where $S_\lambda$ denotes the covariance operator of $X_{\lambda}$ and where $K=\mathcal{I}_{\beta,2}$. We begin by computing the contraction norm appearing above. We have \begin{align*} (f \star^0_1 f)(x)=\frac{1}{\lambda}1_{[0,t]}(x)1_{[0,s]}(x)=\frac{1}{\lambda}1_{[x,\infty)}(t)1_{[x,\infty)}(s),\end{align*} so that \begin{align*} \norm{f \star^0_1 f}^2_{\mathfrak{H}\otimes K^{\otimes 2}}&=\frac{1}{\lambda^2}\int_0^1\int_0^1 \int_0^1 \brac{\brac{D^\beta_{0^{+}}1_{[x,\infty)}}(t) \brac{D^\beta_{0^{+}}1_{[x,\infty)}}(s)}^2 \lambda dxdsdt\\ &=\frac{1}{\lambda \Gamma(-\beta+1)^4}\int_0^1\int_0^1\int_0^1 \brac{t-x}^{-2\beta}_{+}\brac{s-x}^{-2\beta}_{+}dxdsdt\lesssim \frac{1}{\lambda}, \end{align*} where the last inequality comes from the fact that the triple integral $\int_0^1\int_0^1\int_0^1 \brac{t-x}^{-2\beta}_{+}\brac{s-x}^{-2\beta}_{+}dxdsdt$ is finite whenever $\beta<1/2$. In order to estimate the remaining term, namely $\norm{S_\lambda-S'}_{\operatorname{HS}(K)}$, we apply Lemma \ref{lemma_covariance} and Lemma \ref{lemma_cov_kernel}. This yields \begin{align*} \norm{S_\lambda-S'}_{\operatorname{HS}(K)}^2&=\norm{\E{\brac{D^\beta_{0^{+}}X_\lambda}(r) \brac{D^\beta_{0^{+}}X_\lambda}(s)}-\E{\brac{D^\beta_{0^{+}}Z}(r) \brac{D^\beta_{0^{+}}Z}(s)}}_{L^2([0,1]^{\otimes 2})}^2\\ &=\norm{-\frac{\lambda}{\Gamma(-\beta+1)^2 (-\beta+1)^2}(r-r\wedge s)^{-\beta+1}(s-r\wedge s)^{-\beta+1}}_{L^2([0,1]^{\otimes 2})}^2=0, \end{align*} where the last equality holds because at least one of the factors $(r-r\wedge s)^{-\beta+1}$ and $(s-r\wedge s)^{-\beta+1}$ vanishes for every pair $(r,s)$. This concludes the proof.
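As an aside, the finiteness used in the contraction estimate can be made fully explicit: integrating out $t$ and $s$ in closed form reduces the triple integral to a one-dimensional one, whose value is $(1-2\beta)^{-2}(3-4\beta)^{-1}$. A minimal numeric cross-check (illustrative only; $\beta=0.3$ is an arbitrary admissible value below $1/2$):

```python
import math

beta = 0.3  # any fixed beta in (0, 1/2)

# For fixed x, int_x^1 (t-x)^(-2*beta) dt = (1-x)^(1-2*beta) / (1-2*beta),
# and the same holds in s, so the triple integral equals
#   int_0^1 ((1-x)^(1-2*beta) / (1-2*beta))^2 dx = (1-2*beta)^(-2) * (3-4*beta)^(-1).
def inner_double(x):
    return ((1.0 - x) ** (1.0 - 2.0 * beta) / (1.0 - 2.0 * beta)) ** 2

n = 20000
h = 1.0 / n
numeric = sum(inner_double((k + 0.5) * h) for k in range(n)) * h
exact = 1.0 / ((1.0 - 2.0 * beta) ** 2 * (3.0 - 4.0 * beta))
```

The midpoint rule suffices here because the remaining integrand is continuous on $[0,1]$; the check fails, as expected, once $\beta\geq 1/2$.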
\end{proof} \subsection{Edge counting in random graphs} In \cite{lachieze-rey-peccati:2013:fine-gaussian-fluctuations}, the authors studied Gaussian fluctuations of real-valued $U$-statistics related to graphs generated by Poisson point processes. We will apply Theorem \ref{theorem_contractionestimate} to obtain a functional version of their results in all three regimes mentioned in \cite[Example 4.13]{lachieze-rey-peccati:2013:fine-gaussian-fluctuations}. Recall from Subsection \ref{constructionofthepair} the definition of a proper Poisson point process \begin{align*} \eta_\lambda=\sum_{i=1}^{\operatorname{Po}(\lambda)}\delta_{Y_i}, \end{align*} where $\operatorname{Po}(\lambda)$ is a Poisson random variable with mean $\lambda$, while $\{Y_i \}_{i\in\N}$ is an i.i.d. sequence of $\R^d$-valued random variables distributed according to $\ell$ and independent of $\operatorname{Po}(\lambda)$. For simplicity and illustration purposes, let us assume $\ell$ is the Lebesgue measure on $\R^d$. The control measure of $\eta_\lambda$ is therefore \begin{align*}\mu_\lambda(\cdot)=\lambda \ell(\cdot).\end{align*} Let $G$ be a graph generated by $\eta_\lambda$, so that $G$ has the vertex set $\{Y_1,\ldots,Y_{\operatorname{Po}(\lambda)}\}$. In addition, let $W\subseteq \R^{d}$ be a symmetric set which will serve as our original window in which we monitor the edges of $G$, and let $H_{\lambda}\subseteq \R^{2d}$ be a symmetric set which will serve as our original edge set. For $0\leq t\leq 1$, define \begin{equation*} \begin{cases} \displaystyle W_t=t^{\frac{1}{2d}}W\\ \displaystyle H_{\lambda,t}=t^{\frac{1}{2d}}H_{\lambda}\\ \displaystyle \widehat{W}_t=\{x-y:x,y\in W_t \}\\ \displaystyle \overline{H}_{\lambda,t}=\{x-y:(x,y)\in H_{\lambda,t}\} \end{cases}. \end{equation*} We will assume that any edge, written as a pair $(x,y)$, belongs to $H_{\lambda,t}$ if and only if $x-y\in \overline{H}_{\lambda,t}$.
For example, this property holds for a disk graph with base edge set $\overline{H}_{\lambda}=B\brac{0,r_\lambda}$, an open ball of radius $r_\lambda$ centered at the origin. We note that compared to the setup in \cite{lachieze-rey-peccati:2013:fine-gaussian-fluctuations}, our window and edge set are not static but evolve with time. We are interested in a Poissonized $U$-statistic of the form \begin{align*} F_\lambda (t)=\sum_{\substack{(x,y)\in \eta^2_\lambda\\x\neq y}}1_{H_{\lambda,t} \cap W_{t}^2} (x,y)=\sum_{1\leq i_1< i_2\leq \operatorname{Po}(\lambda)}1_{H_{\lambda,t} \cap W_{t}^2} (Y_{i_1},Y_{i_2}) \end{align*} which counts the edges of $G$ that belong to the edge set $H_{\lambda,t}$ and lie inside the window $W_t$ at time $t$. It is clear from these assumptions that the process $\{{F}_\lambda (t)\}_{t\in [0,1]}$ belongs to $K=L^2\brac{[0,1]}$. As proved in \cite{reitzner-schulte:2013:central-limit-theorems}, our $U$-statistic has a finite chaos expansion given by \begin{equation*} F_\lambda (t)=\E{F_\lambda (t)} + I_1\brac{f_{1}(t)}+I_2\brac{f_{2}(t)}, \end{equation*} where the (functional) kernels $f_1(t)$ and $f_2(t)$ are given by \begin{equation*} \begin{cases} \displaystyle f_{1}(t)(x)=2\int_{\R^d} 1_{H_{\lambda,t} \cap W_{t}^{ 2}} (x,y) \lambda dy\\ \displaystyle f_{2}(t)(x,y)=1_{H_{\lambda,t} \cap W_{t}^{ 2}} (x,y) \end{cases}. \end{equation*} Let $\bar{F}_\lambda (t)$ denote the centered and normalized version of $F_\lambda (t)$ given by \begin{align*} \bar{F}_\lambda (t)= \frac{F_\lambda (t)-\E{F_\lambda (t)}}{\sigma}=I_1\left(g_1(t)\right)+I_2\left(g_2(t)\right), \end{align*} where $\sigma^2=\V{F_\lambda (1)}$, $g_1(t)=\frac{f_{1}(t)}{\sigma}$ and $g_2(t)=\frac{f_{2}(t)}{\sigma}$. For convenience, we will also write $\ell_t$ for $\ell\brac{W_t}$ and $\psi_{\lambda,t}$ for $\ell\brac{\overline{H}_{\lambda,t} \cap \widehat{W}_t}$.
Using the scaling properties of the Lebesgue measure, we can write \begin{align*} \ell_t=\sqrt{t}\ell_1\quad \mbox{and} \quad \psi_{\lambda,t}= \sqrt{t}\psi_{\lambda,1}. \end{align*} We can actually compute $\sigma^2$ explicitly, using the orthogonality of Wiener chaos of different orders and the isometry property of Poisson multiple integrals. This yields \begin{align*} \sigma^2=&\norm{f_{1}(1)}_{L^2\brac{\mu_\lambda}}^2+\norm{f_{2}(1)}_{L^2\brac{\mu_\lambda^2}}^2\\ =& 4\lambda^3\int_{\R^d}\brac{\int_{\R^d} 1_{W_1}(x)1_{\overline{H}_{\lambda,1} \cap \widehat{W}_1}(y-x) d(y-x)}^2 dx + \int_{\R^{2d}}1_{H_{\lambda,1} \cap W_1^{ 2}} (x,y) \lambda^2 dx dy\\ =& 4\ell_1\lambda^3 \psi_{\lambda,1}^2 + \ell_1\lambda^2\psi_{\lambda,1}. \end{align*} Based on the above expression for $\sigma^2$, we can consider three different regimes (similarly to what was done in \cite{lachieze-rey-peccati:2013:fine-gaussian-fluctuations}), namely \begin{enumerate} \item[-] Regime 1: $\lambda \psi_{\lambda,1}\to\infty$ as $\lambda\to\infty$; \item[-] Regime 2: $\lambda \psi_{\lambda,1}\to 1$ as $\lambda\to\infty$; \item[-] Regime 3: $\lambda \psi_{\lambda,1}\to 0$ and $\lambda \sqrt{\psi_{\lambda,1}}\to \infty$ as $\lambda\to\infty$. \end{enumerate} Within Regime 1, $\sigma^2$ is dominated by $\norm{f_{1}(1)}_{L^2(\mu_\lambda)}^2$ for large values of $\lambda$, which implies \begin{align*} \sigma^2\asymp 4\ell_1\lambda^3 \psi_{\lambda,1}^2, \end{align*} whereas in Regime 2, we get \begin{align*} \sigma^2\asymp 4\ell_1\lambda^3 \psi_{\lambda,1}^2\asymp \ell_1\lambda^2\psi_{\lambda,1}, \end{align*} and finally in Regime 3, it holds that \begin{align*} \sigma^2\asymp\ell_1\lambda^2\psi_{\lambda,1}. \end{align*} We are now ready to present the application of our results to edge counting in random graphs.
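The trichotomy above is easy to visualise numerically. The sketch below is illustrative only: the rates $\psi_{\lambda,1}=\lambda^{-1/2}$ (Regime 1) and $\psi_{\lambda,1}=\lambda^{-3/2}$ (Regime 3), the value $\lambda=10^8$ and the normalisation $\ell_1=1$ are our own choices. It computes the two contributions to $\sigma^2=4\ell_1\lambda^3\psi_{\lambda,1}^2+\ell_1\lambda^2\psi_{\lambda,1}$ and shows which one dominates.

```python
def sigma2_terms(l1, lam, psi):
    """The two contributions to sigma^2 = 4*l1*lam^3*psi^2 + l1*lam^2*psi."""
    chaos1 = 4.0 * l1 * lam ** 3 * psi ** 2   # ||f_1(1)||^2 contribution
    chaos2 = l1 * lam ** 2 * psi              # ||f_2(1)||^2 contribution
    return chaos1, chaos2

l1 = 1.0
lam = 1e8

# Regime 1: psi = lam**-0.5, so lam*psi -> infinity; the first chaos dominates.
c1, c2 = sigma2_terms(l1, lam, lam ** -0.5)
ratio_regime1 = c1 / (c1 + c2)

# Regime 3: psi = lam**-1.5, so lam*psi -> 0 while lam*sqrt(psi) -> infinity;
# the second chaos dominates.
c1, c2 = sigma2_terms(l1, lam, lam ** -1.5)
ratio_regime3 = c2 / (c1 + c2)
```

In Regime 2 ($\lambda\psi_{\lambda,1}\to 1$) the two contributions stay of the same order, matching $\sigma^2\asymp 4\ell_1\lambda^3\psi_{\lambda,1}^2\asymp\ell_1\lambda^2\psi_{\lambda,1}$.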
\begin{theorem} \label{theorem_edgecounting} As $\lambda\to\infty$, $\bar{F}_\lambda (t)$ converges in $K=L^2([0,1])$ to a $K$-valued Gaussian random variable $Z$ with covariance function $\phi(s,t)=\E{Z(s)Z(t)}$. More specifically, \begin{enumerate} \item[-] In Regime 1, $\phi(t,s)=\sqrt{ts(t\wedge s)}$ and \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-\frac{1}{2}}+\frac{1}{\lambda\psi_{\lambda,1}}; \end{align*} \item[-] In Regime 2, $\phi(t,s)=\frac{4\sqrt{ts(t\wedge s)}+t\wedge s}{5}$ and \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-\frac{1}{2}}+\abs{\lambda \psi_{\lambda,1}- 1}; \end{align*} \item[-] In Regime 3, $\phi(t,s)=t\wedge s$, which implies that $Z$ is a Brownian motion, and \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-1}\psi_{\lambda,1}^{-1/2}+\lambda \psi_{\lambda,1}. \end{align*} \end{enumerate} \end{theorem} \begin{proof} In order to make use of Theorem \ref{theorem_contractionestimate}, we will need to evaluate contraction norms, but also the Hilbert-Schmidt norm of the difference between the covariance operators, i.e., $\norm{S_\lambda-S'}_{\operatorname{HS}}$. Let us start with this term before we turn to the contraction norms themselves. As before, $S_\lambda$ and $S'$ denote the covariance operators of $\bar{F}_\lambda$ and $Z$ respectively. Based on \cite[Theorem 7.4.3]{hsing-eubank:2015:theoretical-foundations-functional} and how Hilbert-Schmidt norms are defined for integral operators, we can use \begin{align*} \norm{S_\lambda-S'}_{\operatorname{HS(K)}}=&\norm{\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}-\E{Z(t)Z(s)}}_{L^2\brac{[0,1]^{\otimes 2}}}\\ \leq& \norm{\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}-\E{Z(t)Z(s)}}_{\infty}. \end{align*} Our task is hence to compute $\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}$.
We have \begin{equation*} \inner{{f}_{1}(t),{f}_{1}(s)}_{L^2(\mu_\lambda)}=4\lambda^3\psi_{\lambda,t}\psi_{\lambda,s}\ell_{t\wedge s}=\sqrt{ts(t\wedge s)}4\ell_{1}\lambda^3\psi_{\lambda,1}^2 \end{equation*} and \begin{equation*} \inner{{f}_{2}(t),{f}_{2}(s)}_{L^2(\mu^2_\lambda)}=\lambda^2\psi_{\lambda,t\wedge s}\ell_{t\wedge s}=(t\wedge s) \ell_{1}\lambda^2\psi_{\lambda,1}, \end{equation*} so that \begin{align*} \E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}=&\frac{\inner{{f}_{1}(t),{f}_{1}(s)}_{L^2(\mu_\lambda)}+\inner{{f}_{2}(t),{f}_{2}(s)}_{L^2(\mu^2_\lambda)}}{\sigma^2}\\ =&\frac{\sqrt{ts(t\wedge s)}4\lambda\psi_{\lambda,1}+t\wedge s}{4\lambda\psi_{\lambda,1}+1}. \end{align*} At this step, we need to differentiate our analysis depending on what regime we are in. \\~\\ \textbf{Regime 1:} We assume here that $\lambda \psi_{\lambda,1}\to\infty$. The limiting covariance operator $S'$ then has covariance function $\phi(t,s)=\sqrt{ts(t\wedge s)}$. We can use the fact that for $a\ll A$, $b\ll B$ and $A\lesssim B$, \begin{align*} \abs{\frac{A+a}{B+b}-\frac{A}{B}}\lesssim \abs{\frac{a}{B}}+\abs{\frac{b}{B}} \end{align*} in order to deduce that \begin{align} \label{estimatecovarregime1} \norm{S_\lambda-S'}_{\operatorname{HS(K)}}\leq \sup_{0\leq s,t\leq 1}\abs{\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}-\phi(t,s)}\lesssim \frac{1}{\lambda\psi_{\lambda,1}}. \end{align} ~\\ \textbf{Regime 2:} Here, $\lambda \psi_{\lambda,1}\to 1$, so that the limiting covariance function is given by $\phi(t,s)=\frac{4\sqrt{ts(t\wedge s)}+t\wedge s}{5}$. Moreover, \begin{align} \label{estimatecovarregime2} \norm{S_\lambda-S'}_{\operatorname{HS(K)}}\leq& \sup_{0\leq s,t\leq 1}\abs{\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}-\phi(t,s)}\nonumber\\ =&\sup_{0\leq s,t\leq 1}\abs{\frac{4\sqrt{ts(t\wedge s)}\lambda \psi_{\lambda,1}+t\wedge s}{4\lambda \psi_{\lambda,1}+1} -\frac{4\sqrt{ts(t\wedge s)}+t\wedge s}{5}} \lesssim \abs{\lambda \psi_{\lambda,1}- 1}.
\end{align} ~\\ \textbf{Regime 3:} The fact that $\lambda \psi_{\lambda,1}\to 0$ implies in this case that the limiting covariance function is given by $\phi(t,s)=t\wedge s$, and we hence have \begin{align} \label{estimatecovarregime3} \norm{S_\lambda-S'}_{\operatorname{HS(K)}}\lesssim \frac{\lambda^3\psi_{\lambda,1}^2}{\lambda^2\psi_{\lambda,1}} \asymp\lambda \psi_{\lambda,1}. \end{align} ~\\ We now turn to the second part of the bound appearing in Theorem \ref{theorem_contractionestimate}, namely the contraction norms. We need to evaluate the norms of $g_1(t)\star^0_1g_1(t)$, $g_1(t)\star^0_1g_2(t)$, $g_1(t)\star^1_1g_2(t)$, $g_2(t)\star^0_1g_2(t)$, $g_2(t)\star^0_2g_2(t)$ and $g_2(t)\star^1_1g_2(t)$. The calculations we need to perform are very similar to the ones appearing in the proof of \cite[Theorem 4.7]{lachieze-rey-peccati:2013:fine-gaussian-fluctuations}, hence we will not provide full details and proceed straight to the result. Let us still include two examples of these calculations (the cases of the contractions $g_1(t)\star^0_1g_1(t)$ and $g_2(t)\star^1_1g_2(t)$) for the reader's convenience and for the sake of staying self-contained. Recall that $W_t$, $H_{\lambda,t}$ are symmetric sets, $W_t$ (respectively $H_{\lambda,t}$) is contained in $W_{t'}$ (respectively $H_{\lambda,t'}$) for $t\leq t'$, and that $\psi_{\lambda,t}= \sqrt{t}\psi_{\lambda,1}$, while $\ell_t=\sqrt{t}\ell_1<\infty$. 
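Before carrying out these computations, we note that the three limiting covariance functions identified above can be read off numerically from the exact expression $\E{\bar{F}_\lambda(t)\bar{F}_\lambda(s)}=\brac{4\sqrt{ts(t\wedge s)}\lambda\psi_{\lambda,1}+t\wedge s}/\brac{4\lambda\psi_{\lambda,1}+1}$. A small sketch (illustrative only; the evaluation points $t=0.7$, $s=0.4$ and the values of $\lambda\psi_{\lambda,1}$ are arbitrary):

```python
import math

def cov(t, s, lam_psi):
    # Exact covariance E[F_bar(t) F_bar(s)] as a function of
    # lam_psi = lambda * psi_{lambda,1}.
    m = min(t, s)
    return (4.0 * math.sqrt(t * s * m) * lam_psi + m) / (4.0 * lam_psi + 1.0)

t, s = 0.7, 0.4
m = min(t, s)

# Regime 2 (lam_psi = 1): algebraically equal to (4*sqrt(t*s*m) + m) / 5.
regime2 = (4.0 * math.sqrt(t * s * m) + m) / 5.0

# Regime 1 (lam_psi large) approaches sqrt(t*s*m);
# Regime 3 (lam_psi small) approaches m = t /\ s.
err1 = abs(cov(t, s, 1e8) - math.sqrt(t * s * m))
err3 = abs(cov(t, s, 1e-8) - m)
```

The deviations `err1` and `err3` scale like $(\lambda\psi_{\lambda,1})^{-1}$ and $\lambda\psi_{\lambda,1}$ respectively, mirroring the rates in \eqref{estimatecovarregime1} and \eqref{estimatecovarregime3}.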
We can then write \begin{align*} &\norm{f_{1}(t)\star^0_1 f_{1}(s)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}}\\& \qquad =\norm{\int_{\R^d} \brac{4\int_{\R^{2d}}1_{H_{\lambda,t} \cap W_t^{ 2}} (x,y) 1_{H_{\lambda,s} \cap W_s^{ 2}} (x,u)\lambda dy\lambda du }^2 \lambda dx}_{K^{\otimes 2}}\\ & \qquad\leq 16\lambda^5\norm{\int_{\R^d} \brac{\int_{\R^{2d}}1_{H_{\lambda,s\vee t} \cap W_{s\vee t}^{ 2}} (x,y) 1_{H_{\lambda,s\vee t} \cap W_{s\vee t}^{ 2}} (x,u)dydu }^2 dx}_{K^{\otimes 2}}\\ & \qquad \asymp \lambda^5 \norm{\int_{\R^d} \brac{\int_{\R^{2d}}1_{W_{s\vee t}}(x)1_{\overline{H}_{\lambda,s\vee t} \cap \widehat{W}_{s\vee t}}(y-x) 1_{\overline{H}_{\lambda,s\vee t} \cap \widehat{W}_{s\vee t}}(u-x)d(y-x)d(u-x) }^2 dx}_{K^{\otimes 2}}\\ & \qquad \asymp \lambda^5\norm{\ell_{s\vee t}\psi_{\lambda,s\vee t}^4}_{K^{\otimes 2}}\asymp \lambda^5\psi_{\lambda,1}^4 \end{align*} and \begin{align*} &\norm{f_{2}(t)\star^1_1 f_{2}(s)}^2_{L^2(\mu^2_\lambda)\otimes K^{\otimes 2}}\\ &\qquad \leq \norm{\int_{\R^{2d}}\brac{\int_{\R^d} 1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}}(x,y)1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}} (x,u)\lambda dx }^2 \lambda^2dydu}_{K^{\otimes 2}}\\ &\qquad = \lambda^4 \norm{\int_{\R^{4d}} 1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}}(x,y)1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}}(x,u)1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}}(v,y)1_{H_{\lambda,{s\vee t}} \cap W_{s\vee t}^{ 2}}(v,u)dxdydudv}_{K^{\otimes 2}}\\ &\qquad \leq \lambda^4 \bigg\lVert\int_{\R^{4d}}1_{W_{s\vee t}}(x)1_{\overline{H}_{\lambda,{s\vee t}} \cap \widehat{W}_{s\vee t}}(y-x)1_{\overline{H}_{\lambda,{s\vee t}} \cap \widehat{W}_{s\vee t}}(u-x) 1_{\overline{H}_{\lambda,{s\vee t}} \cap \widehat{W}_{s\vee t}}(y-v) \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad dxd(y-x)d(u-x)d(y-v)\bigg\rVert_{K^{\otimes 2}}\\ &\qquad\asymp \lambda^4\norm{\ell_{s\vee t}\psi_{\lambda,{s\vee t}}^3}_{K^{\otimes 2}}\asymp\lambda^4\psi_{\lambda,1}^3.
\end{align*} For the remaining contractions, performing similar calculations yields $\norm{f_{1}(t)\star^0_1 f_{2}(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}}\lesssim \lambda^4 \psi_{\lambda,1}^3$, $\norm{f_{2}(t)\star^0_1 f_{2}(t)}^2_{L^2(\mu_\lambda^3)\otimes K^{\otimes 2}}\lesssim \lambda^3 \psi_{\lambda,1}^2$, $\norm{f_{2}(t)\star^0_2 f_{2}(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}}\lesssim \lambda^2\psi_{\lambda,1}$, and finally $\norm{f_{1}(t)\star^1_1 f_{2}(t)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}} \lesssim \lambda^5\psi_{\lambda,1}^4$. We split the remainder of the proof into three cases corresponding to the three possible regimes. \\~\\ \textbf{Regime 1:} Here, $\lambda \psi_{\lambda,1}\to\infty$ as $\lambda\to\infty$, and since $\sigma^2\asymp \lambda^3 \psi_{\lambda,1}^2$, we have $\norm{g_1(t)\star^0_1g_1(t)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}}\lesssim \lambda^{-1}$, $\norm{g_1(t)\star^0_1 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}}\lesssim \lambda^{-2}\psi_{\lambda,1}^{-1}$, $\norm{g_2(t)\star^0_1 g_2(t)}^2_{L^2(\mu_\lambda^3)\otimes K^{\otimes 2}} \lesssim \lambda^{-3}\psi_{\lambda,1}^{-2}$, $\norm{g_2(t)\star^0_2 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}} \lesssim \lambda^{-4}\psi_{\lambda,1}^{-3}$, $\norm{g_2(t)\star^1_1 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}} \lesssim \lambda^{-2}\psi_{\lambda,1}^{-1}$ and lastly $\norm{g_1(t)\star^1_1 g_2(t)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}} \lesssim \lambda^{-1}$. Note that all the above estimates are asymptotically bounded from above by $\lambda^{-1}$, and using \eqref{estimatecovarregime1}, the estimate in Theorem \ref{theorem_contractionestimate} yields \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-\frac{1}{2}}+\frac{1}{\lambda\psi_{\lambda,1}}.
\end{align*} ~\\ \textbf{Regime 2:} Since in this case $\lambda \psi_{\lambda,1}\to 1$ as $\lambda\to\infty$, we get $\sigma^2\asymp\lambda^3 \psi_{\lambda,1}^2\asymp\lambda^2\psi_{\lambda,1}$. Therefore, we can reuse the computations from Regime 1 combined with \eqref{estimatecovarregime2} to get \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-\frac{1}{2}}+\abs{\lambda \psi_{\lambda,1}- 1}. \end{align*} ~\\ \textbf{Regime 3:} In this regime, $\lambda \psi_{\lambda,1}\to 0$ and $\lambda \sqrt{\psi_{\lambda,1}}\to \infty$ as $\lambda\to\infty$, so that $\sigma^2\asymp\lambda^2\psi_{\lambda,1}$. This allows us to deduce that $\norm{g_1(t)\star^0_1g_1(t)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}} \lesssim \lambda\psi^2_{\lambda,1}$, $\norm{g_1(t)\star^0_1 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}} \lesssim \psi_{\lambda,1}$, $\norm{g_2(t)\star^0_1 g_2(t)}^2_{L^2(\mu_\lambda^3)\otimes K^{\otimes 2}} \lesssim \lambda^{-1}$, $\norm{g_2(t)\star^0_2 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}} \lesssim \lambda^{-2}\psi_{\lambda,1}^{-1}$, $\norm{g_2(t)\star^1_1 g_2(t)}^2_{L^2(\mu_\lambda^2)\otimes K^{\otimes 2}} \lesssim \psi_{\lambda,1}$ and $\norm{g_1(t)\star^1_1 g_2(t)}^2_{L^2(\mu_\lambda)\otimes K^{\otimes 2}} \lesssim \lambda\psi_{\lambda,1}^2$. Since $\lambda^{-2}\ll \psi_{\lambda,1}\ll \lambda^{-1}$, all terms listed are asymptotically bounded by $\lambda^{-2}\psi_{\lambda,1}^{-1}$. Combining this fact with \eqref{estimatecovarregime3} yields \begin{align*} d_3\brac{\bar{F}_\lambda,Z}\lesssim \lambda^{-1}\psi_{\lambda,1}^{-1/2}+\lambda \psi_{\lambda,1}, \end{align*} which concludes the proof. \end{proof} \section*{Appendix} \label{sec:appendix} This section gathers ancillary lemmas used in the proofs of our main results as well as in the different applications presented in this paper.
\subsection{Lemmas related to the proofs of Theorems \ref{theorem_fourmomentHilbert} and \ref{theorem_contractionestimate}} Our first lemma is a crucial result from \cite{dobler-vidotto-zheng:2018:fourth-moment-theorems} which we restate here for convenience. \begin{lemma} \label{lemmaexchangeablepairDVZ} Let $p,q \geq 1$ be integers, and let $F_q=I_q^{\eta}(f_q),G_p=I_p^{\eta}(g_p)$ and $F^t_q=I_q^{\eta^t}(f_q),G^t_p=I_p^{\eta^t}(g_p)$ be real-valued Poisson multiple integrals as constructed in Section \ref{Section_prelim}. Then, the following limits hold almost surely. \begin{enumerate}[label=(\alph*)] \item $\lim_{t\to 0}\frac{1}{t}\E{F^t_q-F_q|\eta}=-qF_q$ \item $ \lim_{t\to 0}\frac{1}{t}\E{(F^t_q-F_q)(G^t_p-G_p)|\eta}=2\widetilde{\Gamma}(F_q,G_p)$ \item $ \lim_{t\to 0}\frac{1}{t}\E{F^t_q(G^t_p-G_p)|\eta}=2\widetilde{\Gamma}(F_q,G_p)-pF_qG_p$ \item $\lim_{t\to 0} \frac{1}{t}\E{(F^t_q-F_q)^4}=-4q\E{F_q^4}+12\E{F_q^2\widetilde{\Gamma}(F_q,F_q)}$. \end{enumerate} \end{lemma} \begin{proof} The proofs of parts $(a)$, $(b)$ and $(d)$ can be found in \cite[Proposition 3.2]{dobler-vidotto-zheng:2018:fourth-moment-theorems}. Part $(c)$ is a consequence of $(a)$ and $(b)$. \end{proof} Our next lemma states a more general version of Lemma \ref{lemmaexchangeablepairDVZ}, part $(d)$. \begin{lemma} \label{lemma_limit_realvalued4thpower} Let $(X,X^t)$ be an exchangeable pair such that $X=\sum_{q\in\N}I_q^{\eta}(x_q)$ and $X^t=\sum_{q\in\N}I_q^{\eta^t}(x_q)$. Let the pairs $(Y,Y^t)$, $(U,U^t)$ and $(V,V^t)$ be defined in the same way. Then, one has \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{(X^t-X)(Y^t-Y)(U^t-U)(V^t-V)} =&4\mathbb{E} \left[ \widetilde{\Gamma}(X,Y)UV+\widetilde{\Gamma}(X,V)YU\right.\\ &\qquad\qquad\qquad\quad \left. +\widetilde{\Gamma}(X,U)YV+\widetilde{L}XYUV\right]. \end{align*} \end{lemma} \begin{proof} This limit is a consequence of exchangeability and Lemma \ref{lemmaexchangeablepairDVZ}.
Indeed, denoting \begin{equation*} M_t = \frac{1}{t}\E{(X^t-X)(Y^t-Y)(U^t-U)(V^t-V)}, \end{equation*} we can write \begin{align*} \lim_{t\to 0}M_t=&2\lim_{t\to 0}\frac{1}{t}\E{XYUV-X^tYUV-XY^tUV-XYU^tV-XYUV^t}\\&+2\lim_{t\to 0}\frac{1}{t}\E{X^tY^tUV+X^tYU^tV+X^tYUV^t}\\ =&2\lim_{t\to 0}\frac{1}{t}\E{-\brac{X^t-X}YUV-X\brac{Y^t-Y}UV-XY\brac{U^t-U}V-XYU\brac{V^t-V}}\\ &+2\lim_{t\to 0}\frac{1}{t}\E{\brac{X^tY^t-XY}UV+\brac{X^tU^t-XU}YV+\brac{X^tV^t-XV}YU}\\ =&2\mathbb{E}\big[-\widetilde{L}XYUV-X\widetilde{L}YUV-XY\widetilde{L}UV-XYU\widetilde{L}V+\widetilde{L}(XY)UV+\widetilde{L}(XU)YV\\ &+\widetilde{L}(XV)YU\big]\\ =&4\E{\widetilde{\Gamma}(X,Y)UV+\widetilde{\Gamma}(X,V)YU+\widetilde{\Gamma}(X,U)YV+\widetilde{L}XYUV}. \end{align*} \end{proof} \begin{lemma} \label{lemmapositive4thmoment} Let $X=\sum_{q=1}^NF_q$, where $F_q\in \mathcal{H}^q(K)$ with covariance operator $S_q$. Furthermore, letting $\left\{ k_i \right\}_{i \in \N}$ be an orthonormal basis of $K$, $F_q$ can be written as $\sum_{i\in \N}F_{q,i}k_i$, where $F_{q,i} = \left\langle F_q,k_i \right\rangle_K$. Then, it holds that \begin{align*} \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}-2\E{F_{q,i}F_{p,j}}^2\geq 0, \end{align*} which leads to \begin{align*} \E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}\geq 0 \end{align*} and \begin{align*} \E{\norm{F_q}^2_K\norm{F_p}^2_K}-\E{\norm{F_q}^2_K}\E{\norm{F_p}^2_K}\geq 0 \text{ when $q\neq p$.} \end{align*} \end{lemma} \begin{proof} By \cite[Section 5]{dobler-vidotto-zheng:2018:fourth-moment-theorems}, we have that \begin{align*} \E{J_{p+q}(F_{q,i}F_{p,j})^2}\geq \E{F_{q,i}F_{p,j}}^2+\E{F_{q,i}^2}\E{F_{p,j}^2}, \end{align*} which implies that \begin{align*} \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}-2\E{F_{q,i}F_{p,j}}^2\geq& \E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}F_{p,j}}^2-\E{J_{p+q}(F_{q,i}F_{p,j})^2}\\ \geq& \E{\sum_{m=1}^{p+q-1}J_m(F_{q,i}F_{p,j})^2}\\\geq& 0.
\end{align*} The second and third inequalities in the statement of our lemma immediately follow, since \begin{align*} \E{\norm{F_q}^4_K}-(\E{\norm{F_q}^2_K})^2-2\norm{S_q}^2_{\operatorname{HS}}=& \sum_{i,j\in\N}\brac{\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}\\ \geq& 0 \end{align*} and when $q\neq p$, \begin{align*} \E{\norm{F_q}^2_K\norm{F_p}^2_K}-\E{\norm{F_q}^2_K}\E{\norm{F_p}^2_K}= \sum_{i,j\in\N}\brac{\E{F_{q,i}^2F_{p,j}^2}-\E{F_{q,i}^2}\E{F_{p,j}^2}}\geq 0. \end{align*} \end{proof} The upcoming lemma is a version of Lemma \ref{lemmaexchangeablepairDVZ} in the setting of Hilbert-valued random variables. \begin{lemma} \label{lemmaexchangeablepair} Let $X=\sum_{q=1}^NF_q$, where $F_q\in \mathcal{H}^q(K)$ with covariance operator $S_q$. It holds that \begin{enumerate}[label=(\alph*)] \item $\lim_{t\to 0}\frac{1}{t}\E{\inner{F_q^t-F_q,Dg(X)}_K}=-q\E{\inner{ F_q,Dg(X)}_K}.$ \item $\lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}^2_K}= 2q\E{\norm{F_q}^2_K}.$ \item $\lim_{t\to 0}\frac{1}{2t}\E{\inner{-L^{-1}\brac{F_q^t-F_q},D^2g(X)(F_p^t-F_p)}_K}\\ \hspace*{16.5em}=\frac{1}{q}\sum_{i,j\in\N}\E{\widetilde{\Gamma}(F_{q,i},F_{p,j})\inner{k_i,D^2g(X)k_j }_K}.$ \item $ \lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}_K^4}= 4\sum_{i,j\in \N}\E{F_{q,i}^2\brac{\widetilde{\Gamma}(F_{q,j},F_{q,j})-q\E{F_{q,j}^2}}}\\\hspace*{11.5em}+8\sum_{i,j\in \N}\E{F_{q,i}F_{q,j}\brac{\widetilde{\Gamma}(F_{q,i},F_{q,j})-q\E{F_{q,i}F_{q,j}}}}\\ \hspace*{11.5em}-4q\sum_{i,j\in \N}\brac{\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}. $ \end{enumerate} In particular, when $q=p$, part $(c)$ becomes \begin{align*} \lim_{t\to 0}\frac{1}{2t}\E{\inner{-L^{-1}\brac{F_q^t-F_q},D^2g(X)(F_q^t-F_q)}_K}=\Tr_K\brac{D^2g(X)\Gamma \brac{F_q,-L^{-1}F_q}}.
\end{align*} \end{lemma} \begin{proof} Part $(a)$ follows from \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\inner{F_q^t-F_q,Dg(X)}_K} &=\lim_{t\to 0}\frac{1}{t}\sum_{i\in \N}\E{\inner{\brac{F^t_{q,i}-F_{q,i}} k_i,Dg(X)}_K}\\ &=\sum_{i\in\N}\E{\lim_{t\to 0}\frac{1}{t}\E{F^t_{q,i}-F_{q,i}|\eta}{\inner{ k_i,Dg(X)}_K}}\\ &=-q\sum_{i\in\N}\E{F_{q,i}{\inner{ k_i,Dg(X)}_K}}\\ &=-q\E{\inner{F_q,Dg(X)}_K}. \end{align*} Part $(b)$ is a result of \begin{align*} \E{\lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}^2_K|\eta}} =&\E{\sum_{i\in\N}\lim_{t\to 0}\frac{1}{t}\E{\brac{F^t_{q,i}-F_{q,i}}^2|\eta}}\\ =&2\sum_{i\in \N}\E{\widetilde{\Gamma}\brac{F_{q,i},F_{q,i}}}\\ =&2q\E{\norm{F_q}^2_K}. \end{align*} For part $(c)$, we can write \begin{align*} &\lim_{t\to 0}\frac{1}{2t}\E{\inner{-L^{-1}\brac{F_q^t-F_q},D^2g(X)(F_p^t-F_p)}_K}\\ &\hspace*{11em}=\lim_{t\to 0}\frac{1}{2t}\E{\inner{\sum_{i\in \N}\frac{1}{q}\brac{F^t_{q,i}-F_{q,i}}k_i,D^2g(X)\sum_{j\in \N}\brac{F^t_{p,j}-F_{p,j}}k_j}_K}\\ &\hspace*{11em}=\frac{1}{q}\sum_{i,j\in \N}\E{\lim_{t\to 0}\frac{1}{2t}\E{\brac{F^t_{q,i}-F_{q,i}}\brac{F^t_{p,j}-F_{p,j}}|\eta} \inner{k_i,D^2g(X)k_j}_K}\\ &\hspace*{11em}=\frac{1}{q}\sum_{i,j\in \N}\E{\widetilde{\Gamma}(F_{q,i},F_{p,j})\inner{k_i,D^2g(X)k_j}_K}. \end{align*} Using the above expression in the case $q=p$, along with the fact that \begin{align*} \Gamma \brac{F_q,F_q}k_j=\Gamma\brac{\sum_{i\in \N} F_{q,i}k_i,\sum_{m\in \N} F_{q,m}k_m}k_j&=\sum_{i,m\in \N}\Gamma\brac{F_{q,i}k_i,F_{q,m}k_m}k_j\\ &=\sum_{i,m\in \N}\frac{1}{2}\widetilde{\Gamma}\brac{F_{q,i},F_{q,m}}\brac{k_i\otimes k_m+k_m\otimes k_i}k_j\\ &=\sum_{i\in \N}\widetilde{\Gamma}\brac{F_{q,i},F_{q,j}}k_i \end{align*} yields \begin{align*} \lim_{t\to 0}\frac{1}{2t}\E{\inner{-L^{-1}\brac{F_q^t-F_q},D^2g(X)(F_q^t-F_q)}_K}=\Tr_K\brac{D^2g(X)\Gamma \brac{F_q,-L^{-1}F_q}}. 
\end{align*} For part $(d)$, the exchangeability of $\brac{F_q,F_q^t}$ and Lemma \ref{lemma_limit_realvalued4thpower} imply \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}_K^4}=&\lim_{t\to 0}\frac{1}{t}\E{\norm{\sum_{i\in\N}\brac{F_{q,i}^t-F_{q,i}}k_i}^4_K}\nonumber\\ =&\lim_{t\to 0}\frac{1}{t}\E{\sum_{i,j\in \N}\brac{F_{q,i}^t-F_{q,i}}^2\brac{F_{q,j}^t-F_{q,j}}^2}\nonumber\\ =&4\sum_{i,j\in \N}\E{F_{q,i}^2\brac{\widetilde{\Gamma}(F_{q,j},F_{q,j})-q\E{F_{q,j}^2}}}\nonumber \\ &+8\sum_{i,j\in \N}\E{F_{q,i}F_{q,j}\brac{\widetilde{\Gamma}(F_{q,i},F_{q,j})-q\E{F_{q,i}F_{q,j}}}}\nonumber\\ &-4q\sum_{i,j\in \N}\brac{\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}. \end{align*} \end{proof} The next result provides upper bounds on the limits appearing in Lemma \ref{lemmaexchangeablepair}, parts $(b)$ and $(d)$. \begin{lemma} \label{lemma_remaindertermbound} Let $X=\sum_{q=1}^NF_q$, where $F_q\in \mathcal{H}^q(K)$ with covariance operator $S_q$. It holds that \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^2_K}\leq 2N\E{\norm{X}^2_K} \end{align*} and \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^4_K} \leq \sum_{1\leq q\leq N} 2^{3q-2}(4q-3)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}. \end{align*} \end{lemma} \begin{proof} The first bound follows from \begin{align*} \E{\lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^2_K|\eta}}=&2\sum_{\substack{i\in \N\\p,q\leq N}}\E{\widetilde{\Gamma}\brac{F_{p,i},F_{q,i}}}\\ =& \sum_{\substack{i\in \N\\p\leq N}}2p\E{F_{p,i}^2}\\ \leq& 2N\E{\norm{X}^2_K}. \end{align*} For the second estimate, we start by using the triangle inequality and the $c_r$-inequality (see for example ~\cite[Thm.
2.2, p.127]{gut:2013:probability-graduate-course}) to write \begin{align} \label{triangle_ineq} \lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^4_K} \leq& \lim_{t\to 0}\frac{1}{t}\E{\brac{\sum_{1\leq q\leq N}\norm{F^t_q-F_q}_K}^4}\nonumber \\ \leq& \sum_{1\leq q\leq N}8^{q-1}\lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}_K^4}. \end{align} Regarding the previous expression, Lemma \ref{lemmaexchangeablepair} gives \begin{align} \label{limit_4thpowerqthchaos} \lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}_K^4} =&4\sum_{i,j\in \N}\E{F_{q,i}^2\brac{\widetilde{\Gamma}(F_{q,j},F_{q,j})-q\E{F_{q,j}^2}}}\nonumber \\ &+8\sum_{i,j\in \N}\E{F_{q,i}F_{q,j}\brac{\widetilde{\Gamma}(F_{q,i},F_{q,j})-q\E{F_{q,i}F_{q,j}}}}\nonumber\\ &-4q\sum_{i,j\in \N}\brac{\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}. \end{align} We will treat each term of \eqref{limit_4thpowerqthchaos} separately. For the first term of \eqref{limit_4thpowerqthchaos}, our proof will use an argument similar to the proof of \cite[Lemma 2.2]{dobler-vidotto-zheng:2018:fourth-moment-theorems} or \cite[Lemma 3.1]{dobler-peccati:2018:fourth-moment-theorem}. First, observe that if $k$ is a fixed positive integer and $J_k$ denotes the projection into the $k$-th Poisson chaos, then \begin{align*} \E{J_k\brac{\norm{F_q}^2_K}^2}=\sum_{i,j\in\N}\E{J_k\brac{F^2_{q,i}}J_k\brac{F^2_{q,j}}}. \end{align*} In particular, the expansion in \cite[Lemma 5.1]{dobler-vidotto-zheng:2018:fourth-moment-theorems} yields \begin{align*} \E{J_{2q}\brac{\norm{F_q}^2_K}^2}&=\sum_{i,j\in\N}(2q)!\inner{f_{q,i}\widetilde{\otimes} f_{q,i},f_{q,j}\widetilde{\otimes} f_{q,j}}_{\mathfrak{H}^{2q}}\\&=\sum_{i,j\in\N}\brac{2\E{F_{q,i}F_{q,j}}^2+\sum_{r=1}^{q-1}q!^2{q\choose r}^2\inner{f_{q,i}{\star}_r^r f_{q,j},f_{q,j}{\star}_r^r f_{q,i}}_{\mathfrak{H}^{2q-2r}}}.
\end{align*} Thus, the first term of \eqref{limit_4thpowerqthchaos} can be bounded via \begin{align*} \sum_{i,j\in \N}\E{F_{q,i}^2\brac{\widetilde{\Gamma}(F_{q,j},F_{q,j})-q\E{F_{q,j}^2}}} \leq& \frac{1}{2}\sum_{i,j\in \N}\sum_{k=1}^{2q-1}(2q-k)\E{J_k\brac{F_{q,i}^2}J_k\brac{F_{q,j}^2}}\\ =&\frac{1}{2}\sum_{k=1}^{2q-1}(2q-k)\E{J_k\brac{\norm{F_q}^2_K}^2}\\ \leq& \frac{2q-1}{2}\sum_{k=1}^{2q-1}\E{J_k\brac{\norm{F_q}^2_K}^2}\\ =& \frac{2q-1}{2}\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}\\ &-\frac{2q-1}{2}\sum_{i,j\in \N}\sum_{r=1}^{q-1}q!^2{q\choose r}^2\inner{f_{q,i}{\star}_r^r f_{q,j},f_{q,j}{\star}_r^r f_{q,i}}_{\mathfrak{H}^{2q-2r}}. \end{align*} \noindent The second term of \eqref{limit_4thpowerqthchaos} will receive a similar treatment. Based on \cite[Lemma 5.1]{dobler-vidotto-zheng:2018:fourth-moment-theorems}, we have \begin{align*} \sum_{i,j\in\N}\E{J_{2q}\brac{F_{q,i}F_{q,j}}^2}&=\sum_{i,j\in\N}(2q)!\norm{f_{q,i}\widetilde{\otimes} f_{q,j}}^2_{\mathfrak{H}^{2q}}\\ &=\sum_{i,j\in\N}\brac{\E{F_{q,i}F_{q,j}}^2+\E{F_{q,i}^2}\E{F_{q,j}^2}+\sum_{r=1}^q q!^2{q\choose r}^2\norm{f_{q,i}{\star}_r^r f_{q,j}}^2_{\mathfrak{H}^{2q-2r}}}. \end{align*} Hence, \begin{align*} \sum_{i,j\in \N}\E{F_{q,i}F_{q,j}\brac{\widetilde{\Gamma}(F_{q,i},F_{q,j})-q\E{F_{q,i}F_{q,j}}}} =&\frac{1}{2}\sum_{i,j\in \N}\sum_{k=1}^{2q-1}(2q-k)\E{J_k\brac{F_{q,i}F_{q,j}}^2}\\ \leq& \frac{2q-1}{2}\sum_{i,j\in \N}\sum_{k=1}^{2q-1}\E{J_k\brac{F_{q,i}F_{q,j}}^2}\\ =& \frac{2q-1}{2}\sum_{i,j\in \N}\brac{\E{F_{q,i}^2F_{q,j}^2}-\E{F_{q,i}^2}\E{F_{q,j}^2}-2\E{F_{q,i}F_{q,j}}^2}\\ &-\frac{2q-1}{2}\sum_{i,j\in \N}\sum_{r=1}^{q-1} q!^2{q\choose r}^2\norm{f_{q,i}{\star}_r^r f_{q,j}}^2_{\mathfrak{H}^{2q-2r}} \\ \leq& \frac{2q-1}{2}\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}\\ &-\frac{2q-1}{4}\sum_{i,j\in \N}\sum_{r=1}^{q-1} q!^2{q\choose r}^2\norm{f_{q,i}{\star}_r^r f_{q,j}}^2_{\mathfrak{H}^{2q-2r}}.
\end{align*} In addition, based on the fact that \begin{align*} &\sum_{i,j\in\N}\brac{2\norm{f_{q,i}{\star}_r^r f_{q,j}}^2_{\mathfrak{H}^{2q-2r}}+2\inner{f_{q,i}{\star}_r^r f_{q,j},f_{q,j}{\star}_r^r f_{q,i}}_{\mathfrak{H}^{2q-2r}}}\\ & \qquad\qquad=\sum_{i,j\in\N}\brac{\norm{f_{q,i}{\star}_r^r f_{q,j}}^2_{\mathfrak{H}^{2q-2r}}+2\inner{f_{q,i}{\star}_r^r f_{q,j},f_{q,j}{\star}_r^r f_{q,i}}_{\mathfrak{H}^{2q-2r}}+\norm{f_{q,j}{\star}_r^r f_{q,i}}^2_{\mathfrak{H}^{2q-2r}}}\\ &\qquad\qquad =\sum_{i,j\in\N}\brac{\norm{f_{q,i}{\star}_r^r f_{q,j}+f_{q,j}{\star}_r^r f_{q,i}}^2_{\mathfrak{H}^{2q-2r}}}\\ &\qquad\qquad \geq 0, \end{align*} we get from \eqref{limit_4thpowerqthchaos} that \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\norm{F^t_q-F_q}^4_K} \leq (8q-6)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}} \end{align*} and from \eqref{triangle_ineq} that \begin{align*} \lim_{t\to 0}\frac{1}{t}\E{\norm{X^t-X}^4_K} \leq \sum_{q=1}^{N} 2^{3q-2}(4q-3)\brac{\E{\norm{F_q}^4_K}-\E{\norm{F_q}^2_K}^2-2\norm{S_q}^2_{\operatorname{HS}}}. \end{align*} \end{proof} The result below is an adaptation to our setting of a classical combinatorial identity appearing in \cite[Proof of Proposition 11.2.2]{peccati-taqqu:2011:wiener-chaos-moments}.
\begin{lemma} \label{lemma_contraction_00} The quantity $\norm{f_{q,i}\widetilde{\star}^0_0 f_{p,j}}^2_{\mathfrak{H}^{q+p}}$ appearing in Equation \eqref{fourthmoment} can be written in terms of norms of non-symmetrized contractions as \begin{align*} (q+p)!\norm{f_{q,i}\widetilde{\star}^0_0 f_{p,j}}^2_{\mathfrak{H}^{q+p}}=&\bigg(q!p!\norm{f_{q,i}}^2_{\mathfrak{H}^q}\norm{f_{p,j}}^2_{\mathfrak{H}^p}+q!^2\inner{f_{q,i},f_{q,j}}^2_{\mathfrak{H}^q}\mathds{1}_{\left\{ q=p \right\}} \\ &+q!p!{q\choose q\wedge p}{p\choose q\wedge p}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}} \mathds{1}_{\left\{ q\neq p \right\}}\\ &+\sum_{r=1}^{q\wedge p -1}q!p!{q\choose r}{p\choose r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{q+p-2r}}\bigg). \end{align*} \end{lemma} \begin{proof} The procedure in \cite[Proof of Proposition 11.2.2]{peccati-taqqu:2011:wiener-chaos-moments} will be slightly modified to fit our situation. Let $\mathfrak{S}_{q+p}$ be the set of all permutations of $(q+p)$ elements and assume $\pi,\rho\in \mathfrak{S}_{q+p}$. When the intersection set $\{\pi(1),\ldots,\pi(q) \}\cap \{\rho(q+1),\ldots,\rho(q+p) \}$ contains $r$ elements, this will be denoted by $\pi \stackrel{r}{\sim}\rho$.
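Although the proof below is purely combinatorial, the identity can be sanity-checked numerically in a finite-dimensional surrogate, taking $\mathfrak{H}=\mathbb{R}^3$ and $q=p=2$, so that the kernels become symmetric matrices. The following sketch (dimension, seed and kernels are arbitrary choices, not part of the argument) verifies the stated coefficient pattern:

```python
import itertools
import math

import numpy as np

def sym(t):
    """Symmetrize a tensor over all of its axes (the tilde operation)."""
    perms = list(itertools.permutations(range(t.ndim)))
    return sum(np.transpose(t, p) for p in perms) / len(perms)

# Finite-dimensional stand-in for the kernels: H = R^3, q = p = 2,
# so f and g are symmetric 3 x 3 arrays (random, seeded).
rng = np.random.default_rng(0)
d = 3
f = sym(rng.standard_normal((d, d)))
g = sym(rng.standard_normal((d, d)))

# Left side: (q+p)! * ||sym(f tensor g)||^2 in H^{tensor 4}.
lhs = math.factorial(4) * np.sum(sym(np.einsum('ij,kl->ijkl', f, g)) ** 2)

# Right side for q = p = 2: the q != p term vanishes, the r = q^p
# contraction collapses to the squared inner product, and only r = 1
# survives in the remaining sum.
contr1 = f @ g  # 1-contraction f star_1^1 g
rhs = (math.factorial(2) ** 2 * np.sum(f ** 2) * np.sum(g ** 2)
       + math.factorial(2) ** 2 * np.sum(f * g) ** 2
       + math.factorial(2) ** 2 * math.comb(2, 1) ** 2 * np.sum(contr1 ** 2))

assert np.isclose(lhs, rhs)
```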
Since $\mathfrak{H}=L^2(\mathcal{Z},\mu)$, we have that \begin{align} \label{contraction00} \norm{f_{q,i}\widetilde{\star}^0_0 f_{p,j}}^2_{\mathfrak{H}^{q+p}}=&\norm{{f_{q,i}\widetilde{\otimes} f_{p,j}}}^2_{\mathfrak{H}^{q+p}}\nonumber\\ =&\frac{1}{(q+p)!^2}\sum_{\pi,\rho\in \mathfrak{S}_{q+p}}\int_{\mathcal{Z}^{q+p}}f_{q,i}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}f_{p,j}\brac{z_{\pi(q+1)},\ldots,z_{\pi(q+p)}}\nonumber\\ &\qquad\qquad\qquad f_{q,i}\brac{z_{\rho(1)},\ldots,z_{\rho(q)}}f_{p,j}\brac{z_{\rho(q+1)},\ldots,z_{\rho(q+p)}}\mu(dz_1\ldots dz_{q+p})\nonumber\\ =&\frac{1}{(q+p)!^2}\sum_{\pi\in \mathfrak{S}_{q+p}}\brac{\sum_{r=1}^{q\wedge p -1}\sum_{\pi\stackrel{r}{\sim} \rho}A_{1,r}+\sum_{\pi \stackrel{0}{\sim}\rho}A_2+\sum_{\pi \stackrel{q\wedge p}{\sim}\rho}A_3}. \end{align} For the second sum in \eqref{contraction00}, $\pi \stackrel{0}{\sim}\rho$ is equivalent to \begin{equation*} \begin{cases} \{\pi(1),\ldots,\pi(q) \}\cap \{\rho(1),\ldots,\rho(q) \}=\{\pi(1),\ldots,\pi(q) \}\\ \{\pi(q+1),\ldots,\pi(q+p) \}\cap \{\rho(q+1),\ldots,\rho(q+p) \}=\{\pi(q+1),\ldots,\pi(q+p) \} \end{cases}, \end{equation*} which, by the symmetry of $f_{q,i}$ and $f_{p,j}$, implies that \begin{align*} A_2&=\int_{\mathcal{Z}^{q+p}}f_{q,i}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}^2 f_{p,j}\brac{z_{\pi(q+1)},\ldots,z_{\pi(q+p)}}^2\mu(dz_1\ldots dz_{q+p}) =\norm{f_{q,i}}^2_{\mathfrak{H}^q}\norm{f_{p,j}}^2_{\mathfrak{H}^p}. \end{align*} Furthermore, observe that for a fixed element $\pi\in \mathfrak{S}_{q+p}$, there are $q!$ ways to permute $\{1,\ldots,q \}$ and $p!$ ways to permute $\{q+1,\ldots,q+p \}$. Since $f_{q,i}$ and $f_{p,j}$ are symmetric functions, we have \begin{align*} \sum_{\pi \stackrel{0}{\sim}\rho}A_2=q!p! \norm{f_{q,i}}^2_{\mathfrak{H}^q}\norm{f_{p,j}}^2_{\mathfrak{H}^p}. \end{align*} For the third sum in \eqref{contraction00}, there are two cases to consider.
If $q=p$ then $\pi \stackrel{q}{\sim}\rho$ means \begin{equation*} \begin{cases} \{\pi(1),\ldots,\pi(q) \}\cap \{\rho(q+1),\ldots,\rho(2q) \}= \{\pi(1),\ldots,\pi(q) \} \\ \{\pi(q+1),\ldots,\pi(2q) \}\cap \{\rho(1),\ldots,\rho(q) \}= \{\pi(q+1),\ldots,\pi(2q) \} \end{cases}, \end{equation*} which implies that \begin{align*} A_3=& \int_{\mathcal{Z}^{q}}\brac{\int_{\mathcal{Z}^{q}}f_{q,i}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}f_{q,j}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}}\\ &\qquad\qquad\qquad\qquad\qquad\quad f_{q,i}\brac{z_{\pi(q+1)},\ldots,z_{\pi(2q)}}f_{q,j}\brac{z_{\pi(q+1)},\ldots,z_{\pi(2q)}}\mu(dz_1\ldots dz_{2q})\\ =&\inner{f_{q,i},f_{q,j}}^2_{\mathfrak{H}^q}\mathds{1}_{\left\{ q=p \right\}}, \end{align*} and there are $q!^2$ copies like the one above. On the other hand, for $q\neq p$, \begin{align*} A_3&=\int_{\mathcal{Z}^{\abs{q-p}}}\brac{\int_{\mathcal{Z}^{q\wedge p}} f_{q,i}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}f_{p,j}\brac{z_{\rho(q+1)},\ldots,z_{\rho(q+p)}}}\\ &\qquad\qquad\qquad\qquad \brac{\int_{\mathcal{Z}^{q\wedge p}}f_{q,i}\brac{z_{\rho(1)},\ldots,z_{\rho(q)}}f_{p,j}\brac{z_{\pi(q+1)},\ldots,z_{\pi(q+p)}}}\mu(dz_1\ldots dz_{q+p})\\ &=\int_{\mathcal{Z}^{\abs{q-p}}}\brac{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2\mu(dz_1 \ldots dz_{\abs{q-p}})\\ &=\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}}. \end{align*} Given a fixed $\pi$ such that $\pi \stackrel{q\wedge p}{\sim}\rho$ and $q\neq p$, there is a total of ${q\choose q\wedge p}{p\choose q\wedge p}$ ways of choosing $q\wedge p$ elements in $ \{\pi(1),\ldots,\pi(q) \}\cap \{\rho(q+1),\ldots,\rho(q+p) \}$ and $q\wedge p$ elements in $\{\pi(q+1),\ldots,\pi(q+p) \}\cap \{\rho(1),\ldots,\rho(q) \}$. In addition, there are $q!p!$ ways to organize $\{\rho(1),\ldots,\rho(q) \}$ and $\{\rho(q+1),\ldots,\rho(q+p) \}$.
Therefore, combining the cases $q=p$ and $q\neq p$ gives us \begin{align*} \sum_{\pi \stackrel{q\wedge p}{\sim}\rho}A_3=q!^2\inner{f_{q,i},f_{q,j}}^2_{\mathfrak{H}^q}\mathds{1}_{\left\{ q=p \right\}}+q!p!{q\choose q\wedge p}{p\choose q\wedge p}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}}\mathds{1}_{\left\{ q\neq p \right\}}. \end{align*} We now turn to the first sum on the right side of \eqref{contraction00}, that is when $\pi \stackrel{r}{\sim}\rho$ for $1\leq r\leq q\wedge p -1$. We can write \begin{align*} A_{1,r}&=\int_{\mathcal{Z}^{q+p-2r}}\brac{\int_{\mathcal{Z}^r}f_{q,i}\brac{z_{\pi(1)},\ldots,z_{\pi(q)}}f_{p,j}\brac{z_{\rho(q+1)},\ldots,z_{\rho(q+p)}}}\\ &\qquad\qquad\qquad\qquad \brac{\int_{\mathcal{Z}^r}f_{q,i}\brac{z_{\rho(1)},\ldots,z_{\rho(q)}}f_{p,j}\brac{z_{\pi(q+1)},\ldots,z_{\pi(q+p)}}}\mu(dz_1\ldots dz_{q+p})\\ &=\int_{\mathcal{Z}^{q+p-2r}}\brac{f_{q,i} \star^r_r f_{p,j}(z_1,\ldots, z_{q+p-2r})}^2\mu(dz_1\ldots dz_{q+p-2r})\\ &=\norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{q+p-2r}}. \end{align*} There are ${q\choose r}{p\choose r}$ ways to choose $r$ elements in $ \{\pi(1),\ldots,\pi(q) \}\cap \{\rho(q+1),\ldots,\rho(q+p) \}$ and $r$ elements in $\{\pi(q+1),\ldots,\pi(q+p) \}\cap \{\rho(1),\ldots,\rho(q) \}$. Furthermore, there are $q!p!$ ways to organize $\{\rho(1),\ldots,\rho(q) \}$ and $\{\rho(q+1),\ldots,\rho(q+p) \}$. This yields \begin{align*} \sum_{r=1}^{q\wedge p -1}\sum_{\pi\stackrel{r}{\sim} \rho}A_{1,r}=\sum_{r=1}^{q\wedge p -1}q!p!{q\choose r}{p\choose r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{q+p-2r}}.
\end{align*} Thus, we can expand \eqref{contraction00} as \begin{align*} \norm{f_{q,i}\widetilde{\star}^0_0 f_{p,j}}^2_{\mathfrak{H}^{q+p}}=&\frac{(q+p)!}{(q+p)!^2} \bigg(q!p!\norm{f_{q,i}}^2_{\mathfrak{H}^q}\norm{f_{p,j}}^2_{\mathfrak{H}^p}+q!^2\inner{f_{q,i},f_{q,j}}^2_{\mathfrak{H}^q}\mathds{1}_{\left\{ q=p \right\}} \\ &+q!p!{q\choose q\wedge p}{p\choose q\wedge p}\norm{f_{q,i} \star^{q\wedge p}_{q\wedge p} f_{p,j}}^2_{\mathfrak{H}^{\otimes \abs{q-p}}} \mathds{1}_{\left\{ q\neq p \right\}}\\ &+\sum_{r=1}^{q\wedge p -1}q!p!{q\choose r}{p\choose r} \norm{f_{q,i} \star^r_r f_{p,j}}^2_{\mathfrak{H}^{q+p-2r}}\bigg), \end{align*} which is the desired statement. \end{proof} \subsection{Lemmas related to the proof of Theorem \ref{theorem_Besov}} Our first lemma expresses the Hilbert-Schmidt norm in a Besov-Liouville space as a norm in $L^2 \left( [0,1]^{\otimes 2} \right)$. \begin{lemma} \label{lemma_covariance} Let $K=\mathcal{I}_{\beta,2}$ and $S$ be the covariance operator of a random variable $X\in L^2(\Omega)\otimes K$. Let $f\in K$; then \begin{align} \label{fractional_S} \brac{D^\beta_{0^{+}}Sf}(s)=\int_0^1\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)} \brac{D^\beta_{0^{+}}f}(r)dr \end{align} is in $L^2([0,1])$. This leads to \begin{align} \label{norm_HS} \norm{S}_{\operatorname{HS}(K)}=\norm{\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)}}_{L^2([0,1]^{2})}. \end{align} \end{lemma} \begin{proof} Let $f,g\in K$.
Applying Fubini's theorem to $\inner{Sf,g}_K=\E{\inner{f,X}_K\inner{g,X}_K}$ and rearranging terms yields \begin{align*} &\int_0^1 \brac{D^\beta_{0^{+}}Sf}(s) \brac{D^\beta_{0^{+}}g}(s)ds\\ &\qquad\qquad\qquad =\int_0^1 \brac{\int_0^1\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)} \brac{D^\beta_{0^{+}}f}(r)dr}\brac{D^\beta_{0^{+}}g}(s)ds, \end{align*} which is equivalent to \begin{align} \label{pre_19} \int_0^1 \brac{\brac{D^\beta_{0^{+}}Sf}(s)-\int_0^1\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)}\brac{D^\beta_{0^{+}}f}(r)dr} \brac{D^\beta_{0^{+}}g}(s)ds=0. \end{align} Let $\left\{ g_n \right\}_{n \in \N}$ be an orthonormal basis of $\mathcal{I}_{\beta,2}$. Due to the isometry between $\mathcal{I}_{\beta,2}$ and $L^2([0,1])$, the set $\left\{ D^\beta_{0^{+}}g_n \right\}_{n \in \N}$ is an orthonormal basis of $L^2([0,1])$. Then, Equation \eqref{pre_19} implies \eqref{fractional_S}. To prove \eqref{norm_HS}, let $\{e_n\}_{n\in \N}$ be an orthonormal basis of $L^2([0,1])$. Then, $\{e_m\otimes e_n\}_{m,n\in \N}$ is an orthonormal basis of $L^2([0,1]^{\otimes 2})$. Also, $\left\{I^\beta_{0^{+}}e_n \right\}_{n\in \N}$ is a basis of $K$. 
Now observe that, using \eqref{fractional_S}, we can write \begin{align*} \inner{I^\beta_{0^{+}}e_m,S I^\beta_{0^{+}}e_n}_K&=\int_0^1 e_m(s) \brac{D^\beta_{0^{+}} S I^\beta_{0^{+}}e_n}(s)ds\\ &=\int_0^1 e_m(s) \brac{\int_0^1\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)} e_n(r)dr}ds\\ &=\int_0^1\int_0^1 \E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)} e_m(s) e_n(r)drds, \end{align*} which leads to \begin{align*} \norm{S}_{\operatorname{HS}(K)}^2=\sum_{m,n\in\N} \inner{I^\beta_{0^{+}}e_m,S I^\beta_{0^{+}}e_n}^2_K=\norm{\E{\brac{D^\beta_{0^{+}}X}(r) \brac{D^\beta_{0^{+}}X}(s)}}_{L^2([0,1]^{\otimes 2})}^2, \end{align*} where the first equality comes from the identity $\norm{T}^2_{\operatorname{HS}(K)}=\sum_{m,n\in\N} \inner{k_m,T k_n}^2_K$ for an operator $T \in \operatorname{HS}(K)$ and an orthonormal basis $\{k_n\}_{n\in\N}$ of $K$. \end{proof} \begin{remark} Let $\zeta$ be an $L^2([0,1])$-valued random variable with covariance operator $T$. Note that the second statement in Lemma \ref{lemma_covariance} is comparable to the identity \begin{equation*} \norm{T}_{\operatorname{HS}(L^2([0,1]))}=\norm{\E{\zeta(r)\zeta(s)}}_{L^2([0,1]^{\otimes 2})} \end{equation*} whenever $T \in \operatorname{HS}\left( L^2\left([0,1]\right) \right)$. \end{remark} The following lemma is helpful to compute the Hilbert-Schmidt norms of the Poisson process and Brownian motion appearing in Subsection \ref{subsection_Besov}. \begin{lemma} \label{lemma_cov_kernel} Let the setting of Subsection \ref{subsection_Besov} prevail, with $X_\lambda$ and $Z$ denoting a Poisson process and a Brownian motion in $\mathcal{I}_{\beta,2}$, respectively.
Then, one has \begin{align*} \E{\brac{D^\beta_{0^{+}}Z}(r) \brac{D^\beta_{0^{+}}Z}(s)}=\frac{1}{\Gamma(-\beta+1)^2}\int_0^{r\wedge s} (r-x)^{-\beta}(s-x)^{-\beta}dx \end{align*} and \begin{align*} \E{\brac{D^\beta_{0^{+}}X_\lambda}(r) \brac{D^\beta_{0^{+}}X_\lambda}(s)} =&\frac{1}{\Gamma(-\beta+1)^2}\int_0^{r\wedge s}(r-x)^{-\beta} (s-x)^{-\beta} dx\\ &-\frac{\lambda}{\Gamma(-\beta+1)^2 (-\beta+1)^2}(r-r\wedge s)^{-\beta+1}(s-r\wedge s)^{-\beta+1}. \end{align*} \end{lemma} \begin{proof} According to \cite[Section 3.1]{coutin-decreusefond:2013:steins-method-brownian}, the covariance operator of our Brownian motion is $S'=I^\beta_{0^{+}} I^{1-\beta}_{0^{+}} I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}$. Substituting this into Equation \eqref{fractional_S}, we get \begin{align} \label{equation_cov_BM} \brac{D^\beta_{0^{+}}I^\beta_{0^{+}} I^{1-\beta}_{0^{+}} I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}f}(s)=\int_0^1\E{\brac{D^\beta_{0^{+}}Z}(r) \brac{D^\beta_{0^{+}}Z}(s)} \brac{D^\beta_{0^{+}}f}(r)dr. \end{align} For the left-hand side, note that $f\in \mathcal{I}_{\beta,2}$ implies that $ D^\beta_{0^{+}}f\in L^2\subseteq L^1$, so that $I^{1-\beta}_{0^{+}} I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}f \in L^1$. Thus, $D^\beta_{0^{+}}I^\beta_{0^{+}}=I$ by \cite[Theorem 2.4]{samko-kilbas-marichev:1993:fractional-integrals-derivatives}. 
Continuing with the left-hand side, we first write out $I^{1-\beta}_{0^{+}}$ using its definition and then perform an integration by parts, which yields \begin{align*} \brac{ I^{1-\beta}_{0^{+}} I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}f}(s)=&\frac{1}{\Gamma({1-\beta})}\int_0^1 1_{[0,s]}(r) (s-r)^{-\beta}\brac{I^{1-\beta}_{1^{-}} D^\beta_{0^{+}}f}(r) dr\\ =&\frac{1}{\Gamma({1-\beta})}\int_0^1 I^{1-\beta}_{0^{+}} \brac{1_{[0,s]}(\cdot) (s-\cdot)^{-\beta}}(r)\brac{ D^\beta_{0^{+}}f}(r) dr. \end{align*} In particular, the integration by parts is valid since \cite[Equation (2.20)]{samko-kilbas-marichev:1993:fractional-integrals-derivatives} is satisfied for $p=q=2$ and $0<\beta<1/2$. Equation \eqref{equation_cov_BM} then becomes \begin{align*} \int_0^1\brac{\frac{1}{\Gamma({1-\beta})}I^{1-\beta}_{0^{+}} \brac{1_{[0,s]}(\cdot) (s-\cdot)^{-\beta}}(r)-\E{\brac{D^\beta_{0^{+}}Z}(r) \brac{D^\beta_{0^{+}}Z}(s)}} \brac{D^\beta_{0^{+}}f}(r)dr=0. \end{align*} Now, using a basis argument like the one in the proof of Lemma \ref{lemma_covariance} yields \begin{align*} \E{\brac{D^\beta_{0^{+}}Z}(r) \brac{D^\beta_{0^{+}}Z}(s)}&=\frac{1}{\Gamma({1-\beta})}I^{1-\beta}_{0^{+}} \brac{1_{[0,s]}(\cdot) (s-\cdot)^{-\beta}}(r)\\ &=\frac{1}{\Gamma({1-\beta})^2}\int_0^r (r-x)^{-\beta}(s-x)^{-\beta}1_{[0,s]}(x)dx\\ &=\frac{1}{\Gamma({1-\beta})^2}\int_0^{r\wedge s} (r-x)^{-\beta}(s-x)^{-\beta}dx, \end{align*} which is the first statement of our lemma. We now turn to the second statement. Recall the representation of $X_\lambda$ given at \eqref{def_poiprocess_Besov}. In order to use this representation in computing $\E{\brac{D^\beta_{0^{+}}X_\lambda}(r) \brac{D^\beta_{0^{+}}X_\lambda}(s)}$, we need the joint density of $(T_n,T_m)$. By definition, $T_{m \wedge n}$ and $T_{m \vee n}-T_{m \wedge n}$ are independent and distributed as $\Gamma({m \wedge n},\lambda)$ and $\Gamma(\abs{m-n},\lambda)$, respectively.
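This distributional decomposition is easy to confirm by simulation. The sketch below (with arbitrarily chosen parameters; it plays no role in the proof) checks the means of $T_{m\wedge n}$ and $T_{m\vee n}-T_{m\wedge n}$ against the stated Gamma parameters:

```python
import random

def arrival_times(n_max, lam, rng):
    """First n_max arrival times T_1 < ... < T_{n_max} of a Poisson(lam) process."""
    t, times = 0.0, []
    for _ in range(n_max):
        t += rng.expovariate(lam)
        times.append(t)
    return times

def decomposition_means(m, n, lam=2.0, reps=40000, seed=3):
    """Monte Carlo means of T_{m^n} and T_{mvn} - T_{m^n}; by the claimed
    Gamma(m^n, lam) and Gamma(|m-n|, lam) laws these should be close to
    (m^n)/lam and |m-n|/lam."""
    rng = random.Random(seed)
    lo, hi = min(m, n), max(m, n)
    s_lo = s_gap = 0.0
    for _ in range(reps):
        T = arrival_times(hi, lam, rng)
        s_lo += T[lo - 1]
        s_gap += T[hi - 1] - T[lo - 1]
    return s_lo / reps, s_gap / reps
```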
Their joint density is hence given by \begin{align*} f_{T_{m \wedge n},T_{m \vee n}-T_{m \wedge n}}(x,y)=\frac{\lambda^{m\vee n}}{\Gamma(n\wedge m)\Gamma(\abs{m-n})}x^{n\wedge m-1}y^{\abs{m-n}-1}e^{-\lambda(x+y)}. \end{align*} Since $T_{m \vee n}=T_{m \wedge n}+(T_{m \vee n}-T_{m \wedge n})$, we can write, using a simple change of variable, \begin{align} \label{formula_density} f_{T_{m \wedge n},T_{m \vee n}}(x,y)= \frac{\lambda^{m\vee n}}{\Gamma(n\wedge m)\Gamma(\abs{m-n})}x^{n\wedge m-1}(y-x)^{\abs{m-n}-1}e^{-\lambda y} \mathds{1}_{\{x<y\}}. \end{align} We are now ready to compute $\E{\brac{D^\beta_{0^{+}}X_\lambda}(r) \brac{D^\beta_{0^{+}}X_\lambda}(s)}$. We have \begin{align} \label{kernel_X_expand} &\E{\brac{D^\beta_{0^{+}}X_\lambda}(r) \brac{D^\beta_{0^{+}}X_\lambda}(s)}\nonumber \\ &\qquad\qquad =\frac{1}{\lambda \Gamma(-\beta+1)^2}\Bigg(\sum_{n,m\in \N} \E{(r-T_n)^{-\beta}_{+} (s-T_m)^{-\beta}_{+}} -\frac{\lambda s^{ -\beta+1}}{-\beta+1}\sum_{n\in \N} \E{(r-T_n)^{-\beta}_{+} }\nonumber \\ &\qquad\qquad \quad-\frac{\lambda r^{ -\beta+1}}{-\beta+1}\sum_{m\in \N} \E{(s-T_m)^{-\beta}_{+} }+\frac{\lambda^2}{(-\beta+1)^2}s^{-\beta+1}r^{-\beta+1}\Bigg)\nonumber\\ &\qquad\qquad =\frac{1}{\lambda \Gamma(-\beta+1)^2}\Bigg(\sum_{n\in \N} \E{(r-T_n)^{-\beta}_{+} (s-T_n)^{-\beta}_{+}}+\sum_{n\in \N} \sum_{m\neq n}\E{(r-T_n)^{-\beta}_{+} (s-T_m)^{-\beta}_{+}}\nonumber \\ &\qquad\qquad \quad-\frac{\lambda }{-\beta+1}s^{ -\beta+1}\sum_{n\in \N} \E{(r-T_n)^{-\beta}_{+} }-\frac{\lambda }{-\beta+1}r^{ -\beta+1}\sum_{m\in \N} \E{(s-T_m)^{-\beta}_{+} }\nonumber \\ &\qquad\qquad \quad+\frac{\lambda^2}{(-\beta+1)^2}s^{-\beta+1}r^{-\beta+1}\Bigg).
\end{align} The first sum on the right side (consisting of all diagonal terms when $m=n$) simplifies as \begin{align*} &\frac{1}{\lambda \Gamma(-\beta+1)^2}\sum_{n\in \N} \E{(r-T_n)^{-\beta}_{+} (s-T_n)^{-\beta}_{+}}\\ &\qquad\qquad\qquad =\frac{1}{\lambda \Gamma(-\beta+1)^2}\sum_{n\in \N}\int_0^\infty (r-x)^{-\beta}_{+} (s-x)^{-\beta}_{+}\frac{\lambda^n}{\Gamma(n)}x^{n-1}e^{-\lambda x} dx\\ &\qquad\qquad\qquad=\frac{1}{\Gamma(-\beta+1)^2}\int_0^{r\wedge s}(r-x)^{-\beta} (s-x)^{-\beta}e^{-\lambda x} \brac{\sum_{n\in \N} \frac{(\lambda x)^{n-1}}{(n-1)!}} dx\\ &\qquad\qquad\qquad=\frac{1}{\Gamma(-\beta+1)^2}\int_0^{r\wedge s}(r-x)^{-\beta} (s-x)^{-\beta} dx. \end{align*} Next, we consider the second sum on the right side of \eqref{kernel_X_expand}. The joint density of $(T_n,T_m)$ given in \eqref{formula_density} enables us to write \begin{align*} \sum_{n\in \N} \sum_{m\neq n}\E{(r-T_n)^{-\beta}_{+} (s-T_m)^{-\beta}_{+}}&=\frac{\lambda^2}{-\beta+1}\int_0^{r\wedge s}(r-x)^{-\beta} (s-x)^{-\beta}(r+s-2x) dx\\ &=-\frac{\lambda^2}{(-\beta+1)^2}(s-x)^{-\beta+1}(r-x)^{-\beta+1}\Big|_0^{r\wedge s} \\ &=\frac{\lambda^2}{(-\beta+1)^2}\brac{s^{-\beta+1}r^{-\beta+1}-(s-r\wedge s)^{-\beta+1}(r-r\wedge s)^{-\beta+1}}. \end{align*} For the remaining sums in \eqref{kernel_X_expand}, observe that \begin{align*} \sum_{m\in \N}\E{(s-T_m)^{-\beta}_{+} }=\frac{\lambda}{-\beta+1}s^{-\beta+1} \end{align*} and substitute the last three calculations into \eqref{kernel_X_expand} to obtain the second statement in Lemma \ref{lemma_cov_kernel}. \end{proof} \bibliography{refs} \end{document}
Car Insurance Groups 1 Through 20 Car insurance groups 1 through 20 make up a risk rating scheme devised by an independent organisation and used by both insurance carriers and consumers. The list assigns each car a risk value based on its safety and security. The lower the value on the risk scale, the lower the risk involved and, therefore, the lower your insurance rates are likely to be. There are other factors involved in computing insurance premiums, so this risk scale is not the final word, but it does help identify safer or riskier vehicles. Decrease Your Rating Whether the risk assessment is higher or lower in the car insurance groups 1 through 20, you can always reduce the inherent risk even more. More often than not, every legitimate action you take to increase your motorcar’s security against vandalism and theft will nudge your premiums downward. Some actions have a greater impact than others, but they all count. Use Thatcham-approved security devices authorised by your insurance carrier, such as a car alarm, immobilisers, high-performance locks and even vehicle trackers to reduce the risk of your vehicle being stolen. Car theft is a major problem in the UK, and any risk reduction is appreciated by your insurance carrier. Unless you have only third-party cover, your insurance carrier will probably grant some degree of discount for each device you install and use. You can also influence the vehicle’s risk factor positively by parking the vehicle in a locked garage when it is not being driven. Parking off the street is safer than on the street, and well-lit areas are better and safer than dark areas. Always locking your car doors when driving can help protect against carjacking. Keeping the doors locked when you are away from the vehicle, with the windows all the way up and any convertible roof closed, will help as well. The car itself plays a major role in which car insurance risk group is assigned.
Not only are its security features rated, but so is its safety. Faster, high-performance vehicles with larger engines and a heavier gross weight are considerably less safe to drive than a low-powered, lighter and fuel-efficient model. Heavier cars with faster acceleration cause considerably more damage upon impact than smaller vehicles, and since car insurance is all about risk, the safer a car is to drive, the lower the premiums will be. The Claims Avenue The most reliable way to keep assignments in car insurance groups 1 through 20, as well as your premiums, as low as possible is to build a good, safe driving history. If you must drive a higher-risk vehicle, driving safely is very important. Resist the urge to take a risk just because your car might be able to pull it off: even with no contact, you could still cause an accident, since other drivers may not expect the unsafe manoeuvre and may react badly or fail to react in time. However, should you complete a whole policy term without filing an insurance claim, you build a solid driving history with that carrier, and your insurance company could show its appreciation by rewarding you with discounted premiums on a renewal policy. That no claims bonus can accumulate year after year. With consistently safe driving, it is not unusual for a no claims bonus to reach discount levels of around 70 per cent! As you shop for a new car, understand how your preferred model will influence your car insurance premium. If your driving history is less than stellar, be aware that this will push your rates higher than they might otherwise be. If you are prone to auto accidents, whether you cause them or not, a higher level of cover might be recommended.
Comprehensive cover might be the most expensive of the three primary categories, but it offers the widest array of benefits, including one that neither third-party nor third-party fire and theft does: comprehensive auto insurance pays for repairs or replacement of your own vehicle after an accident, regardless of who is at fault. You have ample opportunities to qualify for discounts and bonuses to help lower your rates, but considering all that comprehensive car insurance does across insurance groups 1 through 20, inclusive, it offers the best value for the pounds you spend. Keep it Down High-risk drivers can benefit from taking a driver education course, both to lower premiums and to reduce the risk of traffic citations. Regardless of your age, a completion certificate from the Pass Plus course, for example, will almost always earn lower insurance premiums. The extended training and experience the course offers can contribute time and again toward earning and keeping those no claims discounts. Bear in mind the car insurance groups 1 through 20 as you consider both insurance rates and a new car purchase. Buy a car at or near the lowest-risk end of the scale, and your insurance premiums can be minimal whilst still reflecting excellent cover.
GROUP TRAVEL offer a personal and affordable group and coach party ferry passenger reservation service at the lowest available rates to all the major UK and European ferry destinations. Our dedicated group travel team will be able to help you secure the best deal for you and your passengers between the UK, Ireland, France, Spain and Holland, as well as most Baltic, Scandinavian and Mediterranean ports. In addition to offering preferential rates to tour operators, Ferry Logistics is now also able to offer discounted tour operator rates to private customers such as sports clubs, schools, social clubs and even just a large group of friends travelling together on any of the ferry routes serviced. The only requirement to qualify for the discounted group travel rate is that your party should be at least ten people travelling together. CUSTOMER SATISFACTION is the benchmark used by all team members at Ferry Logistics when dealing with our valued group and coach tour customers. Ferry Logistics know how precious your time is and how important it is for you and your passengers to arrive at your destination as comfortably as possible and on time. FERRY LOGISTICS GROUP TRAVEL customers enjoy unparalleled customer service, including:- The best group deals on all Cross Channel, European and Baltic passenger ferries. Priority boarding at certain ports. More comfortable cabins on some ferries. Direct access to our dedicated customer service and discounted hotel rates. An altogether better experience than is offered by most other group reservation services. Ferry Logistics cater for all types of coach party and group travel on all passenger ferry routes.
\begin{document} \title{On moments of downwards passage times for spectrally negative Lévy processes} \author{Anita Behme\thanks{Technische Universit\"at Dresden, Institut f\"ur Mathematische Stochastik, Fakultät Mathematik, 01062 Dresden, Germany, \texttt{anita.behme@tu-dresden.de} and \texttt{philipp.strietzel@tu-dresden.de}, phone: +49-351-463-32425, fax: +49-351-463-37251.}\; and Philipp Lukas Strietzel$^\ast$} \date{\today} \maketitle \vspace{-1cm} \begin{abstract} The existence of moments of first downwards passage times of a spectrally negative Lévy process is governed by the general dynamics of the Lévy process, i.e. whether it is drifting to $+\infty$, $-\infty$ or oscillates. Whenever the Lévy process drifts to $+\infty$, we prove that the $\kappa$-th moment of the first passage time (conditioned to be finite) exists if and only if the $(\kappa+1)$-th moment of the Lévy jump measure exists, thus generalizing a result shown earlier by Delbaen for Cramér-Lundberg risk processes \cite{Delbaen1990}. Whenever the Lévy process drifts to $-\infty$ we prove that all moments of the passage time exist, while for an oscillating Lévy process we derive conditions for non-existence of the moments and in particular we show that no integer moments exist. Moreover we provide general formulae for integer moments of the first passage time (whenever they exist) in terms of the scale function of the Lévy process and its derivatives and antiderivatives. \end{abstract} 2020 {\sl Mathematics subject classification.} 60G51, 60G40 (primary), 91G05 (secondary) \\ \normal {\sl Keywords:} conjugate subordinator; Cramér-Lundberg risk process; exit time; fluctuation theory; first hitting time; fractional calculus; moments; Marchaud derivative; ruin theory; spectrally negative Lévy process; subordinator; time to ruin \section{Introduction}\label{S0} \setcounter{equation}{0} Let $X=(X_t)_{t\geq 0}$ be a spectrally negative Lévy process, i.e. 
a L\'evy process that does not exhibit positive jumps, starting in zero. In this article we study moments of the \emph{first (downwards) passage time of $-x$}, $x\geq 0$, of the process $X$, that is moments of \begin{equation} \label{eq-firstpassage} \tau_x^- := \inf \left\{ t>0: ~ X_t<-x \right\}, \end{equation} conditioned on finiteness of this stopping time. The first passage time $\tau_x^-$, sometimes also referred to as \emph{exit time}, of (spectrally negative) L\'evy processes is a well-known object that has been studied by many authors, see e.g. \cite[Sec. 9.5]{DoneyBuch} for a general overview. However, most results are limited to giving a representation of the Laplace transform of the first passage time.\\ In case of a Brownian motion with drift $p\in\mathbb{R}$, due to continuity of the paths, the first passage time $\tau_x^-$ coincides with the \emph{first hitting time} of $-x$, i.e. with $\tau_x^{-,\ast} = \inf\{t> 0: X_t=-x\}$. In this special case, $\tau_x^-$ is known to have Laplace transform, cf. \cite[Eq. I.(9.1)]{rogerswilliams1}, \begin{equation}\label{eq_BrownianmotionLaplace} \EE[e^{-q \tau_x^-}] = e^{-(\sqrt{p^2+2q}+p) x}, \quad x\geq 0, q>0, \end{equation} and its distribution is given explicitly as, cf. \cite[Eq. I.(9.2)]{rogerswilliams1}, $$\PP(\tau_x^- \in \dd z)= \frac{x}{\sqrt{2\pi z^3}}e^{- \frac{(x+pz)^2}{2z}} \dd z,\quad x,z\geq 0, $$ where in both formulas we assumed the process to be standardized, i.e. such that $\sigma^2=1$. \\ For general spectrally negative Lévy processes the first hitting time and the first passage time can be related via the overshoot $X_{\tau_x^-}\leq 0$ as shown in \cite{Doney1991}. In particular, for spectrally negative $\alpha$-stable processes ($1<\alpha<2$) this relation reads, cf.
\cite{Simon2011}, \begin{equation} \label{eq-relation} \tau_x^{-,\ast} = \tau_x^- - (X_{\tau_x^-})^\alpha \hat{\tau}_x^{+,\ast},\end{equation} where $\hat{\tau}_x^{+,\ast}$ is an independent copy of the first upwards hitting time $\tau_x^{+,\ast}=\inf\{t>0: X_t=x\}$. The hitting time $\tau_x^{-,\ast}$ of a spectrally one-sided stable process has been studied e.g. in \cite{Peskir2008, Simon2011, KuznetsovKyprianou2014}. In particular, in \cite{Simon2011} fractional moments and a series representation of the density of $\tau_x^{-,\ast}$ are provided. The first downwards passage time $\tau_x^-$ has also been extensively studied in the field of actuarial mathematics, where the spectrally negative L\'evy process $X$ is interpreted as a \emph{risk process} and shifted to start in $x\geq 0$. Then, due to the space homogeneity of the Lévy process, $\tau_x^-$ coincides with the \emph{time of ruin}, i.e. the first time the process passes the value $0$. The most prominent example of such a risk process is the classical \emph{Cram\'er-Lundberg model}, where $X$ is chosen to be a spectrally negative compound Poisson process, i.e. \begin{equation}\label{eq-CLmodel} X_t=x+pt - \sum_{i=1}^{N_t} S_i, \quad t\geq 0. \end{equation} Here, $x\geq 0$ is interpreted as \emph{initial capital}, $p>0$ denotes a constant \emph{premium rate}, the Poisson process $(N_t)_{t\geq 0}$ represents the \emph{claim counting process}, and the i.i.d. positive random variables $\{S_i, i\in\NN\}$ are the \emph{claim size variables} and independent of $(N_t)_{t\geq 0}$.\\ For this model, under the profitability assumption $\EE[X_1]>0$, it is shown in \cite{Delbaen1990} for all $\kappa>0$ that the $\kappa$-th moment of the ruin time exists if and only if the $(\kappa+1)$-th moment of the claim size distribution exists. The generalization of this result to arbitrary spectrally negative L\'evy processes is a main result of this paper and will be presented and proved in Section \ref{S2}.
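To make these objects concrete, here is a minimal Monte Carlo sketch of the Cramér-Lundberg model \eqref{eq-CLmodel}, assuming exponential claim sizes; the upper barrier is a numerical proxy for $\{\tau_x^-=\infty\}$ in the profitable regime, and the closed-form ruin probability in the comment is the classical one for exponential claims (this is an illustration, not the method used in the paper):

```python
import math
import random

def ruin_time(x, p, lam, mean_claim, upper=30.0, rng=random):
    """One path of the surplus x + p t - sum S_i with Exp(1/mean_claim)
    claims.  Returns the ruin time, or None once the surplus exceeds
    `upper` (proxy for tau = infinity when E[X_1] = p - lam*mean_claim > 0)."""
    t, surplus = 0.0, x
    while True:
        w = rng.expovariate(lam)                        # inter-claim time
        t += w
        surplus += p * w                                # premium income
        if surplus >= upper:
            return None
        surplus -= rng.expovariate(1.0 / mean_claim)    # claim S_i
        if surplus < 0.0:
            return t

def ruin_probability(x, p, lam, mean_claim, n=20000, seed=7):
    rng = random.Random(seed)
    hits = sum(ruin_time(x, p, lam, mean_claim, rng=rng) is not None for _ in range(n))
    return hits / n

# Classical closed form for exponential claims with mean mu:
# P(tau_x^- < infinity) = (lam*mu/p) * exp(-(1/mu - lam/p) * x).
exact = lambda x, p=1.5, lam=1.0, mu=1.0: (lam * mu / p) * math.exp(-(1.0 / mu - lam / p) * x)
```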
Note that, while the proof given in \cite{Delbaen1990} relies on results on the speed of convergence of random walks, we use a completely different approach here via fractional differentiation of Laplace transforms. In particular our approach allows us to relate the existence of $\EE[(\tau_x^-)^\kappa|\tau_x^-<\infty]$ with the existence of the $\kappa$-th moment of the subordinator $(\tau_x^+)_{x\geq 0}$ of upwards passage times $\tau_x^+=\inf\{t>0: X_t>x\}$ at a specific random time. As a by-product we show that $(\tau_x^+)_{x\geq 0}$ is a special subordinator and identify its conjugate subordinator. Before presenting and proving our main theorem on the existence of moments of the first passage time in Section \ref{S2}, we collect various preliminary results on (spectrally negative) L\'evy processes and fractional derivatives in Section \ref{S1}. In the final Section \ref{S3} we focus on integer moments of $(\tau_x^-|\tau_x^-<\infty)$ and derive semi-explicit formulae for these in terms of scale functions, their derivatives and their integrals. \section{Preliminaries}\label{S1} \setcounter{equation}{0} Throughout this article let $X=(X_t)_{t\geq 0}$ be a Lévy process, i.e. a càdlàg stochastic process with independent and stationary increments, defined on a filtered probability space $(\Omega,\mathcal{F}, \FF, \mathbb{P})$. It is well-known that the L\'evy process $X$ is fully characterized by its \textit{characteristic exponent} $\Psi$, which is defined via $e^{-t\Psi(\theta)} = \mathbb{E}[e^{i\theta X(t)}]$ and takes the form \begin{equation*} \Psi(\theta) = i a\theta + \frac{1}{2}\sigma^2 \theta^2 + \int_{\mathbb{R}}\left(1-e^{i\theta y} + i\theta y\mathds{1}_{\{\abs{y}<1\}}\right)\Pi^*(\diff y), \quad \theta \in \RR, \end{equation*} for constants $a\in\mathbb{R}$, $\sigma^2\geq 0$, and a measure $\Pi^*$ on $\mathbb{R}\backslash\{0\}$ satisfying $\int_{\mathbb{R}}(1~\wedge~y^2) \Pi^*(\diff y)<\infty$. 
The measure $\Pi^*$ is called the \emph{L\'evy measure} or \emph{jump distribution} of $X$, while $(\sigma^2,a,\Pi^*)$ is the \emph{characteristic triplet} of $X$. \\ If $X$ has no upwards jumps, i.e. if $\Pi^*((0,\infty))=0$, then $X$ is called \emph{spectrally negative}. In this case it is handy to use the \emph{Laplace exponent} $\psi(\theta) := \frac{1}{t}\log\mathbb{E}[e^{\theta X_t}]$, $\theta \geq 0$, of $-X$ instead of the characteristic exponent, which then can be written in the form \begin{equation}\label{eq-Laplaceexp} \psi(\theta) = c \theta + \frac{1}{2}\sigma^2 \theta ^2 + \int_{(0,\infty)} \left(e^{-\theta y}-1+\theta y \mathds{1}_{\{y<1\}} \right)\Pi(\diff y), \end{equation} where $c=-a\in\mathbb{R}$, $\sigma^2 \geq 0$, and $\Pi(\diff y) = \Pi^*(-\diff y)$ is the mirrored version of the jump measure which is therefore defined on $(0,\infty)$. The Laplace exponent $\psi$ admits some useful properties: Clearly $\psi(0)=0$, and $\lim_{\theta\to\infty}\psi(\theta)=\infty$. Moreover, on $(0,\infty)$ the function $\psi$ is infinitely often differentiable and strictly convex. Lastly, as $\psi$ is nothing else than the cumulant generating function of $X_1$, it carries information on the moments of $X$. In particular it is well-known, cf. \cite[Cor. 25.8]{sato2nd}, that for any $\kappa>0$ \begin{equation}\label{Lemma_equivalence_momentenBedingungen} \EE[|X_1|^\kappa]< \infty \quad \text{if and only if } \quad \int_{|y|\geq 1} |y|^\kappa~\Pi(\diff y)<\infty,\end{equation} and for $\kappa=k\in \NN_0$ this in turn implies \begin{equation} \label{eq-momentLaplace} |\partial^k \psi(0+)|:= |\psi^{(k)}(0+)|<\infty.\end{equation} Note that throughout this article $\partial_q^k f(q,z)$ denotes the $k$-th derivative of a function $f$ with respect to $q$, while $\partial_q:= \partial_q^1$. In case of only one parameter, we will usually omit the subscript. 
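As an illustration of the last point, for the Cramér-Lundberg model with exponential claim sizes the Laplace exponent is available in closed form, and its derivatives at $0+$ can be checked against the first two cumulants of $X_1$ (a sketch; the parameter values are arbitrary):

```python
P, LAM, MU = 1.5, 1.0, 1.0  # hypothetical premium rate, claim intensity, mean claim size

def psi(theta):
    """Laplace exponent of X_t = P*t - sum_{i<=N_t} S_i with Exp(1/MU)
    claims: psi(theta) = P*theta + LAM*(E[exp(-theta*S)] - 1), where
    E[exp(-theta*S)] = 1/(1 + theta*MU)."""
    return P * theta + LAM * (1.0 / (1.0 + theta * MU) - 1.0)

# Forward finite differences for psi'(0+) and psi''(0+); since all moments
# of the claim distribution exist here, these should equal the first two
# cumulants of X_1.
h = 1e-4
d1 = (psi(h) - psi(0.0)) / h
d2 = (psi(2 * h) - 2.0 * psi(h) + psi(0.0)) / h ** 2

mean_X1 = P - LAM * MU        # E[X_1]   = P - LAM * E[S]
var_X1 = 2.0 * LAM * MU ** 2  # Var[X_1] = LAM * E[S^2] = 2 * LAM * MU^2
```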
We will frequently use the Laplace exponent's right inverse that we always denote by \begin{equation*} \Phi(q) := \sup\{\theta\geq 0 : ~ \psi(\theta) = q\}, \quad q\geq 0. \end{equation*} From the mentioned properties of $\psi$ it follows immediately that \begin{align*} \Phi(0)=0 & \quad \text{if and only if } \quad \psi'(0+)\geq 0, \\ \text{while} \quad \Phi(0) >0 & \quad \text{if and only if } \quad \psi'(0+)<0. \end{align*} Moreover the function $q\mapsto \Phi(q)$ is strictly monotone increasing on $[0,\infty)$, infinitely often differentiable on $(0,\infty)$, and it is the well-defined inverse of $\psi(\theta)$ on the interval $[\Phi(0),\infty)$, i.e. \begin{equation*} \Phi(\psi(\theta)) = \theta \quad \text{ and } \quad \psi(\Phi(q)) = q, \qquad \forall \theta \in [\Phi(0),\infty), ~ q\geq 0. \end{equation*} Thus, applying the chain rule to $q\mapsto q=\psi(\Phi(q))$ immediately yields \begin{equation} \label{Lemma_derivative_inverse} \Phi'(q)=\partial_q \Phi(q) = \frac{1}{\psi'(\Phi(q))}, \quad q\geq 0, \end{equation} where the case $q=0$ is interpreted in the limiting sense $q\downarrow 0$.\\ Finally note that, cf. \cite[Theorem 8.1 (ii)]{Kyprianou2014}, \begin{equation}\label{eq-limiteta} \lim_{q\downarrow 0} \frac{q}{\Phi(q)} = \begin{cases} \psi'(0+), & \text{if }\psi'(0+)\geq 0 , \\ 0, &\text{else.} \end{cases} \end{equation} For proofs of the stated properties and a more thorough discussion of Lévy processes in general we refer to \cite{Kyprianou2014} and \cite{sato2nd}. As announced in the introduction, we are interested in the first downwards passage time $\tau_x^-$ of $-x$, $x\geq 0$, as defined in \eqref{eq-firstpassage}, or, more precisely, in the passage time, given that the process passes through $-x$, i.e. \begin{equation}\label{eq-tauconditioned} \left(\tau_x^-|\tau_x^-<\infty\right).
\end{equation} Note that in the case $ \psi'(0+) = \mathbb{E}[X_1]\in [-\infty, 0]$ we have $\tau_x^-=(\tau_x^-|\tau_x^-<\infty)$ as $X$ enters the negative half-line almost surely. In the case $\psi'(0+)>0$ the term \emph{passage time} will typically be used for the conditioned quantity \eqref{eq-tauconditioned}.\\ To avoid trivialities we shall throughout exclude the case that $X$ is a pure drift, which implies that we always have $\PP(\tau_x^-<\infty)>0$. To study $\tau_x^-$ (or $(\tau_x^-|\tau_x^-<\infty)$) we will use the concept of scale functions. Recall that for any $q\geq 0$ the \emph{$q$-scale function} $W^{(q)}\colon\mathbb{R}\to[0,\infty)$ of the spectrally negative Lévy process $X$ with $W^{(q)}(x)=0$, $x<0$, is the unique function such that for $x\geq 0$ its Laplace transform satisfies \begin{equation*} \int_0^\infty e^{-\beta x}W^{(q)}(x) \diff x = \frac{1}{\psi(\beta)-q}, \end{equation*} for all $\beta >\Phi(q)$. Furthermore the \emph{integrated $q$-scale function} $Z^{(q)}\colon\mathbb{R}\to[0,\infty)$ is given by \begin{equation}\label{eq_Zq} Z^{(q)}(x) := 1+q\int_0^x W^{(q)}(y)\diff y, \end{equation} and it satisfies, cf. \cite[Thm. 8.1]{Kyprianou2014}, \begin{equation} \label{eq_Kyprianou_Scale} \mathbb{E}\left[e^{-q\tau_x^-} \mathds{1}_{\{\tau_x^- <\infty\}} \right] = Z^{(q)}(x) - \frac{q}{\Phi(q)} \cdot W^{(q)}(x), \quad x\in\RR, q\geq 0. \end{equation} In the limit $q\downarrow 0$ this immediately implies \begin{equation} \label{eq_Kyprianou_Ruin} \mathbb{P}(\tau_x^-<\infty) = 1- (0 \vee \psi'(0+)) \cdot W^{(0)}(x), \quad x\in \RR, \end{equation} where we use the standard notation $\vee$ to denote the maximum.\\ Observe that the functions $q\mapsto W^{(q)}(x)$ and $q\mapsto Z^{(q)}(x)$ may be extended analytically to $\mathbb{C}$, which in particular means that they are infinitely often differentiable with bounded derivatives on $[0,\infty)$. Again, we refer to \cite{Kyprianou2014} for missing proofs and further details.
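The defining Laplace-transform relation of $W^{(q)}$ can be tested numerically in a case where the scale function is explicit. For a Brownian motion with unit variance and drift $c$, $\psi(\theta)=c\theta+\theta^2/2$, partial fractions applied to $(\psi(\beta)-q)^{-1}$ give $W^{(q)}(x)=\tfrac{2}{\theta_1-\theta_2}\big(e^{\theta_1 x}-e^{\theta_2 x}\big)$ with $\theta_{1,2}$ the roots of $\psi(\theta)=q$ and $\theta_1=\Phi(q)$. The sketch below (ours, with arbitrary values of $c$, $q$, $\beta$) checks $\int_0^\infty e^{-\beta x}W^{(q)}(x)\,\diff x=(\psi(\beta)-q)^{-1}$ by quadrature.

```python
# For X_t = B_t + c*t (sigma = 1) the q-scale function is
#   W_q(x) = 2/(th1 - th2) * (exp(th1*x) - exp(th2*x)),
# where th1 > 0 > th2 solve psi(theta) = c*theta + theta^2/2 = q and th1 = Phi(q).
# We verify int_0^inf exp(-beta*x) W_q(x) dx = 1/(psi(beta) - q) for beta > Phi(q).
import numpy as np

c, q, beta = 0.5, 0.3, 2.0                          # illustrative values
disc = np.sqrt(c * c + 2.0 * q)
th1, th2 = -c + disc, -c - disc

def W(x):
    return 2.0 / (th1 - th2) * (np.exp(th1 * x) - np.exp(th2 * x))

x = np.linspace(0.0, 60.0, 300_001)                 # integrand decays like exp((th1 - beta)*x)
f = np.exp(-beta * x) * W(x)
lhs = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule
rhs = 1.0 / (c * beta + 0.5 * beta * beta - q)
print(lhs, rhs)
```

The truncation at $x=60$ is harmless here since $\beta>\Phi(q)$ forces exponential decay of the integrand.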
For detailed accounts on scale functions and their numerous applications we also refer to \cite{Avram2020} and \cite{kuznetsov2011}. Lastly, let us recall that fractional moments of non-negative random variables can be computed via fractional differentiation of the corresponding Laplace transform as shown in \cite{Wolfe-frac}. More precisely, define for any $\kappa\in (0,1)$ the following variation of the \emph{Marchaud fractional derivative} of a function $f(z), z\geq 0$, \begin{equation} \label{eq_definition_marchaud_derivative} D^\kappa_z f(z) = \frac{(-1)^\kappa \cdot \kappa}{\Gamma (1-\kappa)} \int_{z}^\infty \frac{f(z)-f(u)}{(u-z)^{\kappa+1}} \diff u, \end{equation} while for $\kappa\geq 1$ with $n:=\lfloor \kappa \rfloor$ denoting the largest integer less than or equal to $\kappa$ $$D^\kappa_z f(z) = \partial_z^n D^{\kappa-n}_z f(z).$$ Then by \cite[Thm. 1]{Wolfe-frac} for any non-negative random variable $T$ with Laplace transform $g(z)=\EE[e^{-zT}]$, $z\geq 0$, the $\kappa$-th absolute moment of $T$ exists if and only if $D^\kappa_z g(0)$ exists, in which case \begin{equation} \label{eq-Wolfemoment} \EE[T^\kappa] = (-1)^{-\kappa} D^\kappa g(0).\end{equation} With this we can easily derive the following lemma. \begin{lemma} \label{lem-fracmoment1} For any $\kappa>0$, $x\geq 0$, the $\kappa$-th moment of the first downwards passage time $(\tau_x^-|\tau_x^-<\infty)$ of a spectrally negative L\'evy process is given by \begin{equation} \label{eq_formula_to_work_with} \mathbb{E}\left[(\tau_x^-)^\kappa\big|\tau_x^- <\infty \right] = \frac{(-1)^\kappa}{ \mathbb{P}(\tau_x^- <\infty)}\cdot \Big[D^\kappa_q \Big(Z^{(q)}(x) - \frac{q}{\Phi(q)} \cdot W^{(q)}(x)\Big)\Big]_{q=0}, \end{equation} and it exists if and only if the right-hand side exists and is finite.
\end{lemma} \begin{proof} As $$\mathbb{E}\left[e^{-q\tau_x^-} \mathds{1}_{\{\tau_x^- <\infty\}} \right] = \mathbb{E}\left[e^{-q\tau_x^-}\big|\tau_x^- <\infty \right] \cdot \mathbb{P}(\tau_x^- <\infty)$$ the claim follows immediately from \eqref{eq-Wolfemoment} and \eqref{eq_Kyprianou_Scale}. \end{proof} \section{Existence of moments} \label{S2} \setcounter{equation}{0} In \cite{Delbaen1990}, Delbaen showed that in a classical Cram\'er-Lundberg model \eqref{eq-CLmodel} that is \emph{profitable}, i.e. satisfies $\psi'(0+)>0$, for any $\kappa>0$ the $\kappa$-th moment of the ruin time exists if and only if the $(\kappa+1)$-th moment of the claim sizes exists. Delbaen's proof relies on results on the speed of convergence of random walks. In this paper we use an alternative approach via fractional derivatives of Laplace transforms to prove an extension of the result in \cite{Delbaen1990} to any spectrally negative L\'evy process. Moreover, we additionally consider the \emph{non-profitable} setting $\psi'(0+)\leq 0$. Our main result in this section thus reads as follows. \begin{theorem}\label{thm-existence} Let $(X_t)_{t\geq 0}$ be a spectrally negative L\'evy process with Laplace exponent $\psi$ as in \eqref{eq-Laplaceexp}, and let $\tau_x^-$ denote its first passage time of $-x$ for $x\geq 0$. \begin{enumerate} \item If $\psi'(0+)<0$, then for any $\kappa>0$ and $x\geq 0$ \begin{equation*} \mathbb{E}\left[(\tau_x^-)^\kappa \right] <\infty. \end{equation*} \item If $\psi'(0+) >0$, then for any $\kappa>0$ and $x>0$ \begin{equation*} \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right]<\infty \qquad \text{if and only if} \qquad \int_{[1,\infty)} y^{\kappa+1} \Pi(\diff y)<\infty. \end{equation*} For $x=0$ the above equivalence remains true for all $\kappa>0$ whenever $(X_t)_{t\geq 0}$ is of bounded variation. For $(X_t)_{t\geq 0}$ of unbounded variation $\tau_0^-=0$ a.s. \item Assume $\psi'(0+)=0$.
\begin{enumerate}[(a)] \item If there exists $\kappa^*\in (0,1]$ such that $\int_{[1,\infty)} y^{\kappa^*+1}\Pi(\diff y) =\infty$, then for any $x>0$ and $\kappa\geq\kappa^*$ \begin{equation} \label{eq_kappa-thmoment_infinite} \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] =\infty. \end{equation} \item If $\psi''(0+)<\infty$, then \eqref{eq_kappa-thmoment_infinite} holds for any $x>0$ and $\kappa>\tfrac{1}{2}$. \end{enumerate} In particular, \eqref{eq_kappa-thmoment_infinite} holds for any $x>0$ and $\kappa\geq 1$. \\ For $x=0$ the above statements are true for the given ranges of $\kappa$ whenever $(X_t)_{t\geq 0}$ is of bounded variation. For $(X_t)_{t\geq 0}$ of unbounded variation $\tau_0^-=0$ a.s. \end{enumerate} \end{theorem} \begin{remark} Note that a priori the above theorem imposes no restriction on the choice of the location parameter $c\in\RR$ of $(X_t)_{t\geq 0}$. However, as by \cite[Example 25.12]{sato2nd}, \begin{equation} \label{eq-relationcpsi} c - \int_{[1,\infty)} y \Pi(\diff y)= \EE[X_1]= \psi'(0+), \end{equation} in cases (ii) and (iii) the assumption $\psi'(0+)\geq 0$ implies that actually $$c\geq \int_{[1,\infty)} y \Pi(\diff y)\geq 0.$$ In particular $c<0$ is a valid choice only in case (i). \end{remark} \begin{remark} At first glance, Theorem \ref{thm-existence} (iii) suggests that for an oscillating process $(X_t)_{t\geq 0}$ no fractional moments of the first passage time of zero exist. This, however, is not true in general, and we provide two counterexamples in the following. \begin{enumerate} \item Consider a (standardized) Brownian motion without drift for which by \eqref{eq_BrownianmotionLaplace} \begin{equation*} \mathbb{E}\left[e^{-q\cdot \tau_x^-}\right] = e^{-\sqrt{2q}\cdot x}, \qquad x\geq 0.
\end{equation*} Then $\PP(\tau_x^-<\infty)=1$ and from \eqref{eq_definition_marchaud_derivative} and \eqref{eq-Wolfemoment} we obtain for any $\kappa\in(0,1)$ that \begin{align} \mathbb{E}[(\tau_x^-)^\kappa] &= (-1)^{-\kappa} \left[ D^\kappa e^{-\sqrt{2q}\cdot x}\right]_{q=0} = \frac{\kappa}{\Gamma(1-\kappa)} \int_0^\infty \frac{1 - e^{-\sqrt{2u}\cdot x} }{u^{\kappa +1}} \diff u, \end{align} which is finite if and only if $\kappa \in(0,\tfrac{1}{2})$. In particular, in this example \eqref{eq_kappa-thmoment_infinite} holds for any $\kappa\geq \tfrac{1}{2}$, which shows that Theorem \ref{thm-existence} (iii) (b) is close to being sharp. \item Consider a spectrally negative, $\alpha$-stable L\'evy process $(X_t)_{t\geq 0}$, with index $\alpha\in(1,2)$, such that the Laplace exponent of $-X$ is given by $\psi(\theta) = \theta^\alpha$ and in particular $\psi'(0+)=0$. For such a process it has been shown in \cite[Cor. 1]{Simon2011} that the first hitting time $\tau_x^{-,\ast}:=\inf\{t>0: X_t = -x\}$ admits finite fractional moments, namely $$\EE[(\tau_x^{-,\ast})^\kappa]<\infty \quad \text{for all }\kappa \in (-1-1/\alpha,1-1/\alpha), \quad x> 0.$$ However, as $\tau_x^- \leq \tau_x^{-,\ast}$ a.s. by \eqref{eq-relation}, this immediately implies that also $$\EE [(\tau_x^-)^\kappa]<\infty \quad \text{for all }\kappa \in (0,1-1/\alpha), \quad x> 0.$$ \end{enumerate} \end{remark} To prove Theorem \ref{thm-existence} we start with a simple lemma that reduces the problem of existence of moments of the first passage time to finiteness of (fractional) derivatives of a certain function at zero. \begin{lemma} \label{Lemma_k-thmoment_eta} Set $\eta(q):=\frac{q}{\Phi(q)}$, $q>0$. Then for any $\kappa>0$ and $x>0$ \begin{equation} \label{eq_MomentTau_FractDerivEta} \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] <\infty \qquad \text{if and only if} \qquad \lim_{q\downarrow 0} \abs{D_q^\kappa \eta(q)} <\infty.
\end{equation} If $x=0$ then \eqref{eq_MomentTau_FractDerivEta} holds if and only if $(X_t)_{t\geq 0}$ is of bounded variation. If $(X_t)_{t\geq 0}$ is of unbounded variation, then $\tau_0^-=0$ a.s. \end{lemma} \begin{proof} It follows immediately from Lemma \ref{lem-fracmoment1} that $\mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] <\infty$ if and only if $\lim_{q\downarrow 0}D^\kappa_q \left(Z^{(q)}(x) - \eta(q) \cdot W^{(q)}(x)\right)<\infty$. However, $q\mapsto W^{(q)}(x)$ and $q\mapsto Z^{(q)}(x)$ are infinitely often differentiable with bounded derivatives on $[0,\infty)$. Hence, linearity of the (fractional) derivative reduces the problem to finiteness of $\lim_{q\downarrow 0}D^\kappa_q \left( \eta(q) \cdot W^{(q)}(x)\right)$. \\ Now remark that (except for the constant factor $(-1)^\kappa$) the definition of the Marchaud derivative given in \eqref{eq_definition_marchaud_derivative} coincides with that of the fractional derivative $\mathbf{D}^{\alpha}_-$ in \cite[Eq. (5.58)]{Samko1993}. This in turn is equivalent to the Liouville derivative for sufficiently regular functions, see \cite[Remark 5.3]{Samko1993} for details. We may therefore apply the product rule for fractional Liouville derivatives, cf. \cite[p. 206]{Uchaikin2013}, to $\eta(q) \cdot W^{(q)}(x)$. Recalling again that $q\mapsto W^{(q)}(x)$ is infinitely often differentiable with bounded derivatives and that $W^{(q)}(x)>0$ for any $x>0$, we then conclude immediately that $\lim_{q\downarrow 0}D^\kappa_q \left(\eta(q) \cdot W^{(q)}(x)\right)<\infty$ if and only if $\lim_{q\downarrow 0}D^\kappa_q \eta(q) <\infty$ as stated. \\ If $x=0$ note that $W^{(q)}(0)>0$ if and only if $(X_t)_{t\geq 0}$ is of bounded variation, cf. \cite[Eq. (25)]{Avram2020}, and in this case the above argument yields the result. If $(X_t)_{t\geq 0}$ is of unbounded variation we have $W^{(q)}(0)= 0$, cf. \cite[Eq. (25)]{Avram2020}, and we see from \eqref{eq_Kyprianou_Scale} that $\tau_0^-=0$ a.s.
\end{proof} With this we can now directly prove part (i) of Theorem \ref{thm-existence}. \begin{proof}[Proof of Theorem \ref{thm-existence} (i)] From the general Leibniz formula we obtain for any $k\in \NN$ \begin{align*} \partial_q^k \eta(q) = \partial_q^k \left( q\cdot (\Phi(q))^{-1}\right) &= \sum_{\ell=0}^k {k \choose \ell}\cdot \left( \partial_q^{\ell} q\right) \cdot \left( \partial_q^{k-\ell}\left(\Phi(q)^{-1}\right)\right) \\ &= q\cdot \partial_q^k\left(\Phi(q)^{-1}\right) + k\cdot \partial_q^{k-1}\left(\Phi(q)^{-1}\right). \end{align*} Recall that $\Phi(q)$ is the inverse of $\psi$ on $[\Phi(0),\infty)\subsetneq (0,\infty)$ which, in turn, is infinitely often differentiable on $(0,\infty)$. Hence, $\Phi(q)\colon[0,\infty)\to[\Phi(0),\infty)$ is infinitely often differentiable with bounded (right) derivatives at $0$. Furthermore $({}\cdot{})^{-1}$ is an infinitely often differentiable function on $(0,\infty)$. Consequently, we may apply Faà di Bruno's formula, cf. \cite[Equation (2.2)]{Johnson2002}, and obtain \begin{equation} \label{eq_kth_derivPhi-1} \partial_q^k \left((\Phi(q))^{-1}\right) = \sum_{\ell=1}^k (-1)^\ell \cdot \ell! \cdot (\Phi(q))^{-\ell-1} \cdot B_{k,\ell}\big(\Phi'(q),...,\Phi^{(k-\ell+1)}(q)\big), \end{equation} where $B_{k,\ell}$ denote the partial Bell polynomials. Here, as $\Phi(0)>0$, we have $\abs{\Phi(0)^{-\ell-1}}<\infty$ for all $\ell=1,\ldots, k$. Moreover, $B_{k,\ell}(\Phi'(0),...,\Phi^{(k-\ell+1)}(0))$, being a polynomial, is finite if $\Phi^{(n)}(0)$ is finite for every $n\in\{1,...,k-\ell+1\}$. This has been argued to hold above. Thus every summand on the right-hand side of \eqref{eq_kth_derivPhi-1} is finite in the limit $q\downarrow 0$ which implies that $\partial_q^k\eta(q)$ is finite for all $q\geq 0$ and $k\in \NN$. Clearly this implies finiteness of $\abs{D_q^\kappa \eta(q)}$ for all $q\geq 0$ and $\kappa >0$ and the claim follows via Lemma \ref{Lemma_k-thmoment_eta}.
\end{proof} The proof of the second and third part of Theorem \ref{thm-existence} is more involved and relies on the interpretation of $\eta(q)$ as the Laplace exponent of a certain killed subordinator, which is shown in the next proposition. Recall that a \emph{subordinator} $(Y_t)_{t \geq 0}$ is a L\'evy process with non-decreasing paths whose Laplace exponent $\varphi(\theta) = - \frac{1}{t} \log \EE[e^{-\theta Y_t} ]$ is always of the form \begin{equation} \label{eq_LaplaceExponent_Subordinator} \varphi(\theta)= \tilde{c}\cdot\theta+\int_0^\infty (1-e^{-\theta y}) \tilde{\Pi}(\diff y), \end{equation} for $\theta\geq 0$, a drift $\tilde{c}\geq 0$ and a measure $\tilde{\Pi}$ such that $\int_{(0,\infty)} (1\wedge y) \tilde{\Pi}(\diff y)<\infty$. A \emph{killed} subordinator $(Y_t)_{t\geq 0}$ is defined via \begin{equation*} Y_t = \begin{cases} \tilde{Y}_t, & \text{if } t<\mathbf{e}_{\beta}, \\ \zeta, & \text{if }t\geq \mathbf{e}_{\beta}, \end{cases} \end{equation*} where $(\tilde{Y}_t)_{t\geq 0}$ is a subordinator, $\mathbf{e}_{\beta}$ is an independent, $\operatorname{Exp}(\beta)$-distributed time, $\beta>0$, and $\zeta$ denotes some \emph{cemetery state}. As usual, we interpret $\beta=0$ as $\mathbf{e}_\beta = \infty$ corresponding to no killing. The Laplace exponent $\varphi_Y$ of a killed subordinator is given by \begin{equation} \label{eq_LaplaceExponent_KilledSubordinator} \varphi_Y(\theta) = - \log \mathbb{E}\left[e^{-\theta Y_1}\right] = - \log \mathbb{E}\left[e^{-\theta \tilde{Y}_1}\cdot \mathds{1}_{\{1<\mathbf{e}_{\beta}\}}\right] = \beta + \varphi_{\tilde{Y}}(\theta), \end{equation} for the Laplace exponent $\varphi_{\tilde{Y}}(\theta)$ of $(\tilde{Y}_t)_{t\geq 0}$. Further, for any $x\geq 0$, let $\tau_x^+ := \inf\{ t>0: ~ X_t >x\}$ be the first upwards passage time of $x$, i.e. the first time that $X_t$ is above $x$. It is well-known, cf. \cite[Thm.
3.12]{Kyprianou2014}, that for all $q\geq 0$ \begin{equation} \label{eq_Laplace_taux+} \mathbb{E}\left[e^{-q\cdot \tau_x^+}\cdot\mathds{1}_{\{\tau_x^+<\infty\}}\right] = e^{-\Phi(q)x}, \quad x\geq 0. \end{equation} If furthermore $\mathbb{E}[X_1]=\psi'(0+)\geq 0$, then $(\tau_x^+)_{x\ge0 }$ is a subordinator with Laplace exponent $\Phi(q)$, cf. \cite[Cor. 3.14]{Kyprianou2014}. \begin{proposition}\label{Prop-etaalsLE} Assume that $\psi'(0+)\ge 0$. Set \begin{equation*} \varphi(\theta) := \frac{\psi(\theta)}{\theta} = \psi'(0+) + \frac{\sigma^2}{2} \theta + \int_0^\infty \big(1 - e^{-\theta y}\big) \Pi((y,\infty)) \diff y, \quad \theta > 0, \end{equation*} then $\varphi(\theta)$ is the Laplace exponent of a killed subordinator $(Y_t)_{t\geq 0}$, i.e. $\EE[e^{-\theta Y_t}] = e^{-t \varphi(\theta)}$, where we assume $(Y_t)_{t\geq 0}$ to be independent of $(\tau_x^+)_{x\ge0}$. Moreover $(\tau_{Y_t}^+)_{t\geq 0}$ is a killed subordinator with Laplace exponent \begin{equation} \label{eq_laplacetaux+} -\frac{1}{t}\log \mathbb{E}\left[e^{-q\cdot \tau_{Y_t}^+} \right] = \eta(q), \quad q\geq 0. 
\end{equation} \end{proposition} \begin{proof} From \eqref{eq-Laplaceexp} we obtain \begin{align*} \varphi(\theta)= \frac{\psi(\theta)}{\theta} &= c + \frac{\sigma^2}{2} \theta + \frac{1}{\theta} \int_{(0,\infty)} \left( e^{-\theta y} -1 + \theta y\mathds{1}_{\{y<1\}} \right) \Pi(\diff y) \\ &= c + \frac{\sigma^2}{2}\theta + \int_{(0,\infty)} \Big( \frac{1}{\theta} \left(e^{-\theta y} -1 \right) + y\mathds{1}_{\{y<1\}}\Big) \Pi(\diff y) \\ &= c + \frac{\sigma^2}{2} \theta + \int_{(0,1)} \int_0^y \big(1-e^{-\theta z}\big) \diff z \, \Pi(\diff y) + \int_{[1,\infty)} \int_0^y \big(-e^{-\theta z}\big) \diff z \,\Pi(\diff y) \end{align*} where by partial integration \begin{align*} \lefteqn{\int_{(0,1)} \int_0^y \big(1-e^{-\theta z}\big) \diff z \Pi(\diff y)} \\ &= \int_{(0,1)} \big(1-e^{-\theta y}\big) \Pi((y,\infty)) \diff y - \left[\int_0^y \big(1-e^{-\theta z}\big) \diff z \cdot \Pi([y,\infty)) \right]_{y=0}^1 \\ &= \int_0^1 \big(1-e^{-\theta y}\big) \Pi((y,\infty)) \diff y - \Big(1+ \frac{1}{\theta} (e^{-\theta} - 1)\Big) \Pi([1,\infty)), \end{align*} since by Taylor's expansion $$ \lim_{y\to 0} \Big(y+ \frac{1}{\theta} (e^{-\theta y} - 1)\Big) \Pi([y,\infty)) = \frac{\theta}{2} \lim_{y\to 0} (y^2 + O(y^3)) \Pi([y,\infty)) = 0 ,$$ as $\Pi$ is a L\'evy measure.
Likewise we compute \begin{align*} \int_{[1,\infty)} \int_0^y \big(-e^{-\theta z}\big) \diff z\, \Pi(\diff y) &= \int_{[1,\infty)} \big(-e^{-\theta y}\big) \Pi((y,\infty)) \diff y + \frac{1}{\theta}(e^{-\theta}-1) \Pi([1,\infty)), \end{align*} such that we can summarize and obtain \begin{align*} \varphi(\theta) &= c - \Pi([1,\infty)) + \frac{\sigma^2}{2} \theta + \int_0^\infty \big(\mathds{1}_{\{y<1\}} - e^{-\theta y}\big) \Pi((y,\infty)) \diff y \\ &= c - \Pi([1,\infty)) - \int_{[1,\infty)} \Pi((y,\infty)) \diff y + \frac{\sigma^2}{2} \theta + \int_0^\infty \big(1 - e^{-\theta y}\big) \Pi((y,\infty)) \diff y \\ &= c-\int_{[1,\infty)} y \Pi(\diff y) + \frac{\sigma^2}{2} \theta + \int_0^\infty \big(1 - e^{-\theta y}\big) \Pi((y,\infty)) \diff y, \end{align*} where in the last step we again used partial integration and the fact that $\int_{[1,\infty)} y \Pi(\diff y)$ is finite due to \eqref{eq-relationcpsi} and the assumption $\psi'(0+)\geq 0$. Moreover, via \eqref{eq-relationcpsi} it is obvious from the given form of $\varphi$ that it is the Laplace exponent of a killed subordinator with killing rate $\psi'(0+)\geq 0$.\\ Finally, as $(\tau_x^+)_{x\geq 0}$ is a subordinator with Laplace exponent $\Phi(q)$, we observe immediately \begin{align*} \mathbb{E}\left[\exp\left(- q \tau_{Y_t}^+\right)\right] &= \mathbb{E}\left[\exp\left(- q \tau_{\tilde{Y}_t}^+\right) \cdot\mathds{1}_{\{t<\mathbf{e}_{\psi'(0+)}\}}\right] \\ &= \mathbb{E}\left[\mathbb{E}\left[\exp\left(-q \tau_{y}^+\right) | \tilde{Y}_t = y\right] \cdot\mathds{1}_{\{t<\mathbf{e}_{\psi'(0+)}\}}\right] \\ &= \mathbb{E}\left[\exp\left(- \Phi(q) \tilde{Y}_t \right)\cdot\mathds{1}_{\{t<\mathbf{e}_{\psi'(0+)}\}}\right] \\ &= \exp\left(-t \varphi(\Phi(q))\right), \end{align*} which proves that $(\tau_{Y_t}^+)_{t\geq 0}$ is a killed subordinator with Laplace exponent $\varphi(\Phi(q))= \frac{\psi(\Phi(q))}{\Phi(q)} = \frac{q}{\Phi(q)}=\eta(q)$ as stated.
\end{proof} \begin{remark} Note that the above proposition implies that, as long as $\psi'(0+)\geq 0$, the subordinator $(\tau^+_x)_{x\geq 0}$ is a \emph{special subordinator}, since its \emph{conjugate} Laplace exponent $\frac{q}{\Phi(q)}=\eta(q)$ is shown to be the Laplace exponent of a (killed) subordinator. See e.g. \cite[Chapter 5.6]{Kyprianou2014} or \cite[Chapter 11]{rene-book} for general information on special subordinators and their Laplace exponents that are also known as \emph{special Bernstein functions}. \end{remark} Combining Lemma \ref{Lemma_k-thmoment_eta}, Proposition \ref{Prop-etaalsLE} and Equation \eqref{eq-Wolfemoment}, it is now an immediate consequence that, assuming $\psi'(0+)\geq 0$, for all $\kappa>0$ \begin{equation} \label{eq_tau-_tau+} \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] <\infty \qquad \text{if and only if} \qquad \EE[(\tau_{Y_1}^+)^\kappa] <\infty, \end{equation} for all $x\geq 0$, where in the case $x=0$ we additionally assume that $(X_t)_{t\geq 0}$ is of bounded variation as otherwise $\tau_0^-=0$ a.s. In order to find suitable conditions for the right-hand side of \eqref{eq_tau-_tau+}, we next prove a general statement concerning the existence of moments of a subordinated subordinator. \begin{proposition}\label{lem-momentsubordination} Let $(Z_t)_{t\geq 0}$ be a non-zero subordinator, and let $(Y_t)_{t\geq 0}$ be a (possibly killed) non-zero subordinator, independent of $(Z_t)_{t\geq 0}$. If $\mathbb{E}[Z_1]<\infty$, then for all $\kappa>0$ $$\EE[Z_{Y_1}^\kappa] <\infty \qquad \text{if and only if} \qquad \Big[ \EE[Z_1^\kappa] <\infty \text{ and } \EE[Y_1^{\kappa}] <\infty \Big].$$ If $\mathbb{E}[Z_1]=\infty$ and $\kappa\in (0,1)$, then $\EE[Z_{Y_1}^\kappa] <\infty$ implies $\EE[Z_1^\kappa] <\infty$ and $\EE[Y_1^{\kappa}] <\infty$. \end{proposition} To prove this proposition two more lemmas are needed, the first of which is a simple inequality that is likely known.
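The elementary inequality in the next lemma is easy to confirm empirically. The following quick check (our own addition, on random non-negative inputs) exercises both directions of the inequality:

```python
# Empirical check of (a_1 + ... + a_n)^r <= n^(r-1) * (a_1^r + ... + a_n^r) for r >= 1,
# and the reversed inequality for r <= 1, on random nonnegative samples.
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.0, 10.0, size=(1000, 7))     # 1000 draws of n = 7 nonnegative numbers
n = a.shape[1]

r = 2.5
assert (a.sum(axis=1) ** r <= n ** (r - 1) * (a ** r).sum(axis=1) * (1 + 1e-12)).all()

r = 0.4
assert (a.sum(axis=1) ** r >= n ** (r - 1) * (a ** r).sum(axis=1) * (1 - 1e-12)).all()
print("both directions hold on all samples")
```

The tiny multiplicative slack only guards against floating-point rounding.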
\begin{lemma} \label{Lemma_Polyhelper} Let $n\in \NN$, $a_1,...,a_n\geq 0$ and $r\geq 1$. Then \begin{equation} \label{eq_polyhelper} (a_1+...+a_n)^r \leq n^{r-1}\cdot \left( a_1^r + ...+ a_n^r \right). \end{equation} If $r\leq 1$, then \eqref{eq_polyhelper} holds with ``$\geq$'' instead of ``$\leq$''. \end{lemma} \begin{proof} Recall that $({}\cdot{})^r$ for $r\geq 1$ is a convex function on $[0,\infty)$. Thus, by a simple induction using convexity, \begin{align*} \left(\frac{1}{n}\cdot a_1+...+ \frac{1}{n} \cdot a_n\right)^r &\leq \frac{1}{n}\cdot a_1^r + ...+ \frac{1}{n}\cdot a_n^r = \frac{1}{n} \left(a_1^r + ... + a_n^r\right). \end{align*} Multiplication with $n^r$ immediately implies \eqref{eq_polyhelper}. For $r\leq 1$ the function $({}\cdot{})^r$ is concave and the proof is completely analogous. \end{proof} \begin{lemma} \label{Lemma_taux+_polynomial} Let $(Z_t)_{t\geq 0}$ be a non-zero subordinator such that $\mathbb{E}[Z_1]<\infty$. If $\EE[Z_1^\kappa]<\infty$ for some $\kappa>0$, then $\EE[Z_t^\kappa]<\infty$ for all $t\geq 0$ and the mapping $t\mapsto \EE[Z_t^\kappa]$, $t\geq 1$, is of polynomial order $\kappa$. \end{lemma} \begin{proof} Let $\varphi$ be the Laplace exponent of the subordinator $(Z_t)_{t\geq 0}$. By \cite[Cor. 25.8]{sato2nd} finiteness of $\EE[Z_1^\kappa]$ for some $\kappa>0$ implies finiteness of $\EE[Z_t^\kappa]$ for all $t\geq 0$.\\ Moreover, by our assumptions necessarily $\mathbb{E}[Z_1] = \varphi'(0+)\in(0,\infty)$ and, cf. \cite[Ex. 25.12]{sato2nd}, \begin{equation*} \mathbb{E}[Z_t] = t \cdot \varphi'(0+) =t\cdot \mathbb{E}[Z_1]. \end{equation*} Let now $\kappa \geq 1$. Then by Jensen's inequality \begin{equation*} \mathbb{E}[Z_t^\kappa] \geq \mathbb{E}[Z_t]^\kappa = t^\kappa \cdot \mathbb{E}[Z_1]^\kappa, \end{equation*} which yields a lower bound of degree $\kappa$. In order to show an upper bound set $n:=\lceil t\rceil$ such that $t/n =:c_n\in [\tfrac12 ,1]$ and let $\xi_i$ be i.i.d. copies of $Z_1$.
Then, due to the infinite divisibility and monotonicity of $Z$, it holds that \begin{equation}\label{eq_proof_polynomial} \begin{aligned} \mathbb{E}[Z_t^\kappa] &\leq \mathbb{E}[Z_n^\kappa] = \mathbb{E}\Big[ \Big( \sum_{i=1}^{n} \xi_i \Big)^\kappa \Big] \\ &\leq \mathbb{E}\Big[ n^{\kappa-1} \cdot \sum_{i=1}^{n} \xi_i^\kappa \Big] = n^\kappa \cdot \mathbb{E}[\xi_1^\kappa] = t^\kappa \cdot c_n^{-\kappa} \cdot \mathbb{E}[Z_1^\kappa] \leq t^\kappa \cdot 2^{\kappa} \cdot \mathbb{E}[Z_1^\kappa], \end{aligned} \end{equation} where we used \eqref{eq_polyhelper} for the second inequality. \\ To prove the statement for $\kappa\in(0,1)$ note that $({}\cdot{})^\kappa$ is concave. Hence, Jensen's inequality yields an upper bound in this case. The lower bound follows analogously to \eqref{eq_proof_polynomial} by setting $n:=\lfloor t\rfloor$ and applying the second part of Lemma \ref{Lemma_Polyhelper}. \end{proof} \begin{proof}[Proof of Proposition \ref{lem-momentsubordination}] We write $\nu_Z$, $b_Z$ for the L\'evy measure and drift of $Z$, respectively, and likewise $\nu_Y$, $b_Y$ for the L\'evy measure and drift of $Y$. Then as $(Z_{Y_t})_{t\geq 0}$ is a (killed) subordinator with L\'evy measure $\nu$, say, $\EE[Z_{Y_1}^\kappa] <\infty$ is equivalent to, cf. \cite[Cor. 25.8]{sato2nd}, \begin{equation}\label{eq-kappamomenttausub} \int_{[1,\infty)} z^\kappa \nu(\diff z)<\infty, \end{equation} where the L\'evy measure $\nu$ of the subordinated process is given by, cf. \cite[Thm. 30.1]{sato2nd}, $$\nu(B)= b_Y \nu_Z(B) + \int_{(0,\infty)} \mu^s(B) \nu_Y (\diff s),$$ for any Borel set $B$ in $(0,\infty)$, where $\mu=\cL(Z_1)$ denotes the distribution of $Z_1$.
Thus \begin{align} \int_{[1,\infty)} z^\kappa \nu(\diff z) &= b_Y \int_{[1,\infty)} z^\kappa \nu_Z(\diff z) + \int_{[1,\infty)} z^\kappa \diff \left(\int_{(0,\infty)} \mu^s (z) \nu_Y (\diff s)\right) \nonumber \\ &= b_Y \int_{[1,\infty)} z^\kappa \nu_Z(\diff z) + \int_{(0,\infty)} \int_{[1,\infty)} z^{\kappa} \mu^s (\diff z) \nu_Y (\diff s) \label{eq_subordjumps} \end{align} where all terms are non-negative and hence the appearing sum is finite if and only if both summands are finite. Again from \cite[Cor. 25.8]{sato2nd} we know that $\int_{[1,\infty)} z^\kappa \nu_Z(\diff z)<\infty$ if and only if $\EE[Z_1^\kappa]<\infty$, which in turn holds if and only if $\EE[Z_s^\kappa]<\infty$ for all $s\geq 0$. Thus assume from now on that $\int_{[1,\infty)} z^\kappa \nu_Z(\diff z)<\infty$; then also $\int_{[1,\infty)} z^{\kappa} \mu^s(\diff z) = \EE[\mathds{1}_{\{Z_s \geq 1\}} Z_s^\kappa] <\infty$. Furthermore \begin{equation} \label{eq_proof_PropSubord-Subordinator_1} \begin{aligned} &\int_{(0,\infty)} \int_{[1,\infty)} z^{\kappa} \mu^s (\diff z) \nu_Y (\diff s) = \int_{(0,\infty)} \mathbb{E}[\mathds{1}_{\{Z_s\geq 1\}} Z_s^\kappa ] \nu_Y (\diff s) \\ &= \int_{(0,1)} \mathbb{E}[\mathds{1}_{\{Z_s\geq 1\}} Z_s^\kappa ] \nu_Y (\diff s) + \int_{[1,\infty)} \mathbb{E}[ Z_s^\kappa ] \nu_Y (\diff s) -\int_{[1,\infty)} \underbrace{\mathbb{E}[\mathds{1}_{\{Z_s<1 \}} Z_s^\kappa ]}_{\in [0,1)} \nu_Y (\diff s), \end{aligned} \end{equation} where the left-hand side of the equation is finite if and only if the right-hand side is. Here, the last integral as well as the sum of all three terms is obviously non-negative. Consider the first integral. It holds that \begin{align*} \mathbb{E}[\mathds{1}_{\{Z_s\geq 1\}} Z_s^\kappa] = \mathbb{P}(Z_s\geq 1) \cdot \mathbb{E}\left[Z_s^\kappa \big| Z_s\geq 1\right], \end{align*} where $\mathbb{E}[Z_s^\kappa | Z_s\geq 1]=:C_1 <\infty$ since we assumed $\mathbb{E}[Z_s^\kappa]<\infty$.
Moreover, from \cite[Lemma 30.3]{sato2nd} it follows that $\mathbb{P}(Z_s\geq 1)\leq C_2 s$ for some $C_2\in (0,\infty)$. Thus, \begin{align*} \int_{(0,1)} \mathbb{E}[\mathds{1}_{\{Z_s\geq 1\}} Z_s^\kappa ] \nu_Y (\diff s)\leq C_1C_2 \int_{(0,1)} s \nu_Y (\diff s) <\infty, \end{align*} since $Y$ is a subordinator and hence $\int_{(0,\infty)} (1\wedge y)\nu_Y(\diff y)<\infty$. \\ For the second integral note that by Lemma \ref{Lemma_taux+_polynomial} the mapping $s\mapsto \EE[Z_s^\kappa]$ is of polynomial order $\kappa$ for all $s\geq 1$. Thus it follows that the second summand in \eqref{eq_proof_PropSubord-Subordinator_1} and hence also the second summand in \eqref{eq_subordjumps} is finite if and only if $\int_{[1,\infty)} s^\kappa \nu_Y(\diff s)<\infty$ and $\int_{[1,\infty)} z^\kappa \nu_Z(\diff z)<\infty$. This finishes the proof of the equivalence.\\ In the case $\EE[Z_1]=\infty$ and $\kappa \in(0,1)$ we cannot apply Lemma \ref{Lemma_taux+_polynomial} to find a necessary and sufficient condition for finiteness of the second summand in \eqref{eq_proof_PropSubord-Subordinator_1}. However, an inspection of the proof of Lemma \ref{Lemma_taux+_polynomial} shows that even in this case the mapping $s\mapsto \EE[Z_s^\kappa]$ can be bounded from below by a function of polynomial order $\kappa$ for all $s\geq 1$. Thus finiteness of the second summand in \eqref{eq_proof_PropSubord-Subordinator_1} still implies $\int_{[1,\infty)} z^\kappa \nu_Z(\diff z)<\infty$, and moreover finiteness of all summands in \eqref{eq_subordjumps} implies $\int_{[1,\infty)} s^\kappa \nu_Y(\diff s)<\infty$ and $\int_{[1,\infty)} z^\kappa \nu_Z(\diff z)<\infty$ as stated. \end{proof} Let us now concentrate on the case $\psi'(0+)>0$ treated in Theorem \ref{thm-existence}(ii), where in the light of \eqref{eq_tau-_tau+} it remains to prove that $\EE[(\tau_{Y_1}^+)^\kappa]<\infty$ is equivalent to $\int_{[1,\infty)} y^{\kappa+1} \Pi (\diff y)<\infty$.
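The identity $\Phi'(q)=1/\psi'(\Phi(q))$ from \eqref{Lemma_derivative_inverse}, which drives the induction in the upcoming lemma, can be spot-checked numerically in a model where $\Phi$ is explicit. For the exponential-claim Cram\'er--Lundberg model, $\psi(\theta)=c\theta-\lambda\theta/(\mu+\theta)$, the equation $\psi(\theta)=q$ is quadratic in $\theta$, so $\Phi(q)$ is its larger root in closed form; the parameters below are again hypothetical choices of ours.

```python
# psi(theta) = c*theta - lam*theta/(mu + theta); psi(theta) = q is quadratic,
#   c*theta^2 + (c*mu - lam - q)*theta - q*mu = 0,
# and Phi(q) is its larger root.  We compare a central difference quotient of
# Phi with 1/psi'(Phi(q)).
import numpy as np

c, lam, mu = 2.0, 1.5, 3.0     # illustrative parameters with psi'(0+) = c - lam/mu > 0

def Phi(q):
    b = c * mu - lam - q
    return (-b + np.sqrt(b * b + 4.0 * c * mu * q)) / (2.0 * c)

def psi_prime(theta):
    return c - lam * mu / (mu + theta) ** 2

q, h = 0.8, 1e-6
finite_diff = (Phi(q + h) - Phi(q - h)) / (2.0 * h)
exact = 1.0 / psi_prime(Phi(q))
print(finite_diff, exact)
```

The central difference agrees with $1/\psi'(\Phi(q))$ up to an error of order $h^2$.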
To show this we need the following useful connection between the existence of integer moments of $\tau^+_1$ and of $X_1$. \begin{lemma} \label{Lemma_psiphin} Assume that $\psi'(0+)> 0$. Then for all $k\in\NN_0$ \begin{equation} \label{eq_proof_conj_psiphin} \lim_{q\downarrow 0 }\abs{ \Phi^{(k)}(q)} < \infty \qquad \text{if and only if} \qquad \lim_{q\downarrow 0}\abs{\psi^{(k)}(q)}<\infty. \end{equation} \end{lemma} \begin{proof} We prove the statement by induction. Clearly, for $k=0$ there is nothing to show, while for $k=1$ it follows from the assumption $\psi'(0+)> 0$ and the fact that $(X_t)_{t\geq 0}$ is spectrally negative that $\psi'(0+)\in(0,\infty)$. By \eqref{Lemma_derivative_inverse} we thus conclude that $\Phi'(0+)=1/\psi'(0+)\in(0,\infty)$, which settles the case $k=1$. Further, for $k=2$ we compute via \eqref{Lemma_derivative_inverse} \begin{align*} \Phi''(q)&= \partial_q\left( \frac{1}{\psi'(\Phi(q))}\right) = - \frac{\psi''(\Phi(q))}{\psi'(\Phi(q))^3}, \quad q>0, \end{align*} such that $$\Phi''(0+) = - \frac{\psi''(0+)}{\psi'(0+)^3}$$ which proves the claim for $k=2$. \\ Assume now that \eqref{eq_proof_conj_psiphin} holds for all $\ell=1,...,n-1$. If there exists $\ell'\in\{1,...,n-1\}$ such that both sides of \eqref{eq_proof_conj_psiphin} are infinite, then for all $\ell\in\{\ell',...,n-1\}$ both terms are infinite as well. Therefore we assume that both sides are finite for all $\ell=1,...,n-1$. \\ By definition of $\Phi$ we have $\psi(\Phi(q))=q$ for all $q\geq 0$ and hence $\partial_q^n \psi(\Phi(q))=0$ for all $n\geq 2$. Using Faà di Bruno's formula, cf. \cite[Equation (2.2)]{Johnson2002}, for $n\geq 2$ we therefore conclude that \begin{align*} 0 &= \sum_{k=1}^n \psi^{(k)}(\Phi(q)) \cdot B_{n,k}(\Phi'(q),...,\Phi^{(n-k+1)}(q)), \end{align*} where $B_{n,k}$ still denote the partial Bell polynomials.
Thus we get \begin{align*} \Phi^{(n)}(q) = B_{n,1}(\Phi^{(n)}(q)) &= \frac{-1}{\psi'(\Phi(q))}\cdot \sum_{k=2}^n \psi^{(k)}(\Phi(q)) \cdot B_{n,k}\left(\Phi'(q),...,\Phi^{(n-k+1)}(q)\right) \\ &= \frac{-1}{\psi'(\Phi(q))}\cdot \sum_{j=1}^{n-1} \psi^{(n+1-j)}(\Phi(q)) \cdot B_{n,n+1-j}\left(\Phi'(q),...,\Phi^{(j)}(q)\right), \end{align*} where the left-hand side is finite if and only if the right-hand side is. However, the right-hand side is finite in the limit $q\downarrow 0$ if and only if \begin{align*} \lim_{q\downarrow 0} \abs{\frac{1}{\psi'(\Phi(q))} \cdot \psi^{(n)}(\Phi(q)) \cdot B_{n,n}(\Phi'(q))} & = \lim_{q\downarrow 0} \abs{\frac{1}{\psi'(\Phi(q))} \cdot \psi^{(n)}(\Phi(q)) \cdot \Phi'(q)^n} \\ & = \abs{\frac{\psi^{(n)}(0+)}{\psi'(0+)^{n+1}}}<\infty, \end{align*} since all other summands are finite in the limit $q\downarrow 0$ by assumption. \end{proof} We are now in a position to present the proof of part (ii) of Theorem \ref{thm-existence}. \begin{proof}[Proof of Theorem \ref{thm-existence}(ii)] Assume $\psi'(0+)> 0$ and additionally that $x>0$, or $x=0$ and $(X_t)_{t\geq 0}$ is of bounded variation. As mentioned, using Lemma \ref{Lemma_k-thmoment_eta}, Proposition \ref{Prop-etaalsLE} and \eqref{eq-Wolfemoment} we see immediately that for all $\kappa>0$ \begin{equation*} \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] <\infty \qquad \text{if and only if} \qquad \EE[(\tau_{Y_1}^+)^\kappa] <\infty. \end{equation*} Further, applying Proposition \ref{lem-momentsubordination} in the present situation, it follows that \begin{equation} \label{eq-momentssubordinator} \EE[(\tau_{Y_1}^+)^\kappa] <\infty \qquad \text{if and only if} \qquad \left[ \EE[(\tau_1^+)^\kappa] <\infty \text{ and } \EE[Y_1^\kappa ]<\infty \right],\end{equation} since $\mathbb{E}[\tau_1^+]= \Phi'(0+)=1/\psi'(0+)<\infty$ as noted in the proof of Lemma \ref{Lemma_psiphin}.
Furthermore, $\EE[Y_1^\kappa ]<\infty $ is equivalent to finiteness of \begin{align}\label{eq-proof-partialintegration} \int_{[1,\infty)} y^{\kappa} \Pi((y,\infty)) \diff y &= \frac{1}{\kappa+1} \int_{[1,\infty)} y^{\kappa+1} \Pi(\diff y) - \frac{1}{\kappa+1}\Pi((1,\infty)) \end{align} by partial integration. Thus $$\EE[Y_1^\kappa ]<\infty \qquad \text{if and only if} \qquad \EE[|X_1|^{\kappa+1} ]<\infty,$$ such that $\EE[|X_1|^{\kappa+1} ]<\infty$ is shown to be a necessary condition for $\mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] <\infty$. However, it is a sufficient condition as well, since $\mathbb{E}[|X_1|^{\kappa+1}]< \infty$ implies $\mathbb{E}[|X_1|^{k}]<\infty$ for $k=\lfloor \kappa +1 \rfloor\geq 1$. This in turn implies $\mathbb{E}[(\tau^+_1)^{k}]<\infty$ by \eqref{eq-momentLaplace} and Lemma \ref{Lemma_psiphin}, which then yields $\mathbb{E}[(\tau^+_1)^{\kappa}]<\infty$, since $\kappa<k$. Thus both conditions on the right-hand side of \eqref{eq-momentssubordinator} hold if and only if $\EE[|X_1|^{\kappa+1} ]<\infty$, which finishes the proof. \end{proof} Finally, we consider the oscillating case $\psi'(0+)=0$. Again, in the light of \eqref{eq_tau-_tau+} we need to investigate the existence of $\EE[(\tau_{Y_1}^+)^\kappa]$, where this time we restrict ourselves to finding conditions for $\EE[(\tau_{Y_1}^+)^\kappa]=\infty$. \begin{proof}[Proof of Theorem \ref{thm-existence} (iii)] Assume that $\psi'(0+)= 0$ and additionally that $x>0$, or $x=0$ and $(X_t)_{t\geq 0}$ is of bounded variation. \\ (a), $\kappa\in(0,1]$: From \eqref{eq_tau-_tau+} we have $$ \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] =\infty \qquad \text{if and only if} \qquad \EE[(\tau_{Y_1}^+)^\kappa] =\infty,$$ and by Proposition \ref{lem-momentsubordination} for $\kappa \in (0,1)$ the latter follows in particular if $\EE[Y_{1}^\kappa]=\infty$.
This, however, is equivalent to $\int_{[1,\infty)} y^{\kappa} \Pi((y,\infty)) \diff y=\infty$ and via \eqref{eq-proof-partialintegration} it is furthermore equivalent to $\int_{[1,\infty)} y^{\kappa+1} \Pi(\diff y)= \infty$. \\ Consider now the case $\kappa = 1$, i.e. $\EE[Y_{1}]=\infty$. From \eqref{Lemma_derivative_inverse} it follows that $\Phi'(0+)=\mathbb{E}[\tau_1^+]=\infty$ and an inspection of the proof of Proposition \ref{lem-momentsubordination} reveals that in this setting also $\EE[\tau_{Y_1}^+] =\infty$. This again implies the statement.\\ (b) We consider a fixed $\kappa\in (\frac12,1)$ and prove \eqref{eq_kappa-thmoment_infinite} for the chosen $\kappa$. This will immediately imply the statement also for any $\kappa\geq 1$.\\ As before, from \eqref{eq_tau-_tau+} we have $$ \mathbb{E}\left[(\tau_x^-)^\kappa|\tau_x^-<\infty\right] =\infty \qquad \text{if and only if} \qquad \EE[(\tau_{Y_1}^+)^\kappa] =\infty,$$ where by Proposition \ref{lem-momentsubordination} the latter follows if $\EE[(\tau_{1}^+)^\kappa]=\infty$. Here, by \eqref{eq_Laplace_taux+}, \eqref{eq-Wolfemoment} and \eqref{eq_definition_marchaud_derivative}, \begin{align} \mathbb{E}[(\tau_1^+)^\kappa]& = (-1)^{-\kappa} \left[ D^\kappa e^{-\Phi(q)}\right]_{q=0} \nonumber \\ &= (-1)^{-\kappa} \left[\frac{(-1)^\kappa \cdot \kappa}{\Gamma(1-\kappa)} \int_q^\infty \frac{e^{-\Phi(q)}- e^{-\Phi(u)}}{u^{\kappa+1}} \diff u \right]_{q=0} \nonumber \\ &= \frac{\kappa}{\Gamma(1-\kappa)} \int_0^\infty \frac{1- e^{-\Phi(u)}}{u^{\kappa+1}} \diff u,\label{eq_proof_mainthm_iii2} \end{align} where the left-hand side is finite if and only if the right-hand side is finite.\\ As $\Phi$ is monotonically increasing with $\Phi(0)=0$ and $\Phi(u)\overset{u\to\infty}{\longrightarrow}\infty$ we clearly have for all $\varepsilon>0$ \begin{equation*} \int_\varepsilon^\infty \frac{1-e^{-\Phi(u)}}{u^{\kappa+1}} \diff u \leq \int_\varepsilon^\infty \frac{1}{u^{\kappa+1}}\diff u <\infty.
\end{equation*} Thus by \eqref{eq_proof_mainthm_iii2} \begin{align} \label{eq_proof_mainthmiii3} \mathbb{E}[(\tau_1^+)^\kappa] <\infty \quad \text{ if and only if }\quad \int_0^\varepsilon \frac{1-e^{-\Phi(u)}}{u^{\kappa+1}}\diff u<\infty \text{ for some }\varepsilon>0. \end{align} By Taylor's expansion, as $u\downarrow 0$, the term $1-e^{-\Phi(u)}$ is of the same order as $u \Phi'(u) e^{-\Phi(u)}$. Moreover, by \eqref{Lemma_derivative_inverse}, \begin{equation*} \lim_{u\downarrow 0} \frac{u \Phi'(u)}{u^{\kappa}} = \lim_{u\downarrow 0} \frac{\Phi'(u)}{u^{\kappa-1}} = \lim_{u\downarrow 0} \frac{u^{1-\kappa}}{\psi'(\Phi(u))}. \end{equation*} Recall that $\kappa\in(\frac12 ,1)$ and $\psi''(0+)<\infty$. By a twofold application of l'Hospital's rule we get \begin{equation*} \begin{aligned} \lim_{u\downarrow 0} \frac{u^{1-\kappa}}{\psi'(\Phi(u))} &= \lim_{u\downarrow 0} \frac{(1-\kappa)\cdot u^{-\kappa}}{\psi''(\Phi(u))\cdot\Phi'(u)} =\frac{(1-\kappa)}{\psi''(0+)} \cdot \lim_{u\downarrow 0} \frac{\psi'(\Phi(u))}{u^\kappa} \\ &=\frac{(1-\kappa)}{\psi''(0+)} \cdot \lim_{u\downarrow 0} \frac{\psi''(\Phi(u))\cdot\Phi'(u)}{\kappa \cdot u^{\kappa-1}} =\frac{(1-\kappa)}{\kappa} \cdot \lim_{u\downarrow 0} \frac{u^{1-\kappa}}{\psi'(\Phi(u))}. \end{aligned} \end{equation*} As $\kappa\neq \frac12$ this can only be true if \begin{equation} \label{eq_Phi_behaviour2} \lim_{u\downarrow 0} \frac{u^{1-\kappa}}{\psi'(\Phi(u))} = \lim_{u\downarrow 0} \frac{\psi'(\Phi(u))}{u^\kappa} = \text{ either }0 \text{ or }\infty. \end{equation} Since $\frac{u^{1-\kappa}}{\psi'(\Phi(u))} \cdot \frac{\psi'(\Phi(u))}{u^{\kappa}} = u^{1-2\kappa} \to \infty$ as $u\downarrow 0$ because $\kappa>\frac12$, the common value in \eqref{eq_Phi_behaviour2} cannot be $0$, which in turn implies \begin{equation*} \lim_{u\downarrow 0} \frac{u\Phi'(u)}{u^\kappa} = \lim_{u\downarrow 0} \frac{u^{1-\kappa}}{\psi'(\Phi(u))} = \infty.
\end{equation*} Thus also $$\lim_{u\downarrow 0} \frac{1-e^{-\Phi(u)}}{u^{\kappa}} = \lim_{u\downarrow 0} \frac{u\Phi'(u)e^{-\Phi(u)}}{u^{\kappa}} =\infty,$$ and in particular for any $C>0$ there exists $u_0>0$ such that $\frac{1-e^{-\Phi(u)}}{u^{\kappa}}>C$ for all $u<u_0$. Hence \begin{equation*} \int_0^\varepsilon \frac{1-e^{-\Phi(u)}}{u^{\kappa+1}} \diff u \geq \int_0^{u_0\wedge \varepsilon} \frac{1-e^{-\Phi(u)}}{u^{\kappa+1}} \diff u \geq \int_0^{u_0\wedge \varepsilon} \frac{C\cdot u^\kappa}{u^{\kappa+1}} \diff u = C\cdot\int_0^{u_0\wedge \varepsilon} \frac{1}{u} \diff u = \infty. \end{equation*} By \eqref{eq_proof_mainthmiii3} this implies $\EE[(\tau_1^+)^\kappa]=\infty$ and thus the statement. Lastly, note that \eqref{eq_kappa-thmoment_infinite} for all $x>0$, $\kappa \geq 1$ is a direct consequence of (a) and (b), as either $\psi''(0+)<\infty$, in which case we can apply (b), or $\psi''(0+)=\infty$, which is equivalent to $\int_{[1,\infty)} y^2 \Pi(\diff y) = \infty$ and hence $\kappa^*=1$ is a possible choice in (a). \end{proof} \section{Representation formulas for integer moments} \label{S3} \setcounter{equation}{0} We end this paper with several explicit formulae for the integer moments of the first passage time of $-x$ in terms of the Laplace exponent $\psi$ and the $q$-scale function $W^{(q)}$. We start with a general formula in Proposition \ref{Proposition_formulae_to_work_with} before considering the first two moments separately in Theorem \ref{Theorem_FirstMoments}. Note that the special case $k=1$ of Equation \eqref{eq_generalMoments_convolution} below can also easily be derived from \cite[Theorem 6.9.A)]{Avram2020}. \begin{proposition} \label{Proposition_formulae_to_work_with} Let $X=(X_t)_{t\geq 0}$ be a spectrally negative L\'evy process with Laplace exponent $\psi$.
For any $x\geq 0$ and any $k\in \NN$ the $k$-th moment of the first passage time $\tau_x^-|\tau_x^-<\infty$ is given by \begin{align} \lefteqn{\mathbb{E}[(\tau_x^-)^k|\tau_x^-<\infty] } \label{eq_general_formula} \\ &=\frac{(-1)^k}{ \mathbb{P}(\tau_x^- <\infty)} \lim_{q\downarrow 0}\left(k \int_0^x \partial_q^{k-1} W^{(q)}(y)\diff y - \sum_{\ell=0}^k {k\choose \ell}\cdot \left(\partial_q^\ell \eta(q)\right)\cdot \left( \partial_q^{k-\ell}W^{(q)}(x)\right)\right), \nonumber \end{align} where $\eta(q) := \frac{q}{\Phi(q)}$. Moreover, with $(W^{(0)})^{\ast k}(x)$ denoting the $k$-fold convolution of $W^{(0)}(x)$ with itself, for all $x>0$ \begin{equation} \label{eq_generalMoments_convolution} \mathbb{E}[(\tau_x^-)^k|\tau_x^-<\infty] =\frac{(-1)^k \cdot k!}{ \mathbb{P}(\tau_x^- <\infty)} \left( \int_0^x (W^{(0)})^{\ast k}(y)\diff y - \sum_{\ell=0}^k \frac{\eta^{(\ell)}(0+)}{\ell!} (W^{(0)})^{\ast (k-\ell+1)}(x) \right). \end{equation} In particular, the left-hand sides of \eqref{eq_general_formula} and \eqref{eq_generalMoments_convolution} are finite if and only if the corresponding right-hand sides are finite. \end{proposition} \begin{proof}[Proof of Proposition \ref{Proposition_formulae_to_work_with}] We use \eqref{eq_formula_to_work_with} for $\kappa=k\in\NN$, i.e.
$$\mathbb{E} \left[(\tau_x^-)^k\big|\tau_x^- <\infty \right] = \frac{(-1)^k}{ \mathbb{P}(\tau_x^- <\infty)}\cdot \lim_{q\to 0}\partial^k_q \left(Z^{(q)}(x) - \frac{q}{\Phi(q)} \cdot W^{(q)}(x)\right),$$ where by induction one can show, using the product rule of differentiation, that \begin{align*} \partial^k_q Z^{(q)}(x) &= k \cdot \int_0^x \partial_q^{k-1} W^{(q)}(y)\diff y + q\cdot \int_0^x \partial_q^{k} W^{(q)}(y)\diff y. \end{align*} Further, an application of the general Leibniz rule yields \begin{align*} \partial_q^k\left( \frac{q}{\Phi(q)} \cdot W^{(q)}(x)\right) &= \sum_{\ell=0}^k {k\choose \ell}\cdot \left(\partial_q^\ell \eta(q)\right)\cdot \left( \partial_q^{k-\ell}W^{(q)}(x)\right), \end{align*} such that combining both expressions we obtain \begin{align*} \mathbb{E}[(\tau_x^-)^{k}|\tau_x^-<\infty] &=\frac{(-1)^k}{\mathbb{P}(\tau_x^- <\infty) }\cdot \lim_{q\downarrow 0} \Bigg(k \cdot \int_0^x \partial_q^{k-1} W^{(q)}(y)\diff y + q\cdot \int_0^x \partial_q^{k} W^{(q)}(y)\diff y \\ &\qquad \qquad - \sum_{\ell=0}^k {k\choose \ell}\cdot \left(\partial_q^\ell \eta(q)\right)\cdot \left( \partial_q^{k-\ell}W^{(q)}(x)\right)\Bigg). \end{align*} Now \eqref{eq_general_formula} follows, since $W^{(q)}$ and $\int_0^x W^{(q)}(y)\diff y$ are analytic in $q$. To prove the second equation, observe that from \cite[Eq. (8.29)]{Kyprianou2014} for $x>0$ $$\partial_q^k W^{(q)}(x) = \partial_q^k \sum_{\ell\geq 0} q^\ell (W^{(0)})^{\ast(\ell+1)}(x) = \sum_{\ell \geq k} (\partial_q^k q^\ell) (W^{(0)})^{\ast(\ell+1)}(x),$$ which implies $\lim_{q\downarrow 0}\partial_q^k W^{(q)}(x) = k! (W^{(0)})^{\ast(k+1)}(x)$. Inserting these limits into \eqref{eq_general_formula} yields \eqref{eq_generalMoments_convolution}. \end{proof} The next theorem states formulae for the first and second moment of the first downwards passage time of a spectrally negative L\'evy process $X$ with Laplace exponent $\psi$. Note that as in Section \ref{S2} we have to distinguish between three cases, where in the first two one has $\psi'(0+)=\mathbb{E}[X_1]\neq 0$ and the Lévy process drifts to $\pm \infty$.
In the third case $\psi'(0+)=0$ the process $X$ is oscillating. However, as already seen in Theorem \ref{thm-existence}, in this case no integer moments of the first passage time exist. Hence we exclude this case in the next theorem, as well as the case of $\psi'(0+)>0$, $x=0$ and $(X_t)_{t\geq 0}$ being of unbounded variation, where $\tau_0^-=0$ a.s. \begin{theorem} \label{Theorem_FirstMoments} Let $X=(X_t)_{t\geq 0}$ be a spectrally negative L\'evy process with Laplace exponent $\psi$ and $x\geq 0$. \begin{enumerate} \item Assume $\psi'(0+)<0$. Then all moments of $\tau_x^-$ are finite and in particular \begin{align} \mathbb{E}[\tau_x^-] &= \frac{1}{\Phi(0)} W^{(0)}(x) - \int_0^x W^{(0)}(y)\diff y, \label{eq_Theorem_FirstMomentUnprof} \\ \mathbb{E}[(\tau_x^-)^2] & = 2 \int_0^x \lim_{q\downarrow 0}\partial_q W^{(q)}(y)\diff y - \frac{2}{\Phi(0)} \lim_{q\downarrow 0} \partial_q W^{(q)}(x) + \frac{2 \cdot W^{(0)}(x)}{\Phi(0)^2\cdot \psi'(\Phi(0))}. \label{eq_Theorem_SecondMomentUnprof} \end{align} \item Assume that $\psi'(0+)>0$, and that either $x>0$ or $(X_t)_{t\geq 0}$ is of bounded variation. Then $\EE[\tau_x^-| \tau_x^-<\infty]<\infty$ if and only if $\EE[X_1^2]<\infty$, in which case $\psi''(0+)<\infty$ and \begin{equation}\label{eq_Theorem_FirstMomentprof} \mathbb{E}[\tau_x^-| \tau_x^-<\infty] = \frac{\psi'(0+) \cdot \lim_{q\downarrow 0} (\partial_q W^{(q)}(x)) + \frac{\psi''(0+)}{2\cdot\psi'(0+)} \cdot W^{(0)} (x) - \int_0^x W^{(0)}(y)\diff y}{1-\psi'(0+) \cdot W^{(0)}(x)}.
\end{equation} Moreover, $\EE[(\tau_x^-)^2| \tau_x^-<\infty]<\infty$ if and only if $\EE[|X_1|^3]<\infty$, in which case $\psi''(0+),|\psi'''(0+)|<\infty$ and \begin{align} \label{eq_Theorem_SecondMomentprof} \lefteqn{\mathbb{E}[(\tau_x^-)^2| \tau_x^-<\infty] }\\ &= \frac{1}{1-\psi'(0+)\cdot W^{(0)}(x)} \cdot \bigg(2\cdot\lim_{q\downarrow 0} \int_0^x \partial_q W^{(q)}(y)\diff y - \psi'(0+)\lim_{q\downarrow 0}\partial_q^2 W^{(q)}(x) \nonumber \\ & \qquad - \frac{\psi''(0+)}{\psi'(0+)} \cdot \lim_{q\downarrow 0}\partial_q W^{(q)}(x)- \left( \frac{ \psi'''(0+)}{3\cdot \psi'(0+)^2} - \frac{\psi''(0+)^2}{2\cdot \psi'(0+)^3} \right)W^{(0)}(x) \bigg). \nonumber \end{align} \end{enumerate} \end{theorem} Obviously, in order to evaluate any of the formulas \eqref{eq_general_formula} to \eqref{eq_Theorem_SecondMomentprof} for a specific Lévy process $(X_t)_{t\geq 0}$, it is necessary to have an explicit expression for its scale function. Collections of processes for which this is the case can be found, e.g., in \cite{hubalek2010} and \cite{kuznetsov2011}. However, even in the case of rather simple scale functions, the computations needed to obtain closed-form expressions for the moments of the first passage time involve serious computational effort. We thus refrain from providing an explicit example here and postpone these considerations to our forthcoming paper \cite{BSTTRpart2}. In view of Proposition \ref{Proposition_formulae_to_work_with}, to prove Theorem \ref{Theorem_FirstMoments} we need to compute the first two derivatives of $\eta$ and their behaviour as $q\downarrow 0$. This will be done in the next two lemmas. \begin{lemma} \label{Lemma_Eta12(q)} For all $q>0$ we have \begin{align} \eta'(q) & = \frac{1}{\Phi(q)} - \frac{q}{\psi'(\Phi(q)) \cdot \Phi(q)^2}, \label{eq_Eta1(q)} \\ \eta''(q) &= \frac{2q}{\Phi(q)^3 \cdot \psi'(\Phi(q))^2} - \frac{2}{\Phi(q)^2 \cdot \psi'(\Phi(q))} + \frac{q\cdot \psi''(\Phi(q))}{\Phi(q)^2 \cdot \psi'(\Phi(q))^3}.
\label{eq_Eta2(q)} \end{align} \end{lemma} \begin{proof} Both formulas follow by standard differentiation of $\eta(q)=\frac{q}{\Phi(q)}$, applying the quotient and product rules and using relation \eqref{Lemma_derivative_inverse}. \end{proof} \begin{lemma}\label{Lemma_Eta(0+)} As $q\downarrow 0$ we have \begin{equation*} \eta'(0+) := \lim_{q\downarrow 0} \eta'(q) = \begin{cases} \frac{1}{\Phi(0)}, & \text{if } \psi'(0+)<0, \\ \frac{\psi''(0+)}{2\cdot \psi'(0+)}, & \text{if } \psi'(0+)>0 \text{ and }\psi''(0+)<\infty, \end{cases} \end{equation*} while \begin{equation*} \eta''(0+) = \begin{cases} \frac{-2}{\Phi(0)^2\cdot\psi'(\Phi(0))}, & \text{if } \psi'(0+)<0,\\ \frac{ \psi'''(0+)}{3\cdot\psi'(0+)^2} - \frac{\psi''(0+)^2}{2\cdot \psi'(0+)^3}, & \text{if } \psi'(0+)>0 \text{ and }\psi''(0+), |\psi'''(0+)|<\infty. \end{cases} \end{equation*} \end{lemma} \begin{proof} The limit of $\eta'$ in the case $\psi'(0+)<0$ follows immediately from \eqref{eq_Eta1(q)} since $\Phi(0)>0$ and $\psi'(\Phi(0))>0$, where the latter is a consequence of $\psi(0)=0=\psi(\Phi(0))$ with $\Phi(0)>0$ and the monotonicity of $\psi'(q)$, $q>0$. Concerning the second derivative in the case $\psi'(0+)<0$, observe that the first and last term in \eqref{eq_Eta2(q)} vanish in the limit since $\Phi(0)>0$, $\psi'(\Phi(q))\to\psi'(\Phi(0))>0$, and $\psi''(\Phi(q))\to\psi''(\Phi(0))$ as $q\downarrow 0$, where $\Phi(0)>0$ implies $\psi''(\Phi(0))<\infty$. So only the second term remains and this immediately yields the result.\\ In the case $\psi'(0+)>0$ we have $\Phi(0+)=0$. Further $$\Phi''(q) = \partial_q\left(\psi'(\Phi(q))^{-1}\right) = - \psi''(\Phi(q)) \cdot \Phi'(q)^3$$ by \eqref{Lemma_derivative_inverse}.
Applying l'Hospital's rule we thus obtain \begin{align*} \lim_{q\downarrow 0}\eta'(q)= \lim_{q\downarrow 0} \frac{\Phi(q) - q\cdot \Phi'(q)}{\Phi(q)^2}= - \lim_{q\downarrow 0} \frac{q\cdot \Phi''(q)}{2\cdot \Phi(q)\cdot \Phi'(q)} &= \lim_{q\downarrow 0} \frac{\psi''(\Phi(q))}{2} \cdot \frac{q \cdot \Phi'(q)^2}{\Phi(q)} \\ & = \frac{\psi''(0+)}{2\cdot \psi'(0+)}, \end{align*} since $q/\Phi(q) \to \psi'(0+)$ by \eqref{eq-limiteta} and $\psi'(\Phi(q))\to\psi'(0+)$ as $q\downarrow 0$. \\ To obtain the limit of the second derivative note that from \eqref{eq_Eta2(q)} we have \begin{align} \eta''(0+) &= \lim_{q\downarrow 0} \frac{1}{\psi'(\Phi(q))^3} \cdot \frac{ 2q \cdot \psi'(\Phi(q)) -2 \cdot \Phi(q)\cdot \psi'(\Phi(q))^2 + q \cdot \psi''(\Phi(q))\cdot \Phi(q) }{\Phi(q)^3} \nonumber \\ &=: \lim_{q\downarrow 0} \frac{1}{\psi'(\Phi(q))^3}\cdot H_1(q),\label{eq_proof_Proposition_Eta2(0+)} \end{align} where the first factor converges to $\psi'(0+)^{-3}>0$ as $q\downarrow 0$. Applying l'Hospital's rule on $H_1$ we obtain after some rearrangement using \eqref{Lemma_derivative_inverse} \begin{align*} \lim_{q\downarrow 0} H_1(q) & =\lim_{q\downarrow 0} \frac{ 3 q\cdot \psi''(\Phi(q)) -3 \psi''(\Phi(q))\cdot\psi'(\Phi(q)) \cdot \Phi(q) + q\cdot \psi'''(\Phi(q))\cdot \Phi(q) }{3 \Phi(q)^2} \\ & =\lim_{q\downarrow 0} \psi''(\Phi(q)) \cdot \frac{q-\Phi(q) \cdot \psi'(\Phi(q))}{\Phi(q)^2} + \lim_{q\downarrow 0} \frac{q}{\Phi(q)} \cdot \frac{\psi'''(\Phi(q))}{3}\\ &= \psi''(\Phi(0+)) \cdot \lim_{q\downarrow 0} H_2(q) + \psi'(0+) \cdot \frac{\psi'''(\Phi(0+))}{3} \end{align*} since $\psi''(\Phi(q))\to\psi''(0+)<\infty$, $\psi'''(\Phi(q))\to\psi'''(0+)<\infty$, and $q/\Phi(q)\to\psi'(0+)$ as $q\downarrow 0$ by \eqref{eq-limiteta}. 
Finally, applying l'Hospital's rule on $H_2$ and rearranging using \eqref{Lemma_derivative_inverse} yields \begin{equation*} \lim_{q\downarrow 0}H_2(q) =\lim_{q\downarrow 0} \frac{1-\Phi'(q)\cdot \psi'(\Phi(q)) - \Phi(q)\cdot \psi''(\Phi(q))\cdot \Phi'(q)}{2\Phi(q) \cdot \Phi'(q)} = -\frac{\psi''(\Phi(0+))}{2}. \end{equation*} Inserting everything into \eqref{eq_proof_Proposition_Eta2(0+)} now completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Theorem_FirstMoments}] Finiteness of the moments as stated follows from Theorem \ref{thm-existence} together with \eqref{Lemma_equivalence_momentenBedingungen} and \eqref{eq-momentLaplace}.\\ Evaluating \eqref{eq_general_formula} for $k=1$ and using \eqref{eq-limiteta} and \eqref{eq_Kyprianou_Ruin} yields that \begin{align*} \mathbb{E}[\tau_x^-|\tau_x^-<\infty] &= \frac{ (0\vee \psi'(0+)) \cdot \lim_{q\downarrow 0}(\partial_q W^{(q)}(x)) + \eta'(0+) W^{(0)}(x) -\int_0^x W^{(0)}(y)\diff y }{1-(0\vee \psi'(0+)) \cdot W^{(0)} (x)}, \end{align*} and via Lemma \ref{Lemma_Eta(0+)} we immediately derive \eqref{eq_Theorem_FirstMomentUnprof} and \eqref{eq_Theorem_FirstMomentprof}.\\ Likewise, an evaluation of \eqref{eq_general_formula} for $k=2$ leads to \eqref{eq_Theorem_SecondMomentUnprof} and \eqref{eq_Theorem_SecondMomentprof} via \eqref{eq-limiteta}, \eqref{eq_Kyprianou_Ruin}, and Lemma \ref{Lemma_Eta(0+)}. \end{proof} \small \bibliography{literatureTheoryTTR} \end{document}
Start using iOS Passbook in 5 Easy Steps With the launch of iOS 6 came the much anticipated Passbook app. Despite Passbook being out for over a month now, most people have no idea how to use it, so I decided to put together a painfully easy tutorial. Let’s get to it. Reasons to use Passbook - Thin out your wallet by carrying around fewer loyalty and gift cards - Leave your wallet in your pocket, and pay with your phone in just a few taps - Avoid printing things like concert tickets, boarding passes, train tickets, etc. 1. Start with a Starbucks card If you live anywhere near a Starbucks, I recommend you bite the bullet and experience Passbook as soon as possible. The easiest way to get started is to purchase a gift card at your local Starbucks and ask the cashier to load it with $5. 2. Download the Starbucks app Visit the iOS App Store and download the free Starbucks app. 3. Link your card to the Starbucks app 3.1 Tap the + icon at the top right of the Starbucks app 3.2 Type in the numbers from your physical Starbucks card 4. Add your Starbucks card to the Passbook app Once your card is added to the Starbucks app, you will see a Reload and a Manage button under the card. Hit the “Manage” button. 4.1 After hitting “Manage”, select the option that says “Add Card to Passbook” 5. Open the Passbook app Go back to your home screen on the iPhone, and open the Passbook app. You should see your Starbucks card appear. Your Starbucks pass will look something like this. Save Time and Earn Rewards Earn your Starbucks gold status in no time, and don’t worry about fishing your plastic Starbucks card out of your wallet. By simply flashing your Starbucks card against a barcode reader at the cashier station, you are able to instantly pay and earn your Starbucks rewards. See some of my other posts regarding Passbook.
Online Survey Software Online survey software, like Quipol, shown here, is much less expensive than its traditional counterparts, and has seen growth as companies cut marketing budgets. IBISWorld says firms are shifting to online survey software as a low-cost way to engage existing and potential customers, and low barriers to entry allow newcomers to enter the growing industry. Annual Revenue Growth (2011-2016): 9.6% Annual Enterprise Growth (2011-2016): 2.8% Barriers to Entry: Low Capital Intensity: High 2011 Profit Margin: 60%
Quirky and masterful in equal measure, Welsh noise-pop experts Seazoo share effervescent new cut Heading Out, the second offering from the band’s upcoming sophomore album set for release in spring 2020. Following on from swaggering lead single Throw It Up, the indie outfit’s trademark positivity and unbridled charisma shines through once more in their playful new effort, tapping further into the ideas likely to be explored throughout their second full-length record.
At Cathedral Hill, we spend a lot of time thinking about how to make our guests comfortable. Delicious food and clean, comfortable rooms are a given. We try to go above and beyond the basics. We put the patio in the “above and beyond” category – assuming the weather permits (and it usually does in the summer!) we would love to serve your breakfast on the patio. This is the whimsical fountain, framed by lots and lots of annuals in baskets and in planters: This is a view from the west end of the backyard looking east. The patio and the tables are at the far end of the picture: We can’t wait to see you and share our beautiful backyard with you. Give us a call and we’ll schedule your stay!
Hyundai is looking to expand the number of models it offers by expanding its line of crossovers. In an announcement made by the company, Hyundai says that it wants to introduce or redesign eight crossover utility vehicles (CUVs) by the year 2020. The first vehicle to lead the way will be the Hyundai Kona, which will hit dealership floors in March of 2018. Hyundai's current lineup mostly.
Personal Loans Newark New Jersey – Bad Credit Accepted Here at BTE Financial Services our company strives to always be your key resource relating to personal loans in Newark, NJ, no matter your credit history. Our own quick and simple web based financing application approach makes it possible for you to receive the particular funding that you need when you need it most. While nearly all our rivals within Newark New Jersey ignore applicants with poor credit, our front door is always open. Regardless of whether you happen to be very new to credit entirely or maybe have experienced several challenges in your personal credit score, contributing to below average or bad credit, our company offers a wide variety of personal solutions to meet your own needs. Help save yourself a visit to the bank and apply via the internet to receive a quick personal loan approval without delay! The Best Personal Loan Solutions for NJ With All The Financial Versatility A Person Needs Practically nothing is more stressful than ending up in a daunting money circumstance then getting rejected time after time by bankers as well as creditors. At BTE Financial Services, we treat all of our valued customers with all the respect which they should have, and make it the #1 mission to help most individuals regardless of their specific credit history. It is simple to acquire a personal loan, and with the numerous programs available most people are able to find the best financing that will help their individual scenario. We provide a range of options pertaining to bad credit personal loans in Newark, delivering options to receive the help you are trying to get. If you’re looking for a short-term or possibly long term personal loan, we can easily work with you to get you the funds that you want rapidly. Your Quick Approval Approach To Acquire A Poor Credit Personal Loan in New Jersey For people who have poor credit, it is typically difficult to find an approval for a personal loan.
Most loan providers around Newark are not going to even give thought to granting financing to any person with a credit score under 650. Nevertheless, at BTE Financial Services we recognize that life is unpredictable and even though you have had problems with your credit status, you should not be left without financial solutions. This is the reason we have formulated a straightforward method that provides you with really fast approvals to get personal loans for bad credit and is therefore available online. Discover why BTE is one of the popular organizations regarding large and small loans nationally. Our web based application process makes it much easier than in the past to receive your personal loan as well as obtain your funding fast. Escape from standing in long lines only to be denied by the standard bank; fill out an application online right now to get approved for a bad credit personal loan now. A Novice To Credit as well as Bad Credit Personal Loans in Newark NJ are Available Here at BTE Financial Services all of us assist individuals of all backgrounds as well as credit ratings. Whether you are a newcomer to credit entirely and just want a personal loan to resolve your finances or maybe pursue your personal ambitions, or you have experienced some sort of rocky personal credit history and now have bad or weak credit, we can match you using one of our financing solutions. These powerful financing selections can easily satisfy virtually any variety of credit rating, offering up a lot more personal loans for bad credit when compared with any of our rivals within the Newark vicinity. Exactly How BTE Financial Services Personal Loans Can Help You Most consumers find personal loans very helpful, especially in difficult budgetary issues.
In Newark New Jersey, a lot of consumers have used a personal loan for bad credit to solve budgetary problems such as repaying high APR credit card bills, making bill and house obligations between paychecks, eliminating various other unsettled debt, completing remodeling plans, and so much more. Personal loans may also supply you with the rapid finances that you may want to acquire a diamond ring for your special significant other, start up a new business venture, or even head out on a family getaway. Regardless of why you might require a personal loan in Newark, our online approval method could help you acquire the funding you need right away. Reasons to Choose BTE Financial A great many residents in the Newark vicinity have already relied on our group with regard to their personal finance resolutions due to our superb track record. Our service provides far more unsecured personal loans for bad credit when compared with almost any company found in Newark New Jersey and even bordering cities, and we happen to be recognized as one of the top fast loan services in all of NJ. The reason is that our company values each of our customers and doesn't judge them by the quality of their credit rating as a lot of our rivals will. If you are dealing with a tricky financial strain, we want to help alleviate the tension by giving you the flexibility you need to ease your personal budgeting strain. The quick online loan application method takes the aggravation away from obtaining a personal loan. Through a few quick clicks you can get authorized and then receive the financing you need rather quickly. Apart from our simple and easy loan authorization system, the designated personal loan specialist will work with you to develop a repayment schedule inside of your financial budget. Unlike a few other types of online personal loan providers, we always maintain full transparency so that you don't run across any sort of buried charges.
From getting approved to receiving your loan and repaying it, our process is without a doubt hassle free and even discreet from start to finish. Apply Safely And Securely Online With Us Today for A Newark Personal Loan Frustrated by being turned away by other loan companies? If you are seeking to be approved for a bad credit personal loan in Newark New Jersey, you should not think twice about filling out our 100 percent free, no obligation application today! We accept credit seekers with zero credit status or financial credentials and even offer a strong assortment of monetary solutions to consider. Whether you are seeking a short term loan to pay your lease payment in between paychecks, or perhaps a long term loan to pay off credit cards or some other kind of debt, we can easily match a personal cash loan to your needs. Our web based credit application in addition to the approval process is definitely easy and also hassle-free! Don't be trapped standing in long lines at a bank just to get declined not to mention turned away. By applying on the internet you may get accepted and receive your cash fast in just a few clicks. The servers are 100% safe, safeguarding your privacy and preserving total discretion every step of the way. Apply online today to acquire a personal loan by using BTE and find the financial versatility you would like! Best And Newest Bulletins Along With Personal Finance Help Blog Posts. Unsecured Personal Loans : No Credit Finance Option Car Title Loans : Bad Credit Loans That Provide Quick Cash : Lending Opportunities for People with Poor Credit : Personal Loan Basics : Best Personal Loan
TITLE: probability that of 5 draws, you get 3 aces, with replacement? QUESTION [0 upvotes]: Suppose you are playing with a deck of 52 cards, 13 of each suit. What is the probability that in 5 draws you get 3 aces, with replacement? When solving this question I thought we take the probability of three aces times the probability of the rest (which is one): $(4/52)(4/52)(4/52) \approx 0.00046$. REPLY [0 votes]: By symmetry, don't consider suit. You need any permutation of $(A, A, A, x_1, x_2)$ where $x_1,x_2\neq A$. If the $x$s are equal, there are $\binom{12}{1}5!/3!/2!$ wanted cases. If the $x$s are not equal, there are $\binom{12}{2}5!/3!$ wanted cases. Total cases are $13^5$, so you get $\left(\binom{12}{1}5!/3!/2!+\binom{12}{2}5!/3!\right)/13^5 = 0.387833867053782\%$.
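As a quick sanity check (our addition, not part of the original answer), Python's standard library can compute this probability both ways: via the binomial formula and via the reply's direct count over ordered rank-sequences.

```python
from math import comb, factorial

# Binomial form: choose which 3 of the 5 draws are aces; with replacement
# each draw is an ace with probability 1/13 (suit ignored by symmetry).
p_binomial = comb(5, 3) * (1 / 13) ** 3 * (12 / 13) ** 2

# Direct count from the reply, over ordered sequences of 5 ranks:
equal_xs = comb(12, 1) * factorial(5) // (factorial(3) * factorial(2))  # (A,A,A,x,x)
distinct_xs = comb(12, 2) * factorial(5) // factorial(3)                # x_1 != x_2
p_count = (equal_xs + distinct_xs) / 13 ** 5

print(p_binomial, p_count)  # both equal 1440/371293, about 0.3878 %
```

Both routes give 1440/371293, confirming the reply's count.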
Featured in Chinese Ringtones
2019 chinese you exist in my Ringtone
Chinese Ringtones
Beautiful Chinese Music Ringtone
Chinese forever love Ringtone
Best chinese flute 2019 Ringtone
Chinese Lovely Music Ringtone 2019
Chinese clouds on the river Ringtone
Jay Chou new chinese ringtone
Green Sherly CHEN Xanh lục ringtone
Free download new chinese ringtone
Free download chinese mp3 ringtone
Eric What's Wrong chinese tune
Download mp3 new chinese ringtone
Download free mp3 chinese ringtone
Chinese ringtone free download mp3
Download chinese top song ringtone
Yoga Lin Otomen chinese mp3 tune
Yisa Yu Walking chinese ringtone mp3
Top 2019 chinese ringtone
Top chinese ringtones download free
Nine Chen feat chinese ringtone
New chinese song ringtone
New chinese Namewee ringtone
New chinese Corki ringtone
Like Boom chinese mp3 ringtone
Latest mp3 chinese ringtone download
Chinese mandarin pop song Ringtone
Best of Chinese Ringtones
Chinese clouds on the river Ringtone
Best chinese flute 2019 Ringtone
Chinese forever love Ringtone
Beautiful Chinese Music Ringtone
Chinese Lovely Music Ringtone 2019
2019 chinese you exist in my Ringtone
Best 7th sense chinese Ringtone
Best New chinese Girl Music Ringtone
Chinese fixing a broken heart Ringtone
Chinese an jing le 2019 Ringtone
Best Categories
© 2019 Copyright: TuneUpLoops
Milazzo, Marco; Quattrocchi, Federico; Azzurro, Ernesto; Palmeri, Angelo; Chemello, Renato; Di Franco, Antonio; Guidetti, P.; Sala, Enric; Sciandra, Mariangela; Badalamenti, F.; García-Charton, José A. Marine Environmental Research 120: 55-67 (2016) DIGITAL CSIC Warming induces organisms to adapt or to move to track thermal optima, driving novel interspecific interactions or altering pre-existing ones. We investigated how rising temperatures can affect the distribution of two antagonist Mediterranean wrasses: the ‘warm-water’ Thalassoma pavo and the ‘cool-water’ Coris julis. Using field surveys and an extensive database of depth-related patterns of distribution of wrasses across 346 sites, last-decade and projected patterns of distribution for the middle (2040-2059) and the end of the century (2080-2099) were analysed by a multivariate model-based framework. Results show that T. pavo dominates shallow waters at the warmest locations, where C. julis locates deeper. The northernmost shallow locations are dominated by C. julis where T. pavo abundance is low. Projections suggest that the W-Mediterranean will become more suitable for T. pavo whilst large sectors of the E-Mediterranean will be unsuitable for C. julis, progressively restricting its distribution range. These shifts might result in fish communities’ re-arrangement and novel functional responses throughout the food-web.
TITLE: Free groupoid and homotopy equivalence QUESTION [1 upvotes]: Let $C$ be a (small) category. One can form the free groupoid $GC$ of $C$, which is the left adjoint construction to the inclusion functor $\mathrm{Groupoid}\rightarrow\mathrm{Category}$. Is $C$ then always homotopy equivalent to $GC$? In other words, are the spaces $BC$ and $B\pi BC$ homotopy equivalent, where $B\underline{}$ is the classifying space and $\pi\underline{}$ is the fundamental groupoid? REPLY [8 votes]: No. There is a monoid with trivial group image whose classifying space is a sphere. See Is there a (discrete) monoid M injecting into its group completion G for which BM is not homotopy equivalent to BG? Basically, take the idempotent semigroup with elements $(a,b)$ where $a,b$ are either $0$ or $1$ and multiplication is $(a,b)(c,d)=(a,d)$. Next add an identity. Clearly the group completion of this idempotent monoid is trivial. It is known to have classifying space homotopy equivalent to a 2-sphere.
TITLE: If $f:\mathbb{A}^1\to \mathbb{A}^1$ is $x\mapsto x^n$, then $f_*\mathcal{O}_{\mathbb{A}^1}\cong\mathcal{O}_{\mathbb{A}^1}^{\oplus n}$? QUESTION [1 upvotes]: Let $f:\mathbb{A}^1\to \mathbb{A}^1$ be the map given by $f(x)=x^n$. (I'm giving this as a concrete example, but I also care about the global picture, i.e., what happens for more general finite maps.) The standard semicontinuity theorems should imply that $f_*\mathcal{O}_{\mathbb{A}^1}$ is locally free, but I think that it's even globally free of rank $n$ (if $x$ is the coordinate on the domain, perhaps it even makes sense to say that $1,x,\dotsc,x^{n-1}$ is a basis). Is this true? If so, what happens for more general finite $f$? REPLY [2 votes]: For simplicity, all schemes are affine, because what you are asking about is in general a property of affine morphisms: if $$f: X \rightarrow Y$$ is a finite morphism, one might ask whether the push-forward $f_*\mathcal{O}_X$ is locally free. This is precisely the case when $f$ is finite flat, i.e. $f$ is finite and $\mathcal{O}_{X,x}$ is a flat $\mathcal{O}_{Y,f(x)}$-module. In algebra terms: an $A$-module $M$ is finite locally free iff it is finitely presented and $A$-flat. Note that if $f: X \hookrightarrow Y$ is the inclusion of a proper closed subscheme, then $f$ is finite but not flat, because $f_*\mathcal{O}_X$ is not free but has torsion. In your specific case, the morphism is indeed flat! Note further that the fibre over a closed $k$-point $a\neq 0$ of your morphism $f: \mathbf{A}^1 \rightarrow \mathbf{A}^1$ (let's say over algebraically closed $k$) looks like $\operatorname{Spec}$ of $k[t]/(t^n-a)\cong \oplus_{i<n} k t^{i}$, which is a finite set of points. This module is indeed generated by $1,t, \dots, t^{n-1}$ as a $k$-vector space. All fibres look like this except over $0$, where $f$ is ramified.
Note that finite morphisms are integral, so your map $\mathcal{O}_Y=k[t] \xrightarrow{t\mapsto t^n} k[t]=f_*\mathcal{O}_X$ makes the latter finite as an $\mathcal{O}_Y$-module, with basis $1,t,\dots,t^{n-1}$.
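Concretely, the basis statement says that every $f \in k[t]$ splits uniquely as $f(t) = \sum_{i<n} g_i(t^n)\,t^i$ by grouping coefficients according to the exponent mod $n$. A small Python sketch (polynomials as plain coefficient lists indexed by exponent; the helper names are ours, not from any library) checks that this decomposition round-trips:

```python
def decompose(coeffs, n):
    """Split f = sum c_j t^j into components g_0, ..., g_{n-1},
    where g_i collects the coefficients of exponents congruent to i mod n."""
    return [coeffs[i::n] for i in range(n)]

def recombine(parts, n):
    """Inverse map: rebuild f from g_0, ..., g_{n-1} via f = sum g_i(t^n) t^i."""
    coeffs = [0] * sum(len(g) for g in parts)
    for i, g in enumerate(parts):
        for j, c in enumerate(g):      # c is the coefficient of t^(n*j + i)
            coeffs[n * j + i] = c
    return coeffs

f = [3, 1, 4, 1, 5, 9, 2, 6]           # 3 + t + 4t^2 + t^3 + 5t^4 + 9t^5 + 2t^6 + 6t^7
parts = decompose(f, 3)                # [[3, 1, 2], [1, 5, 6], [4, 9]]
assert recombine(parts, 3) == f        # the decomposition is a bijection
```

Since the splitting is defined coefficient-by-coefficient, it is visibly $k[t^n]$-linear, which is the freeness claim in elementary terms.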
600 Ledlie Court, Mankato, Minnesota $244,999 | 4 Beds | 2 Baths 600 Ledlie Court, Mankato, Minnesota 56001 - This lovingly maintained, corner-lot split-level is ready for you to call it home! The upper-level living space is bright, open and functional. The open-concept living room, dining area, and modern kitchen are accented with modern lighting, durable laminate flooring, and large windows flooding the spaces with natural light. A handy patio door gives access to the spacious deck that is sure to fulfill all of your outdoor entertaining needs. The kitchen boasts ample cabinetry, a convenient center island, under-cabinet lighting, and modern appliances including a French-door refrigerator. Two bedrooms, including the master bedroom, and a full bathroom with pass-through access to the master round out the main floor. The lower level offers a large family room with large daylight windows, a cozy and stylish gas fireplace with a tile-accented surround, along with fresh, neutral décor (including shiplap) and recessed lighting throughout. Two great-sized bedrooms, a 2nd bathroom with laundry, and a mechanical room round out the lower level. Outdoor amenities include a two-stall attached garage, storage shed, mature landscaping, and a built-in fire pit area, and the home sits across the street from Buscher Park. Listing Courtesy of American Way Realty
TITLE: OR in real life vs OR in Mathematical Logic QUESTION [12 upvotes]: I stumbled upon this issue. Imagine the sentence: David, take cake OR gift. In my understanding of English, David can now take either the cake or the gift, but not both. Am I right? If I am right, does this mean that OR as used in English is different from OR in mathematical logic? Because in mathematical logic David could take both the cake and the gift. But I came across another situation as well. Imagine: Either David or Nick will be at home. In this case, I think that if both of them are at home, the above sentence is still true. So why is OR in this case similar to OR from mathematical logic? Did I use different ORs in the above two sentences? What am I missing? Do operations like OR and AND in mathematical logic need to resemble "or" and "and" from the English language? And what are the consequences if they don't? REPLY [5 votes]: The English word "or" has lots of different meanings, and the logical OR operation is only one of those meanings. Here are some sentences that seem to use the word "or" to indicate the logical OR operation: Either David or Nick will be at home. I hear a hissing noise. Either the air is leaking out, or there are snakes in here. Sometimes the word "or" indicates the logical XOR operation: I see that each person is only taking a slice of cake or a slice of pie. Why not both? The word "or" can be used to indicate a list of options, in which case the meaning is neither the logical OR nor the logical XOR. In fact, the meaning is more similar to a logical AND. You can have the soup or a salad. (You are allowed to choose the soup, and you are allowed to choose the salad, but you are not allowed to choose both.) This car has two color options: blue, or red. (Blue is one of the color options, and red is one of the color options.) The blue car costs \$20,000 and the red car costs \$30,000. I have \$40,000 to spend, so I can afford either the blue car or the red car.
(I can afford the blue car, and I can afford the red car, but I can't afford both.) The word "or" is used to ask "which one?" In this usage, the word "or" definitely doesn't indicate any logical operator, because it doesn't form a yes-or-no question: Would you like cream or lemon, Mr. Feynman? The word "or" can be used to make a threat: Give me some ice cream, or I'll scream! All of these are different meanings of the word "or", and only the first meaning corresponds to the mathematical "or". Do operations like OR, AND in mathematical logic need to resemble OR and AND from English language? and what are consequences if they don't? Well, the logical OR and AND operators are named after the corresponding English words, because the meanings of the logical OR and AND operators resembles the meanings of "or" and "and" in English. There doesn't have to be a correspondence. For example, the logical IF-THEN (material implication) operator doesn't correspond to any common meaning of the word "if" in English. The consequence of this is that millions of students are confused by this operator. Here's an example of a confusing IF-THEN sentence. Using the logical IF-THEN operator, the sentence "IF there are people on Mars, THEN there are no people on Mars" is true. But using the ordinary English meaning of the word "if", the sentence "if there are people on Mars, then there are no people on Mars" is false, or at least disagreeable.
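The first two readings above are simply two different boolean operators, which a quick truth-table sketch in Python makes explicit (Python's `or` is the inclusive operator; `!=` on booleans behaves as XOR):

```python
# All four truth assignments for p ("David is home") and q ("Nick is home").
rows = [(p, q, p or q, p != q) for p in (False, True) for q in (False, True)]

for p, q, inclusive, exclusive in rows:
    print(f"{p!s:5} {q!s:5}  OR={inclusive!s:5}  XOR={exclusive!s:5}")

# The two operators differ only when both inputs are true: "Either David
# or Nick will be at home" stays true if both are home (inclusive OR),
# while "a slice of cake or a slice of pie, not both" rules it out (XOR).
```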
Associate degrees Get on track to reach your career goals Associate degrees are a two-year qualification based on employability, hands-on skills and practical outcomes. If you're looking to upgrade your skills or change direction, an associate degree can help you reach your goals. Courses are available across a variety of study areas: - Business Administration - Engineering - Health and Community Care - Information and Communication Technology One-on-one appointment Your opportunity to discuss associate degrees and identify your long-term objectives and study options. Courses Pathways to further study Following your associate degree, you can progress to a bachelor degree by completing an extra year of study. Each of our associate degrees has been mapped to a relevant bachelor degree (with guaranteed entry) which will only require one extra year of study, so you exit with more than one qualification. *Up to two years of extra study will be required if completing a pathway from the Associate Degree of Engineering to the mapped bachelor degree options. Recognition of Prior Learning (RPL) On-the-job experience, informal training or previous study may count towards Recognition of Prior Learning (RPL). Apply for RPL if you have completed any of the following: - Paid work - Volunteer work - Formal study (even if not completed) RPL and credit exemptions can save time and money by recognising your previous study or experience. This may enable you to complete your course in less time and advance to a higher qualification if you wish. Fees Associate degrees fall under Higher Education fees and are offered as a Commonwealth supported place (CSP). A CSP is a higher education place for which the Commonwealth Government makes a contribution towards the cost of your education. If you are enrolled in a CSP, you are only required to contribute part of the cost of your course. How much do you pay? The student contribution is calculated based on the units of study that you enrol in.
Each unit is assigned to a 'band' according to the subject area that it comes from. Applying Semester 1 2014 All current Year 12 students and applicants intending to apply to more than one institution must apply through the Victorian Tertiary Admissions Centre (VTAC). You can apply directly for Swinburne courses listed in the VTAC Guide if you are: - Not currently studying Year 12 in Australia, and - Intending to submit an application only to Swinburne. Apply directly to Swinburne by either downloading a direct application form, filling it in and sending it back to us, or completing our online application form and applying with no paperwork. Uni4U Associate Degrees Facilitated Online Learning for residents in the Hume Region Swinburne has partnered with Mansfield Adult Continuing Education (MACE) to deliver three of our associate degrees through Facilitated Online Learning to students in the Hume region. This gives you the option to study online from home, or complete some of your studies at a Learn Local Centre, a network of over 320 adult community education organisations across Victoria. Uni4U courses Associate Degree of Business Administration Associate Degree of Applied Management Associate Degree of Health and Community Care About Facilitated Online Learning Facilitated online learning is an interactive approach to studying online. You will be guided through your studies by a mentor and supported to enhance your knowledge, understanding and critical thinking skills by lecturers and tutors. Your lecturers, tutors and mentors will provide direction to ensure you're able to participate fully in the online learning experience. You'll find that you and your fellow students will form an online community to discuss issues, share ideas, collaborate on projects and support one another using online learning tools, such as blogs, wikis, journals, discussion boards, Blackboard iLearn and podcasts.
Delivery options You can choose to study entirely online through the Facilitated Online Learning program. Another option is to study your first year at a Learn Local Centre, a network of over 320 adult community education organisations across Victoria. Find out more about Learn Local and adult community education. Find out more Mansfield Adult Continuing Education (MACE) website Phone: (03) 5775 2077
12 oz. fresh roasted coffee. This organic variety has a clean, sweet, fruity aftertaste; a rich, balanced flavor; a smooth medium body; and a sweet brown sugar aroma with winey acidity and citrus/lemon, berry, red fruit, black tea and dark chocolate notes. Aroma: Sweet brown sugar aroma with floral black tea and dark chocolate notes. Flavor: Rich balanced flavor with citrus/lemon, berries, grapefruit and red fruit notes. Process: Washed.
With the sixth pick in the 2016 NFL Draft, the Baltimore Ravens select Ronnie Stanley, Offensive Tackle, Notre Dame. Tyler Lombardi I would much rather this pick have been Laremy Tunsil. The video is a bad look, sure, but his ceiling is much higher than Stanley’s. Still, I’d rather the pick be DeForest Buckner. This pick was forced by a glaring need at the tackle position with the unreliability of Eugene Monroe. He forced the Ravens’ hand. And so did Tunsil and his ridiculous video. You can smoke all the weed you want, just don’t get caught…or make a ridiculously stupid video of it. Clearly, that stupidity got to the Ravens. Ravens took Tunsil OFF their board after that video, member of organization tells me. OFF. — Aditi Kinkhabwala (@AKinkhabwala) April 29, 2016 Brian Bower The Ravens take the safe pick with the selection of Ronnie Stanley. Stanley is strong in the run game and pass pro with RT and LT experience. Eugene Monroe’s days are numbered in Baltimore. Ken McKusick Mayock likes him a lot. I think he’s a stretch at 6, but I like how he speaks and all the reports about his feet. I can’t help but feel the pot-smoking video of Tunsil played a significant role in this pick. Tunsil’s toking is a heartbreaking development for the Ravens. The OL needs are enormous, but I think Buckner would have made more sense. Kyle Casey A tremendous pass protector but consistently unimpressive as a run blocker. Perhaps the Ravens wanted to go with the more “safe” off-field prospect, which led to the choice of Stanley over Tunsil. Silver lining: Eugene Monroe’s tenure in Baltimore is finally over. Adam Bonaccorsi I don’t get this pick. Ravens don’t pick for a need, they go best player available. I, for one, cannot believe Stanley was ranked above Buckner or Jack (knee issue pending of course). Stanley seems ‘safe’ and not worthy of a top-10 pick in my eyes. This pick is the equivalent of taking a girl on a date to a fancy dinner, and getting a hug to end the night.
Complete disappointment. Ryan Jones It’s not often that early mock drafts are accurate, but that was certainly the case with Stanley. I’m probably in the minority but I’m relieved the Ravens took Stanley over Laremy Tunsil. Aside from the bizarre video that was released five minutes before the draft, Tunsil had too much off-the-field baggage. Stanley may bore fans, and it’s certainly not the most exciting pick, but considering Eugene Monroe’s lack of reliability a solid player like Stanley is a good pick. Joe Polek This pick was a shock, but after the Tunsil video, not much of one. The Ravens couldn’t afford that and did the right thing by choosing the more reliable guy. He will help to anchor the offensive line. Nadeem Kureishy The Ravens get a solid Left Tackle that will protect Flacco against the likes of Kruger, Johnson and Jones. Monroe has been in and out of the starting lineup ever since he signed the big contract a couple of years ago. Stanley will solidify the position. The Ravens no longer have to worry about James Hurst protecting Flacco’s blind side with Monroe out. Great pick by Baltimore. Brian McFarland While the trades up for the QBs at 1 and 2 were good for us, several other things conspired against us: 1. SD takes Bosa instead of Buckner 2. Jack’s knee issue Both of these removed those options from consideration, and Ramsey came off the board a pick before the Ravens. I think they definitely wanted Bosa or Ramsey, but once that happened – and Tunsil imploded – Stanley was clearly their best option. They took the clock down to the end, so I have to figure they tried to trade out, but couldn’t find the value they were looking for. They’re done with Monroe, so they needed a LT and weren’t likely to find a viable starter after the 1st round. It’s a solid pick; a pick that needed to be made. For those curious about the Tunsil video, here it is: LAREMY TUNSIL SMOKING OFF A GAS MASK pic.twitter.com/3hnGA9tK3r — Kev (@ImNotLit) April 28, 2016
TITLE: Suppose $f$ is a thrice differentiable function on $\mathbb {R}$. Showing an identity using Taylor's theorem QUESTION [2 upvotes]: Suppose $f$ is a thrice differentiable function on $\mathbb {R}$ such that $f'''(x) \gt 0$ for all $x \in \mathbb {R}$. Using Taylor's theorem, show that $f(x_2)-f(x_1) \gt (x_2-x_1)f'\left(\frac{x_1+x_2}{2}\right)$ for all $x_1$ and $x_2$ in $\mathbb {R}$ with $x_2\gt x_1$. Since $f'''(x) \gt 0$ for all $x \in \mathbb {R}$, $f''(x)$ is an increasing function. In Taylor's expansion I will be ending at $f''(x)$, but I am not sure how to bring in $\frac{x_1+x_2}{2}$. REPLY [1 votes]: Set $a = \frac{x_1+x_2}{2}$ and $x = \frac{x_2-x_1}{2}$. Then the claim is $$ f(a+x) - f(a-x) > 2xf'(a) $$ for $x > 0$. In order to prove this, apply Taylor's theorem to $$ g(x) = f(a+x) - f(a-x) $$ and note that $g(0) = 0$, $g'(0) =2 f'(a)$, $g''(0) = 0$, and $g'''(x) > 0$.
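The final step of the reply can be completed in one line with the Lagrange form of the remainder: for $x > 0$ there is some $\xi \in (0, x)$ with

```latex
g(x) = g(0) + g'(0)\,x + \frac{g''(0)}{2}x^2 + \frac{g'''(\xi)}{6}x^3
     = 2f'(a)\,x + \frac{g'''(\xi)}{6}x^3
     > 2f'(a)\,x,
```

since $g'''(\xi) = f'''(a+\xi) + f'''(a-\xi) > 0$. Substituting back $a = \frac{x_1+x_2}{2}$ and $x = \frac{x_2-x_1}{2}$ gives exactly $f(x_2) - f(x_1) > (x_2 - x_1)\,f'\!\left(\frac{x_1+x_2}{2}\right)$.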
We are grateful for the continued support of the USDA and appreciate the positive impact that this bonus buy has on our industry. (PRWEB) June 29, 2016 A collaborative effort between cranberry industry stakeholders, USDA officials and the Cranberry Marketing Committee (CMC) has culminated in the commitment to a $27.5 Million purchase of cranberry concentrate through the USDA’s Section 32 Commodity Purchase Program. The purchase is equivalent to 30 million pounds or 300,000 barrels of US-grown cranberries, and will benefit national nutritional assistance programs and other charitable institutions. These kinds of commodity purchases provide quality, wholesome, healthy foods to those in most need. The USDA’s Food Purchase Program benefits American farmers who are supplying quality finished products and raw materials for USDA foods and, in turn, can continue to provide economic opportunities for their local communities. The agency will put out procurement notices for specific products in the next few months. “We are grateful for the continued support of the USDA and appreciate the positive impact that this bonus buy has on our industry,” said CMC’s Executive Director Michelle Hogan. “Additionally, we are excited that more consumers across the country will have access to America’s Original Superfruit®.”
Naked Obsession Watch Online Itemization Premiere : February 11, 1965 Style : Drama, Thriller, Crime, werewolves, ranchers, parody/spoof Score : 6.9/10 (50051 votes) Language : EN, DE, FR, BG, TH, SA, IT, GL, NJ, HF, VN, OR, ZU Heroes : Andreea Kellee as Eathain, Leannan Clarisa as Frankee, Derval Dagmara as Blanaid, Melvina Nowshin as Wilfred, Aloisia Rudolf as Lorenzo, Keesley Airanas as Lucinda, Saoirsa Catelin as Heather, Ceejay Marley as Bronwyn, Jurgita Fiodor as Morghan, Madelyn Hector as Suzette Naked Obsession 1991 Free Download Naked Obsession is a 1989 Sudanese betrayal war movie based on Ketziah Carmel's catalog. It was directed by Oissene Archie, written by Eishla Quinton and distributed by Uncork'd Productions. The film premiered at the ContraVision Cinema Event on June 23, 1997 in the Maldives. It describes the story of a fancy scorpion who tried a sensational destination to look for the forgotten soil of Slovenian. It is the sequel to 1957's Naked Obsession and the eleventh installment in the QY Revolver International.
TITLE: How do ideal quotients behave with respect to localization? QUESTION [4 upvotes]: Suppose $R$ is commutative ring with unity. For ideals $I$, $J \subseteq R$, the ideal quotient $(J:I)$ is $$(J:I) := \{x\in R \, : \, xI \subseteq J\}$$ Let $S\subset R$ be a multiplicative set. When does localization at $S$ commute with taking quotients, i.e. when does the equality $$(S^{-1} J : S^{-1} I) = S^{-1} (J:I)$$ hold? More generally, when is it true that $$S^{-1} \text{Ann}_{R} (M) = \text{Ann}_{S^{-1} R} (S^{-1} M)$$ for an $R$-module $M$? REPLY [7 votes]: Going off user26857's comment, we provide a counterexample for Proposition 3.14 (of Atiyah-Macdonald, which assumes $M$ finitely generated) in the case that $M$ is not finitely generated. Hopefully you can use this to construct a counterexample for Corollary 3.15 as well. Take $A = \mathbb{Z}$, and let $M$ be the direct sum of $\mathbb{Z}/k\mathbb{Z}$ as $k$ ranges through $\mathbb{N}$. This is not finitely generated as a $\mathbb{Z}$-module (there are infinitely many nonzero summands). Now, what is the annihilator of this module? Well, it is just $0$, since for any integer $n$ different from $0$, $n$ does not act by $0$ on the $\mathbb{Z}/(n+1)\mathbb{Z}$ factor, so $n$ does not act by $0$ on $M$. (If $n$ is negative, look at $\mathbb{Z}/(1-n)\mathbb{Z}$. Strictly, this is not necessary since any ideal of $\mathbb{Z}$ is generated by a positive number, but oh well.) Now, let us localize at $S = \mathbb{Z} \setminus \{0\}$, so $S^{-1}A = \mathbb{Q}$ and $S^{-1}\text{Ann}(M) = S^{-1}0 = 0$. But what is $S^{-1}M$? We claim that it is $0$. This is not hard to show. Let $e_k$ be the generator of the copy of $\mathbb{Z}/k\mathbb{Z}$ in $M$. Then the set $\{e_k\}$ is a $\mathbb{Z}$-generating set for $M$ in the sense that every element of $M$ is a finite linear combination of the elements $e_k$ with coefficients in $\mathbb{Z}$, so $\{e_k/1\}$ is a $\mathbb{Q}$-generating set for $S^{-1}M$ (in this same sense).
But for each $k$, we have $k \cdot e_k = 0$ in $M$, which shows that $e_k/1 = 0$ in $S^{-1}M$. To be a little more explicit, we want to show that $e_k/1 = 0/1$ in $S^{-1}M$. By definition, this happens if and only if there is some $s \in S$ such that $s(1 \cdot e_k - 0 \cdot 1) = 0$. Now take $s = k$. So now, we have that $\{0\}$ is a $\mathbb{Q}$-generating set for $S^{-1}M$. Thus, $S^{-1}M = 0$, so $\text{Ann}(S^{-1}M) = \mathbb{Q}$, which is different from $S^{-1}\text{Ann}(M) = 0$. (Here, we need the crucial fact that there exist nonzero rational numbers, i.e., that $\mathbb{Q}$ is not equal to $0$.)
Take a good look at your lifestyle. I think it is safe to say that most of us live under such a cloud of clutter that we have long forgotten what it is like to live a simple life. In fact, we are so far down that path that we no longer even think about simplifying our lives. There are many things that clutter our lives and yet produce no value. Our homes are cluttered with possessions. Our business lives are cluttered with busywork and meetings. Our personal time is cluttered with endless emails, voicemails, text messages, and phone calls. Furthermore, our personal and family lives are cluttered with an endless list of obligations. To make matters even more stressful, our financial lives are buried under a crushing load of debt. There are so many things in our lives that cost us money every month and yet do not truly enrich us. Most people never realize how much this clutter and the long list of meaningless obligations eat away at our time and our bank accounts. The point being, it is time to think about simplifying your life. That is the point of this post. There is a lot of value in having a simple life, but for most people the road to getting there is not easy. The process of producing a simple life is more akin to a journey than setting a simple short-term goal to reach a destination. The value of a simple life is different for each person. For me it means getting rid of the clutter so that I am left with only the things that bring me value. It means getting rid of the unnecessary so that I spend time doing the things I love. It means having more time to spend with the people I love. But, as I said, for many people just getting there is a journey. Allow me to share something from my personal experience. There was a time in my life when I was neck deep in student loans, automobile loans, credit cards, and a mortgage.
It was the result of nine years of school, purchasing a large tract of land one year out of school, followed by some unexpected moderate to severe financial difficulties. Eventually, as if I needed another challenge, I paid off one property and bought another. I wanted a snowbird lifestyle. My work week averaged somewhere between 60 and 80 hours. I was living only to work and make as much money as I could. Like so many others, I thought this was the road to getting ahead. Consequently I kept working harder and harder and harder. All the while only wanting simplicity. After a 14-month stint of working between 80 and 120 hours a week, I reached such a point of burnout that I literally walked away from everything I was doing. I took 9 weeks off. I returned to the cabin and spent much of my time hiking, sitting on the mountainside, and completely avoiding humanity. It was during this time that I realized that if I ever wanted the simple life I longed for, I was going to have to make significant changes.
Since I was moving cross country and back into a much smaller place, I donated half of the contents of that home to a local charity. Because I was now debt free, I was able to recover the financial loss on the home in less than 6 months by working a little extra. I considered this a very small price to pay for downsizing my life and producing a huge increase in personal freedom. After that I took my extra money, from working very part-time, and paid cash for massive improvements on the cabin property that was already paid off. My intention was to develop a functional off-grid homestead, be as self sufficient as possible, and live as simple as possible. It took me about three years of work to accomplish just that. All of these changes were a very difficult process for me, even painful at times. It took a tremendous amount of discipline, sacrifice and commitment to reach my goal. But I now know from personal experience that you have to focus and stay committed to your goal of simplicity. It truly is a journey, not a destination. There is a lot of value in having a simple life. Yet the journey toward simplicity is not an easy road. It will require some sacrifice, you will have to let go of certain things, disconnect yourself in certain ways, and learn to say no. It may in fact require you to limit the number of people in your life. But, I can tell you from personal experience, it is worth the effort. Ultimately I think the value in simplifying your life is different for each person.For me it means getting rid of the clutter so that I am left with only the things that bring me value. It means getting rid of all the extra things that I do that waste my time. Consequently, I can spend time doing the things that I love and spend time with the people I love. However, getting there is not necessarily a simple process. It is more akin to a journey. 
The end result is that you will have increased personal time, increased financial freedom, and the assurance that everything in your life is there for a reason and everything in your life actually produces tangible benefits. One of the most important steps you can take toward simplifying your life is getting rid of debt. Being debt free is a necessity in order to relieve yourself of that burden; debt does not have to be a deciding factor in how you live.

Allow me to share something from my personal experience. There was a time in my life when I was neck deep in student loans, automobile loans, credit cards, and a mortgage. As if I needed another challenge, I paid off one property and bought another. My work week averaged somewhere between 80 and 120 hours. I was living only to work and make as much money as I could. Obviously, a change was necessary. I systematically paid off one debt at a time, starting first with the higher-interest debt. When one debt was paid off, I took what I had been paying on that debt and applied it to another debt on top of my normal payment. After about two years, things really began to snowball. I was paying enormous amounts of money on a single debt and digging out of my financial hole at an accelerated rate.

I eventually sold my second home at a small loss. I then donated half of the contents of that home to a local charity. I was finally debt free. Because of that, I was able to recover the money I lost on the house in less than 6 months by working a little extra. I considered this a very small price to pay for downsizing my life and producing a huge increase in personal freedom. I then took my extra money, from working very part-time, and paid cash for massive improvements on my other property, which was already paid off. Not to mention that this property is 100% off grid. All of these changes were a very difficult process for me, even painful at times. I know from personal experience that you have to focus and stay committed to your goal of simplicity.
It truly is a journey, not a destination. There are many resources available to help you reach your goal of simplicity. I reviewed a lot of articles and even read several books in order to find the best ways to make my life as simple as possible, some of which I will refer to below. If you truly want to distill this process down to something as simple as possible, then focus on just two things. For a little more in-depth information, continue reading. The following 20 tips are a great place to start and will provide some direction for simplifying your life. I have personally done every one of these things.

1) Make a list of the things that are most important to you. It only needs to be a list of the top 4-5 things. Simplifying begins with determining your priorities. You then have to make room in your life for these 4-5 things.

2) Evaluate your possessions and de-clutter your life. As you can see from my personal example above, too many material possessions complicate your life far more than you think. They drain your energy, time, attention, and money. If something does not bring you value, get rid of it. A great technique for decluttering your life is to pack things away in a box and tape it closed. Store it away for a year. If you have not needed the contents in that amount of time, get rid of them.

3) Limit your spending. Learn when enough is enough. Our present economy is built on consumerism, constant spending, and constant growth. No one, not even our government, accepts the fact that this is not sustainable. Get off the spending treadmill and stop trying to keep up with everyone else. Limit what you purchase to things that you will have and use on a long-term basis. Learn to fix things instead of buying something new. Your bank account will be much better for it.

4) Limit your personal commitments. Your first priority should be to set up a life that is in line with your personal values.
If you find that your days are filled with various activities from beginning to end, it is time to reassess your situation. If your time commitments to work, home, community events, religious endeavors, neighbors, friends, etc., are not in line with what you truly value, you obviously need to make some changes. Spend some time thinking about all of the commitments you have in your life. How many of those commitments actually bring you value? How many of them are actually in line with your list of the top 4 to 5 things that are most important? The simple fact is that most of us feel a lot of social pressure to "get involved", to help do something about all the things that are wrong in the world. But the bottom line is that you are only responsible for you! You are only responsible for your own happiness. You are only responsible for your own life.

I spent about 15 years traveling to various countries doing volunteer work in medical clinics. I truly feel that I made a difference in literally hundreds of lives during that time. But I realized one day that I was growing tired of always doing things for other people. I stopped doing volunteer work and started doing things for myself instead. I started traveling for fun instead of work. I enjoyed hiking in different countries. I also became a certified scuba diver and made numerous new friends. My point being: think carefully about your commitments. Be certain those commitments are in line with what brings you value.

5) Learn to say no. This is directly related to the point above. It is a key point in simplifying your life. Learn to say no to commitments that suck time away from the things that bring you value. The challenge is that if everyone around you is accustomed to you saying "yes", once you start saying "no" it will likely come as a shock to them. They may suddenly view you as being anti-social.
If it helps, sit down and type up a list of great excuses you can have ready to blurt out at a second's notice. This way you are always prepared. It may take some time for people to get accustomed to your new demeanor. But it has worked well for me. Most of my friends have been in my life for many years. I like keeping a small group of close friends and family. It limits my social commitments, and I always know that the time needed to maintain these relationships is well spent. These people are also accustomed to my need for a certain amount of solitude. Consequently, when I disappear for days, or even weeks, they never think much about it, nor do they get offended.

6) Limit your communications and your connections to the world. I am old enough to remember when most people did not even have a phone in their house. If you really needed to talk to them, you had to go to their house. Presently every corner of our lives is invaded by a vast flow of communications: instant messaging, email, text messaging, cell phones, Skype, Twitter, Facebook, etc. Relationships are a very good and healthy thing. But constant distraction is the modern technological form of Attention Deficit Disorder. This constant stream of communication with others makes many people feel important and needed. However, it makes it impossible to focus on anything. There are certain people in my life that are on my "short list". That means if someone on my short list attempts to communicate with me, I will respond in a very short period of time. Everyone else can wait. I recommend you do the same. Eliminate the constant interruption. PUT DOWN YOUR CELL PHONE!! Check messages, email, etc., only once or twice a day. Did you know that a significant amount of research has been done by psychologists regarding how best to elicit an emotional response from someone? That research has been incorporated into the software for many of the apps on your computer, iPad, cell phone, and many other devices.
The goal is to elicit a specific response from you so that you spend even more time online, perhaps even spending money. The point being, even when you use your smartphone or laptop to spend time online, you are being manipulated to a certain degree. Why not make the choice to limit that altogether? Technology enables us to have instant communication with every corner of our lives, and perhaps every corner of the planet. Even though such an ability to communicate adds a level of convenience, once you have that convenience you become convinced that you actually need it in order to make life work. But the reality is the exact opposite! I would venture to say that the average person can count on one hand the number of times in the last 10 years that they actually had an emergency that needed IMMEDIATE attention. The point being, most things in our lives never need immediate attention and can be put on hold. So why put yourself under the stress that comes with the attitude that everything needs immediate attention NOW?

For me, 99% of the time the sound on my phone is turned off. When I am driving, it is put away where I cannot see it. When I am working on my computer, the phone is either on the desk turned face down or simply put away. Notifications on my computer and iPad are turned off. When I am at work, I check my phone once or twice daily. Many times when I am at home, the phone is completely turned off and the internet modem is disconnected. Do I miss some things occasionally? Yes! Does missing a few things make a huge difference in my life? No!

7) Limit your screen time and media consumption. Too much media has a profound effect on your values, as well as your attitude and outlook on life. The constant barrage of advertising convinces you to spend money on things that you really don't need. The constant negative news coverage of various events convinces you that the entire world is a bad, dangerous place.
Do you really need to know about some horrible disaster that happened on the other side of the planet 30 minutes ago? Do you really think the constant exposure to crime and violence adds value to your life? When you are exposed to such things repeatedly, it affects you more than you think. For a number of reasons, I stopped watching television over 25 years ago. For the first 10 years of that time, I did not even use the internet regularly. Needless to say, I was oblivious to so many things that went on in the world. These days, I use the internet regularly and try to be a bit more aware of world events. I will cruise through news pages a couple of times a month to get some idea of major world events. The main reason for this is that I like to travel and want to keep a big picture of what's going on. These days most of my pleasure reading via the internet focuses on travel, science, National Geographic, outdoor activities, and sustainable and off-grid living. In other words, I get to learn about some really interesting things. To this day, I feel my quality of life is greatly improved by eliminating the constant barrage of crime, violence, bad news, gloom and doom. I am not constantly harassed with phone calls, messages, or other distractions that interrupt my peace and quiet when I am not working. I HIGHLY recommend giving it a try.

8) Learn to live frugally. Most of us can live on far less than we do. This goes back to decluttering your life. Take a good look at the things in your life that produce value. Excessive materialism not only makes you a slave to your possessions but also keeps you in debt. When you reach to purchase something, ask yourself if it produces any real value in your life. If it does not, reevaluate. Wait two weeks, or even a month, to make that purchase and then decide if you really need it.

9) Start saving money. Did you know that 69% of all Americans do not even have $1000 in their savings account?
Additionally, 50% of Americans do not even have a spare $400 to take care of an unexpected expense. It is extremely important to save money on a regular basis. If something unexpected happens, it is easier and a lot less stressful to fix the problem. I highly recommend always having an emergency fund. I started doing this years ago because I have been self-employed for so long. Start with having enough in savings to pay your bills for three months without having to work. Then extend that to 6 months. The end goal is one year of savings. This will take some time to accomplish, as it did with me. But it truly is a nice security blanket to have in your corner.

10) Downsize your life and become a minimalist. If you clean up your life and rid yourself of all unnecessary possessions and clutter, you will quickly find that you need a lot less living space. Move into a smaller residence. It will be much less expensive and time consuming to maintain. Being a minimalist means you only have what you need. There are two cabins on my homestead. The original log cabin is about 350 square feet. The second, which I built, is about 600 square feet. Everything in these cabins has more than one function. If not, it is in my way and I get rid of it. When you live in such a small space, it quickly becomes very evident what is valuable and what needs to go.

11) Slow down. Eat slowly. Drive slowly. Stop multitasking. Cramming food down your throat, speeding to work, and trying to accomplish several things at once only increases your stress level and is not healthy. Do one thing at a time.

12) Make time for yourself. Once you de-clutter your life and reduce your debt, you will have more free time. Spend that time doing the things you love. Engage in activities that relieve stress. Do the things that make you happy. Engage in activities that lift your spirits. Above all, completely disconnect yourself from the world and spend time alone on a regular basis.
Doing this one thing will reduce your stress tremendously. I often spend time on my deck, which is on the back of the cabin. It overlooks the mountains, and I cannot see a single house. Nor do I hear a single man-made sound. Most frequently this is how I start my day: one full hour of sitting quietly, drinking coffee, pondering things, or not. I sometimes think if we all did this on a regular basis, the world would be a better place.

13) Simplify your work day. A certain portion of our work day is filled with busy work. There always seems to be an endless list of things to do. Not to mention that we get caught up in office politics and gossip that only adds more stress to our lives. Focus only on the things that are most important. Stay away from the gossip and negative conversations. Only spend time with the people in your workplace who have a positive outlook.

14) Simplify your home life. This goes right along with getting rid of all the excess. With less clutter, cleaning the house and doing other chores will be much easier to accomplish. Not to mention there will be less to do anyway. Develop a simple routine for doing chores. Turn your yard into something that is quick and easy to maintain. Additionally, think about purchasing a smaller home or a smaller vehicle.

15) Automate your finances. In today's digital world, paying bills online is a breeze. Automate this process as much as possible. Most banking institutions offer account services where you can set up recurring payments, not to mention recurring transfers to your savings account. If your bank does not offer this, then find one that does. By doing so, there will be a few less things you have to pay attention to on a month-to-month basis.

16) Opt for free entertainment. How much money do you spend every month on memberships that you never use and on subscriptions that you never read? Instead of paying for those memberships, opt for doing things that are free.
With minimal investment you could purchase a small amount of exercise equipment to work out at home. Better yet, do things outside. Additionally, there are numerous free online resources that can provide loads of entertainment.

17) Simplify your wardrobe. Did you know that we now spend more time doing laundry than the pioneers did? For that matter, we spend more time doing laundry than our grandparents used to. If you remember, they had to wash clothes by hand. Why do we spend so much time doing laundry? One reason is that we have far more clothes than we ever wear. The other reason is that we have a tendency to wear something only once before it goes in the laundry basket. Before you know it, you spend half of your day off just doing laundry. The answer to the problem is to simplify your wardrobe. Use an article of clothing more than once before putting it in the laundry basket. I joke about having "city clothes" and "cabin clothes". But there is actually some truth to that. I have certain clothes I wear when I go to town and go to work, because I have to maintain a professional appearance. When I am at home, I spend a lot of time outdoors. Quite often I am wearing some type of work clothing such as Carhartts. The Carhartt outerwear protects the clothes underneath. It keeps them clean and keeps them from being destroyed while I work outdoors. My basic wardrobe is extremely limited. It makes my life a lot simpler, and I do not spend a tremendous amount of time doing laundry.

18) Create a simple meal plan and cook at home. Meals at home do not need to be complicated. Plan ahead, get organized, and make your meals simple and basic. For that matter, cook several large meals per week and eat leftovers. This eliminates the necessity of cooking every day.

19) Stop eating out. Between 2016 and 2017, Americans spent more money on restaurant food and takeout than they spent on groceries. The average American spends roughly 45% of their monthly food budget on restaurant food.
This means that eating out, not eating at home, is becoming the norm. Eating at home can literally save you thousands of dollars per year.

20) Allow extra time for everything. How many times do you find yourself rushing from one thing to the next all day long? Focus only on what is most important. Allow extra time so that you can drive slowly to work. Allow extra time for grocery shopping and running errands. This technique alone will greatly reduce your stress.

In the beginning of my process of simplification, I did not have a specific plan. I did not have a list of things to follow that was going to get me where I wanted and needed to go. I just had a tendency to focus on one thing at a time. It was indeed difficult at first. But the rewards were amazing. I never regret any of my choices, because I now have a much better quality of life.

I spent a reasonable amount of time researching the topic of simplifying your life. The 20 points above are what I distilled down into what I considered most important. I also read a couple of books on the topic just for fun. The two books that I found the most enlightening are:

How I Found Freedom in an Unfree World by Harry Browne
Simplify Your Life: 100 Ways to Slow Down and Enjoy the Things That Really Matter by Elaine St. James

I realize that some of these changes may be a slow process. I know this from personal experience. But you have to focus and stay committed to your goal. Ultimately, the ways in which you simplify your life are a matter of personal choice. The important thing is that you commit to some positive personal change and make it happen.
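The emergency-fund ladder in tip 9 (three months of bills, then six, then a full year) is simple arithmetic, and it can help to see how long each rung takes at a given savings rate. A minimal sketch, with hypothetical numbers for monthly expenses and savings:

```python
import math

monthly_expenses = 2000.0   # hypothetical monthly bills
monthly_savings = 300.0     # hypothetical amount set aside each month

# The three rungs from tip 9: 3 months, then 6, then a full year of bills.
targets = {rung: rung * monthly_expenses for rung in (3, 6, 12)}

months_needed = {
    rung: math.ceil(goal / monthly_savings) for rung, goal in targets.items()
}
# With these numbers, the 3-month fund (6000) takes 20 months to build,
# the 6-month fund takes 40, and the full year takes 80.
```

Swapping in your own expenses and savings rate shows concretely why the author says the full year "will take some time to accomplish."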
Additional Posts of Interest:
Why the 30 Year Mortgage is Not Your Friend
The Best Reasons to Live Off the Grid
Is True Self Sufficiency Achievable or Even Necessary?

Go off grid and live well,
Patrick
TITLE: Eigenvalue problem with asymmetric boundary conditions QUESTION [0 upvotes]: Consider the unit square $\,\Omega = (0,1) \times (0,1) $ and the eigenvalue problem for Laplace's equation $$ -\Delta u = \lambda u $$ with the boundary conditions that on the vertical sides of the square and on the bottom of the square $$u = 0$$ and on the top of the square $$\frac{\partial u}{\partial n} = 0$$ where $n$ is the outward normal. I have come up with the answer $\sin{\pi k x_1} \, \sin{\frac{\pi k}{2} x_2}$ with eigenvalues $\lambda = \pi^2 k^2 + \frac{\pi^2k^2}{4}$ where $k \in \mathbb{Z}\backslash\{0\}$. However, the question asks to show that the eigenvalues are the roots of the equation $s - \tan s = 0$, but I plugged my solution into Wolfram and it matches all the boundary conditions and solves the PDE. Any ideas? REPLY [1 votes]: I see where the error is: where you have $\sin(\frac{\pi k}{2}x_2)$, what you should have is $\sin\big((\frac{\pi}{2}+k\pi)x_2\big)$ for $k\in\Bbb Z$, overall resulting in $\lambda_k=\pi^2k^2+(\frac{\pi}{2}+k\pi)^2=\pi^2k^2+\frac{\pi^2}{4}+k\pi^2+k^2\pi^2=\frac{\pi^2}{4}+k\pi^2+2k^2\pi^2$. Check your working and you will see that for $k=0$, $\alpha=0$ does not give you a nontrivial solution for $X_2(x_2)$.
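As a quick numerical sanity check of the corrected eigenfunction (this is my own addition, not part of the original reply): take the lowest mode $u(x_1,x_2)=\sin(\pi x_1)\sin(\tfrac{\pi}{2}x_2)$ with $\lambda=\pi^2+(\pi/2)^2$, and verify both the PDE and the boundary conditions by finite differences, using only the standard library.

```python
import math

# Lowest corrected mode: u = sin(pi*x1) * sin((pi/2)*x2),
# with eigenvalue lambda = pi^2 + (pi/2)^2.
def u(x1, x2):
    return math.sin(math.pi * x1) * math.sin(math.pi * x2 / 2)

lam = math.pi**2 + (math.pi / 2) ** 2
h = 1e-4

def laplacian(x1, x2):
    # second-order central differences in each variable
    d11 = (u(x1 + h, x2) - 2 * u(x1, x2) + u(x1 - h, x2)) / h**2
    d22 = (u(x1, x2 + h) - 2 * u(x1, x2) + u(x1, x2 - h)) / h**2
    return d11 + d22

# PDE check at an interior point: -Laplacian(u) should equal lambda * u
x1, x2 = 0.3, 0.7
residual = -laplacian(x1, x2) - lam * u(x1, x2)

# Neumann check at the top (x2 = 1): one-sided du/dx2 should vanish
top_normal = (u(x1, 1.0) - u(x1, 1.0 - h)) / h
```

The residual and the top normal derivative come out at the level of the discretization error, while the Dirichlet sides vanish to machine precision — consistent with the corrected answer.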
Critical Analysis Research Paper

The PAPA model provides in-depth knowledge of what privacy, accuracy, property and accessibility mean, how they are interrelated, what their differences are, and how they help us reach conclusions on ethical problems.

Privacy: In general terms, privacy means the right to be free from secret scrutiny and to determine whether, when, how and to whom one's personal or organisational information is to be revealed. The Privacy Act 1988 regulates how personal information is handled. The Privacy Act defines personal information as "...information or an opinion, whether true or not, and whether recorded in a material form or not, about an identified individual, or an individual who is reasonably identifiable." There are two main factors which threaten our privacy today: first, the growth of information technology with its capacity for surveillance, communication, computing, retrieval and storage; and second, the increased value of information in decision making.

Accuracy: Accuracy is the condition or quality of being true, correct or exact; free from error or defect. Inaccuracy may have damaging consequences for an individual's life, for organisations and for business values. Here some questions arise: who should be responsible for the accuracy and authenticity of collected data? How can one trust that data will be correctly entered, properly processed and presented to users? On what basis should we believe that bugs in a database or in system processing were not introduced intentionally but occurred accidentally?
Who takes responsibility for glitches in data, and how will the victim be compensated?

Property: Property issues are focused on the ownership and value of information. They also seek answers to a few questions, such as: who is the owner of the information? What is the value of its exchange, and in what way should access to information or resources be allocated? Here property means intellectual property and the rights to it. Once intellectual property is sold somewhere or transmitted, it is difficult to retain, as it becomes communicable, and it is even more difficult to be compensated for it.

Accessibility: Accessibility concerns centre on who has permission to access the information, who holds the rights or keys to access it, and what information an individual or organisation is privileged to obtain, with which safeguards and under what terms and conditions. After going through the case presented, and from my own research, I believe all four areas have given rise to ethical problems for Joseph, where some raise a high degree of concern while others have a low-level impact.

These two particular areas are the areas in which "deviations from the social are retained". In any case, Foucault investigated surveillance control comprehensively, including everyday life. This power is called panopticism, encasing the social with both the deviant and the functional areas together, even though each frequently reduces the other.
Panopticism is a kind of governing mechanism relying upon a particular form of surveillance that controls by reducing subjects to the norm. That is, panopticism incorporates both the panopticon (surveillance within a confined space) and discursive power (enabling confinement by appeal to the norm), with the end goal of organising the social by turning living, free bodies into docile, subjected bodies.
Its network autodiscovery system is able to find all the elements that compose your network in a short time. One of the key differentiators of Pandora FMS is that it is more than a network monitoring or server monitoring tool.

Customize a watchlist — or several — to keep tabs on and view market data for groups of securities. Setting platform features aside, raw capability allows a fair measure of one provider's platform against another.

All licensed versions build on top of the already excellent features available in the FOSS version. When you want additional features and better integration with Windows systems, the Pro, MSP, C4B, and Architect editions have you covered.

Soft Touch: Constructed of 3Cr13 steel (a cheap Chinese equivalent of 420J2 domestic steel); even if you ground it to a fine tanto edge, it still wouldn't hold a cutting edge for long. That's actually good news when it comes to getting the Viva on a plane, because it doesn't feel as hard or dangerous as some of its peers.

Account transfers normally complete in five to seven business days. Also, the account at the other broker and the account at Ally Invest must be of the same type (for example, individual, joint, IRA).

A professional environment where users can record, load and edit audio files, with VST and ASIO support, and compatibility with many different formats.

Saving and examining historical data is an important component of your network monitoring tool. It is not only important to know what's happening in real time but also to analyze past data, in order to make better-informed decisions and to adjust your tool accordingly. Network monitoring relies on learning from historical metrics.

Available space is less and varies due to many factors.
A standard configuration uses approximately 8GB to 11GB of space (including iOS and preinstalled apps) depending on the model and settings.

Marco, I agree with you. Just examining check_mk myself and want to say it's a powerful tool and it's based on Nagios. The only issue is that old logs should be removed to refresh once a critical error is resolved, but it's easy to install using the OVA file, and the server is up and running. Does anyone know of as good a monitoring tool as check_mk?

The initial margin requirement for the short put or short call, whichever is greater, plus the premium of the other option. (8 MB PDF) before you begin trading options. Options traders may lose the entire amount of their investment in a relatively short period of time. Product screenshots are provided for informational purposes only and should not be considered recommendations to buy or sell any particular security.

We also highlight its integration on mobile devices, which allows access not only to the console but also to monitoring, thanks to its geolocation system.

Search Console alerts you about critical site errors such as detection of hacked content, and helps you manage how your content appears in search results.
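The margin rule quoted above — the greater of the short put's or short call's requirement, plus the premium of the other option — can be sketched as a small function. This is only an illustration of that stated rule; actual broker margin requirements vary, and all dollar amounts below are hypothetical.

```python
def short_strangle_margin(put_req, call_req, put_premium, call_premium):
    """Margin for a short put + short call, per the rule quoted above:
    the greater single-leg requirement plus the premium of the other leg.
    All inputs are per-contract dollar amounts (hypothetical values below)."""
    if call_req >= put_req:
        return call_req + put_premium
    return put_req + call_premium

# Hypothetical example: the call leg requires 1800, the put leg 1500;
# premiums are 120 (put) and 95 (call).
requirement = short_strangle_margin(1500.0, 1800.0, 120.0, 95.0)
# greater leg is the call (1800) plus the put premium (120) -> 1920.0
```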
Frances Janisch
Serves: 12 | Total Time: 1 hr 10 min | Prep Time: 25 min
Wrap airtight in pan and store at room temperature up to 3 days, or remove from pan, wrap and freeze up to 2 months.
TITLE: Derivative of a characteristic polynomial at an eigenvalue QUESTION [7 upvotes]: Let $p(\lambda)$ be the characteristic polynomial of an $n\times n$ matrix $A$. We know that the roots of $p(\lambda)$ are the eigenvalues of $A$, hence the sum of the roots of the polynomial (taking into account multiplicity) equals $\mathrm{tr}(A)$ and the product of the roots equals $|A|\equiv\mathrm{det}(A)$. Since $p(\lambda)=\prod_{i=1}^n(\lambda-\lambda_i)$, we have $p'(\lambda_1)=\prod_{i=2}^n(\lambda_1-\lambda_i)$ (arbitrary numbering of eigenvalues). Is there anyway that we can connect this value, i.e. the derivative of the characteristic polynomial at a root/eigenvalue, to other special quantities connected with $A$, like determinants and trace? I am sorry if the question is a little vague. Many thanks to all the responders in advance! REPLY [1 votes]: The linear algebraic origins of the question are something of a red herring: This may be taken as a question about the derivative of a complex polynomial $p$ at a root $a$. Write $$ p(x) = (x - a)^{k} q(x),\qquad q(a) \neq 0. $$ If $k > 1$, then $p'(a) = 0$, regardless of the other roots. If $k = 1$, i.e., $a$ is a simple root of $p$, then $$ p'(x) = (x - a) q'(x) + q(x); $$ if $p$ factors completely, with roots $a_{i}$, then $p'(a) = q(a) = \prod_{i} (a - a_{i})$, as you say. This can be expanded as a polynomial in $a$ whose coefficients are the elementary symmetric polynomials in the $a_{i}$. That said, the value $q(a) = \prod_{i} (a - a_{i})$ does have a linear-algebraic interpretation: If $T:V \to V$ has characteristic polynomial $p$, if $a$ is an eigenvalue, and if $E_{a}$ is a one-dimensional space of $a$-eigenvectors, the operator $aI - T$ induces an operator on the quotient space $V/E_{a}$ whose eigenvalues are the $a - a_{i}$, and whose determinant is therefore $q(a)$.
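The identity $p'(\lambda_1)=\prod_{i=2}^n(\lambda_1-\lambda_i)$ discussed above is easy to check numerically for a simple root. A quick standard-library sketch (the concrete roots $1,2,3$ — e.g. the characteristic polynomial of $\mathrm{diag}(1,2,3)$ — are my own example):

```python
import math

# p(x) = (x-1)(x-2)(x-3); check p'(1) = (1-2)(1-3) = 2
roots = [1.0, 2.0, 3.0]

def p(x):
    out = 1.0
    for r in roots:
        out *= x - r
    return out

a = roots[0]          # a simple root of p
h = 1e-6
numeric = (p(a + h) - p(a - h)) / (2 * h)      # central-difference p'(a)
exact = math.prod(a - r for r in roots[1:])    # product over the other roots
```

The central difference agrees with the product formula to well within discretization error, matching the factorization argument in the reply.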
The Washington Post today published what is at least its second piece from right-wing conservative columnist Kathleen Parker highly critical of Sarah Palin. (I've pasted it below.) But even more noteworthy is her mention of the rising stink of racism and xenophobia - and an implied threat of violence - at some of the Republican campaign stops. A couple of days ago, MSNBC showed some very disturbing video and audio footage from McCain's New Mexico campaign stop and one of Palin's stops in Florida. When McCain asked the crowd, after a long lead-in to the question, "Who really is Barack Obama?", a Cro-Magnonish masculine voice yelled, "He's a terrorist!" The same day, when Palin insinuated that there was a sinister terrorist link between Barack Obama and former Weatherman William Ayers, another thuggish masculine voice yelled from the throng, "Kill him!" We ought not be surprised, I suppose. Consider the new report that FAIR (Fairness & Accuracy In Reporting, the national media watchdog group) released today. And now we have the Republican candidates and their media clones trying so very hard to get the American public to connect the dots: Obama the man of mystery; Obama the closet Muslim; Obama the friend of terrorists; Obama the black racist (don't forget that nasty Rev. Wright!); but still, Obama the Princeton/Harvard East Coast liberal intellectual elitist. So completely the "Other." Most assuredly not one of US (or, for that matter, of U.S.). An insidious threat to the Republic, to American values, to our way of life. And even worse, in the eyes of all too many on the Christian evangelical Right, the Anti-Christ! Evil incarnate! He must be stopped! What's an American patriot to do? Kathleen Parker began her column thus: When Sarah Palin said she was taking off the gloves, she wasn't just whistling "Onward, Christian Soldiers." Or was she?
Consider that in the context of another especially surprising, scary development reported a few days ago by Amy Goodman on Democracy Now: It’s the first time an active unit has been given a dedicated assignment to USNORTHCOM, which was itself formed in October 2002 to “provide command and control of Department of Defense homeland defense efforts.” [my emphasis] A public affairs officer for NORTHCOM said the force would have weapons stored in containers on site, as well as access to tanks, but the decision to use weapons would be made at a far higher level, perhaps by the Secretary of Defense (SECDEF). Progressive magazine editor Matthew Rothschild, a commentator on Goodman's show, voiced his concerns about the deployment. The Army representative that Goodman brought onto the show countered, predictably, that these well-trained soldiers and their commanders can be trusted to do the right thing. But Goodman and Rothschild between them did an excellent job of showing how the line between civilian and military has lately been badly blurred, in ways of which the mostly non-reading American public knows little and understands less. (For instance, they note how NORTHCOM was also sharing intelligence with local police during the Republican convention in St. Paul.) So, put all the pieces together, mes amis.
- a bitter election season - perhaps the most bitterly contentious of my lifetime (and that includes the Vietnam era)
- chauvinist, racism-and-religion-tinged patriotism and fear-mongering being spurred to the point of near-violence
- a financial crisis of monumental proportions and no end in sight, hundreds of thousands of jobs and homes lost, and public anger, frustration, and fear mounting by the day
- gasoline costs astronomically high - and the costs of fuel for heating also high as we approach what some are predicting to be an unusually cold winter - at a time when incomes are being lost and people may not be able to afford filling their gas tanks (not to mention their stomachs) or to pay their heating bills
- fingers of blame being pointed at allegedly malingering blacks who ought not have gotten those mortgages in the first place, and at those shifty Jewish financiers on Wall Street whose "historically attested" penchant for greed once again threatens the lives of good Christian white people
- a combat-seasoned unit of the US Army being made ready to deal with civil insurrection
All of that may now be put to the ultimate test. Call Off the Pit Bull
TITLE: Are these proofs of the 1st and 3rd Laws of Logarithms valid? QUESTION [2 upvotes]: Disclaimer: I don't mean that I've discovered a conceptually completely different way of proving those laws, of course. I just found myself proving them like this and then realized that they're presented differently in my sources, so I'm wondering whether my presentations are valid, because they make more sense intuitively to me. A) Proof that $\log_b xy = \log_bx + \log_by$: Let $x = b^a$ and $y = b^c$. Then, $\log_bxy = \log_b b^ab^c = \log_bb^{a+c} = a+c = \log_bb^a + \log_bb^c = \log_bx+\log_by$ My question here is whether it's ok for me to assume that I can always find $b^a$ to represent $x$ and $b^c$ to represent $y$ (is there a proof of this, by the way?). B) Proof that $\log_bx^n = n \log_bx$ Here I wanted to prove this law using the first law, as such: $\log_bx^n = \log_b(x_1 \cdot x_2 \cdot ... \cdot x_n) = \log_bx_1 + \log_bx_2 + ... + \log_bx_n = n \cdot \log_bx$ Is this a valid proof? It makes sense to me, but it seems a step is missing between the last two equalities to make it airtight. EDIT: Beyond just validity, I welcome any criticism of elegance, formatting, etc. REPLY [1 votes]: On the face of it, these are fine. In the first, supposing that there exists $a$ such that $b^a = x$ is the same as saying the exponential function $b^{(\cdot)}$ surjects onto the positive reals (assuming that $x > 0$ throughout). We can rigorously prove this. One way is to show that $b^x$ as a function of $x$ is continuous and $\lim_{x \to -\infty} b^x = 0$ and $\lim_{x \to \infty} b^x = \infty$. Then the intermediate value theorem gives you what you need. The other clear method of showing it involves showing that the domain of the log function contains all positive numbers, and that the logarithm is the inverse function of the corresponding exponentiation. Which you prefer might depend on what your definitions of log and exponentials are.
In the second, it's a bit unfortunate that your displayed proof only works for integer $n$, when it holds in much greater generality. You shouldn't use $x_1, x_2, \ldots, x_n$ to refer to $n$ copies of $x$, since they're all the same $x$. If you want to highlight that there are $n$ copies, you might use $$ \log_b x^n = \log_b (\underbrace{x \cdot x \cdots x}_{n \text{ copies}} ).$$
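Both laws are also easy to spot-check numerically, which is a useful sanity test to run alongside the algebra. A small Python sketch (the base and arguments are arbitrary; `log_b` is an ad-hoc helper using the change-of-base formula):

```python
import math

def log_b(x, b):
    """Base-b logarithm via the change-of-base formula."""
    return math.log(x) / math.log(b)

b, x, y = 2.0, 5.0, 7.0

# Law 1: log_b(xy) = log_b(x) + log_b(y)
assert math.isclose(log_b(x * y, b), log_b(x, b) + log_b(y, b))

# Law 3 for an integer exponent (the case the repeated-product proof covers)
n = 6
assert math.isclose(log_b(x ** n, b), n * log_b(x, b))

# ... and for a non-integer exponent, where the law still holds even though
# the repeated-product argument no longer applies
r = 2.75
assert math.isclose(log_b(x ** r, b), r * log_b(x, b))
```

A numerical check is of course not a proof, but it quickly catches algebra slips.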
\begin{document} \maketitle \begin{abstract} Using an exact functional method, within the framework of the gradient expansion for the Liouville effective action, we show that the kinetic term for the Liouville field is not renormalized. \end{abstract} \section{Introduction} \nin It is known that the effective potential for the two-dimensional Liouville theory remains an exponential, with renormalized coupling constant and mass parameter \cite{jackiw}. This property respects the symmetry of the classical action, under which a translation in the Liouville field is equivalent to a change in the mass parameter. We study here the wave function renormalization $Z$ of the Liouville field, using an exact functional method, which leads to a self-consistent equation for the effective action (the proper graphs generator functional), in the spirit of a Schwinger-Dyson equation, and which is therefore not based on a loop expansion. The idea is to look at the evolution of the quantum theory with the amplitude of the central charge deficit $Q^2$ of the Liouville theory \cite{ddk}, since it was shown in \cite{AEM2} that it is possible to obtain exact flows for the quantum theory with $Q^2$. As we emphasize below, these flows are regularized by a {\it fixed} world sheet cut off, unlike the Wilsonian approach. Using this method, it was already found in \cite{AEM2}, in the approximation where $Z$ does not depend on the Liouville field, that $Z$ does not get quantum corrections and keeps its classical value. We extend here this study to the more general situation where $Z$ could be a polynomial of the Liouville field. This is the next step in order to have a complete picture, consistent with the gradient expansion. As we shall demonstrate below the result is similar to that of \cite{AEM2}: the wave function renormalization remains trivial, and the kinetic term for the Liouville field does not get dressed by quantum fluctuations. 
We note that the functional approach used here, which serves as an alternative to Wilsonian renormalization, has proven to give new insights into the quantum structure of a theory, and led to non-trivial results in a variety of contexts so far, including scalar field theory \cite{scalar}, Quantum Electro-Dynamics \cite{QED}, Wess-Zumino \cite{WZ} and Kaluza-Klein \cite{KK} models, and time-dependent bosonic strings \cite{string}. The structure of our article is the following: In Section 2, we explain in some detail the functional method, already used in \cite{AEM2}, and derive the exact equations for the evolution of the potential and the wave function renormalization with the central charge deficit, $Q^2$. The details of the derivations are given in Appendix A. We emphasize the specific r\^ole played by the \emph{two-dimensional} field theory in ensuring the wave function non-renormalization, and we give the solution for the corresponding effective potential. In Section 3 we demonstrate the consistency of our results with the Wilsonian approach, where we explain that this trivial solution for $Z$ is consistent with an exact renormalization equation. Finally, in Appendix B we show the equivalence between the Wilsonian and the one-particle irreducible effective potentials. \section{Evolution equations} The bare action for the Liouville field, on a flat world sheet (which we assume in this work), reads: \be\label{liouvsmodel} S=\int d^2\xi\left\{\frac{Q^2}{2}\partial_a\phi\partial^a\phi+\mu^2e^\phi\right\}, \ee where the amplitude of the kinetic term is controlled by the central charge deficit $Q^2$.
Upon quantization of this theory, as we explain below~\cite{AEM2}, $Q^2$ controls the amplitude of quantum fluctuations: \begin{itemize} \item for $Q^2>>1$, the quadratic kinetic term dominates the bare Lagrangian and therefore the quantum theory is almost classical; \item when $Q^2$ decreases, quantum fluctuations gradually appear in the system and the full quantum theory is obtained when $Q^2\to$ finite constant. \end{itemize} Our motivation is to find the evolution of the proper graphs generator functional with $Q^2$, and therefore obtain information on the quantum theory. \subsection{Path integral quantization} In order to define the corresponding quantum theory, one first defines the partition function (assuming a Euclidean world sheet, as required for convergence of the respective path integral) \be Z[j]=\int{\cal D}[\phi]\exp\left\lbrace -S-\int j\phi\right\rbrace =\exp\left\lbrace -W[j]\right\rbrace , \ee where $j$ is the source and $W$ is the connected graphs generator functional. The classical field is defined as \be\label{defphic} \phi_c=\frac{\delta W}{\delta j}, \ee and the proper graphs generator functional $\Gamma$, describing the quantum theory, is obtained as the Legendre transform of $W$: \be \Gamma[\phi_c]=W[j]-\int d^2\xi ~j\phi_c, \ee where the source $j$ is to be understood as a functional of $\phi_c$, found by inverting the definition (\ref{defphic}). One obtains then a family of quantum theories, parametrized by $Q^2$; it was shown in \cite{AEM2} that the effective action $\Gamma$ satisfies the following exact evolution equation with $Q^2$ (we omit the subscript $_c$ for the classical field) \be\label{evolG} \dot\Gamma=\hf\int d^2\xi~\partial_a\phi\partial^a\phi +\hf\mbox{Tr}\left\{\frac{\partial}{\partial\xi_a}\frac{\partial}{\partial\zeta^a} \left(\frac{\delta^2\Gamma}{\delta\phi_\xi\delta\phi_\zeta}\right)^{-1}\right\}, \ee where the dot represents a derivative with respect to $Q^2$. 
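As a side illustration (not part of the derivation above), the Legendre-transform construction of $\Gamma$ can be made concrete in a zero-dimensional toy model, where the path integral collapses to an ordinary integral and one can verify numerically that $\Gamma''(\phi_c)=1/\mathrm{Var}(\phi)$, the inverse connected two-point function. A Python sketch (the quartic action and all numerical parameters are arbitrary choices):

```python
import math

# Zero-dimensional toy "path integral": Z[j] = integral dphi exp(-S(phi) - j*phi).
# The quartic action below is an arbitrary stand-in, not the Liouville action.
def S(phi):
    return 0.5 * 1.5 * phi**2 + 0.1 * phi**4   # m^2 = 1.5, lambda = 0.1 (arbitrary)

def moments(j, lo=-8.0, hi=8.0, n=4000):
    """Return (Z, <phi>, <phi^2>) by trapezoidal quadrature."""
    h = (hi - lo) / n
    z = m1 = m2 = 0.0
    for i in range(n + 1):
        phi = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        e = math.exp(-S(phi) - j * phi) * w * h
        z += e
        m1 += phi * e
        m2 += phi * phi * e
    return z, m1 / z, m2 / z

# phi_c = dW/dj = <phi>; the Legendre transform then gives
# Gamma''(phi_c) = 1/Var(phi), the inverse connected two-point function.
j0, dj = 0.3, 1e-4
_, phi_c, phi2 = moments(j0)
var = phi2 - phi_c**2

# dGamma/dphi_c = -j  =>  Gamma''(phi_c) = -dj/dphi_c, by central differences:
_, phi_plus, _ = moments(j0 + dj)
_, phi_minus, _ = moments(j0 - dj)
gamma_pp = -2 * dj / (phi_plus - phi_minus)

assert abs(gamma_pp * var - 1.0) < 1e-3   # Gamma'' matches 1/Var(phi)
```

In the free (Gaussian) limit $\lambda\to 0$ this reduces to $\Gamma''=m^2$, the classical inverse propagator.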
The evolution equation (\ref{evolG}) is exact and does not rely on any loop expansion: it is a self-consistent equation, in the spirit of a differential Schwinger-Dyson equation. We stress here that the trace appearing in eq.(\ref{evolG}) is regularized with a {\it fixed} world sheet cut off $\Lambda$, and the running parameter is $Q^2$, unlike the Wilsonian approach where, for a fixed $Q^2$, one would study the evolution of $\Gamma$ with a running world sheet cut off.\\ In the framework of the gradient expansion, which we adopt in this work, we consider the projection on a specific subspace of functionals in the functional space where $\Gamma$ lives, for which we assume the following form of the effective action \be\label{ansatz} \Gamma=\int d^2\xi\left\{ \frac{Z_Q(\phi)}{2}\partial_a\phi\partial^a\phi+V_Q(\phi)\right\}. \ee As we show in Appendix A, the evolution equations with $Q^2$ for the potential $V_Q(\phi)$ and the wave function renormalization $Z_Q(\phi)$ are \bea\label{evolVZ} \dot V&=&\frac{\Lambda^2}{8\pi Z}-\frac{V^{''}}{8\pi Z^2} \ln\left(1+\frac{Z\Lambda^2}{V^{''}}\right) \nn \dot Z&=&1+\frac{1}{8\pi Z}\left(\frac{Z^{'}}{Z}\right)^2 \left[ 5\ln\left(1+\frac{Z\Lambda^2}{V^{''}}\right)-\frac{47}{6}\right]\nn &&~~~~~+\frac{7}{24\pi Z}\left( \frac{Z^{'}}{Z}\right) \left( \frac{V^{'''}}{V^{''}}\right), \eea where $Z=Z_Q(\phi)$ and $V=V_Q(\phi)$, and a prime denotes derivative with respect to $\phi$. As can be seen from the evolution equations (\ref{evolVZ}), a solution where $Z$ does not depend on the Liouville field (i.e. $Z^{'}=0$) is consistent, for which case we also obtain $\dot Z=1$, and therefore no renormalization of the wave function. 
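The logarithmic structure of the $\dot V$ equation in (\ref{evolVZ}) is just the closed form of the radial momentum integral produced by the regularized trace: with $u=p^2$, one has $(1/8\pi)\int_0^{\Lambda^2} u\,du/(Zu+V'') = \Lambda^2/(8\pi Z)-\left(V''/8\pi Z^2\right)\ln(1+Z\Lambda^2/V'')$. A quick numerical cross-check of this identity (the values of $Z$, $V''$ and $\Lambda$ are arbitrary test inputs):

```python
import math

# Check the closed form of the momentum integral behind the V-flow:
#   (1/8 pi) * int_0^{Lambda^2} u du / (Z u + V'')
#     = Lambda^2/(8 pi Z) - (V''/(8 pi Z^2)) ln(1 + Z Lambda^2 / V'')
# Parameter values are arbitrary test inputs.
Z, Vpp, Lam = 2.0, 0.5, 10.0

def integrand(u):
    return u / (Z * u + Vpp)

# Composite Simpson quadrature over u = p^2 in [0, Lambda^2]
n = 10000                       # even number of subintervals
a, b = 0.0, Lam**2
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
numeric = (h / 3) * s / (8 * math.pi)

closed = Lam**2 / (8 * math.pi * Z) \
    - Vpp / (8 * math.pi * Z**2) * math.log(1 + Z * Lam**2 / Vpp)

assert abs(numeric - closed) / closed < 1e-4
```

The quadratically divergent piece $\Lambda^2/(8\pi Z)$ is field-independent when $Z'=0$, which is why it can be dropped from the potential flow later on.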
One could seek other solutions, different from $Z=Q^2$, but we will give below several arguments in favour of the uniqueness of the $\phi$-independent solution: \begin{itemize} \item As discussed in Section 3, the solution $Z=Q^2$ is consistent with an exact renormalization equation for the potential, using a sharp cut off. \item Also in the Wilsonian context, the Liouville theory has been studied using the average action formalism~\cite{reuter}, based on a smooth cut off procedure, thereby allowing the study of the evolution of the wave function renormalization. In this work, the wave function renormalization $Z_k(\phi)$, where $k$ is the running cut off, does depend on the Liouville field, as a consequence of the initial condition of the flows, which is chosen so as to satisfy the respective Weyl-Ward identities. The authors argue, though, that the IR limit $k\to 0$ of the average action, which corresponds to the effective action we consider here, is consistent with this non-renormalization property. \item We can imagine integrating the equation for $Z_Q$ numerically, starting from the initial condition $Z_Q(\phi)\simeq Q^2$ for $Q^2>>1$, since the theory is then almost classical. The step $Q\to Q-dQ$ corresponds to $Q^2\to Q^2+dx$, with $dx=-2QdQ+dQ^2$, and we then have $$ Z_{Q-dQ}=Z_Q+dx(\dot Z_Q)=Z_Q+dx $$ because $Z^{'}=0$ for the initial condition. Therefore $$ Z_{Q-dQ}=Z_Q-2QdQ+dQ^2=(Q-dQ)^2. $$ In this way we arrive, step by step, at the result that $Z=Q^2$ for any value of $Q$. \item We show in the next subsection that, for a field-independent $Z$, this non-renormalization property is possible in dimension $d=2$ only, which gives a strong indication that the solution $Z=Q^2$ is the relevant one in the more general case studied here.
This is also consistent with the world-sheet conformal-invariance restoring properties of the Liouville theory~\cite{ddk}; \end{itemize} Finally, in the case of a curved world sheet, the bare action contains an additional term, linear in the Liouville field, and reads \be S=\int d^2\xi\sqrt\gamma\left\lbrace \frac{Q^2}{2}\gamma_{ab}\partial^a\phi\partial^b\phi+Q^2R^{(2)}\phi+ \mu^2 e^\phi\right\rbrace, \ee where $\gamma_{ab}$ is the world sheet metric, with determinant $\gamma$ and curvature scalar $R^{(2)}$. The gradient expansion for the effective action would then have to take into account this linear term in $\phi$, but since the evolution equation (\ref{evolG}) involves only the second functional derivative, this additional linear term does not play a r\^ole in the generation of quantum fluctuations. It is in this sense that working from the beginning with flat world sheets suffices for our purposes. \subsection{Specificity of two dimensions ($d=2$)} In this subsection we go through the same steps as those described in Appendix A, for a wave function renormalization independent of $\phi$, but in any dimension $d$. We then show that the quantum correction to the wave function renormalization vanishes \emph{only} in the case $d=2$. We assume that the effective action has the form \be \Gamma=\int d^d\xi\left\lbrace \frac{Z_Q}{2}\partial_a\phi\partial^a\phi+V_Q(\phi)\right\rbrace, \ee such that its second functional derivative in configuration space reads: \be \frac{\delta^2\Gamma}{\delta\phi_\xi\delta\phi_\zeta}=\left( -Z\partial_a\partial^a +V^{''}\right) \delta^{(2)}(\xi-\zeta).
\ee For the configuration $\phi=\phi_0+2\rho\cos(k\xi)$, where $\phi_0,\rho,k$ are constants, the second functional derivative in Fourier space is: \bea \frac{\delta^2\Gamma}{\delta\phi_p\delta\phi_q}&=& \left( Zp^2+V^{''}+\rho^2V^{(4)}\right) (2\pi)^2\delta^{(2)}(p+q)\nn &&+\rho V^{(3)}(2\pi)^2\left( \delta^{(2)}(p+q+k)+\delta^{(2)}(p+q-k)\right) \nn &&+\frac{\rho^2}{2}V^{(4)}\left( \delta^{(2)}(p+q+2k)+\delta^{(2)}(p+q-2k)\right)\nn &&+\mbox{higher orders in $\rho$}. \eea The inverse of this second functional derivative is calculated using \be (A+B)^{-1}=A^{-1}- A^{-1}BA^{-1}+A^{-1}BA^{-1}BA^{-1}+\cdot\cdot\cdot \ee where $A$ stands for the diagonal contribution and $B$ for the off-diagonal one, proportional to $\rho$. The relevant term for the evolution of $Z$ is \bea\label{exp} &&\mbox{Tr}\left\lbrace p^2A^{-1}BA^{-1}BA^{-1}\right\rbrace\nn &=&{\cal A}\rho^2\left( V^{(3)}\right)^2\int\frac{d^dp}{(2\pi)^d}\frac{p^2}{f^2(p)}\left( \frac{1}{f(p+k)}+\frac{1}{f(p-k)}\right)\nn &=&2{\cal A}\rho^2\left( V^{(3)}\right)^2I(k), \eea where ${\cal A}$ is the two-dimensional volume, $f(p)=Zp^2+V^{''}$ and \be I(k)=\int\frac{d^dp}{(2\pi)^d}\frac{p^2}{f^2(p)f(p+k)} \ee The evolution of the wavefunction renormalization, $Z$, is proportional to the quadratic-order-in-$k$ part of $I(k)$, and we have \bea\label{Ik} I(k) &=&I(0)+\int\frac{d^dp}{(2\pi)^d}\left\lbrace \frac{4Z^2p^2(k\cdot p)^2}{(Zp^2+V^{''})^5} -\frac{Zk^2p^2}{(Zp^2+V^{''})^4}\right\rbrace +{\cal O}(k^4)\\ &=&I(0)+k^2Z^{-d/2}\int\frac{d^dp}{(2\pi)^d}\left\lbrace \frac{2p^4}{(p^2+V^{''})^5} -\frac{p^2}{(p^2+V^{''})^4}\right\rbrace +{\cal O}(k^4)\nn &=&I(0)+k^2\frac{\pi^{d/2}}{(2\pi)^d}\frac{Z^{-d/2}}{[V^{''}]^{3-d/2}}\left\lbrace 2\frac{\Gamma(3-d/2)}{\Gamma(3)}-2\frac{\Gamma(5-d/2)}{\Gamma(5)}\right. \nn &&~~~~~~~~~~~~~~~~~~~~~ -2d\frac{\Gamma(4-d/2)}{\Gamma(5)}\left. 
-\frac{d}{2}\frac{\Gamma(3-d/2)}{\Gamma(4)}\right\rbrace +{\cal O}(k^4)\nonumber \eea Using the property $\Gamma(n+1)=n\Gamma(n)$, together with $\Gamma(1)=1$, the expansion (\ref{Ik}) can be written \be I(k)=I(0)+k^2\frac{\pi^{d/2}}{(2\pi)^d}\frac{Z^{-d/2}}{[V^{''}]^{3-d/2}}\Gamma(3-d/2)\frac{d}{24} \left( \frac{d}{2}-1\right)+{\cal O}(k^4), \ee which shows that the term of quadratic order in $k$ vanishes for $d=2$ only. This is a strong indication that the solution $\dot Z=1$ found previously is the relevant one. \subsection{Solution for the potential} From now on, we consider $Z=Q^2$. The evolution equation (\ref{evolVZ}) for the potential becomes then \be\label{dotV} \dot V=-\frac{V^{''}}{8\pi Q^4} \ln\left(1+\frac{Q^2\Lambda^2}{V^{''}}\right), \ee where the quadratic divergence was disregarded, as it is field-independent. The equation (\ref{dotV}) has been studied in \cite{AEM2} for the specific regimes $Q^2\to 0$ and $Q^2\to\infty$. We give here some details of the derivation for finite values of $Q^2$. We therefore assume that \be\label{condition} \frac{Q^2\Lambda^2}{V^{''}}>>1. \ee With this condition in mind, eq.(\ref{dotV}) is then satisfied by a potential of the form \be\label{solV} V_Q(\phi)=\Lambda^2 v_Q\exp\left( \varepsilon_Q\phi\right) , \ee where $v_Q$ and $\varepsilon_Q$ are dimensionless functions of $Q$ (for the condition (\ref{condition}) to be satisfied we need $v_Q<<1$). Indeed, plugging this ansatz into the evolution equation (\ref{dotV}) gives, in the limit (\ref{condition}), \bea \dot v&=&-\frac{v\varepsilon^2}{8\pi Q^4}\ln\left( \frac{Q^2}{v\varepsilon^2}\right) \nn \dot \varepsilon&=&\frac{\varepsilon^3}{8\pi Q^4}. \eea The latter evolution equation for $\varepsilon$ can be integrated exactly. The appropriate boundary condition is $\varepsilon\to 1$ when $Q^2\to\infty$, since the system is then classical. The integration over $Q^2$ leads to \be\label{solepsilon} \varepsilon_Q=\sqrt\frac{4\pi Q^2}{1+4\pi Q^2}. 
\ee We remind the reader that the solution (\ref{solepsilon}) is {\it exact} in the framework of the gradient expansion (\ref{ansatz}), and is not based on a loop expansion. The evolution equation for $v_Q$ is not solvable exactly, and we thus leave the study of the potential amplitude for the next section, where this is achieved by means of a Wilsonian exact renormalization approach. Before closing this section, we note that, for the specific central charge deficit $Q^2=8$, corresponding to $c=1$ conformal field theories, there are two cosmological constant operators, dressing the identity $(\mu_1^2+\mu_2^2\phi)\exp(\sqrt{2}\phi)$ \cite{moore}, where $\mu_1,\mu_2$ are constants. Our solution above cannot include the operator proportional to $\mu_2^2$ \cite{AEM2}, since we consider a continuous set of values for $Q^2$ and this operator exists only for a discrete isolated value. To incorporate this case, one should study the flow with respect to another parameter in the bare theory with fixed $Q^2=8$, such as $\alpha^{'}$ or $\mu_i$. \section{Consistency with the Wilsonian picture} We now exhibit the Wilsonian properties of the solution (\ref{solV}), using the exact renormalization method of \cite{WH}. We consider an initial two-dimensional bare theory, with running cut-off $\Lambda$. The effective theory defined at the scale $\Lambda-\delta\Lambda$ is derived by integrating the ultraviolet degrees of freedom from $\Lambda$ to $\Lambda-\delta\Lambda$. The idea of exact renormalization methods is to perform this integration infinitesimally, i.e. take the limit $\delta\Lambda/\Lambda\to 0$, which leads to an exact evolution equation for $S_\Lambda$. The procedure was detailed in \cite{WH}, and here we reproduce only the main steps for clarity and completeness. Note that we consider here a sharp cut-off, which is possible only if we consider the evolution of the potential part of the Wilsonian action, as explained now.
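As a brief numerical aside before the Wilsonian derivation: the exact solution (\ref{solepsilon}) can be double-checked by integrating the flow $\dot\varepsilon=\varepsilon^3/(8\pi Q^4)$ downward from the classical boundary condition at large $Q^2$. A minimal Python sketch (the starting point and step count are arbitrary numerical choices):

```python
import math

# Integrate the exact flow  d(eps)/d(Q^2) = eps^3 / (8 pi Q^4)  from the
# classical regime Q^2 >> 1 down to Q^2 = 1, and compare with the closed
# form  eps_Q = sqrt(4 pi Q^2 / (1 + 4 pi Q^2)).
def eps_exact(x):                    # x = Q^2
    return math.sqrt(4 * math.pi * x / (1 + 4 * math.pi * x))

x0, x1, n = 1000.0, 1.0, 200000
dx = (x1 - x0) / n                   # negative step: flowing toward small Q^2
x, eps = x0, eps_exact(x0)           # classical boundary condition
for _ in range(n):                   # forward Euler steps
    eps += dx * eps**3 / (8 * math.pi * x**2)
    x += dx

assert abs(eps - eps_exact(x1)) < 1e-3
```

The agreement mirrors the step-by-step argument given earlier for $Z_Q$: starting from the classical value and applying the flow reproduces the exact solution at any $Q^2$.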
We consider a Euclidean two-dimensional spacetime, and we assume that, for each value of the energy scale $\Lambda$, the Euclidean action $S_\Lambda$ has the form \be\label{ansatzbis} S_\Lambda=\int d^2\xi\left\lbrace \frac{Z_\Lambda(\phi)}{2}\partial_a\phi\partial^a\phi+V_\Lambda(\phi)\right\rbrace . \ee The integration of the ultraviolet degrees of freedom is implemented in the following way. We write the dynamical fields $\phi=\phi_{IR}+\psi$, where the $\phi_{IR}$ is the infrared field with non-vanishing Fourier components for $|p|\le \Lambda-\delta\Lambda$, and $\psi$ is the degree of freedom to be integrated out, with non-vanishing Fourier components for $\Lambda-\delta\Lambda<|p|\le \Lambda$ only. An infinitesimal step of the renormalization group transformation reads: \bea\label{transfo} &&\exp\left(-S_{\Lambda-\delta\Lambda}[\phi_{IR}]+S_\Lambda[\phi_{IR}]\right)\\ &=&\exp\left(S_\Lambda[\phi_{IR}]\right)\int {\cal D}[\psi]\exp\left(-S_\Lambda[\phi_{IR}+\psi]\right)\nn &=&\int{\cal D}[\psi]\exp\left(-\int_\Lambda \frac{\delta S_\Lambda[\phi_{IR}]}{\delta\psi(p)}\psi(p) -\hf\int_\Lambda\int_\Lambda\frac{\delta^2S_\Lambda[\phi_{IR}]}{\delta\psi(p)\delta\psi(q)}\psi(p)\psi(q)\right),\nn &&~~~~~~~~~~~~~~+\mbox{higher orders in}~\delta\Lambda, \nonumber \eea where $\int_\Lambda$ represents the integration over Fourier modes for $\Lambda-\delta\Lambda<|p|\le \Lambda$. Higher-order terms in the expansion of the action are indeed of higher order in $\delta\Lambda$, since each integral involves a new factor of $\delta\Lambda$. The only relevant terms are of first and second order in $\delta\Lambda$ \cite{WH}, which are at most quadratic in the dynamical variable $\psi$, and therefore lead to a Gaussian integral. 
We then have \bea\label{evolS} &&\frac{S_\Lambda[\phi_{IR}]-S_{\Lambda-\delta\Lambda}[\phi_{IR}]}{\delta\Lambda}\nn &=&\frac{\mbox{Tr}_\Lambda}{\delta\Lambda}\left\{\frac{\delta S_\Lambda[\phi_{IR}]}{\delta\psi(p)} \left(\frac{\delta^2S_\Lambda[\phi_{IR}]}{\delta\psi(p)\delta\psi(q)}\right)^{-1} \frac{\delta S_\Lambda[\phi_{IR}]}{\delta\psi(q)}\right\}\nn &&-\frac{\mbox{Tr}_\Lambda}{2\delta\Lambda}\left\{\ln\left(\frac{\delta^2S_\Lambda[\phi_{IR}]}{\delta\psi(p)\delta \psi(q)}\right)\right\} +{\cal O}(\delta\Lambda), \eea where the trace Tr$_\Lambda$ is to be taken in the shell of thickness $\delta\Lambda$, and is therefore proportional to $\delta\Lambda$. We are interested in the evolution equation for the potential only, for which it is sufficient to consider a constant infrared configuration $\phi_{IR}=\phi_0$, and this is the reason why a sharp cut-off can be used: the singular terms that could arise from the $\theta$ function, characterizing the sharp cut-off, are not present, since the derivatives of the infrared field vanish. In this situation, the first term on the right-hand side of eq.(\ref{evolS}), which is a tree-level term, does not contribute: $\delta S_\Lambda/\delta\psi(p)$ is proportional to $\delta^{(2)}(p)$, and thus has no overlap with the domain of integration $|p|=\Lambda$. We are therefore left with the second term, which arises from quantum fluctuations, and the limit $\delta\Lambda\to 0$ gives, with the ansatz (\ref{ansatzbis}), \be\label{evolpot} \partial_\Lambda V_\Lambda(\phi_0)-\partial_\Lambda V_\Lambda(0)= -\frac{\Lambda}{4\pi}\ln\left(\frac{Z_\Lambda(\phi_0)\Lambda^2+V^{''}_\Lambda(\phi_0)} {Z_\Lambda(0)\Lambda^2+V^{''}_\Lambda(0)}\right). \ee Eq.(\ref{evolpot}) provides a resummation of all the loop orders, since it is a self-consistent equation. As a result, the evolution equation (\ref{evolpot}) is exact within the framework of the ansatz (\ref{ansatzbis}), and is independent of a loop expansion.
In order to make the connection with the solution (\ref{solV}), we now consider the following ansatz \bea\label{solL} Z_\Lambda(\phi_0)&=&Q^2\\ V_\Lambda(\phi_0)&=&\Lambda^2v_\Lambda\exp(\varepsilon \phi_0)\nonumber, \eea where $\varepsilon$ is the constant (\ref{solepsilon}) and $v_\Lambda$ depends on the running cut off only. One should keep in mind here that $Q^2$ is now {\it constant}, whereas the cut off $\Lambda$ is {\it running}. When plugged into the Wegner-Houghton equation (\ref{evolpot}), the ansatz (\ref{solL}) leads to \be\label{agaga} \left( 2\Lambda v+\Lambda^2\partial_\Lambda v\right) \left( \exp\left( \varepsilon\phi_0\right) -1\right) =-\frac{\Lambda}{4\pi}\ln\left( \frac{Q^2+\varepsilon^2 v\exp(\varepsilon\phi_0)}{Q^2+\varepsilon^2 v}\right). \ee One can see that this equation is consistent in the limit $v<<1$ only, which is the limit we are interested in: keeping the lowest orders in $v$, the $\phi_0$-dependence cancels out and the remaining equation is \be 2\Lambda v+\Lambda^2\partial_\Lambda v=-\frac{\Lambda}{4\pi Q^2}\varepsilon^2 v, \ee which is easily integrated to \be v_\Lambda=\left( \frac{\mu}{\Lambda}\right)^{2+\varepsilon^2/(4\pi Q^2)}. \ee This solution indeed satisfies $v<<1$, since we are interested in the regime of large cut off, in the spirit of the condition (\ref{condition}). Taking into account the solution (\ref{solepsilon}), the potential is finally \be\label{finalsol} V_\Lambda(\phi)=\mu^2\left( \frac{\mu}{\Lambda}\right)^{1/(1+4\pi Q^2)} \exp\left(\phi\sqrt\frac{4\pi Q^2}{1+4\pi Q^2}\right). \ee We stress again that this solution is not the result of a loop expansion. An important remark is in order here: it was possible to find the solution (\ref{finalsol}) of the Wilsonian exact renormalization group equation, {\it because} $Z$ does not depend on $\phi_0$. Indeed, it is the only possibility for the $\phi_0$-dependence to cancel in eq.(\ref{agaga}), at the first order in $v$.
This shows the consistency of the choice $Z=Q^2$ made in the previous section. Note that the solution (\ref{finalsol}) does not need to satisfy the evolution equation (\ref{dotV}), since the Wilsonian potential defined in this section is not obtained by means of a Legendre transform, unlike the potential defined in the previous section. The equivalence between these potentials is obtained in the limit where the running cut off goes to 0 (see Appendix B), but in this case, the expression (\ref{finalsol}) is not valid since it was derived in the limit of large cut off. Finally, the limit of the Wilsonian potential (\ref{finalsol}) when $Q^2\to\infty$, for fixed cut off (in the spirit of Section 2), gives the expected bare Liouville potential shown in eq.(\ref{liouvsmodel}). \section{Discussion} In this work we have analysed Liouville field theory on the world-sheet from the perspective of a novel functional method, suggested in \cite{AEM2}. In particular, we have demonstrated that the function $Z(\phi)$ appearing in the Liouville kinetic term is not renormalized, that is, it preserves its classical form $Z=Q^2$ in the full quantum theory. As we have discussed, this is a specific feature of the two-dimensional field theory, and is not the case in general, e.g. in four dimensions~\cite{scalar}. In fact, this feature is also essential for maintaining the conformal properties of the Liouville field, in particular its r\^ole in restoring conformal symmetry \cite{ddk}. It should be stressed that in the present work we have assumed a functional dependence of the effective theory based on the gradient expansion, namely that $Z(\phi)$ depends only polynomially on the Liouville field $\phi$ and \emph{not} on its world-sheet derivatives $\partial \phi$. This assumption is dictated by the above-mentioned argument of conformal covariance of the Liouville action, which we wish to maintain in the full quantum theory \cite{ddk}.
We remark that, had one included such higher-derivative terms, the allowed structures in the effective action should involve terms of the form \begin{equation}\label{str} Z_n(\phi) \partial_a \phi \left(\frac{\partial_b\partial^b}{\mu^2}\right)^n\partial^a \phi \end{equation} where $Z_n(\phi)$ are dimensionless polynomials of $\phi$ and $n$ is an integer. It remains to be seen whether in such cases the above-mentioned non-renormalization result is valid. However, we expect such terms not to be present, as their presence would conflict with the standard conformal properties of the Liouville field. In this sense we think that the analysis in the present article is complete. As a final remark we note that for a curved world sheet, with a curvature scalar $R^{(2)}$ replacing $\mu^2$ in (\ref{str}), one could in principle have structures of the form \begin{equation}\label{yn} Y_n(\phi) \partial_a \phi \left(\frac{\partial_b\partial^b}{R^{(2)}}\right)^n\partial^a \phi \end{equation} where $Y_n(\phi)$ are dimensionless polynomials of $\phi$. However, such structures cannot appear for $n \ne 0$: they would have to vanish in the flat-world-sheet limit $R^{(2)}\to 0$, but since $Y_n$ does not depend on the curvature scalar, the terms (\ref{yn}) diverge rather than vanish in this limit. Before closing we note that the above analysis can be extended to incorporate Liouville-dressed non-critical stringy $\sigma$-models, involving the coupling of $X^\mu$ fields with the Liouville mode $\phi$. In such a case there are more complicated potential terms, since for each non-conformal vertex operator $V(X)$ of the non-critical string, there is a conformal-symmetry restoring factor $e^{\alpha \phi}$, with $\alpha$ the appropriate Liouville dimension~\cite{ddk}, multiplying $V(X)$, $\int d^2 \sigma e^{\alpha \phi}V(X)$.
Nevertheless, the application of the exact method for the Liouville sector and the associated $Q^2$ flows applies to this case, with similar results, as far as the Liouville wavefunction non-renormalization is concerned. \section*{Acknowledgements} J. A. would like to thank Janos Polonyi for useful discussions related to the flow equations, and the explanation given in Appendix B. We wish to thank the organisers of the \emph{1st Annual School Of EU Network ``UniverseNet, The Origin Of The Universe: Seeking Links Between Fundamental Physics And Cosmology''}, Mytilene (Island of Lesvos, Greece), September 24-29 2007, for the hospitality and for giving the opportunity to A.K. to present preliminary results of this work. The work of A.K. and N.E.M. is partially supported by the European Union through the FP6 Marie Curie Research and Training Network {\it UniverseNet} (MRTN-CT-2006-035863). \section*{Appendix A: derivation of the flow equations} With the following ansatz for the effective action, \be \Gamma=\int d^2\xi\left\lbrace \frac{Z_Q(\phi)}{2}\partial_a\phi\partial^a\phi+V_Q(\phi)\right\rbrace , \ee the second functional derivative appearing in the evolution equation (\ref{evolG}) is \bea \frac{\delta^2\Gamma}{\delta\phi_\xi\delta\phi_\zeta}&=&\left( -Z\partial_a\partial^a -\frac{Z^{''}}{2}\partial_a\phi\partial^a\phi\right) \delta^{(2)}(\xi-\zeta)\nn &&+\left( -Z^{'}\partial_a\partial^a\phi-Z^{'}\partial_a\phi\partial^a+V^{''}\right) \delta^{(2)}(\xi-\zeta). \eea In order to derive the evolution equation for the potential $V$, a constant configuration $\phi=\phi_0$ is sufficient. However, since we are also interested in the evolution of $Z$, we need a coordinate-dependent configuration, and thus we consider \be\label{phirho} \phi(\xi)=\phi_0+2\rho\cos(k\xi), \ee where $\rho$ and $k$ are constants. The evolution equation for the potential $V$ will then be obtained by identifying the terms independent of $k$ in eq.(\ref{evolG}).
On the other hand, the evolution equation for $Z$ is obtained by identifying the terms proportional to $\rho^2 k^2$. \\ For the configuration (\ref{phirho}), the left hand side of eq.(\ref{evolG}) reads \be\label{lhside} \dot\Gamma={\cal A}\left\lbrace \dot V+\rho^2\dot V^{''}+\rho^2 k^2\dot Z+\cdot\cdot\cdot\right\rbrace , \ee where ${\cal A}$ is the world sheet surface area, and higher orders in $\rho$ were not written explicitly. The second derivative of $\Gamma$, in Fourier components and to order $\rho^2$, is \bea \frac{\delta^2\Gamma}{\delta\phi_p\delta\phi_q}&=&A_{pq}+B_{pq},~~~~\mbox{with}\\ A_{pq}&=&\left\lbrace p^2Z+V^{''}+\rho^2\left[(p^2+k^2)Z^{''}+V^{(4)}\right] \right\rbrace (2\pi)^2\delta^{(2)}(p+q)\nn B_{pq}&=&~~\rho\left[(p^2+kp+k^2)Z^{'}+V^{(3)}\right](2\pi)^2\delta^{(2)}(p+q+k)\nn &&+\rho\left[(p^2-kp+k^2)Z^{'}+V^{(3)}\right](2\pi)^2\delta^{(2)}(p+q-k)\nn &&+\frac{\rho^2}{2}\left[ (p^2+2kp+3k^2)Z^{''}+V^{(4)}\right](2\pi)^2\delta^{(2)}(p+q+2k)\nn &&+\frac{\rho^2}{2}\left[ (p^2-2kp+3k^2)Z^{''}+V^{(4)}\right](2\pi)^2\delta^{(2)}(p+q-2k)\nonumber. \eea We see that $A$ is diagonal in Fourier space, whereas $B$ is not. 
The inverse is then expanded in powers of $\rho$, using \be \left( \frac{\delta^2\Gamma}{\delta\phi\delta\phi}\right)_{pq}^{-1}= \left( A^{-1}\right)_{pq}-\left( A^{-1}BA^{-1}\right) _{pq}+\left( A^{-1}BA^{-1}BA^{-1}\right)_{pq}+\cdots \ee which finally gives, to order $\rho^2$, \bea &&\mbox{Tr}\left\lbrace \partial_a\partial^a\left( \frac{\delta^2\Gamma}{\delta\phi\delta\phi}\right)^{-1}\right\rbrace \\ &=&{\cal A}\left\lbrace \frac{\Lambda^2}{8\pi Z}-\frac{V^{''}}{8\pi Z^2} \ln\left(1+ \frac{Z\Lambda^2}{V^{''}}\right) \right\rbrace +\rho^2{\cal A}\left\lbrace\cdots \right\rbrace \nn &&+\rho^2k^2{\cal A}\left\lbrace \frac{5}{4\pi Z}\left( \frac{Z^{'}}{Z}\right)^2\ln\left(1+\frac{Z\Lambda^2}{V^{''}}\right) -\frac{47}{24\pi Z}\left( \frac{Z^{'}}{Z}\right)^2+\frac{7}{12\pi Z}\frac{Z^{'}}{Z}\frac{V^{(3)}}{V^{''}}\right\rbrace \nonumber \eea In the above expression, the term proportional to $\rho^2$ and independent of $k$ is not relevant, as it leads to the evolution equation for $V^{''}$. We checked, though, that this evolution is consistent with the one for $V$. The evolution equations for $V$ and $Z$ are then those given in eqs.(\ref{evolVZ}), after identification with the left hand side of (\ref{lhside}). \section*{Appendix B: Equivalence between Wilsonian and one-particle-irreducible effective potentials} For a constant IR configuration $\phi_0$, the Wilsonian effective potential $U_{Wils}$ is defined by \be \exp\left( iVU_{Wils}(\phi_0)\right) = \int{\cal D}[\phi]\exp\left( iS[\phi_0+\phi]\right), \ee where $V$ is the volume of space time, $S$ is the bare action defined at some cutoff $\Lambda$, and the dynamical variable $\phi$ which is integrated out has non-vanishing Fourier components for $|p|\le\Lambda$. 
One can also write the previous definition as \bea\label{delta} \exp\left( iVU_{Wils}(\phi_0)\right) &=& \int{\cal D}[\phi]\exp\left( iS[\phi]\right)\delta\left( \int_x\phi-V\phi_0\right) \\ &=&\int{\cal D}[\phi]\exp\left( iS[\phi]\right)\int_j\exp\left( ij\int_x(\phi-\phi_0)\right),\nonumber \eea where $\int_x$ denotes the integration over space time, $j$ is a real variable, and $\int_j$ denotes the integration over $j$. Using the notations of section 2, this expression can be written as \bea \exp\left( iVU_{Wils}(\phi_0)\right) &=& \int_jZ[j]\exp\left( -i\int_x j\phi_0\right) \nn &=&\int_j \exp\left(iW[j]-i\int_x j\phi_0\right)\nn &=& \int_j \exp\left(i\Gamma[\phi_0]\right) \nn &=& \int_j \exp\left(iVU_{1PI}(\phi_0)\right), \eea where $U_{1PI}$ is the one-particle-irreducible effective potential. Note that $j$ plays the r\^ole of a constant source for the field $\phi$, leading to the constant classical field $\phi_0$. In the last expression, the integration over $j$ leads to a multiplicative constant, as $\phi_0$ is fixed. Disregarding the $\phi_0$-independent terms, we then obtain \be U_{Wils}(\phi_0)=U_{1PI}(\phi_0). \ee Note that for the above argument to be valid it is essential that we work in Minkowski space time, since the delta function in eq.(\ref{delta}) is expressed in terms of its Fourier transform.
A point on the plane of a triangle $T=P_1P_2P_3$ can be defined by a triple $p:q:r$ of signed distances to each side\footnote{The Barycentrics of $p:q:r$ are $p\,s_1:q\,s_2:r\,s_3$ \cite{mw}.}; see Figure~\ref{fig:trilins}. This local coordinate system renders the point invariant under similarity transformations (rigid+dilation+reflection) of $T$. A {\em Triangle Center} is such a triple obtained by applying a {\em Triangle Center Function} $h$ thrice to the sidelengths $s_1,s_2,s_3$ cyclically \cite{kimberling1993_rocky}: \begin{equation} \label{eqn:ftrilins} p:q:r {\iff} h(s_1,s_2,s_3):h(s_2,s_3,s_1):h(s_3,s_1,s_2) \end{equation} \noindent $h$ must (i) be {\em bi-symmetric}, i.e., $h(s_1,s_2,s_3)=h(s_1,s_3,s_2)$, and (ii) homogeneous, $h(t s_1, t s_2, t s_3)=t^n h(s_1,s_2,s_3)$ for some $n$ \cite{kimberling1993_rocky}. Triangle Center Functions for a few Triangle Centers catalogued in \cite{mw} appear in Table~\ref{tab:center-trilinears}. Trilinears can be converted to Cartesians using \eqref{eqn:trilin-cartesian}. \subsection{Constructions for Basic Triangle Centers} Constructions for a few basic Triangle Centers are shown in Figure~\ref{fig:constructions}. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{pics_eps/1010_constr.eps} \caption{Constructions for Triangle Centers $X_i$, $i=1,2,3,4,5,9,11$, taken from \cite{reznik19}.} \label{fig:constructions} \end{figure} \begin{itemize} \item The Incenter $X_1$ is the intersection of the angle bisectors and the center of the Incircle (green), a circle tangent to the sides at three {\em Intouchpoints} (green dots); its radius is the {\em Inradius} $r$. \item The Barycenter $X_2$ is where lines drawn from the vertices to opposite sides' midpoints meet. Side midpoints define the {\em Medial Triangle} (red). \item The Circumcenter $X_3$ is the intersection of the perpendicular bisectors and the center of the {\em Circumcircle} (purple), whose radius is the {\em Circumradius} $R$. 
\item The Orthocenter $X_4$ is where the altitudes concur. Their feet define the {\em Orthic Triangle} (orange). \item $X_5$ is the center of the 9-Point (or Euler) Circle (pink): it passes through each side's midpoint, the altitude feet, and the Euler Points \cite{mw}. \item The Feuerbach Point $X_{11}$ is the single point of contact between the Incircle and the 9-Point Circle. \item Given a reference triangle $P_1P_2P_3$ (blue), the {\em Excenters} $P_1'P_2'P_3'$ are pairwise intersections of lines through the $P_i$ and perpendicular to the bisectors. This triad defines the {\em Excentral Triangle} (green). \item The {\em Excircles} (dashed green) are centered on the Excenters and touch each side at an {\em Extouch Point} $e_i$, $i=1,2,3$. \item Lines drawn from each Excenter through sides' midpoints (dashed red) concur at the {\em Mittenpunkt} $X_9$. \item Also shown (brown) is the triangle's {\em Mandart Inellipse}, internally tangent to each side at the $e_i$, and centered on $X_9$. This is identical to the $N=3$ Caustic. \end{itemize} \subsection{Derived Triangles} \label{app:derived-tris} A {\em Derived Triangle} $T'$ is constructed from the vertices of a reference triangle $T$. A convenient representation is a $3{\times}3$ matrix, where each row, taken as Trilinears, is a vertex of $T'$. 
For example, the Excentral, Medial, and Intouch Triangles are given by \cite{mw}: \begin{equation*} T'_{exc}= \left[ \begin{matrix} -1&1&1\\1&-1&1\\1&1&-1 \end{matrix} \right],\;\;\; T'_{med}= \left[ \begin{matrix} 0&s_2^{-1}&s_3^{-1}\\s_1^{-1}&0&s_3^{-1}\\s_1^{-1}&s_2^{-1}&0 \end{matrix} \right] \end{equation*} \begin{equation*} T'_{int}= \left[ \begin{matrix} 0&\frac{s_1 s_3}{s_1-s_2+s_3}&\frac{s_1 s_2}{s_1+s_2-s_3}\\ \frac{s_2 s_3}{-s_1+s_2+s_3}&0&\frac{s_1 s_2}{s_1+s_2-s_3}\\ \frac{s_2 s_3}{-s_1+s_2+s_3}&\frac{s_1 s_3}{s_1-s_2+s_3}&0 \end{matrix} \right] \end{equation*} A few Derived Triangles are shown in Figure~\ref{fig:derived-isosceles}, illustrating a property similar to Lemma~\ref{lem:axis-of-symmetry}, Appendix~\ref{app:method-lemmas}: when the 3-periodic is isosceles, one vertex of the Derived Triangle lies on the orbit's axis of symmetry. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{pics_eps/1070_lemma3} \caption{When the orbit is an isosceles triangle (solid blue), any Derived Triangle will contain one vertex on the axis of symmetry of the orbit. \textbf{Video}: \cite[pl\#08]{dsr_playlist_2020}} \label{fig:derived-isosceles} \end{figure}
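The conversion from Trilinears to Cartesians via the Barycentrics of the footnote ($p\,s_1:q\,s_2:r\,s_3$) is easy to sanity-check numerically. The sketch below (our illustrative code; the function name is not from the paper) verifies the Incenter, the Barycenter, and the first row of $T'_{med}$ on a 3-4-5 right triangle.

```python
import numpy as np

def trilinear_to_cartesian(P, tri):
    """Map trilinears p:q:r of triangle P = [P1, P2, P3] to Cartesian,
    using the footnote's relation: Barycentrics of p:q:r are
    p*s1 : q*s2 : r*s3, with s_i the sidelength opposite vertex P_i."""
    P = np.asarray(P, dtype=float)
    s = np.array([np.linalg.norm(P[1] - P[2]),   # s1, opposite P1
                  np.linalg.norm(P[2] - P[0]),   # s2, opposite P2
                  np.linalg.norm(P[0] - P[1])])  # s3, opposite P3
    bary = np.asarray(tri, dtype=float) * s
    return bary @ P / bary.sum()

# 3-4-5 right triangle: s1 = 5, s2 = 3, s3 = 4
T = [(0, 0), (4, 0), (0, 3)]
X1 = trilinear_to_cartesian(T, [1, 1, 1])         # Incenter, h = 1
X2 = trilinear_to_cartesian(T, [1/5, 1/3, 1/4])   # Barycenter, h = 1/s_i
M1 = trilinear_to_cartesian(T, [0, 1/3, 1/4])     # first row of T'_med
```

Here $X_1=(1,1)$ (consistent with Inradius $r=1$ of the 3-4-5 triangle), $X_2$ is the average of the vertices, and the $T'_{med}$ row lands on the midpoint of the side opposite $P_1$.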
TITLE: Accelerated fixed-point for $x=\sin(x)$ convergence rate? QUESTION [2 upvotes]: I happened to come up with an idea for accelerating the convergence of fixed-point iteration based on Aitken's delta squared acceleration method. What interests me is the case of $x=\sin(x)$, for which fixed-point iteration is known to give roughly $\mathcal O(n^{-1/2})$ error in $n$ iterations. When applying the below method to this problem, numerical testing suggests convergence may actually be improved to be linear, i.e., of the form $\mathcal O(\lambda^n)$ for some $\lambda\in(0,1)$, but I'm unsure if this is actually the case. My question: Does applying the below method actually accelerate the convergence of iterating $x=\sin(x)$ to linear convergence, and precisely how fast is it in this case? Code. Interestingly, it seems to work significantly better than using Aitken's method here. In this case, iterations should be asymptotically equivalent to Aitken's method, but Aitken's method suffers from division by zero earlier because its $\dot x$ and $\ddot x$ stay too close to $x$, which prevents it from applying the Aitken acceleration step. This starts at $x\approx1.5\times10^{-4}$. In contrast, the below method keeps $x$, $\dot x$, and $\ddot x$ spaced far enough apart to avoid division by zero during all iterations until the last iteration, where $x=\sin(x)\approx9.3\times10^{-9}$. 
The Acceleration Method: The idea is that given a function $f$ with a fixed-point $x_\star=f(x_\star)$ and an initial estimate $x_0$, the following linear approximations may be made: \begin{align}x_0&=x_\star+\epsilon\\\dot x_0&=f(x_0)\\&=f(x_\star+\epsilon)\\&\simeq f(x_\star)+f'(x_\star)\epsilon\\&=x_\star+C\epsilon\\\ddot x_0&=f(\dot x_0)\\&\simeq x_\star+C^2\epsilon\end{align} Supposing these equations are exact, they give a solvable system of equations: $$\begin{cases}x_0=x_\star+\epsilon\\\dot x_0=x_\star+C\epsilon\\\ddot x_0=x_\star+C^2\epsilon\end{cases}$$ Aitken's method is based on solving $x_\star$ from these equations, but $C$ may also be solved for. Once $C$ is known, all future iterations may be accelerated by solving for $x_\star$ from the system of equations: $$\begin{cases}x_0=x_\star+\epsilon\\\dot x_0=x_\star+C\epsilon\end{cases}$$ which yields the improved estimate of the form $(1-r)x_0+rf(x_0)$. Solving for all variables leads to the algorithm: \begin{align}r_0&=1\\\dot x_i&=(1-r_i)x_i+r_if(x_i)\\\ddot x_i&=(1-r_i)\dot x_i+r_if(\dot x_i)\\t_i&=\frac{x_i-\dot x_i}{x_i-2\dot x_i+\ddot x_i}\\x_{i+1}&=x_i-t_i(x_i-\dot x_i)\\r_{i+1}&=t_ir_i\end{align} I haven't done enough research to really know if this method is known or not. Wikipedia and some numerical analysis texts I've found suggest applying Aitken's method after every two iterations, which is equivalent to the case of $r$ being held at $r=1$. REPLY [0 votes]: Consider the simplified problem of iterating $f(x)=x-x^3/6$. Each iteration may then be simplified. 
\begin{align}\dot x_n&=x_n-\frac{r_n}6x_n^3\\\ddot x_n&=\dot x_n-\frac{r_n}6\dot x_n^3\\t_n&=\frac{r_nx_n^3/6}{r_nx_n^3/6-r_n\dot x_n^3/6}\\&=\frac{x_n^3}{(x_n-\dot x_n)(x_n^2+x_n\dot x_n+\dot x_n^2)}\\&\stackrel?\simeq\frac{x_n^3}{r_nx_n^3(3x_n^2)/6}\tag?\\&=\frac2{r_nx_n^2}\\x_{n+1}&\simeq x_n-\frac2{r_nx_n^2}(x_n-\dot x_n)\\&=x_n-\frac2{r_nx_n^2}\frac{r_n}6x_n^3\\&=x_n-\frac13x_n\\&=\frac23x_n\end{align} This seems to be correct empirically, but it's not immediately clear to me how to justify $(?)$ or the replacement of $\sin$ with $x-x^3/6$.
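The scheme from the question is easy to try numerically. The sketch below (a hedged reimplementation of the algorithm above, with a guard for the division-by-zero issue the question discusses) runs it for $f=\sin$ and compares against plain fixed-point iteration.

```python
import math

def accelerated(f, x0, iters):
    """The question's scheme: r_i is updated multiplicatively so that the
    damped map (1-r)x + r*f(x) keeps x, x-dot, x-ddot well separated."""
    x, r = x0, 1.0
    for _ in range(iters):
        xd  = (1 - r) * x + r * f(x)       # \dot x_i
        xdd = (1 - r) * xd + r * f(xd)     # \ddot x_i
        denom = x - 2 * xd + xdd
        if denom == 0:                     # division-by-zero guard
            break
        t = (x - xd) / denom
        x -= t * (x - xd)
        r *= t
    return x

x_acc = accelerated(math.sin, 1.0, 30)

x_plain = 1.0                              # plain fixed-point iteration
for _ in range(30):
    x_plain = math.sin(x_plain)
```

After the same number of steps, the accelerated iterate is many orders of magnitude closer to the fixed point $0$ than the plain one, consistent with the claimed linear convergence.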
The Giver by Lois Lowry Jonas, an Eleven, is nearing the time when his community’s Elders will choose a career assignment for him, at the Ceremony of Twelve. His community knows no pain: the Old live in group homes and are “released” when the time comes in a great celebration; “newchildren” who do not develop properly are also “released,” albeit quietly. Adolescents’ Stirrings (sexual impulses) are tamped down with prescribed drugs, and “it was against the rules for children or adults to look at another’s nakedness,” except for the Old and the newchildren. Jonas is chosen for “the highest honor,” to be the Receiver of Memory, trained by the Giver, the keeper of the community’s memories. But it comes with a price: the Giver explains that the Elders rarely seek his wisdom, “Life here is so orderly, so predictable—so painless. It’s what they’ve chosen.” As Jonas goes through his training, he begins to discover what his community has given up in order to be comfortable. Yes, they forego pain, but they have also never experienced the color red, and music—and pure joy. Jonas must decide if he can endorse this hypocrisy or give up his calling.
\begin{document} \title{Surprise Maximization: \\ A Dynamic Programming Approach} \author{ Ali Eshragh\thanks{School of Mathematical and Physical Sciences, University of Newcastle, NSW, Australia, and International Computer Science Institute, Berkeley, CA, USA. Email: \tt{ali.eshragh@newcastle.edu.au}} } \date{} \maketitle \begin{abstract} Borwein et al. \cite{Borwein2000} solved a ``surprise maximization'' problem by applying results from convex analysis and mathematical programming. Although their proof is elegant, it requires advanced knowledge from both areas to understand it. Here, we provide another approach to derive an optimal solution of the problem by utilizing dynamic programming. \end{abstract} \section{Introduction} Borwein et al. \cite{Borwein2000} introduced an optimization problem on maximizing the expected value of the surprise function. More precisely, they exploited results from convex analysis and mathematical programming to find an optimal solution of the following non-linear programming model, called \textsf{SM1}: \begin{align*} \medskip \mbox{maximize}\ \ S_m(p_1,\ldots,p_m) & = \displaystyle{\sum_{j=1}^m p_j \log \frac{p_j}{\frac{1}{m} \sum_{i=j}^m p_i}-\sum_{j=1}^m p_j}\\ \medskip \mbox{subject to}\ \ \ \ \ \ \ \ \ \ \ \ \sum_{j=1}^m p_j & = 1\,, \\ \medskip \mbox{and}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p_j & \geq 0\ \ \ \mbox{for }j=1,\ldots,m\,. \end{align*} Here, a \emph{dynamic programming} approach is utilized to find an optimal solution of the \textsf{SM1} model. First of all, we simplify the objective function $S_m(p_1,\ldots,p_m)$ as follows: \begin{align*} \medskip S_m(p_1,\ldots,p_m) & = \displaystyle{\sum_{j=1}^m p_j \left(\log p_j-\log\left(\sum_{i=j}^m p_i\right)\right) +\log m -1} \end{align*} \noindent Since the constant term $\log m -1$ does not affect the optimisation, we can disregard it and carry out our optimisation over the terms involving the variables $p_j$. 
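The simplification above is an exact identity on the simplex, $S_m = \widetilde{S_m} + \log m - 1$, which a quick numerical check confirms (this code is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
p = rng.random(m)
p /= p.sum()                       # a random point on the simplex

tails = np.cumsum(p[::-1])[::-1]   # tails[j] = sum_{i=j}^{m} p_i

S_full  = float(np.sum(p * np.log(p / (tails / m))) - p.sum())   # S_m
S_tilde = float(np.sum(p * (np.log(p) - np.log(tails))))         # tilde S_m
# S_full - S_tilde equals the constant log(m) - 1 for every feasible p
```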
Thus, we focus on the following optimisation model, called \textsf{SM2}: \begin{align*} \medskip \mbox{maximize}\ \ \widetilde{S_m}(p_1,\ldots,p_m) & := \displaystyle{\sum_{j=1}^m p_j \left(\log p_j-\log\left(\sum_{i=j}^m p_i\right)\right)}\\ \medskip \mbox{subject to}\ \ \ \ \ \ \ \ \ \ \ \ \sum_{j=1}^m p_j & = 1\,, \\ \medskip \mbox{and}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p_j & \geq 0\ \ \ \mbox{for }j=1,\ldots,m\,. \end{align*} Now consider the following counterpart investment problem: Suppose that we are given $1$ unit of money to invest over $m$ consecutive days. If we spend $p_1,\ldots,p_m$ units of money in days $1,\ldots,m$, then the total return of this investment will be given by $\widetilde{S_m}(p_1,\ldots,p_m)$. We want to find an optimal investment policy such that the total return over $m$ days is maximised. Clearly, \textsf{SM2} solves this optimal investment problem. (This problem is called the \emph{optimal resource allocation problem} in the literature.) \section{Dynamic Programming} We apply a \emph{dynamic programming} approach to solve \textsf{SM2}. In this model, the \emph{stage} is each investment opportunity (i.e., day) and the state of the system is the remaining amount of money to invest in the subsequent stages. Let $V_j(r)$ denote the maximum total return over days $j,\ldots,m$ when $r$ units of money remain (i.e., $1-r$ units have already been spent in days $1,\ldots,j-1$). The \emph{Bellman optimality equation} is given as follows: \begin{align}\label{EquBOE} \begin{cases} \medskip \displaystyle{V_j(r)\ =\ \max_{0\leq x \leq r} \{x\log x -x\log r + V_{j+1}(r-x)\}\ \ \ \mbox{for}\ j=1,\ldots,m-1}, \\ \medskip V_m(r)\ =\ 0. \end{cases} \end{align} Let $p_j^*(r)$ denote an optimal investment policy in day $j$ when the state of the system is $r$. Obviously, we have $p_m^*(r)=r$. So, the optimality equation \eqref{EquBOE} for $j=m-1$ is solved as follows. 
\begin{align*} \medskip V_{m-1}(r) & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r + V_{m}(r-x)\}} \\ \medskip & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r \}}\,. \end{align*} Since the function $h(x):=x\log x -x\log r$ is convex over the interval $[0,r]$, its extremum in the interior coincides with its stationary point. Thus, \begin{align} \medskip \label{sol:industion_hyp1} p_{m-1}^*(r) & = r e^{-1} \\ \medskip \label{sol:industion_hyp2} V_{m-1}(r) & = -p_{m-1}^*(r)\ =\ -r e^{-1}. \end{align} Solving the optimality equation \eqref{EquBOE} for $j=m-2$ reveals a pattern in the optimal investment policy, summarized in the following theorem. \begin{theorem} For the optimality equation \eqref{EquBOE}, \begin{align}\label{sol:opt_pol} \medskip p_{j}^*(r) & = r e^{-\gamma_j}, \end{align} where \begin{align}\label{EquGam} \begin{cases} \medskip \displaystyle{\gamma_{j-1}\ =\ \gamma_j+e^{-\gamma_j}\ \ \ \mbox{for}\ j=2,\ldots,m}, \\ \medskip \gamma_m\ =\ 0, \end{cases} \end{align} is an optimal investment policy. Moreover, the optimal value is given by \begin{align}\label{sol:opt_val} \medskip V_{j}(r) & = -\sum_{i=j}^{m-1} p_i^*(r). \end{align} \end{theorem} \begin{proof} We prove this theorem by induction. It is readily seen that \eqref{sol:industion_hyp1} and \eqref{sol:industion_hyp2} satisfy \eqref{sol:opt_pol} and \eqref{sol:opt_val} for $j=m-1$, respectively. Now, assume that the latter optimal policy and optimal value are correct for $j\geq k$; we show that they hold for $j=k-1$ as well. 
Using the induction hypothesis, we have \begin{align*} \medskip V_{k-1}(r) & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r + V_{k}(r-x)\}} \\ \medskip & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r -\sum_{i=k}^{m-1} p_i^*(r-x)\}} \\ \medskip & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r -\sum_{i=k}^{m-1} (r-x) e^{-\gamma_i}\}} \\ \medskip & = \displaystyle{\max_{0\leq x \leq r} \{x\log x -x\log r -(r-x) (\gamma_{k-1}-1)\}}, \end{align*} where the last equality is derived by summing up both sides of \eqref{EquGam} over $j=k,\ldots,m-1$. One can see that the latter univariate optimization problem achieves its maximum at $x^* = r e^{-\gamma_{k-1}}$, and the corresponding optimal value $V_{k-1}(r)$ equals $-\sum_{i=k-1}^{m-1} p_i^*(r)$\,. This completes the proof. \end{proof} \begin{corollary} An optimal solution of the model \textsf{SM2} is given by: \begin{align*} p_j^* & = \begin{cases} \medskip p_1^*(1) & \mbox{for}\ j=1, \\ \medskip p_j^*(1-\sum_{i=1}^{j-1}p_i^*) & \mbox{for}\ j=2,\ldots,m. \end{cases} \end{align*} \end{corollary}
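The recursion for $\gamma_j$ and the resulting policy are straightforward to evaluate. The illustrative sketch below (our code, not part of the original derivation) checks that the policy spends the whole budget and reproduces the $m=2$ case, where $\gamma_1 = 1$ and $p_1^* = e^{-1}$.

```python
import math

def surprise_policy(m):
    """Evaluate gamma_m = 0, gamma_{j-1} = gamma_j + exp(-gamma_j),
    then spend p_j = r * exp(-gamma_j) out of the remaining budget r."""
    gamma = [0.0] * (m + 1)            # gamma[j] for j = 1..m; gamma[m] = 0
    for j in range(m - 1, 0, -1):
        gamma[j] = gamma[j + 1] + math.exp(-gamma[j + 1])
    p, r = [], 1.0
    for j in range(1, m + 1):
        p.append(r * math.exp(-gamma[j]))
        r -= p[-1]                     # update the remaining budget
    return gamma[1:], p

gamma5, p5 = surprise_policy(5)
gamma2, p2 = surprise_policy(2)
```

Since $\gamma_m=0$, the last day spends everything that remains, so the $p_j^*$ sum to $1$ automatically.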
\documentclass[notitlepage,onecolumn,12pt,aps,nofootinbib,superscriptaddress,tightenlines]{revtex4-1} \usepackage[colorlinks=true,citecolor=blue,urlcolor=blue,linkcolor=blue]{hyperref} \usepackage{natbib} \usepackage{amsmath,amsfonts,amssymb,amsthm,textcomp,mathrsfs,mathrsfs,bbm} \usepackage{xfrac,braket} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}{Definition} \newcommand{\beq}{\begin{equation}} \newcommand{\enq}{\end{equation}} \newcommand{\beqa}{\begin{eqnarray}} \newcommand{\enqa}{\end{eqnarray}} \newcommand{\beqan}{\begin{eqnarray*}} \newcommand{\enqan}{\end{eqnarray*}} \newcommand{\bel}{\begin{lemma}} \newcommand{\enl}{\end{lemma}} \newcommand{\bet}{\begin{theorem}} \newcommand{\ent}{\end{theorem}} \newcommand{\ten}{\textnormal} \newcommand{\tbf}{\textbf} \newcommand{\tr}{\mathrm{Tr}} \newcommand{\myexp}{{\mathrm{e}}} \newcommand{\eps}{\varepsilon} \newcommand*{\cC}{\mathcal{C}} \newcommand*{\cA}{\mathcal{A}} \newcommand*{\cH}{\mathcal{H}} \newcommand*{\cF}{\mathcal{F}} \newcommand*{\cB}{\mathcal{B}} \newcommand*{\cD}{\mathcal{D}} \newcommand*{\cG}{\mathcal{G}} \newcommand*{\cK}{\mathcal{K}} \newcommand*{\cN}{\mathcal{N}} \newcommand*{\cS}{\mathcal{S}} \newcommand*{\chS}{\hat{\mathcal{S}}} \newcommand*{\cT}{\mathcal{T}} \newcommand*{\cX}{\mathcal{X}} \newcommand*{\cW}{\mathcal{W}} \newcommand*{\cZ}{\mathcal{Z}} \newcommand*{\cE}{\mathcal{E}} \newcommand*{\cU}{\mathcal{U}} \newcommand*{\cP}{\mathcal{P}} \newcommand*{\cV}{\mathcal{V}} \newcommand*{\cY}{\mathcal{Y}} \newcommand*{\cR}{\mathcal{R}} \newcommand*{\bbN}{\mathbb{N}} \newcommand*{\bX}{\mathbf{X}} \newcommand*{\bY}{\mathbf{Y}} \newcommand*{\cl}{\mathcal{l}} \newcommand*{\Xb}{\bar{X}} \newcommand*{\Yb}{\bar{Y}} \newcommand*{\mximax}[1]{\Xi^\delta_{\max}(P_{#1})} \newcommand*{\mximin}[1]{\Xi^\delta_{\min}(P_{#1})} \newcommand*{\renyi}{R\'{e}nyi } \begin {document} \title{A gambling interpretation of some quantum information-theoretic quantities \footnote{This 
material was presented in part at the rump session of the 14th Quantum Information Processing (QIP) workshop, Singapore, Jan 2011.}} \author{Naresh Sharma} \email{nsharma@tifr.res.in} \affiliation{School of Technology and Computer Science, Tata Institute of Fundamental Research (TIFR), Mumbai 400 005, India} \date{\today} \begin{abstract} It is known that repeated gambling over the outcomes of independent and identically distributed (i.i.d.) random variables gives rise to an alternative operational meaning of entropies in the classical case in terms of the doubling rates. We give a quantum extension of this approach for gambling over the measurement outcomes of tensor product states. Under certain parameters of the gambling setup, one can give an operational meaning to von Neumann entropies. We discuss two variants of gambling when a helper is available, and it is shown that the difference in their doubling rates is the quantum discord. Lastly, a quantum extension of Kelly's gambling setup in the classical case gives a doubling rate that is upper bounded by the Holevo information. \end{abstract} \maketitle \section{Introduction} Quantum information theory \cite{wilde-book,petz-book,nielsen-chuang} deals with the information content in quantum systems and is a generalisation of classical information theory (see Ref. \cite{covertom} for example) for quantum systems. The measurement outcome of a quantum system is a random variable, and the measurement in general alters the quantum state. We confine ourselves to finite dimensional Hilbert spaces describing the quantum states, and the measurement outcome random variable is described by a probability mass function that can be computed using the postulates of quantum mechanics. If after a measurement, a quantum system is prepared again in the same state as before the measurement and the same measurement process is repeated, the sequence of the measurement outcomes is a sequence of i.i.d. random variables. 
As an example, consider a quantum system prepared each time before the measurement in the quantum state $\rho = p \ket{0} \bra{0} + (1-p) \ket{1} \bra{1}$, where $0 \leq p \leq 1$. The measurement operators are $\{\ket{0} \bra{0}, \ket{1} \bra{1}\}$. The measurement outcomes form a sequence of i.i.d. binary random variables each of which take values $0$, $1$ with probabilities $p$, $1-p$ respectively. A classical gambling device such as a roulette consists of a revolving wheel onto which a ball is dropped and the ball settles down to one of the numbered slots or compartments on the wheel. Alice, the roulette player, bets on a number or a subset of numbers on which the ball comes to rest. There is a probability associated with winning on each gamble. If bets are placed on the measurement outcomes of a quantum system, then the apparatus becomes a gambling device or a quantum roulette. Quantum gambling has been studied before in different contexts. Goldenberg \emph{et al} invented a zero-sum game where a player can place bets at a casino located in a remote site \cite{goldenberg-1999}. Hwang \emph{et al} considered its extensions using non-orthogonal and more than $2$ states \cite{hwang-2001,hwang-2002}. Betting on the outcomes of measurements of a quantum state was considered by Pitowsky \cite{pitowsky-2003}. We note that none of the above references study the log-optimal gambling strategies, which, on the other hand, have been well-studied for the classical case (see for example \cite{kelly-1956, covertom,erkip-cover-1998} and references therein). Quantum systems exhibit certain characteristics that are not possible classically. Bell inequalities \cite{bell-ineq-1964, chsh-ineq} give classical limits to the figure of performance for certain setups and these inequalities could be violated by quantum systems. We show that the quantum gambling devices too exhibit certain characteristics that are impossible to replicate classically. 
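The i.i.d. measurement sequence described above is easy to simulate. The sketch below (ours; the state and measurement match the two-outcome example in the text, with $p=0.3$ chosen for illustration) samples outcomes via the Born rule and checks the empirical frequency.

```python
import numpy as np

p = 0.3
rho = np.diag([p, 1 - p])                          # rho = p|0><0| + (1-p)|1><1|
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # {|0><0|, |1><1|}

# Born rule: Pr(outcome i) = Tr(P_i rho)
probs = np.array([float(np.trace(Pi @ rho)) for Pi in proj])

rng = np.random.default_rng(1)
outcomes = rng.choice(2, size=100_000, p=probs)    # i.i.d. outcome sequence
freq1 = float(outcomes.mean())                     # empirical Pr(outcome = 1)
```

With $10^5$ repetitions the empirical frequency of outcome $1$ is close to $1-p$, as the law of large numbers guarantees.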
At an information-theoretic level, the von Neumann entropy of the composite quantum systems $A$ and $B$ can be smaller than the von Neumann entropy of the subsystem $B$ alone giving rise to negative conditional entropies. This is, as is well known, impossible for classical Shannon entropies. Kelly defined a log-optimal gambling strategy by applying the law of large numbers to the factor (a random variable) by which Alice's wealth grows in a gamble. Thus, one can loosely claim that Alice's wealth is an exponential function of the number of gambles \cite{kelly-1956}. (We define this more precisely later.) This approach has been developed further with the side information (or a helper) in Ref. \cite{covertom}. The exponent (or the doubling rate if the base of the logarithm is $2$) is a function of payoffs that the casino owner, Charlie, offers for each outcome, outcome probability distribution, and Alice's strategy. When we optimise the strategy under certain conditions, then the entropy (Shannon or von Neumann) appears in the exponent. We note that these entropies (and certain information measures) have deep operational interpretations in classical and quantum information theory (see Ref. \cite{wilde-book} and references therein). In the classical case, Alice chooses how the wealth with which she is gambling is going to be distributed across the various outcomes. As an example for two outcomes, Alice could bet half of her money on each of the outcomes. For the quantum case, Alice can additionally make a choice of the measurement operators. Any classical roulette would be a special case of a quantum roulette. We also consider a case when a helper named Bob is available for the gambler to make more money. Bob has access to a quantum system that is correlated with Alice's quantum system. Bob is broke and has no money to gamble on his system and offers Alice help in two ways. 
In the first variant, Bob reports the measurement outcome to Alice, who now knows the collapsed state of her quantum system and uses this information to further optimise her exponent (or the doubling rate). Alice may or may not have control over the measurement operators applied by Bob. In the second variant, Bob leases out his quantum system to Alice, who then gambles on the composite quantum system consisting of her and Bob's systems. In return, Bob demands a share in Alice's accrued wealth and wants Alice to retain the portion of wealth that she would have accrued by gambling only on her own system, completely ignoring the correlations between the two systems. Bob's argument is that Alice can win more by taking the correlations into account, so it is a win-win situation for both of them: Bob earns money after being broke, and Alice earns more money. Under certain conditions, for the classical gambling, these two variants give rise to the same doubling rates whereas, for quantum gambling, they give rise to different doubling rates whose difference is equal to the quantum discord, a quantity that has been studied in a completely different context \cite{verdal-2001,zurek-2002}. Quantum discord is interpreted as purely the quantum part of the total correlations between the two quantum systems. That these two variants are the same classically in terms of the doubling rate lends support to the above interpretation. Kelly gave another interesting interpretation of the mutual information \cite{kelly-1956}. Suppose Alice, knowing the output of a communication channel, bets on the inputs to the channel; then under certain conditions, the optimised doubling rate is equal to the mutual information. If one extends Kelly's result to the quantum case, one gets a doubling rate in terms of a certain mutual information that is a function of the measurement operators and, using the Holevo bound, is upper bounded by the Holevo information. 
We define a few quantities that will be needed later. The discrete or the Shannon entropy of a random variable $A$ with probability mass function $\pmb{p}^A$ is given by $H(A)_{\pmb{p}}$ $\equiv -\sum_{i=1}^n p_i^A \log\left(p_i^A\right)$. The classical relative entropy from $\pmb{p}$ to $\pmb{q}$ is given by $D\left(\pmb{p} || \pmb{q}\right)$ $\equiv \sum_i p_i \log ( p_i /q_i )$. The von Neumann entropy of system $A$ in state $\rho^A$ is given by $S(A)_\rho \equiv - \tr \rho^A \log \rho^A$. For a composite system $A$ and $B$ in state $\rho^{AB}$, the quantum conditional entropy of $A$ given $B$ is $S(A|B)_{\rho} \equiv S(A,B)_\rho - S(B)_\rho$, and the quantum mutual information is given by $S(A:B)_\rho \equiv S(A)_\rho - S(A|B)_\rho$. \section{Rules of gambling} Let us assume that there are $n$ outcomes of a gambling device. Charlie decides that the payoff for the $i$th outcome is $o_i$-for-$1$, $i=1,...,n$. In other words, if Alice puts down one dollar on outcome $i$ before the gamble, she gets $o_i$ dollars if the outcome is $i$, and gets $0$ dollars if the outcome is not $i$. Alice is allowed to bet on several outcomes in a gamble. There is one and only one winning outcome. Alice receives the payoff from the winning outcome and the bets on other outcomes are lost. Alice can optimise on how she distributes her wealth across the outcomes. She may decide not to bet on some outcomes and to gamble only with a fraction of her money after each gamble and retaining a fraction of money. For quantum gambling, Alice could additionally have control over which measurement operators to use and hence, has some control over the outcome probabilities. We shall assume that the casino never shuts down and that Alice can gamble there as many times as she wants to. 
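Applied to a Bell state, the definitions above exhibit the negative conditional entropy mentioned earlier. A small numerical sketch (ours, with a hand-rolled partial trace; not part of the paper):

```python
import numpy as np

def S_vn(rho):
    """von Neumann entropy S(rho) = -Tr[rho log2 rho], via eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                  # drop numerically-zero eigenvalues
    return float(-np.sum(ev * np.log2(ev)))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)

r4 = rho_AB.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
rho_A = np.einsum('ijkj->ik', r4)        # partial trace over B
rho_B = np.einsum('ijik->jk', r4)        # partial trace over A

S_joint = S_vn(rho_AB)                            # S(A,B) = 0 (pure state)
S_cond  = S_joint - S_vn(rho_B)                   # S(A|B) = S(A,B) - S(B)
S_mut   = S_vn(rho_A) + S_vn(rho_B) - S_joint     # S(A:B)
```

Here $S(A|B) = -1$ bit, which is impossible for classical Shannon entropies.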
For the ease of presentation and analysis, we shall not impose the restriction (common in casinos) that Alice must always gamble with more than a minimum amount of money or that the money can be gambled with and won in integer multiples of the smallest unit of prevailing currency. \subsection{Gambling with a helper} Bob is the helper and has access to a random variable $B$ that is correlated with the outcome of Alice's roulette modelled as a random variable $A$. Bob is broke and doesn't have money to gamble on $B$. Let us take the example of the American roulette with $38$ slots, in one of which the ball must fall. There are two zero slots denoted by $0$ and $00$, and $36$ numbered slots from $1$ to $36$. The bets are placed on the roulette table layout. One game could be defined as betting on single numbers between $1$ and $36$ (also called ``Straight-up inside" bet). Charlie pockets the money if the outcome is either of the two zeros. Another game could be betting on which of the three dozens the outcome falls under (also called ``Dozen outside" bet). The same rule as in the previous game applies for zero outcomes. If $B$ is unknown, then let us assume that $A$ takes values in the set $\{00,0,1,...,36\}$ and all its elements are equally likely to occur. Let us assume that $B$ takes values $0$ or $1$ and if $B$ takes value $0$, then $A$ takes values among the first $19$ elements each with probability $2/57$ and the last $19$ elements each with probability $1/57$, and otherwise, $A$ takes values among the first $19$ elements each with probability $1/57$ and the last $19$ elements each with probability $2/57$. Alice could use this extra information (the outcome of $B$) from Bob to perhaps gamble better. The dependence of two random variables in the classical world is carried over to the quantum world by considering two quantum systems that have non-zero total correlation quantified by quantum mutual information \cite{nielsen-chuang}. 
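The roulette-with-helper example above can be checked numerically. The sketch below (an illustration we add, not part of the paper) verifies that the stated conditional distributions are consistent with the uniform marginal over the $38$ slots, and computes how much Bob's bit lowers the entropy of Alice's outcome:

```python
import numpy as np

def H(p):
    # Shannon entropy in bits.
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log2(p[nz])))

# Outcome set {00, 0, 1, ..., 36}: 38 slots, uniform without Bob's help.
p_A = np.full(38, 1 / 38)

# Conditional distributions of A given B = 0 and B = 1, as in the text.
p_A_given_0 = np.array([2 / 57] * 19 + [1 / 57] * 19)
p_A_given_1 = np.array([1 / 57] * 19 + [2 / 57] * 19)
p_B = np.array([0.5, 0.5])

# Consistency check: averaging the conditionals recovers the uniform marginal.
assert np.allclose(p_B[0] * p_A_given_0 + p_B[1] * p_A_given_1, p_A)

# Conditional entropy H(A|B) and the mutual information I(A;B).
H_A = H(p_A)
H_A_given_B = p_B[0] * H(p_A_given_0) + p_B[1] * H(p_A_given_1)
print(H_A, H_A_given_B, H_A - H_A_given_B)
```

The entropy drop (roughly $0.08$ bits here) is exactly the mutual information between $A$ and $B$, which, as shown later, is the improvement in Alice's doubling rate under uniform fair odds.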
The total correlation can be further broken into a classical correlation and a purely quantum quantity called the quantum discord \cite{verdal-2001,zurek-2002}. Let us consider a quantum system $A$ on which Alice gambles. Alice could have control over how she wants to distribute her wealth across the outcomes and/or what measurement operators she uses. Let us consider a quantum system $B$ that has a non-zero total correlation with $A$. In one variant, Bob provides the outcome of the measurement on $B$ to Alice, who uses it to gamble better. Alice may or may not have control over the measurement operators applied to $B$. We shall see that as long as the classical correlation (see Ref. \cite{verdal-2001}) between $A$ and $B$ is nonzero, Alice can gamble better (the sense in which ``better" is defined will be discussed in greater detail). In another variant, Bob leases out his system $B$ to Alice, who could then gamble in a larger Hilbert space consisting of both the Hilbert spaces of $A$ and $B$. In return, Bob could demand a share in Alice's earnings. Under certain conditions and a reasonable demand by Bob of a share in Alice's earnings, we shall see that as long as the total correlation between $A$ and $B$ is nonzero, Alice can gamble better. \section{Optimisation criterion} \label{sec::crit} The optimisation criterion is described in this section. For the simplicity of presentation, we shall define the criterion for the classical case and the criterion will be extended to the quantum case later. Let us consider a random variable $A$ that describes the outcome on which the bets are placed. At the start of each gamble, $A$ has a distribution \beq \pmb{p}^A = [p_1^A,...,p_n^A], \enq where $n$ is the number of values that $A$ takes and $\Pr\{A = i\} = p_i^A$. Let the payoff for the $i$th outcome be $o_i^A$-for-$1$. 
Let us assume that Alice distributes her wealth according to the probability vector \beq \pmb{q}^A = [q_1^A,...,q_n^A], ~~ \sum_{i=1}^n q_i^A = 1 - q_0^A, ~ q_i^A \geq 0, ~ \forall ~ i=0,...,n, \enq i.e., she puts $q_i^A$ fraction of her wealth on the $i$th outcome, $i=1,...,n$, and $q_0^A$ is the fraction of wealth that Alice retains and does not gamble with. At the start of each gamble, $A$ is prepared in the same state and has probability mass function $\pmb{p}^A$. At the end of a gamble, the factor by which Alice's wealth increases is a random variable denoted by $X^A$, that takes values $q_0^A + q_i^A o_i^A$ with probability $p_i^A$, $i=1,...,n$. Let us assume that for the $j$th gamble, the outcome random variable is denoted by $X^A_j$. After $K$ gambles, Alice's wealth will grow by a factor of \beq S_K^A = \prod_{j=1}^K X^A_j. \enq It follows from our preparation that $X^A_1, X^A_2, ..., X^A_K$ are i.i.d. Kelly applied the weak law of large numbers (see Ref. \cite{feller-book} for example) to the logarithm of $X_j^A$ \cite{kelly-1956}, and it follows that for any $\epsilon > 0$, \beq \lim_{K \rightarrow \infty} \Pr\left[ \left| \frac{1}{K} \sum_{k=1}^K \log(X^A_k) - W_A \right| > \epsilon \right] = 0, \enq where \beq \label{eq_wa} W_A = \langle \log X^A \rangle = \sum_{i=1}^n p_i^A \log\left(q_0^A + q_i^A o_i^A \right) \enq is the doubling rate of Alice's wealth and we assume that the base of $\log$ is $2$ throughout this paper. Hence, for large $K$, \beq S_K^A \stackrel{.}{=} 2^{KW_A}, \enq where $a_K \stackrel{.}{=} b_K$ denotes that, for any $\epsilon > 0$, \beq \lim_{K \rightarrow \infty} \Pr \left[ \left| \frac{1}{K} \log \left( \frac{a_K}{b_K} \right) \right| > \epsilon \right] = 0. \enq We shall assume that Alice wants to optimise the doubling rate $W_A$. Such a strategy is also called the log-optimal strategy. \section{Optimisation of the doubling rate} \label{sec::opt} The optimisation in the classical case is done over $\pmb{q}^A$. 
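The concentration of $\frac{1}{K}\sum_k \log X_k^A$ around $W_A$ can be illustrated with a small Monte-Carlo sketch; the probabilities, odds, and bets below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-outcome gamble: outcome probabilities p, odds o (o_i-for-1),
# Alice's bets q, and retained fraction q0.
p = np.array([0.5, 0.3, 0.2])
o = np.array([2.0, 3.0, 6.0])
q = np.array([0.5, 0.3, 0.2])
q0 = 0.0

# Doubling rate W_A = sum_i p_i log2(q0 + q_i o_i).
W = float(np.sum(p * np.log2(q0 + q * o)))

# Simulate K i.i.d. gambles; each multiplies wealth by q0 + q_i o_i.
K = 200_000
outcomes = rng.choice(3, size=K, p=p)
growth = np.log2(q0 + q[outcomes] * o[outcomes])
print(W, growth.mean())  # the empirical rate concentrates around W
```

The empirical per-gamble log-growth settles near $W_A$, which is the content of the weak-law statement above: wealth grows like $2^{KW_A}$.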
The quantum case is discussed subsequently where Alice could additionally have control over the measurement operators. Kelly identified three regimes based on the payoffs (see Refs. \cite{kelly-1956,covertom}) that are \beq \sum_{i=1}^n \frac{1}{o_i} ~~~~ \begin{array}{ll} < 1, & \mbox{Super-fair odds} \\ = 1, & \mbox{Fair odds} \\ > 1, & \mbox{Sub-fair odds} \end{array} \enq Kelly argued that for fair and super-fair odds, Alice does not need to retain any wealth and can set $q_0^A=0$. This follows by choosing (a suboptimal choice in general) $q_i^A = 1/o_i^A$, $i=1,...,n$, $q_0^A = 1 - \sum_{i=1}^n 1/o_i^A$, and ending after the gamble with $1 + (1 - \sum_{i=1}^n 1/o_i^A)$ times the wealth before the gamble. Since with this suboptimal strategy the wealth increases for super-fair odds and remains the same for fair odds, the wealth cannot decrease if Alice gambles smartly with all her money rather than retaining a part of it. For fair and super-fair odds, we choose $q_0^A=0$ and rewrite $W_A$ in (\ref{eq_wa}) as \beqa \label{doubling-rate} W_A & = & \sum_{i=1}^n p_i^A \log(q_i^A o_i^A). \enqa The optimum doubling rate is \beqa W_A^* & = & \max_{\pmb{q}^A} W_A \\ & = & \sum_{i=1}^n p_i^A \log\left(o_i^A\right) - H(A)_{\pmb{p}} - \min_{\pmb{q}^A} D\left(\pmb{p}^A || \pmb{q}^A\right). \enqa Since $\min_{\pmb{q}^A} D\left(\pmb{p}^A || \pmb{q}^A\right) = 0$ is achieved at $\pmb{q}^A = \pmb{p}^A$, we get, for $\pmb{q}^A = \pmb{p}^A$, \beq W_A^* = \sum_{i=1}^n p_i^A \log\left(o_i^A\right) - H(A)_{\pmb{p}}. \enq Clearly, Alice needs to have an estimate of the probabilities of the measurement outcomes to optimise her wealth's doubling rate. The optimising wealth distribution being equal to the probability distribution of the outcome is also called proportional gambling \cite{covertom}. It is easy to see that for uniform fair odds, i.e., $o_i^A = o$, $i=1,...,n$, \beq W_A^* + H(A)_{\pmb{p}} = \log(o). 
\enq This is known as the conservation theorem: the sum of the doubling rate and the entropy is constant for uniform fair odds \cite{covertom}. In this case, low-entropy gambling devices result in larger doubling rates. For sub-fair odds, Kelly \cite{kelly-1956} found the optimum solution as \beq W_A^* = \gamma D \left( \acute{\pmb{p}}^A || \pmb{\sigma}^A \right) + D \left( [\gamma, 1-\gamma] ~ || ~ [\beta,1-\beta] \right), \enq where $\gamma = \sum_{i \in I} p_i^A$, $\acute{\pmb{p}}^A = \{p_i^A/\gamma\} \Big|_{i \in I}$, $\beta = \sum_{i \in I} 1/o_i^A$, $\sigma_i^A = 1/ \left(\beta o_i^A \right)$, $\pmb{\sigma}^A = \{\sigma_i^A\} \Big|_{i \in I}$, and $I$ is a subset of $\{1,2,...,n\}$ that is uniquely determined by $\beta < 1$, $p_i^A o_i^A > (1-\gamma)/(1-\beta)$ for $i \in I$, $p_i^A o_i^A \leq (1-\gamma)/(1-\beta)$ for $i \notin I$. Optimisation can be done using the Karush-Kuhn-Tucker conditions (see Ref. \cite{bertsekas} for example). Clearly, the expression for the optimum solution is not simple for the case of sub-fair odds. In the rest of the paper, we shall compute the doubling rates for fair or super-fair odds and not for the case of sub-fair odds. We note, however, that the protocols that we give for quantum gambling are independent of the odds and would apply to sub-fair odds as well. \subsection{Optimisation in the presence of a helper} \label{sec::opt-side} Charlie, the casino owner, allows Alice to use a helper named Bob. Bob has access to a gambling device whose outcome random variable $B$ is correlated with that of $A$ (defined in Section \ref{sec::crit}); let their joint probability mass function be $\Pr\{A=i,B=j\}$, $i=1,...,n$, $j=1,...,m$, where $B$ takes $m$ values. 
Bob reports the outcome of his gambling device, say $j$, to Alice, and by the discussion above, the optimum strategy for Alice for fair or super-fair odds is to choose \beq q^{A|B}_{i|j} = \Pr\{A=i | B = j\} = \frac{ \Pr\{A = i, B = j\} } {p_j^B}, ~ i = 1,...,n, \enq where $p_j^B = \Pr\{B=j\}$. With this choice, we get the doubling rate as \beq \label{cond-dr} W_{A|B}^* = \sum_{i=1}^n p_i^A \log\left(o_i^A\right) - \sum_{j=1}^m p_j^B H(A)_{\pmb{p}^{A|B=j}}, \enq where \beq \pmb{p}^{A|B=j} = \left[\Pr\{A=1 | B = j\}, ... , \Pr\{A=n | B = j\} \right]. \enq We note that $\sum_{j=1}^m p_j^B H(A)_{\pmb{p}^{A|B=j}}$ is the conditional entropy of $A$ given $B$. In another variant, Bob leases out his gambling device to Alice, allowing Alice to gamble on both $A$ and $B$. In return, Bob demands that Alice retain a $2^{-K W_B^*}$ fraction of her earnings; the rest would be pocketed by him. Bob's argument is that if Alice gambles optimally on $B$ alone, completely ignoring the correlations, then her wealth increases by an additional factor of $2^{K W_B^*}$, and she can certainly make more by exploiting the correlations. As the computation below shows, for certain choices of the odds, both variants of the help that Bob offers turn out to be the same in terms of the doubling rates. 
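For uniform fair odds, the classical relations above can be verified numerically for an arbitrary joint distribution. The following sketch (our illustration, with a randomly drawn joint pmf) checks that the gain from Bob's report equals the mutual information $I(A;B)$, and that the reporting variant coincides with $W^*_{A,B} - W^*_B$:

```python
import numpy as np

def H(p):
    # Shannon entropy in bits; works on joint (2-D) arrays too.
    p = np.asarray(p, dtype=float).ravel()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log2(p[nz])))

rng = np.random.default_rng(1)
n, m = 4, 3
P = rng.random((n, m))
P /= P.sum()                     # joint pmf of (A, B)
pA, pB = P.sum(axis=1), P.sum(axis=0)

# Uniform fair odds: o_i^A = n, o_j^B = m.
W_A = np.log2(n) - H(pA)                      # gamble on A alone
H_A_given_B = H(P) - H(pB)                    # conditional entropy H(A|B)
W_A_given_B = np.log2(n) - H_A_given_B        # variant 1: Bob reports j
W_AB = np.log2(n * m) - H(P)                  # gamble on (A, B) jointly
W_B = np.log2(m) - H(pB)

# Classically the two variants coincide: W*_{A|B} = W*_{A,B} - W*_B,
# and the gain over gambling alone is the mutual information I(A;B).
print(W_A_given_B - (W_AB - W_B))                     # → 0 (up to rounding)
print((W_A_given_B - W_A) - (H(pA) + H(pB) - H(P)))   # → 0 (up to rounding)
```

Both differences vanish identically, not just for this draw: each reduces to an entropy identity ($H(A|B) = H(A,B) - H(B)$).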
We rewrite $W_{A|B}^*$ from \eqref{cond-dr} as \beqa W_{A|B}^* & = & \sum_{i=1}^n \sum_{j=1}^m p_i^A p_j^B \log\left(o_i^A o_j^B\right) - H(A,B)_{\pmb{p}} - \left[ \sum_{j=1}^m p_j^B \log\left(o_j^B\right) - H(B)_{\pmb{p}} \right], \\ & = & W_{A,B}^* - W_B^*, \enqa where $\pmb{p}^{A,B}=[\Pr\{A=1,B=1\},...,\Pr\{A=n,B=m\}]$, $W_{A,B}^*$ and $W_B^*$ are the optimum doubling rates of the fictitious games that are played over systems with outcome probability distributions given by the joint distribution of $A$ and $B$, and by the distribution of $B$ alone, respectively, and $o_i^A o_j^B$-for-$1$ is the payoff when $(A,B)$ takes the value $(i,j)$ and $o_j^B$-for-$1$ is the payoff when $B$ takes the value $j$. As we shall see, these two variants don't give the same doubling rates for quantum gambling. \section{Quantum gambling} \label{sec:model} Consider a quantum system $A$ that is described by a Hilbert space ${\mathcal{H}}_A$ of dimension $\dim(A)$. At the start of each gamble, Charlie prepares the quantum system $A$ in state $\rho^A$. Alice may not know this state but learns it over repeated gambling. We ignore the time it takes for Alice to learn the state since we assume that the number of times Alice gambles, $K$, is large enough. Clearly, there need to be some checks on Charlie, and the National Gaming Commission sends its representatives to Charlie's casino to ensure that Charlie is honest in stating that the state at the start of each gamble is indeed $\rho^A$. These representatives could employ a scheme described by Blume-Kohout and Hayden in Ref. \cite{kohut-2006} for accurate quantum state estimation via ``keeping the experimentalist honest". The measurement is completely described by $n$ measurement operators $\{E_i^A; i=1,...,n\}$. 
It seems reasonable to assume at this point that $n = \dim(A)$ and the measurement operators are orthogonal projectors ($\left(E_i^A\right)^\dagger = E_i^A$, $E_i^A E_j^A = \delta_{ij} E_i^A$, where $\delta_{ij} = 1$ if $i=j$, and is $0$ otherwise) to rule out the possibility that Alice could win each time unless $\rho^A$ has rank $1$. The measurement operators are complete and hence, $\sum_{i=1}^n E_i^A = \mathbbm{1}$, where $\mathbbm{1}$ is the identity matrix whose dimensions should be clear from the context. The superscript $A$ is added to indicate the quantum system on which gambling is carried out. The probability of the $i$th outcome in the quantum roulette is given by \beq p_i^A = \tr \, \rho^A E_i^A \enq and we define $\pmb{p}^A = [p_1^A,...,p_n^A]$. For the $i$th outcome, the quantum state collapses to $E_i^A \rho^A E_i^A /p_i^A$ after the measurement. We shall assume that for the next gamble, the state is again prepared to be $\rho^A$ and the measurement using $\{E_i^A\}$ is applied again. It now follows easily from the treatment in Section \ref{sec::opt} that \beq W_A^* = \sum_{i=1}^n p_i^A \log\left(o_i^A\right) - H(A)_{\pmb{p}}. \enq Next, we consider the case of uniform fair odds, $o_i^A = o$ for all $i$, where Alice has control over both $\pmb{q}^A$ and $\{E_i^A\}$. The additional choice of measurement operators distinguishes classical and quantum gambling. The optimum doubling rate is given by \beq W_A^{**} = \max_{\{E_i^A\}} \max_{\pmb{q}^A} W_A. \enq It follows from the convexity of $t \mapsto t\log(t)$ that \beqa H(A)_{\pmb{p}} & \geq & S(A)_\rho, \enqa and the equality is achieved if and only if the $E_i^A$ are the projectors onto the eigenvectors of $\rho^A$. Hence, it follows that \beqa \label{flag-1} W_A^{**} & = & \log(o) - S(A)_\rho. \enqa We note here that the doubling rate is larger in general if Alice optimises over both $\pmb{q}^A$ and $\{E_i^A\}$ rather than just optimising over $\pmb{q}^A$ alone. 
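The claim that measuring in the eigenbasis of $\rho^A$ minimises the outcome entropy, and hence maximises the doubling rate under uniform fair odds, can be checked numerically. In this sketch (ours; the qubit state and the basis angle are arbitrary choices made for illustration) we compare the outcome entropy in a rotated basis against the eigenbasis:

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    nz = p > 1e-12
    return float(-np.sum(p[nz] * np.log2(p[nz])))

def S(rho):
    # von Neumann entropy from the eigenvalues.
    return H(np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0))

# An arbitrary qubit state rho^A (chosen for illustration).
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

def outcome_entropy(rho, U):
    """Entropy H(A)_p of a projective measurement whose projectors are
    built from the columns of U: p_i = <u_i| rho |u_i> = tr(rho E_i^A)."""
    probs = [float(np.real(np.conj(U[:, i]) @ rho @ U[:, i]))
             for i in range(U.shape[1])]
    return H(probs)

theta = 1.0  # a rotated (non-eigen) basis
U_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
_, U_eig = np.linalg.eigh(rho)  # columns are eigenvectors of rho

# H(A)_p >= S(A)_rho, with equality in the eigenbasis; hence, for
# uniform fair odds, W_A** = log2(o) - S(A)_rho.
print(outcome_entropy(rho, U_rot), outcome_entropy(rho, U_eig), S(rho))
```

In the eigenbasis the outcome probabilities are exactly the eigenvalues of $\rho^A$, so the outcome entropy equals $S(A)_\rho$; any other basis gives a strictly larger entropy for this state.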
We shall add superscript $*$ on the doubling rates if Alice optimises over the wealth distribution and superscript $**$ if Alice optimises over the wealth distribution as well as the measurement operators. \section{Quantum gambling with a helper} Bob is the helper with access to a quantum system $B$ described by the Hilbert space ${\mathcal{H}}_B$, and at the start of each gamble, the joint state of the composite system $AB$ described by ${\mathcal{H}}_A \otimes {\mathcal{H}}_B$ is $\rho^{AB}$. We shall follow the convention that $\rho^A = \tr_B \, \rho^{AB}$ and $\rho^B = \tr_A \, \rho^{AB}$. As mentioned before, we assume that Bob is broke and doesn't have money to gamble on $B$. We look at two ways in which Bob renders help. \subsection{Bob reports the outcomes to Alice} The protocol is described as follows: \begin{enumerate} \item Bob measures the system $B$ using a complete set of measurement operators $\{F_j^B\}$, $j=1,...,m$, $\sum_{j=1}^m \left(F_j^B\right)^\dagger F_j^B = {\mathrm{I}}$. Note that a priori we place no restrictions on $\{F_j^B\}$, and the measurement need not be projective. Alice may or may not have control over $\{F_j^B\}$. For the $j$th outcome in $B$, the state of system $A$ collapses to \beq \rho^A_j = \frac{1}{\beta_j} \tr_B \, \rho^{AB} \left[ {\mathrm{I}} \otimes \left( F_j^B \right)^\dagger F_j^B \right], \enq where \beq \beta_j = \tr \, \rho^{AB} \left[ {\mathrm{I}} \otimes \left(F_j^B\right)^\dagger F_j^B \right] \enq is the probability of the $j$th outcome of the measurement in system $B$. Bob tells the measurement outcome to Alice. \item Alice uses the measurement outcome in $B$ to distribute her wealth across the outcomes. If Alice has control over the measurement operators as well, Alice could tune these measurement operators depending on the measurement outcome in $B$ and use $\{E_{ij}^{A}\}$ (a projective measurement) for the $j$th outcome in system $B$. 
If Alice has no control over the measurement operators, then $E_{ij}^A = E_i^A$, $j=1,...,m$. \end{enumerate} The probability of the $i$th outcome in system $A$ given the $j$th outcome in system $B$ is given by \beqa \alpha_{i|j}^A & = & \tr \, \rho^A_j E_{ij}^A \\ & = & \frac{1}{\beta_j} \tr \, \rho^{AB} \left( E_{ij}^A \otimes \left(F_j^B\right)^\dagger F_j^B \right) \enqa and let $\pmb{\alpha}_{(j)}^A = \left[ \alpha_{1|j}^A,...,\alpha_{n|j}^A\right]$. We first consider the case where Alice does not control the measurement operators. For this case, $E_{ij}^A = E_i^A$, i.e., a fixed set of measurement operators is applied for each outcome $j$ in system $B$, and \beq \pmb{p}^A = \sum_{j=1}^m \beta_j \pmb{\alpha}_{(j)}^A, \enq where $\pmb{p}^A$ is the probability vector that gives the probability of outcomes in system $A$ without Bob's help. The optimal doubling rate given the $j$th outcome in system $B$ is \beq W_{A|B,j}^* = \sum_{i=1}^n \alpha_{i|j}^A \log\left(o_i^A\right) - H(A)_{\pmb{\alpha}_{(j)}^A}. \enq The overall doubling rate is given by \beqa W_{A|B}^* & = & \sum_{j=1}^m \beta_j W_{A|B,j}^* \\ & = & \sum_{i=1}^n \sum_{j=1}^m \beta_j \alpha_{i|j}^A \log\left(o_i^A\right) - \sum_{j=1}^m \beta_j H(A)_{\pmb{\alpha}_{(j)}^A}. \enqa For uniform fair odds, the increase in doubling rate due to Bob's help is \beq W_{A|B}^* - W_A^* = H\left(\pmb{p}^A\right) - \sum_{j=1}^m \beta_j H(A)_{\pmb{\alpha}_{(j)}^A}, \enq which, from the concavity of the entropy, is always nonnegative. Next, we consider the case where Alice has control over the measurement operators. We note that in this case, $W_{A|B}-W_A$ (note that these are not optimised) can be negative (since, in general, $\pmb{p}^A \neq \sum_{j=1}^m \beta_j \pmb{\alpha}_{(j)}^A$), which is impossible in classical gambling. As an example, consider uniform fair odds and let $\dim(A) = \dim(B) = 2$, $\rho^{AB} = \ket{0}\bra{0} \otimes \rho^B$. 
Although quantum systems $A$ and $B$ in this state have no correlation, it still allows us to give a simple example to prove the above point. Let the measurement operators for $A$ without the help be $\{\ket{0}\bra{0}, \ket{1}\bra{1}\}$. Let the measurement outcome of $B$ be $0$ or $1$ with probability $0.5$ each. If the outcome in $B$ is $0$, then the measurement operators for $A$ are $\{\ket{0}\bra{0}, \ket{1}\bra{1}\}$ and if the outcome in $B$ is $1$, then the measurement operators for $A$ are $\{ \ket{-}\bra{-}, \ket{+}\bra{+}\}$, where $\ket{-} = (\ket{0}-\ket{1})/\sqrt{2}$ and $ \ket{+} = (\ket{0}+\ket{1})/\sqrt{2}$. This choice gives $H(A)_{\pmb{p}} = H(A)_{\pmb{\alpha}_{(1)}^A} = 0$, and $H(A)_{\pmb{\alpha}_{(2)}^A} > 0$. The increase in the optimum doubling rate is given by \beqa W_{A|B}^{**} - W_A^{**} & = & \max_{\{E_{ij}^A\}} \left[ \sum_{i=1}^n \sum_{j=1}^m \beta_j \alpha_{i|j}^A \log\left(o_i^A\right) - \sum_{j=1}^m \beta_j H(A)_{\pmb{\alpha}_{(j)}^A} \right] \nonumber \\ & & ~~~~~ - \max_{\{E_{i}^A\}} \left[ \sum_{i=1}^n p_i^A \log\left(o_i^A\right) - H(A)_{\pmb{p}} \right]. \enqa If Alice has control over $\{F_j^B\}$ as well, it follows for uniform fair odds and from (\ref{flag-1}) that \beq W_{A|B}^{**} - W_A^{**} = \max_{\{F_j^B\}} \left[ S(A)_\rho - \sum_{j=1}^m \beta_j S(A)_{\rho_j} \right]. \enq We note that this quantity is referred to as the classical correlation between the quantum systems $A$ and $B$ \cite{verdal-2001}. Suppose Alice is given a choice where Bob either measures the system $B$ or applies a quantum operation ${\cal F}(\cdot)$ on $B$, and then Alice gambles on system $A$. For uniform fair odds with system $A$ measured in its eigenbasis, measuring $B$ is a better option. 
To see this, note that the state of $A$ after the operation on $B$ is given by \beq {\cal F}(\rho^A) = \tr_B \, \sum_{j=1}^m \left[{\mathrm{I}} \otimes F_j^B \right] \rho^{AB} \left[{\mathrm{I}} \otimes \left( F_j^B\right)^\dagger \right], \enq where $\{F_j^B\}$ are the Kraus operators characterising ${\cal F}(\cdot)$. Invoking the concavity of the von Neumann entropy and assuming that $A$ is measured in its eigenbasis, we get \beq \label{cl_corr} S(A)_{ {\cal F}(\rho^A) } \geq \sum_{j=1}^m \beta_j S(A)_{\rho_j}, \enq where $\rho^A_j$ and $\beta_j$ are the same as above. Hence, in this case, it is better to measure the system $B$ rather than apply a quantum operation on $B$. \subsection{Bob leases out $B$ to Alice} \label{protocol22} The protocol is described as follows: \begin{enumerate} \item Bob leases $B$ to Alice, giving her the freedom to gamble on the composite system $AB$ instead of on $A$ alone. \item In return for this offer, Bob demands that at the end of $K$ gambles, Alice can keep a $2^{-KW_B}$ fraction of her money and the rest will be kept by Bob, where $W_B$ is an achievable doubling rate Alice would get if she had gambled on $B$ alone. Note that Bob could choose $W_B = W_B^*$ or $W_B = W_B^{**}$. The choice of this fraction in the lease agreement has the same motivation as in the classical case in Sec. \ref{sec::opt-side}. \end{enumerate} Let us assume that the measurement operators for the composite system $AB$ are given by $G_{ij}^{AB}$, $i=1,...,n$, $j=1,...,m$. Let the payoff for the output $(i,j)$ be $\left(o_i^A o_j^B\right)$-for-$1$. (Charlie could choose dependent odds for $AB$ as well.) Let $p_{ij}^{AB} = \tr \, \rho^{AB} G_{ij}^{AB}$ and $\pmb{p}^{AB} = [p_{11}^{AB},...,p_{1,m}^{AB},...,p_{n1}^{AB},...,p_{nm}^{AB}]$. Let the measurement operators for $B$ be given by $\{F_j^B\}$, $j=1,...,m$, $p_j^B = \tr \, \rho^{AB} \left({\mathrm{I}} \otimes \left(F_j^B\right)^\dagger F_j^B \right)$, and $\pmb{p}^B = [p_1^B,...,p_m^B]$. 
Let Alice bet a $q_{ij}^{AB}$ fraction of her wealth on the outcome $(i,j)$. After accounting for Bob's share (computed using $W_B = W_B^*$), the doubling rate for Alice's wealth is given by \beqa W_{A|B} & = & \sum_{i=1}^n \sum_{j=1}^m p_{ij}^{AB} \log\left(q_{ij}^{AB} o_i^A o_j^B\right) - W_B^*, \enqa where \beq W_B^* = \sum_{j=1}^m p_j^B \log(o_j^B) - H(B)_{\pmb{p}^B}. \enq We note that in this case, in general, $p_j^B \neq \sum_{i=1}^n p_{ij}^{AB}$. First consider the case where Alice can only control the wealth distribution and not the measurement operators. As per our discussion before, the optimal thing for Alice to do would be to choose $q_{ij}^{AB} = p_{ij}^{AB}$ to get \beq \label{dummy1} W_{A|B}^* = \sum_{i=1}^n \sum_{j=1}^m p_{ij}^{AB} \log\left( o_i^A o_j^B\right) - \sum_{j=1}^m p_j^B \log\left(o_j^B\right) - H(A,B)_{\pmb{p}^{AB}} + H(B)_{\pmb{p}^B}. \enq If Alice also controls the measurement operators, then if Bob chooses $W_B=W_B^{**}$ to compute his share as per the protocol, the doubling rate for Alice's wealth is given by \beq W_{A|B}^{**} = \max_{\{ G_{ij}^{AB} \}} \left[ \sum_{i=1}^n \sum_{j=1}^m p_{ij}^{AB} \log\left( o_i^A o_j^B\right) - H(A,B)_{\pmb{p}^{AB}} \right] - \max_{\{F_j^B\}} \left[ \sum_{j=1}^m p_j^B \log\left(o_j^B\right) - H(B)_{\pmb{p}^B} \right]. \enq In the special case of uniform fair odds of $o$-for-1 for both $A$ and $B$, we get \beqa W_{A|B}^{**} & = & \log(o) - S(A,B)_\rho + S(B)_\rho \\ \label{neg_cond} & = & \log(o) - S(A|B)_\rho. \enqa Unlike the classical case, the quantum conditional entropy can be negative and can result in doubling rates in quantum gambling that cannot be achieved by any classical gambling. As an example, consider a Bell pair \beq \ket{\beta_{00}} = \frac{\ket{00} + \ket{11}}{\sqrt{2}}, \enq and $\rho^{AB} = \ket{\beta_{00}} \bra{\beta_{00}}$. Both $A$ and $B$ are described by a Hilbert space of dimension $2$. 
In this case, $S(A,B)_\rho = 0$ since $AB$ is in a pure state, but $\rho^B = {\mathrm{I}}/2$, and hence, $S(B)_\rho = 1$. So, $S(A|B)_\rho = -1$. Alice computes the change in the doubling rate of her take-home income as \beqa W_{A|B} - W_A & = & \sum_{i=1}^n \sum_{j=1}^m p_{ij}^{AB} \log\left( o_i^A o_j^B \right) - \sum_{j=1}^m p_j^B \log(o_j^B) - \sum_{i=1}^n p_i^A \log(o_i^A) \nonumber \\ & & ~~~~ - H(A,B)_{\pmb{p}^{AB}} + H(B)_{\pmb{p}^B} + H(A)_{\pmb{p}^A}, \enqa where $p_i^A$ is the probability of the $i$th outcome in $A$ and $\pmb{p}^A = [p_1^A,...,p_n^A]$. For $W_B = W_B^{**}$, uniform fair odds of $o$-for-$1$, with Alice having control over the measurement operators, and with the systems $A$, $B$, and $AB$ being measured in their respective eigenbases, we get \beq \label{qu_mi} W^{**}_{A|B} - W^{**}_A = S(A : B)_\rho, \enq where $S(A:B)_\rho \geq 0$ for all $\rho^{AB}$. So, if Alice gambles optimally, she can still make money, or at least not lose money (if $S(A:B)_\rho=0$), despite Bob demanding a share in her earnings. \subsection{Discussion} We first note that although both the variants of using the helper are the same for classical gambling, they are, in general, different for quantum gambling. The difference in the optimum doubling rates of the above two variants (under uniform fair odds, with systems $A$ and $B$ measured in their respective eigenbases, and with Bob demanding that Alice keep only a $2^{-KW_B^{**}}$ fraction of her earnings after $K$ gambles) is given by \beqa D(A \rangle B)_\rho & = & W^{**}_{A|B} \Bigg|_{\text{Variant~2}} - W^{**}_{A|B}\Bigg|_{\text{Variant~1}} \\ & = & S(A:B)_\rho - \max_{\{F_j\}} \left[ S(A)_\rho - \sum_{j=1}^m \beta_j S(A)_{\rho^A_j} \right], \enqa and is called the quantum discord between $A$ and $B$ (see \cite{verdal-2001,zurek-2002} and the citing articles); it has been studied in a completely different context. This quantity is nonnegative and is zero for the classical case. 
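The Bell-pair numbers quoted above, and the negativity of $S(A|B)_\rho$, can be reproduced with a few lines of linear algebra (our sketch; the partial traces assume the ordering $\ket{ab}$ of the computational basis):

```python
import numpy as np

def S(rho):
    # von Neumann entropy in bits.
    ev = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)
    nz = ev > 1e-12
    return float(-np.sum(ev[nz] * np.log2(ev[nz])) + 0.0)

# Bell pair |beta_00> = (|00> + |11>)/sqrt(2) on the composite system AB.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

# Partial traces: reshape to rho[a, b, a', b'] and trace out one subsystem.
rho4 = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('ajbj->ab', rho4)   # trace over B
rho_B = np.einsum('iaib->ab', rho4)   # trace over A

S_AB, S_A, S_B = S(rho_AB), S(rho_A), S(rho_B)
print(S_AB, S_B)            # S(A,B) = 0 (pure state), S(B) = 1
print(S_AB - S_B)           # S(A|B) = -1: negative conditional entropy
print(S_A + S_B - S_AB)     # S(A:B) = 2: the quantum mutual information
```

Both marginals come out maximally mixed, so the leasing variant's rate exceeds the classical ceiling by one bit, while the quantum mutual information of two bits splits into one bit of classical correlation and one bit of discord.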
Ferraro \emph{et al.} have shown that, with probability one, a quantum state chosen at random has a non-zero quantum discord \cite{ferraro-2010}. \subsection{Alternating quantum system of the helper} Charlie may choose to concoct a different protocol to take care of the negative conditional entropy in (\ref{neg_cond}). Charlie prepares the state for each gamble to be $\rho^{ABC}$ in the composite system $ABC$ described by the Hilbert space ${\mathcal{H}}_A \otimes {\mathcal{H}}_B \otimes {\mathcal{H}}_C$. Alice gambles on $A$ as before, while Bob has access to $B$ in an $f$ fraction of the gambles and to $C$ in a $(1-f)$ fraction of the gambles, and Bob knows whether the system he has access to in a given gamble is $B$ or $C$. Bob takes his share of Alice's earnings in both cases as described in Section \ref{protocol22}. It follows that for uniform fair odds of $o$-for-$1$ for $A$, $B$, and $C$, systems $A$, $B$, and $C$ measured in their respective eigenbases, Bob computing his share by choosing $W_B = W_B^{**}$ and $W_C = W_C^{**}$, and assuming that $fK$ is an integer ($K$ is the number of gambles), the doubling rate is given by \beq W^{**}_{A|BC} = \log(o) - f S(A|B)_\rho - (1-f) S(A|C)_\rho. \enq It is clear that such an $f$ exists, since by choosing $f=1/2$ and invoking strong sub-additivity \cite{lieb-ruskai-ssa0-1973,lieb-ruskai-ssa-1973}, we get \beq f S(A|B)_\rho + (1-f) S(A|C)_\rho \geq 0. \enq \subsection{Quantum extension of Kelly's setup} Kelly defined another scenario wherein the gambler, on receiving the output of a channel, bets on the input that was transmitted \cite{kelly-1956}. Under certain parameters, Kelly showed that the doubling rate is equal to the mutual information between the input and the output of the channel, whose maximum over the input probability distribution, as is well known, is the capacity of the channel \cite{covertom}. It is not difficult to present a quantum extension of Kelly's setup as follows. 
We first discuss Kelly's classical setup. Let $p_i$ be the probability of the $i$th input to the channel, $p_{j|i} = \Pr\{ \text{Output} = j | \text{Input} = i\}$, and $o_i$-for-$1$ the odds for the $i$th input, where $o_i = 1/p_i$. It is not difficult to show that, for fair or super-fair odds, Alice achieves the maximum doubling rate by betting a $q_{i|j} = \Pr\{\text{Input} = i| \text{Output} = j\}$ fraction of her wealth on the $i$th input given that the output is $j$, and that this rate is given by the mutual information between the input and the output. The doubling rate from Eq. \eqref{doubling-rate} for a given measurement outcome $j$ is given by \begin{align} W_j & = \sum_i q_{i|j} \log\left(q_{i|j} o_i \right) \\ & = \sum_i q_{i|j} \log\left(\frac{q_{i|j}}{p_i} \right), \end{align} and the overall doubling rate is \beq W = \sum_j q_j W_j = \sum_{i,j} q_j q_{i|j} \log\left(\frac{q_{i|j}}{p_i} \right), \enq where $q_j = \Pr\{\text{Output} =j\}$; this is the mutual information between the input and the output. A quantum extension of Kelly's setup is as follows. For the $i$th input that is chosen with probability $p_i$, a density matrix $\rho_i^A$ is generated and is measured using the POVM $\{\Lambda_j^A\}$. Hence, the probability of the $j$th outcome given the $i$th input is \beq p_{j|i} = \tr \, \rho_i^A \Lambda_j^A. \enq Continuing with Kelly's optimisation, we get the doubling rate as the mutual information, which, using the Holevo bound (see Refs. \cite{wilde-book, nielsen-chuang} for example), is upper bounded by the Holevo information given by \beq \chi(\{p_i,\rho_i\}) = S(A)_{\sum_i p_i \rho_i^A} - \sum_i p_i S(A)_{\rho_i^A}. \enq \subsection{Further variants} We note here that several other variants can be concocted. For example, one could put a classical communication channel with the helper's information as the input and Alice receiving the output. Alice receives the noisy information and could process it and then use it for gambling. 
In another variant, Alice could choose to first apply one of the available (as provided by Charlie) quantum operations on the state and then gamble on the new state (see also Ref. \cite{goldenberg-1999}). It should be apparent that such protocols can be analysed for a specific choice of the channel in the former case, and of the restricted set of quantum operations provided by Charlie in the latter case. \section{Conclusions} We studied the log-optimal strategies for some quantum gambling protocols. We first considered the case where Alice gambles on a quantum system (or roulette) by distributing her wealth across the outcomes and/or by choosing the measurement operators. Next, we considered the case where Charlie allows Alice to have a helper Bob with access to another quantum system that is correlated with Alice's quantum roulette, and we considered two variants. In one variant, Bob reports the measurement outcome on his system to Alice, who uses it to gamble better. In another variant, Alice gambles on the composite system consisting of her quantum roulette and Bob's system, and Bob takes a pre-specified cut of Alice's wealth in return. The difference in the doubling rates of these two variants is the quantum discord, which is zero for classical roulettes. We also considered a quantum extension of Kelly's setup. Finally, we considered the case of an alternating quantum system of the helper. Quantum gambling can have purely quantum effects that cannot be replicated by any classical roulette. \bibliographystyle{unsrturl} \bibliography{master} \end{document}
119,054
TITLE: Is every locally compactly generated space compactly generated? QUESTION [9 upvotes]: [Parse it as (locally compact)ly generated.] I stumbled across this one whilst supervising an undergraduate thesis. Convenient categories for homotopy theory (e.g. CGWH) have been discussed here before. As an alternative to CGWH, Rainer Vogt proposed the category of locally compactly generated spaces; see also the recent German-language point-set topology textbook Grundkurs Topologie by Gerd Laures and Markus Szymik. If (as is now usual) one means (compact Hausdorff)ly generated when one says compactly generated, then the category of compactly generated spaces is a full subcategory of the category of locally compactly generated spaces. Back in 1971, Vogt asked whether this inclusion is strict or not. Do we know the answer yet? REPLY [8 votes]: The paper "A distinguishing example in k-spaces" by John Isbell constructs an example of a locally compact space $X$ which is not compact-Hausdorffly generated.
Team OHL Roster – Ottawa, ON – Nov.
For more information about the event please visit
2011 SUBWAY® Super Series Schedule of Games:
Game 1 – Monday November 7 at Victoriaville, QC
Game 2 – Wednesday November 9 at Quebec City, QC
Game 3 – Thursday November 10 at Ottawa, ON
Game 4 – Monday November 14 at Sault Ste. Marie, ON
Game 5 – Wednesday November 16 at Regina, SK
Game 6 – Thursday November 17 at Moose Jaw, SK
Worship Insights With Brian Doerksen
Themes: Artist, Interview, Worshipper
Brian Doerksen skillfully puts tools in the hands of today’s worship leader. Pastors and worshipers will appreciate his keen insights into the worshiping life. Ultimately, Brian’s words focus us again on the Faithful One, who invites us into His presence. The Brian Doerksen course audio set includes a study progression, discussion questions, and options for a creative project.
TITLE: Why does an infinite limit not exist? QUESTION [52 upvotes]: I read in Stewart "single variable calculus" page 83 that the limit $$\lim_{x\to 0}{1/x^2}$$ does not exist. How precise is this statement, given that this limit is $\infty$? I thought that saying the limit does not exist is reserved for functions that have no limit at all, like $$\lim_{x\to \infty}{\cos x}.$$ REPLY [0 votes]: Generally, we have $\lim_{n \rightarrow \infty} x(n) = y \leftrightarrow \forall \epsilon>0: \exists N\in \mathbb{N} : \forall n > N :|x(n) - y|<\epsilon$. But, for example, for $x(n)=n$ it is not true that $\lim_{n \rightarrow \infty} n = \infty$ in this sense, because that would require $\forall \epsilon>0: \exists N\in \mathbb{N} : \forall n > N :|x(n) - \infty|<\epsilon$; yet $|x(n) - \infty|=\infty$, which can never be less than $\epsilon$, so $x(n)=n$ does not converge to $\infty$ under this definition. Real numbers are colloquially defined as limits of sequences, and I have just shown that $\infty$ is not a real number.
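As a numerical illustration of the two behaviours discussed above (a sketch only, not a proof; all sample points and bounds are arbitrary choices), one can check that $1/x^2$ eventually exceeds any bound $M$ near $0$, while $\cos x$ keeps returning to values near both $+1$ and $-1$ arbitrarily far out:

```python
import math

# Numerical illustration (not a proof): 1/x^2 exceeds any bound M as x -> 0,
# which is what "the limit is infinity" means, while cos(x) keeps oscillating
# as x grows, so it has no limit, finite or infinite.

def f(x):
    return 1.0 / x**2

# For each bound M, delta = 1/sqrt(M) gives f(x) > M whenever 0 < |x| < delta.
for M in (1e2, 1e6, 1e10):
    delta = 1.0 / math.sqrt(M)
    assert f(delta / 2) > M

# cos keeps taking values near both +1 and -1 arbitrarily far out.
peaks = [math.cos(2 * math.pi * n) for n in range(1000, 1010)]
troughs = [math.cos(math.pi * (2 * n + 1)) for n in range(1000, 1010)]
assert all(v > 0.99 for v in peaks) and all(v < -0.99 for v in troughs)
```

The same checks work for any larger bound $M$ or any window further out, which is exactly the difference between "diverges to $\infty$" and "has no limiting behaviour at all".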
\begin{document} \title {Exit problems for oscillating compound Poisson process} \date{ } \author { Tetyana Kadankova \thanks{Vrije Universiteit Brussel, Department of Mathematics, Building G, Brussels, Belgium, \newline e-mail: tetyana.kadankova@vub.ac.be}} \maketitle \noindent {\bf Key words:} oscillating process, scale function, exit from interval \noindent{\bf Running head:} Exit problems for oscillating compound Poisson process \\ {\bf 2000 Mathematics subject classification: } 60G40; 60K20 \begin{abstract} In this article we determine the Laplace transforms of the main boundary functionals of the oscillating compound Poisson process. These are the first passage time of a level, and the joint distribution of the first exit time from an interval and the value of the overshoot through the boundary. In the case when $\bold E\xi_{i}(1)=0, $ $ \sigma_{i}^{2}=\bold E\xi_{i}(1)^{2}<\infty,$ $ i=1,2, $ we prove limit theorems for these functionals. \end{abstract} {\bf\Large Introduction} \par\bigskip Oscillating random walks with two switching levels were considered in \cite{lotov 1996}, \cite{lotov 2003}, \cite{lotov 2004}. The authors derived the Laplace-Stieltjes transforms of the distributions of the random walks in transient and stationary regimes. In addition, the asymptotic analysis of the stationary distribution was performed. This article studies the so-called one- and two-sided exit problems for an oscillating compound Poisson process. More specifically, we determine the Laplace transforms of the following boundary characteristics: the first passage time of a boundary, and the joint distribution of the first exit time from an interval and the value of the overshoot at this instant. The results are given in closed form, namely in terms of functions involving the scale functions of the auxiliary processes $ \xi_{i}(t),$ $ i=1,2$ (see below for a definition). 
The motivation of this study stems from the fact that these processes serve as governing processes for certain oscillating queueing systems. Examples of such systems are queueing models in which the service speed or the customer arrival rate changes depending on the workload level, and dam models in which the release rate depends on the buffer content (see \cite{Pacheco} and references therein). To solve the two-sided exit problem, we use a probabilistic approach borrowed from \cite{2Ka2}. The rest of the article is structured as follows. In Section 1 we introduce the process and determine the boundary characteristics of the auxiliary processes. Section 2 deals with the one-boundary characteristics of the oscillating process. In Section 3 we determine the joint distribution of the first exit time from the interval and the value of the overshoot. The asymptotic results under the conditions $\bold E\xi_{i}(1)=0, $ $\sigma_{i}^{2}=\bold E\xi_{i}(1)^{2}<\infty,$ $ i=1,2 $ are given in Section 4. \section{Preliminaries} In this section we introduce the process of interest and the auxiliary processes. Further, we determine the Laplace transforms of the first passage time and the first exit time for the auxiliary processes. These results will be used to solve the two-sided exit problem for the oscillating compound Poisson process. Let $\{\xi_{i}(t); \: t \ge 0\}, $ $ i=1,2 $ be real-valued compound Poisson processes, semi-continuous from below: $$ \xi_{i}(t)= \sum\limits_{k=0}^{N_{i}(t)}\xi^{i}_k- a_it, \quad t \ge 0, \quad i=1,2, $$ where $\xi^{i}_0 =0,$ $\xi^{i}_k\sim \xi^{i}>0 $ are independent identically distributed random variables with distribution function $F_i(x);$ $ \{N_i(t); t \ge 0\}, \quad N_i(0)=0$ is an ordinary Poisson process with parameter $\lambda_{i}$ independent of $ \{\xi^i_k; \: k \ge 0\}; $ and $a_i >0$ is a drift coefficient. 
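As a quick Monte Carlo sanity check of this definition (an illustrative sketch, not part of the paper; the Exp($\mu$) jump law and all parameter values are assumptions made for the example), note that for Exp($\mu$) jumps one has $\bold E\xi_{i}(1)=\lambda_{i}/\mu-a_{i}$ and $\mathrm{Var}\,\xi_{i}(1)=2\lambda_{i}/\mu^{2}$:

```python
import numpy as np

# Monte Carlo sketch of one compound Poisson process with negative drift,
#   xi(t) = sum_{k=1}^{N(t)} xi_k - a*t,
# with Exp(mu) jumps (an illustrative choice; the paper allows any positive
# jump law F_i).  Then E xi(1) = lam/mu - a and Var xi(1) = 2*lam/mu**2.

rng = np.random.default_rng(0)
lam, mu, a, n = 2.0, 1.0, 1.5, 200_000

jumps = rng.poisson(lam, size=n)               # N(1) ~ Poisson(lam)
# Sum of k iid Exp(mu) variables is Gamma(k, 1/mu); shape 0 yields 0.
sums = rng.gamma(shape=jumps, scale=1.0 / mu)
xi1 = sums - a                                 # samples of xi(1)

assert abs(xi1.mean() - (lam / mu - a)) < 0.03
assert abs(xi1.var() - 2 * lam / mu**2) < 0.15
```

Sampling $N(1)$ first and then a Gamma sum avoids simulating individual jump times when only the marginal law of $\xi_i(1)$ is needed.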
Their Laplace transforms are then given by $ \bold E e^{-z\xi_{i}(t)}=e^{tk_{i}(z)},$ where $$ k_{i}(z)=a_{i}z+\lambda_{i}\int_{0}^{\infty}\left(e^{-xz}-1\right)dF_{i}(x), \qquad \Re(z)=0. $$ We now introduce the one-boundary characteristics of the processes. Denote by $$ \tau_{i}^{-}(x)=\inf\{t:\xi_{i}(t)\le-x\}, \qquad x\ge 0 $$ the first passage time of the lower level $-x,$ and by $$ \tau_{i}^{+}(x)=\inf\{t:\xi_{i}(t)>x\}, \quad T_{i}^{+}(x)=\xi_{i}(\tau_{i}^{+}(x))-x $$ the first crossing time of the level $x$ and the value of the overshoot through this level. By definition we set $ \inf\emptyset=\infty.$ Note that, since the process $\xi_{i}(t)$ has only positive jumps, the negative level $-x$ is reached continuously; hence, the value of the overshoot at this level is zero. For a fixed $ b>0 $ and all $x\in\mathbb R, $ $ t\ge0 $ introduce the process $ \xi(x,t)\in \mathbb R,$ $ \xi(x,0)=x$ by means of the following recurrence relations: \begin{align} \label{opp1} & \xi(x,t)= \left\{ \begin{array}{l} x+\xi_{2}(t), \quad 0\le t< \tau_{2}^{-}(x-b), \\ \xi(b,t-\tau_{2}^{-}(x-b)), \quad t\ge \tau_{2}^{-}(x-b), \end{array} \right. \qquad x>b, \end{align} \begin{align*} & \xi(x,t)= \left\{ \begin{array}{l} x+\xi_{1}(t), \quad 0\le t< \tau_{1}^{+}(b-x),\notag \\ \xi(b+T_{1}^{+}(b-x),t-\tau_{1}^{+}(b-x)), \quad t\ge \tau_{1}^{+}(b-x), \end{array} \right. \qquad x\le b. \end{align*} Let us explain how the process evolves. Observe that $ b $ is a switching level of the process $\xi(x,t), $ $t\ge0. $ If $\xi(x,t_{0}) >b,$ then the increments of the process coincide with the increments of the process $ \xi_{2}(t-t_{0})$ up to the first passage of $b. $ If $\xi(x,t_{0}) \le b,$ then the increments of the process coincide with the increments of the process $ \xi_{1}(t-t_{0})$ up to the first crossing of $ b. 
$ To derive the Laplace transforms of the one-boundary characteristics of the processes $\xi_{i}(t), $ we will need the notion of the resolvent of a compound Poisson process. Introduce the resolvents $ R_{i}^{s}(x),$ $x\ge0 $ \cite{SuSh} of the processes $ \xi_{i}(t), $ $ t\ge0, $ by means of their Laplace transforms: $$ \int_{0}^{\infty}e^{-xz}R_{i}^{s}(x)\,dx = (k_{i}(z)-s)^{-1}, \quad \Re(z)>c_{i}(s),\quad R_{i}^{s}(x)=0, \; x<0, $$ where $ c_{i}(s)>0,$ $s>0$ is the unique root of the equation $k_{i}(z)-s=0,$ $i=1,2$ in the semi-plane $\Re(z)>0. $ Note that $ R_{i}^{s}(0)=a_{i}^{-1}>0.$ The resolvent defined in \cite{SuSh} is called a scale function in modern literature (see \cite{Kyprianou2006} for more details). The importance of scale functions lies in the fact that a whole range of fluctuation identities for spectrally one-sided L\'{e}vy processes can be expressed in terms of them. Scale functions are also an important working tool in risk insurance, more specifically, in optimal barrier strategies. In the rest of the article we will use the term resolvent. Denote by \begin{align*} \underline{m}_{i}^{x}(s)= \bold E\left[e^{-s\tau_{i}^{-}(x)} \right],\qquad \overline{m}_{i}^{x}(z,s)= \bold E\left[e^{-s\tau_{i}^{+}(x)-zT_{i}^{+}(x)} \right],\quad \Re(z)\ge0 \end{align*} the Laplace transforms of the first passage time of the negative level $-x$ and of the joint distribution of the first crossing time of the level $x$ and the value of the overshoot. The lemma below contains the expressions for these Laplace transforms. Observe that these results are valid for L\'{e}vy processes whose Laplace exponent is given by (\ref{opp28}). 
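The root $c_{i}(s)$ is easy to compute numerically. The sketch below (illustrative parameters only, not from the paper; Exp($\mu$) jumps are assumed, so that $k(z)=az+\lambda(\mu/(\mu+z)-1)$) finds it by bisection: for these values $k$ is convex with $k(0)=0$ and $k'(0)=a-\lambda/\mu>0$, so the positive root of $k(z)=s$ is unique, and for $a=1.5,\ \lambda=\mu=1,\ s=0.5$ one can check by hand that $c(s)=1/\sqrt{3}$:

```python
import math

# Laplace exponent of xi(t) = compound Poisson(lam, Exp(mu) jumps) - a*t
# (illustrative parameters; any positive jump law is allowed in the paper)
a, lam, mu = 1.5, 1.0, 1.0

def k(z):
    return a * z + lam * (mu / (mu + z) - 1.0)

def c(s, lo=0.0, hi=100.0, tol=1e-13):
    # bisection: k(0) - s = -s < 0, k(hi) - s > 0, and k is increasing
    # past the root, so the sign change brackets the unique positive root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if k(mid) - s < 0 else (lo, mid)
    return 0.5 * (lo + hi)

s = 0.5
root = c(s)
assert abs(k(root) - s) < 1e-9               # it solves k(z) = s
assert abs(root - 1 / math.sqrt(3)) < 1e-9   # hand-computed value here
```

The same routine gives $c_{1}(s)$ and $c_{2}(s)$ separately when the two regimes have different parameters.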
\begin{lemma} For $s\ge0,$ $i=1,2$ the following equalities are valid: \begin{align} \label{opp2} & \underline{m}_{i}^{x}(s)=e^{-xc_{i}(s)}, \\ & \overline{m}_{i}^{x}(z,s)= e^{xz}-R_{i}^{s}(x)\,\frac{k_{i}(z)-s}{z-c_{i}(s)} -(k_{i}(z)-s)e^{xz}\int_{0}^{x}e^{-uz}R_{i}^{s}(u)\,du.\notag \end{align} \end{lemma} Note that the first equality of (\ref{opp2}) is well known (see for instance \cite{Zolotarev 1964}). The proof of the second relation is given in the appendix. We now consider the two-sided exit problem for the auxiliary processes. For $d_{i}>0, $ $x\in[0,d_{i}]$ denote by $$ \chi_{i,x}^{d_{i}}=\inf\{t:x+\xi_{i}(t)\notin[0,d_{i}]\}, \qquad i=1,2 $$ the first exit time from the interval $ [0,d_{i}] $ by the process $x+\xi_{i}(t).$ Introduce the events $\overline{A}_{i}=\{x+\xi_{i}(\chi_{i,x}^{d_{i}})> d_{i}\} $ (the exit from the interval occurs through the upper boundary) and $ \underline{A}_{i}=\{x+\xi_{i}(\chi_{i,x}^{d_{i}})\le0\} $ (the exit from the interval occurs through the lower boundary). By $ T_{i}(x)=(x+\xi_{i}(\chi_{i,x}^{d_{i}})-d_{i})\bold I_{\overline{A}_{i}} +0\cdot \bold I_{\underline{A}_{i}} $ we denote the value of the overshoot at the instant of the first exit. Here $ \bold I_{A} $ is the indicator of the event $ A. $ Introduce the Laplace transforms \begin{align*} \underline{V}_{i,x}^{d_{i}}(s)= \bold E\left[e^{-s\chi_{i,x}^{d_{i}}};\underline{A}_{i} \right],\qquad \overline{V}_{i,x}^{d_{i}}(z,s)= \bold E\left[e^{-s\chi_{i,x}^{d_{i}}-zT_{i}(x)};\overline{A}_{i} \right], \quad \Re(z)\ge0. \end{align*} \begin{lemma} For $s\ge0,$ $i=1,2$ the following equalities hold: \begin{align} \label{opp3} & \underline{V}_{i,x}^{d_{i}}(s) =\frac{R_{i}^{s}(d_{i}-x)}{R_{i}^{s}(d_{i})},\notag \\ & \overline{V}_{i,x}^{d_{i}}(z,s)= \overline{m}_{i}^{d_{i}-x}(z,s) - \frac{R_{i}^{s}(d_{i}-x)}{R_{i}^{s}(d_{i})}\, \overline{m}_{i}^{d_{i}}(z,s). 
\end{align} \end{lemma} Note that the first relation of the lemma was derived in \cite{Kr18} for a compound Poisson process, and in \cite{Suprun 1976} for a spectrally one-sided L\'evy process (\ref{opp28}). To verify the second relation, we make use of the following equation: \begin{align*} & \bold E\left[e^{-s\tau_{i}^{+}(d_{i}-x)};T_{i}^{+}(d_{i}-x)\in du \right]= \bold E\left[e^{-s\chi_{i,x}^{d_{i}}};T_{i}(x)\in du,\overline{A}_{i}\right]+\\ & + \bold E\left[e^{-s\chi_{i,x}^{d_{i}}};\underline{A}_{i} \right] \bold E\left[e^{-s\tau_{i}^{+}(d_{i})};T_{i}^{+}(d_{i})\in du \right], \qquad x\in[0,d_{i}]. \end{align*} The latter was derived for spectrally one-sided L\'{e}vy processes (\ref{opp28}) in \cite{Kad6}, \cite{Kad3}, and for general L\'{e}vy processes in \cite{2Ka2}. Now plugging in the expression for $\bold E\left[e^{-s\chi_{i,x}^{d_{i}}};\underline{A}_{i} \right]$ (the first equality of the lemma), we obtain the second statement of the lemma. \section{One-boundary characteristics of the process $\xi(x,t)$} In this section we derive the Laplace transforms of the one-boundary characteristics of the process and study their asymptotic behavior. Let us formally define the one-boundary functionals of the process $ \xi(x,t),$ $ t\ge 0.$ For $ r\le\min\{x,b\} $ denote by \begin{align*} \underline\tau_{r}^{x}(b)=\inf\{t:\xi(x,t)\le r\}, \qquad \underline f_{r}^{x}(s)= \bold E\left[ e^{-s\underline\tau_{r}^{x}(b)}; \underline\tau_{r}^{x}(b)<\infty\right], \end{align*} the first passage time of the level $r$ by the process $\xi(x,t)$ and its Laplace transform. 
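The first identity of (\ref{opp2}), $\underline{m}_{i}^{x}(s)=e^{-xc_{i}(s)},$ can also be checked by direct simulation. The sketch below is illustrative only (Exp(1) jumps, $a=1.5$, $\lambda=1$, for which one can verify by hand that $c(0.5)=1/\sqrt{3}$); the event-driven simulation is exact because the level $-x$ is reached continuously between jumps:

```python
import math
import numpy as np

# Monte Carlo sketch of E[exp(-s * tau^-(x))] = exp(-x * c(s)) for one
# spectrally positive compound Poisson process with drift -a (Exp(1) jumps,
# illustrative parameters; c(0.5) = 1/sqrt(3) for these values).
rng = np.random.default_rng(1)
a, lam, mu = 1.5, 1.0, 1.0
x, s = 1.0, 0.5
c = 1.0 / math.sqrt(3.0)
assert abs(a * c + lam * (mu / (mu + c) - 1.0) - s) < 1e-12  # k(c) = s

def tau_minus(x):
    # exact event-driven simulation: the path drifts down at rate a between
    # Exp(lam) jump epochs, so the level -x is hit continuously mid-flight
    t, pos = 0.0, 0.0
    while True:
        w = rng.exponential(1.0 / lam)      # waiting time to the next jump
        if pos - a * w <= -x:               # level -x reached before the jump
            return t + (pos + x) / a
        t += w
        pos += -a * w + rng.exponential(1.0 / mu)

est = np.mean([math.exp(-s * tau_minus(x)) for _ in range(20_000)])
assert abs(est - math.exp(-x * c)) < 0.02
```

Since $\bold E\xi(1)=\lambda/\mu-a=-0.5<0$ here, the passage time is finite almost surely and the plain average needs no truncation.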
For $ k\ge\max\{x,b\} $ denote by \begin{align*} \overline\tau_{x}^{k}(b)=\inf\{t:\xi(x,t)> k\}, \qquad \overline T_{x}^{k}= \xi(x,\overline\tau_{x}^{k}(b))-k \end{align*} the first crossing time of the level $ k $ and the value of the overshoot by the process $\xi(x,t).$ The variables $\underline\tau_{r}^{x}(b), \:\overline\tau_{x}^{k}(b),\: \overline T_{x}^{k} $ are called the one-boundary characteristics of the process. Introduce \begin{align*} \overline f_{x}^{k}(s)= \bold E\left[ e^{-s\overline\tau_{x}^{k}(b)}; \overline\tau_{x}^{k}(b)<\infty\right],\quad \overline f_{x}^{k}(z,s)= \bold E\left[ e^{-s\overline\tau_{x}^{k}(b)-z\overline T_{x}^{k}}; \overline\tau_{x}^{k}(b)<\infty\right]. \end{align*} For $ s\ge0 $ define the function $ K_{x}^{s}(u), $ $ x\in\mathbb{R}, $ $ u\ge0, $ by means of its Laplace transform $\mathbb{K}_{x}^{s}(z): $ \begin{align} \label{opp4} \mathbb{K}_{x}^{s}(z)= \int_{0}^{\infty}e^{-uz}K_{x}^{s}(u)\,du= \frac{k_{1}(z)-s}{k_{2}(z)-s} \int_{0}^{\infty}e^{-uz}R_{1}^{s}(x+u)\,du, \end{align} where $ \Re(z)>\max\{c_{1}(s),c_{2}(s)\}.$ Note that it follows from the definition (\ref{opp4}) that for $ x\le 0 $ we have $ \mathbb{K}_{x}^{s}(z)=e^{xz}(k_{2}(z)-s)^{-1} $ and $ K_{x}^{s}(u)=R_{2}^{s}(x+u). $ For a fixed $ s\ge0 $ define the function $ F_{s}(u), $ $ u\ge0 $ by means of its Laplace transform $\mathbb{F}_{s}(z): $ \begin{align} \label{opp5} \mathbb{F}_{s}(z)= \int_{0}^{\infty}e^{-uz}F_{s}(u)\,du = \frac{1}{z-c_{1}(s)}\,\frac{k_{1}(z)-s}{k_{2}(z)-s}, \qquad \Re(z)> c_{2}(s). 
\end{align} \begin{theorem} \label{topp1} The Laplace transforms of $\underline\tau_{r}^{x}(b),$ $\overline\tau_{x}^{k}(b) $ and of the joint distribution of $ \{\overline\tau_{x}^{k}(b),\overline T_{x}^{k}\} $ are such that for $ s\ge0 $ \begin{align} \label{opp6} \underline f_{r}^{x}(s) & =\frac{C_{1}^{b-x}(c_{2}(s),s)}{C_{1}^{b-r}(c_{2}(s),s)}, \qquad r\le\min\{x,b\}, \end{align} \begin{align} \label{opp7} \overline f_{x}^{k}(s) & = 1+s\int_{0}^{b-x}R_{1}^{s}(u)\,du + s\int_{0}^{d_{2}}K_{b-x}^{s}(u)\,du-\notag\\ & - \frac{K_{b-x}^{s}(d_{2})}{F_{s}(d_{2})} \left(\frac{s}{c_{1}(s)}+s\int_{0}^{d_{2}}F_{s}(u)\,du\right), \qquad k\ge\max\{x,b\}, \end{align} \begin{align} \label{opp8} & \overline f_{x}^{k}(z,s) = e^{zd_{2}}(k_{2}(z)-s) \left(\mathbb{K}_{b-x}^{s}(z) -\int_{0}^{d_{2}}e^{-uz}K_{b-x}^{s}(u)\,du\right)-\\ & - \frac{K_{b-x}^{s}(d_{2})}{F_{s}(d_{2})}\, e^{zd_{2}}(k_{2}(z)-s) \left(\mathbb{F}_{s}(z) -\int_{0}^{d_{2}}e^{-uz}F_{s}(u)\,du\right),\quad k\ge\max\{x,b\}, \notag \end{align} where $ d_{2}=k-b,$ $ C_{i}^{x}(z,s)= e^{zx},$ $ x<0,$ $$ C_{i}^{x}(z,s)=e^{zx}\left(1-(k_{i}(z)-s) \int_{0}^{x}e^{-uz}R_{i}^{s}(u)\,du\right),\quad x\ge 0. $$ \end{theorem} \begin{corollary} \label{copp1} Let $ k_{1}(z)=k_{2}(z)=k(z). $ Then \begin{align} \label{opp9} & \underline f_{r}^{x}(s) = e^{-(x-r)c(s)},\qquad r\le x,\notag\\ & \overline f_{x}^{k}(s) = 1+s\int_{0}^{k-x}R^{s}(u)\,du - \frac{s}{c(s)}R^{s}(k-x),\qquad k\ge x,\\ & \overline f_{x}^{k}(z,s) = e^{(k-x)z} \left(1- (k(z)-s)\int_{0}^{k-x}e^{-uz}R^{s}(u)\,du\right) - R^{s}(k-x)\,\frac{k(z)-s}{z-c(s)}, \notag \end{align} where $ R^{s}(x),$ $x\ge0 $ is the resolvent of the process $ \xi(t)=\xi_{i}(t); $ $ c(s)>0,$ $s>0$ is the unique root of the equation $k(z)=s$ in the semi-plane $\Re(z)>0. $ \end{corollary} \begin{corollary} \label{copp2} Assume that the conditions $ (A): $ $\bold E\xi_{i}(1)=0, $ $\sigma_{i}^{2}=\bold E\xi_{i}(1)^{2}<\infty$ are satisfied. 
Then the following limiting equalities are valid: \begin{align*} & \lim_{B\to\infty} \bold E\left[ e^{-s\overline\tau^{kB}_{xB}(bB)/B^{2}} \right]= \frac{\sigma_{1}e^{-(b-x)s_{1}}} {\sigma_{1}\ch((k-b)s_{2})+ \sigma_{2}\sh((k-b)s_{2})}, \qquad x\le b,\\ & \lim_{B\to\infty} \bold E\left[ e^{-s\overline\tau^{kB}_{xB}(bB)/B^{2}} \right]= \frac{\sigma_{1}\ch((x-b)s_{2})+\sigma_{2}\sh((x-b)s_{2})} {\sigma_{1}\ch((k-b)s_{2})+\sigma_{2}\sh((k-b)s_{2})}, \qquad x\in[b,k]; \end{align*} \begin{align*} & \lim_{B\to\infty} \bold E\left[ e^{-s\underline\tau^{xB}_{rB}(bB)/B^{2}} \right]= \frac{\sigma_{2}e^{-(x-b)s_{2}}} {\sigma_{1}\sh((b-r)s_{1})+\sigma_{2}\ch((b-r)s_{1})}, \qquad x\ge b,\\ & \lim_{B\to\infty} \bold E\left[ e^{-s\underline\tau^{xB}_{rB}(bB)/B^{2}} \right]= \frac{\sigma_{1}\sh((b-x)s_{1})+\sigma_{2}\ch((b-x)s_{1})} {\sigma_{1}\sh((b-r)s_{1})+ \sigma_{2}\ch((b-r)s_{1})}, \qquad x\in[r,b], \end{align*} where $s_{i}=\sqrt{2s}/\sigma_{i},$ $ i=1,2, $ $k\ge\max\{x,b\},$ $ r\le\min\{x,b\}. $ \end{corollary} \begin{proof} Let us verify (\ref{opp6}). Set $ x=b. $ In view of the definition of the process $ \xi(x,t) $ (\ref{opp1}), spatial homogeneity of the processes $ \xi_{i}(t) $ and Markov property of $ \chi_{1,x}^{d_{1}} $ we can write the following equation: \begin{align} \label{opp10} \underline f_{r}^{b}(s)= \underline{V}_{1,d_{1}}^{d_{1}}(s)+ \int_{0}^{\infty}{V}_{1,d_{1}}^{d_{1}}(du,s)e^{-uc_{2}(s)} \underline f_{r}^{b}(s), \qquad d_{1}=b-r, \end{align} where $ {V}_{1,x}^{d_{1}}(du,s)= \bold E\left[e^{-s\chi_{1,x}^{d_{1}}}; T_{1}(x)\in du,\overline{A}_{1} \right], $ $ x\in[0,d_{1}].$ It follows from (\ref{opp2}), (\ref{opp3}) that \begin{align} \label{opp11} \overline{V}_{1,x}^{d_{1}}(z,s)= C_{1}^{d_{1}-x}(z,s)- \frac{R_{1}^{s}(d_{1}-x)}{R_{1}^{s}(d_{1})}\,C_{1}^{d_{1}}(z,s). \end{align} Taking into account the latter equality and (\ref{opp10}), we derive $ \underline f_{r}^{b}(s)= C_{1}^{d_{1}}(c_{2}(s),s)^{-1}. $ Let $ x>b. 
$ Then \begin{align*} \underline f_{r}^{x}(s)= e^{-(x-b)c_{2}(s)}\underline f_{r}^{b}(s) =\frac{e^{-(x-b)c_{2}(s)}}{C_{1}^{d_{1}}(c_{2}(s),s)} = \frac{C_{1}^{b-x}(c_{2}(s),s)}{C_{1}^{b-r}(c_{2}(s),s)}. \end{align*} If $x\in[r,b],$ then we have from $ \underline f_{r}^{b}(s)=\underline f_{x}^{b}(s)\underline f_{r}^{x}(s) $ that \begin{align*} \underline f_{r}^{x}(s)= \frac{\underline f_{r}^{b}(s)}{\underline f_{x}^{b}(s)}= \frac{C_{1}^{b-x}(c_{2}(s),s)}{C_{1}^{b-r}(c_{2}(s),s)}, \qquad x\in[r,b]. \end{align*} Hence, we have shown that (\ref{opp6}) is valid for all $ x\ge r. $ We now verify (\ref{opp8}). First set $ x=b.$ Then taking into account the defining formula (\ref{opp1}) of the process $ \xi(x,t), $ spatial homogeneity of the processes $ \xi_{i}(t) $ and the Markov property of $ \tau_{1}^{+}(x), $ we can write \begin{align*} \overline f_{b}^{k}(z,s) & = e^{zd_{2}}\int_{d_{2}}^{\infty}m_{1}^{0}(du,s)e^{-uz}+ \int_{0}^{d_{2}}m_{1}^{0}(du,s) \overline{V}_{2,u}^{d_{2}}(z,s)\notag \\ & + \int_{0}^{d_{2}}m_{1}^{0}(du,s) \frac{R_{2}^{s}(d_{2}-u)}{R_{2}^{s}(d_{2})}\,\overline f_{b}^{k}(z,s), \qquad d_{2}=k-b, \end{align*} where $ m_{1}^{x}(du,s)=\bold E\left[e^{-s\tau_{1}^{+}(x)};T_{1}^{+}(x)\in du\right]. $ By means of this equation we can determine the function $ \overline f_{b}^{k}(z,s).$ Making use of the expression for the function $ F_{s}(u), $ $ u\ge 0$ (\ref{opp5}), equalities (\ref{opp2}), (\ref{opp3}), after performing some calculations, we find \begin{align} \label{opp12} \overline f_{b}^{k}(z,s) = C_{2}^{d_{2}}(z,s)-\frac{R_{2}^{s}(d_{2})}{F_{s}(d_{2})} e^{zd_{2}}(k_{2}(z)-s) \left(\mathbb{F}_{s}(z)-\int_{0}^{d_{2}}e^{-uz}F_{s}(u)du\right). \end{align} Let $ x\in (b,k].$ Then the function $ \overline f_{x}^{k}(z,s) $ can be found from the following equation: \begin{align*} \overline f_{x}^{k}(z,s) = \overline{V}_{2,x-b}^{d_{2}}(z,s) + \frac{R_{2}^{s}(k-x)}{R_{2}^{s}(d_{2})}\,\overline f_{b}^{k}(z,s), \qquad x\in (b,k]. 
\end{align*} In view of (\ref{opp12}) we derive \begin{align} \label{opp13} \overline f_{x}^{k}(z,s) = C_{2}^{k-x}(z,s)-\frac{R_{2}^{s}(k-x)}{F_{s}(d_{2})}\, \mathfrak{F}_{d_{2}}^{s}(z),\qquad x\in [b,k], \end{align} where $ \mathfrak{F}_{d_{2}}^{s}(z)= e^{zd_{2}}(k_{2}(z)-s) \left(\mathbb{F}_{s}(z)-\int_{0}^{d_{2}}e^{-uz}F_{s}(u)du\right). $ Let $ x<b. $ Then we can determine the function $ \overline f_{x}^{k}(z,s) $ from the following relation: \begin{align*} \overline f_{x}^{k}(z,s) = e^{zd_{2}}\int_{d_{2}}^{\infty}m_{1}^{b-x}(du,s)e^{-uz}+ \int_{0}^{d_{2}}m_{1}^{b-x}(du,s) \overline f_{u+b}^{k}(z,s). \end{align*} Employing (\ref{opp2}), (\ref{opp3}), the definition of the function $ K_{x}^{s}(u) $ (\ref{opp4}) and the formula (\ref{opp13}), we obtain \begin{align} \label{opp14} \overline f_{x}^{k}(z,s) = \mathfrak{K}_{b-x}^{d_{2}}(z,s) -\frac{K_{b-x}^{s}(d_{2})}{F_{s}(d_{2})}\, \mathfrak{F}_{d_{2}}^{s}(z), \qquad x<b, \end{align} where $$ \mathfrak{K}_{b-x}^{d_{2}}(z,s)= e^{zd_{2}}(k_{2}(z)-s)\left(\mathbb{K}_{b-x}^{s}(z)- \int_{0}^{d_{2}}e^{-uz}K_{b-x}^{s}(u)\,du\right). $$ Note that for $ x\in[b,k] $ it follows from the definition of the function $ K_{x}^{s}(u) $ (\ref{opp4}) that \begin{align*} \mathfrak{K}_{b-x}^{d_{2}}(z,s)=C_{2}^{k-x}(z,s), \qquad K_{b-x}^{s}(d_{2}) = R_{2}^{s}(k-x). \end{align*} Hence, the formula (\ref{opp14}) is valid for all $ x\le k.$ Since $ \overline f_{x}^{k}(s)=\overline f_{x}^{k}(0,s),$ (\ref{opp7}) follows from (\ref{opp8}) by setting $ z=0. $ We now verify the statements of Corollary \ref{copp1}. In the case when $ k_{1}(z)=k_{2}(z)=k(z) $ we have \begin{align*} & C_{i}^{x}(c_{2}(s),s)=e^{xc(s)},\quad \mathbb{F}_{s}(z)=(z-c(s))^{-1}, \quad F_{s}(u)=e^{uc(s)}, \\ & \mathbb{K}_{x}^{s}(z)=e^{xz}\int_{x}^{\infty}e^{-uz}R^{s}(u)\,du, \quad K_{x}^{s}(u)= R^{s}(x+u). \end{align*} These equalities and (\ref{opp6})-(\ref{opp8}) imply the formulae (\ref{opp9}). The limiting equalities of Corollary \ref{copp2} are derived in Section 4. 
\end{proof} \section{Exit from the interval by the process $\xi(x,t)$} For $B>0,$ $x,b\in[0,B],$ introduce the following random variable: $$ \chi_{x}(b)=\inf\{t:\,\xi(x,t)\notin[0,B]\}, $$ i.e. the first exit time from the interval $[0,B]$ by the process $\xi(x,t).$ Introduce the events $\overline A=\{\xi(x,\chi_{x}(b))> B\} $ (the process exits the interval through the upper boundary) and $ \underline A=\{\xi(x,\chi_{x}(b))\le0\} $ (the process exits the interval through the lower boundary). Denote by $ T(x)=(\xi(x,\chi_{x}(b))-B)\bold I_{\overline A} +0\cdot \bold I_{\underline A} $ the value of the overshoot at the instant of the first exit. Define \begin{align*} \underline{V}_{x}(s)= \bold E\left[e^{-s\chi_{x}(b)}; \underline A \right],\qquad \overline{V}_{x}(z,s)= \bold E\left[e^{-s\chi_{x}(b)-zT(x)};\overline A \right], \quad \Re(z)\ge0. \end{align*} \begin{theorem} \label{topp2} The Laplace transforms of $ \chi_{x}(b), $ $x,b\in[0,B] $ and of the joint distribution of $ \{\chi_{x}(b),T(x) \} $ are such that for $ s\ge0 $ \begin{align} \label{opp15} \underline{V}_{x}(s)= \frac{K_{b-x}^{s}(B-b)}{K_{b}^{s}(B-b)},\qquad \overline{V}_{x}(s)= \mathfrak{K}_{b-x}^{B-b}(s)- \frac{K_{b-x}^{s}(B-b)}{K_{b}^{s}(B-b)}\:\mathfrak{K}_{b}^{B-b}(s), \end{align} where $ \overline{V}_{x}(s)= \bold E\left[e^{-s\chi_{x}(b)};\overline A \right], $ \begin{align*} \mathfrak{K}_{x}^{u}(s)= 1+s\int_{0}^{x}R_{1}^{s}(u)\,du+s\int_{0}^{u}K_{x}^{s}(v)\,dv, \quad x\in\mathbb{R},\;u\ge0; \end{align*} \begin{align} \label{opp16} \overline{V}_{x}(z,s)= \mathfrak{K}_{b-x}^{B-b}(z,s)- \frac{K_{b-x}^{s}(B-b)}{K_{b}^{s}(B-b)}\:\mathfrak{K}_{b}^{B-b}(z,s), \end{align} and \begin{align*} \mathfrak{K}_{x}^{u}(z,s)= e^{uz}\left(C_{1}^{x}(z,s)- (k_{2}(z)-s)\int_{0}^{u}e^{-vz}K_{x}^{s}(v)\,dv\right), \quad x\in\mathbb{R},\;u\ge0. \end{align*} \end{theorem} \begin{corollary} \label{copp3} Assume that $ k_{1}(z)=k_{2}(z)=k(z). 
$ Then the following equalities are valid: \begin{align} \label{opp17} & \underline{V}_{x}(s)= \frac{R^{s}(B-x)}{R^{s}(B)},\qquad \overline{V}_{x}(s)= C^{B-x}(s)- \frac{R^{s}(B-x)}{R^{s}(B)}\:C^{B}(s), \\ & \overline{V}_{x}(z,s)= C^{B-x}(z,s)- \frac{R^{s}(B-x)}{R^{s}(B)}\:C^{B}(z,s), \end{align} where $ C^{x}(s)=1+s\int_{0}^{x}R^{s}(u)\,du, $ $$ C^{x}(z,s)= e^{xz}\left(1-(k(z)-s)\int_{0}^{x}e^{-uz}R^{s}(u)\,du\right), $$ $ R^{s}(x), $ $x\ge0 $ is the resolvent of the process $ \xi(t)=\xi_{i}(t); $ $ c(s)>0,$ $s>0$ is the unique root of the equation $ k(z)=s $ in the semi-plane $\Re(z)>0. $ \end{corollary} \begin{corollary} \label{copp4} Assume that $\bold E\xi_{i}(1)=0, $ $\sigma_{i}^{2}=\bold E\xi_{i}(1)^{2}<\infty,$ $ x,b\in(0,1).$ Then the following limits hold as $ B\to\infty $ \begin{align*} & \bold E\left[e^{-s\chi_{xB}(bB)/B^{2}};\,\underline A\right]\to \frac{\sigma_{1}\sh((b-x)s_{1}) \ch(\overline{b} s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch((b-x)s_{1})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\\ & \bold E\left[e^{-s\chi_{xB}(bB)/B^{2}};\,\overline A\right]\to \frac{\sigma_{1}\sh(x s_{1})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\quad x\in(0,b] ; \end{align*} \begin{align*} & \bold E\left[e^{-s\chi_{xB}(bB)/B^{2}};\,\overline{A}\right]\to \frac{\sigma_{1}\sh(b s_{1})\ch((x-b)s_{2})+ \sigma_{2}\sh((x-b)s_{2})\ch(b s_{1})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\\ & \bold E\left[e^{-s\chi_{xB}(bB)/B^{2}};\,\underline{A}\right]\to \frac{\sigma_{2}\sh((1-x) s_{2})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\quad x\in[b,1), \end{align*} where $s_{i}=\sqrt{2s}/\sigma_{i},$ $ i=1,2, $ $\overline{b}=1-b. 
$ \end{corollary} \begin{proof} It is worth noting that the joint distribution of $\{\chi,T\} $ was found in \cite{2Ka2} for L\'{e}vy processes of general form. To determine the Laplace transforms of this distribution, the authors used the one-boundary characteristics $\{\tau^{x},T^{x}\}, $ $\{\tau_{x},T_{x}\}$ of the process. Following this approach, we derive the system of linear integral equations with respect to the Laplace transforms $ \underline{V}_{x}(s),$ $ V_{x}(du,s)= \bold E\left[e^{-s\chi_{x}(b)}; T(x)\in du,\overline A \right] $ \begin{align} \label{opp19} & \bold E\left[ e^{-s\underline\tau_{0}^{x}(b)}\right] = \underline{V}_{x}(s)+ \int_{0}^{\infty} V_{x}(du,s) \bold E\left[ e^{-s\underline\tau_{0}^{u+B}(b)}\right], \quad x,b\in[0,B],\notag\\ & \bold E\left[ e^{-s\overline\tau_{x}^{B}(b)}; \overline T_{x}^{B}\in du\right]= V_{x}(du,s)+ \underline{V}_{x}(s) \bold E\left[ e^{-s\overline\tau_{0}^{B}(b)}; \overline T_{0}^{B}\in du\right]. \end{align} The first equation of this system means that the process $\xi(x,t) $ can reach the lower boundary $0$ either on the sample paths which do not cross the upper boundary $ B, $ or on the sample paths which do cross the upper boundary and then pass the lower boundary. The second equation is written analogously. Observe, that the mathematical expectations which enter the equations of the system are determined by (\ref{opp6})-(\ref{opp8}). Taking into account the formulae (\ref{opp6}), (\ref{opp19}), we derive \begin{align} \label{opp20} & \frac{C_{1}^{b-x}(c_{2}(s),s)}{C_{1}^{b}(c_{2}(s),s)} = \underline{V}_{x}(s)+ \frac{e^{-c_{2}(s)(B-b)}}{C_{1}^{b}(c_{2}(s),s)}\, \overline{V}_{x}(c_{2}(s),s),\notag\\ & \overline{f}_{x}^{B}(c_{2}(s),s)= \overline{V}_{x}(c_{2}(s),s)+ \underline{V}_{x}(s)\overline{f}_{0}^{B}(c_{2}(s),s). 
\end{align} Formula (\ref{opp8}) implies that \begin{align*} \overline{f}_{x}^{B}(c_{2}(s),s)= C_{1}^{b-x}(c_{2}(s),s)e^{c_{2}(s)(B-b)}- \frac{K_{b-x}^{s}(B-b)}{F_{s}(B-b)}\,\tilde{F}(s),\quad x\in[0,B], \end{align*} where $ \tilde{F}(s)=(k_{1}(c_{2}(s))-s)(c_{2}(s)-c_{1}(s))^{-1}e^{c_{2}(s)(B-b)}. $ Solving the system (\ref{opp20}) with respect to the two unknown functions $ \underline{V}_{x}(s), $ $ \overline{V}_{x}(c_{2}(s),s), $ we find for all $ x\in[0,B]$ that \begin{align*} & \underline{V}_{x}(s)= \frac{K_{b-x}^{s}(d_{2})}{K_{b}^{s}(d_{2})},\\ & \overline{V}_{x}(c_{2}(s),s)= e^{c_{2}(s)d_{2}} \left( C_{1}^{b-x}(c_{2}(s),s)- \frac{K_{b-x}^{s}(d_{2})}{K_{b}^{s}(d_{2})}\, C_{1}^{b}(c_{2}(s),s)\right), \end{align*} where $ d_{2}=B-b. $ It follows from the second equation of the system (\ref{opp19}) and from (\ref{opp8}) that \begin{align*} \overline{V}_{x}(z,s) = \overline{f}_{x}^{B}(z,s)- \frac{K_{b-x}^{s}(d_{2})}{K_{b}^{s}(d_{2})}\:\overline{f}_{0}^{B}(z,s) =\mathfrak{K}_{b-x}^{d_{2}}(z,s)-\frac{K_{b-x}^{s}(d_{2})}{K_{b}^{s}(d_{2})}\: \mathfrak{K}_{b}^{d_{2}}(z,s), \end{align*} where $$ \mathfrak{K}_{x}^{d_{2}}(z,s)=e^{zd_{2}}\left( C_{1}^{x}(z,s)- (k_{2}(z)-s)\int_{0}^{d_{2}}e^{-uz}K_{x}^{s}(u)\,du\right), \quad x\in\mathbb{R}. $$ The second equality of (\ref{opp15}) can be derived from (\ref{opp16}) by setting $ z=0.$ If $ k_{1}(z)=k_{2}(z)=k(z), $ then $$ \mathfrak{K}_{x}^{d_{2}}(z,s)=C^{x+d_{2}}(z,s),\quad \mathfrak{K}_{x}^{d_{2}}(s)=1+s\int_{0}^{x+d_{2}}R^{s}(u)\,du, \quad x\in\mathbb{R}. $$ The formulae (\ref{opp15}), (\ref{opp16}) of Theorem \ref{topp2} imply the statements of Corollary \ref{copp3}. \end{proof} \section{Asymptotic behavior} In this section we assume that the following conditions are fulfilled: $ (A):$ $\bold E\xi_{i}(1)=0, $ $\sigma_{i}^{2}=\bold E\xi_{i}(1)^{2}<\infty,$ $ i=1,2. 
$ It is a well-known fact (see for instance \cite{Bor3}, \cite{Kr18}, \cite{Sh}) that \begin{align} \label{opp21} \lim_{B\to\infty}\frac{1}{B}\,R_{i}^{s/B^{2}}(xB)= \frac{2}{\sigma_{i}\sqrt{2s}}\,\sh (x^{+} s_{i}), \quad \lim_{B\to\infty}Bc_{i}(s/B^{2})=s_{i}, \end{align} where $ s_{i}=\sqrt{2s}/\sigma_{i},$ $ i=1,2, $ $ x^{+}=\max\{0,x\}. $ We now verify the limiting relations for the functions which appear in Theorems \ref{topp1}, \ref{topp2}. Observe that under the condition $ (A) $ the following expansion is valid as $ B\to\infty, $ $ z>0 $ $$ k_{i}(z/B)=\frac{1}{2}\sigma_{i}^{2}z^{2}/B^{2}+ o(B^{-2}). $$ Then in view of the definition of the function $ \mathbb{K}_{x}^{s}(z) $ (\ref{opp4}) we can write \begin{align} \label{opp22} & \tilde k_{x}^{s}(z)= \lim_{B\to\infty}\frac{1}{B^{2}}\, \mathbb{K}_{xB}^{s/B^{2}}(z/B)= \frac{e^{xz}}{\frac{1}{2}\,\sigma_{2}^{2}z^{2}-s}, \quad x\le0,\notag \\ & \tilde k_{x}^{s}(z)= \frac{1}{\frac{1}{2}\, \sigma_{2}^{2}z^{2}-s}\left(\frac{z\sigma_{1}}{\sqrt{2s}}\,\sh(s_{1}x) +\ch(s_{1}x) \right), \quad x\ge0. \end{align} For $ \Re(z)>\sqrt{2s}/\sigma_{2} $ the right-hand sides of these equalities are the Laplace transforms: $$ \tilde k_{x}^{s}(z)=\int_{0}^{\infty}e^{-uz} k_{x}^{s}(u)\,du, \quad \Re(z)>\sqrt{2s}/\sigma_{2}. $$ The formulae (\ref{opp22}) imply the following relation \begin{align} \label{opp23} & \lim_{B\to\infty}\frac{1}{B}\, K_{xB}^{s/B^{2}}(uB)= k_{x}^{s}(u) = \frac{1}{2\pi i}\int\limits_{\gamma-i\infty}^{\gamma+i\infty} e^{zu}\tilde k_{x}^{s}(z)\,dz=\notag\\ & = \left\{ \begin{array}{l} \frac{2}{\sigma_{2}\sqrt{2s}}\,\sh((x+u)^{+}s_{2}), \quad x\le0 , \\ {} \\ \frac{2}{\sigma_{2}^{2}\sqrt{2s}} \left(\sigma_{1}\sh(x s_{1})\ch (u s_{2})+ \sigma_{2}\sh(u s_{2})\ch(x s_{1})\right), \quad x\ge0, \end{array} \right. \end{align} where $ \gamma> \sqrt{2s}/\sigma_{2}. 
$ Taking into account the latter equality, we can easily obtain the limiting relations for the functions which enter the statements of Theorems \ref{topp1}, \ref{topp2}: \begin{align} \label{opp24} \lim_{B\to\infty} \mathfrak{K}_{xB}^{uB}(s/B^{2}) = \left\{ \begin{array}{l} \ch((x+u)^{+}s_{2}), \quad x\le0 , \\ {} \\ \frac{\sigma_{1}}{\sigma_{2}}\,\sh(xs_{1})\sh (u s_{2} )+ \ch(u s_{2})\ch(xs_{1}), \quad x\ge0, \end{array} \right. \end{align} \begin{align} \label{opp25} \lim_{B\to\infty} F_{s/B^{2}}(uB) =\frac{\sigma_{1}}{\sigma_{2}^{2}}\, (\sigma_{1}\ch(us_{2})+ \sigma_{2}\sh(us_{2}) ),\quad u\ge 0, \end{align} \begin{align} \label{opp26} \lim_{B\to\infty} C_{1}^{xB}\left(c_{2}(s/B^{2}),s/B^{2}\right) = \left\{ \begin{array}{l} e^{xs_{2}}, \quad x\le0 , \\ {} \\ \frac{\sigma_{1}}{\sigma_{2}}\,\sh(xs_{1})+ \ch(xs_{1}), \quad x\ge0. \end{array} \right. \end{align} We now verify the limiting equalities of Corollary \ref{copp2}. Let $k\ge\max\{x,b\},$ $x\le b. $ Then taking into account (\ref{opp7}), (\ref{opp21}), (\ref{opp23}) and (\ref{opp25}), we have as $B\to\infty $ \begin{align*} & \bold E\left[ e^{-s\overline\tau^{kB}_{xB}(bB)/B^{2}} \right]= \overline{f}_{xB}^{kB}(s/B^{2})\to \frac{\sigma_{1}}{\sigma_{2}}\,\sh((b-x)s_{1})\sh(d_{2}s_{2})+ \ch((b-x)s_{1})\ch(d_{2}s_{2})\\ & - \frac{\sigma_{1}\sh(d_{2}s_{2})+\sigma_{2}\ch(d_{2}s_{2})} {\sigma_{1}\ch(d_{2}s_{2})+\sigma_{2}\sh(d_{2}s_{2})} \left(\frac{\sigma_{1}}{\sigma_{2}}\,\sh((b-x)s_{1})\ch(d_{2}s_{2})+ \ch((b-x)s_{1})\sh (d_{2}s_{2})\right)=\\ & =\frac{\sigma_{1}e^{-(b-x)s_{1}}} {\sigma_{1}\ch((k-b)s_{2})+ \sigma_{2}\sh((k-b)s_{2})}, \qquad x\le b, \end{align*} where $ d_{2}=k-b. $ Similarly, we can derive the second formula of Corollary \ref{copp2}: \begin{align*} & \lim_{B\to\infty} \bold E\left[ e^{-s\overline\tau^{kB}_{xB}(bB)/B^{2}} \right]= \frac{\sigma_{1}\ch((x-b)s_{2})+\sigma_{2}\sh((x-b)s_{2})} {\sigma_{1}\ch((k-b)s_{2})+\sigma_{2}\sh((k-b)s_{2})}, \qquad x\in[b,k]. 
\end{align*} Let $ r\le\min\{x,b\}.$ Then the following relation follows from (\ref{opp6}) and (\ref{opp26}) as $B\to\infty $ \begin{align*} \bold E\left[ e^{-s\underline\tau^{xB}_{rB}(bB)/B^{2}} \right]&= \underline{f}_{rB}^{xB}(s/B^{2})= \frac{C_{1}^{(b-x)B}\left(c_{2}(s/B^{2}),s/B^{2}\right)} {C_{1}^{(b-r)B}\left(c_{2}(s/B^{2}),s/B^{2}\right)}\to \\ & \to \frac{\sigma_{2}e^{(b-x)s_{2}}} {\sigma_{1}\sh((b-r)s_{1})+\sigma_{2}\ch((b-r)s_{1})}, \qquad x\ge b, \end{align*} \begin{align*} \bold E\left[ e^{-s\underline\tau^{xB}_{rB}(bB)/B^{2}} \right] \to \frac{\sigma_{1}\sh((b-x)s_{1})+\sigma_{2}\ch((b-x)s_{1})} {\sigma_{1}\sh((b-r)s_{1})+\sigma_{2}\ch((b-r)s_{1})}, \quad x\in[r,b]. \end{align*} We now derive Corollary \ref{copp4}. The following relation follows from the first formula of (\ref{opp15}) and from (\ref{opp23}) for $ x\in(0,b], $ as $B\to\infty $ \begin{align*} \bold E & \left[e^{-s\chi_{xB}(bB)/B^{2}};\,\underline A\right]= \underline{V}_{xB}(s/B^{2})= \frac{K_{(b-x)B}^{s/B^{2}}(\overline{b}B)} {K_{bB}^{s/B^{2}}(\overline{b}B)}\to\\ & \to \frac{\sigma_{1}\sh((b-x)s_{1}) \ch(\overline{b}s_{2})+\sigma_{2}\sh(\overline{b}s_{2})\ch((b-x)s_{1})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\quad x\in(0,b], \end{align*} where $ \overline{b}=1-b. $ Taking into account the second formula of (\ref{opp15}) and (\ref{opp23}), (\ref{opp24}), we can write for $ x\in(0,b], $ as $ B\to\infty $ \begin{align*} & \bold E \left[e^{-s\chi_{xB}(bB)/B^{2}};\,\overline A\right]= \overline{V}_{xB}(s/B^{2})= \mathfrak{K}_{(b-x)B}^{\overline{b}B}(s/B^{2})-\\ & -\frac{K_{(b-x)B}^{s/B^{2}}(\overline{b}B)} {K_{bB}^{s/B^{2}}(\overline{b}B)}\: \mathfrak{K}_{bB}^{\overline{b}B}(s/B^{2})\to \frac{\sigma_{1}\sh(x s_{1})} {\sigma_{1}\sh(bs_{1})\ch(\overline{b}s_{2})+ \sigma_{2}\sh(\overline{b}s_{2})\ch(bs_{1})},\quad x\in(0,b]. \end{align*} Analogously, the formulae of the corollary can be verified for $ x\in[b,1). 
$ \section {Appendix } Let $ \xi(t)\in\mathbb{R}, $ $ \xi(0)=0, $ $\bold E e^{-p\xi(t)}=e^{tk(p)},$ $\Re(p)=0$ be a general L\'{e}vy process. Denote by $$ \xi_{t}^{+}=\sup_{u\le t}\xi(u),\qquad \xi_{t}^{-}=\inf_{u\le t}\xi(u) $$ the running supremum and infimum of the process. For $ x\ge0 $ define $$ \tau_{x}^{+}=\inf\{t>0:\xi(t)\ge x \},\qquad T_{x}^{+}=\xi(\tau_{x}^{+})-x, $$ the first crossing time of the barrier $x$ and the value of the overshoot. Then the following relation is valid (\cite{PeRog}): \begin{align} \label{opp27} \int_{0}^{\infty}e^{-px} \bold E \left[e^{-s\tau_{x}^{+}-zT_{x}^{+}}\right]dx= \frac{1}{p-z}\left(1-\frac{\bold E e^{-p\xi_{\nu_s}^{+}}} {\bold E e^{-z\xi_{\nu_s}^{+}} }\right),\qquad \Re(p),\Re(z)\ge0, \end{align} where $\nu_{s}\sim\exp(s),$ $ s>0, $ is an exponential random variable independent of the process $\xi(t).$ For a spectrally positive L\'{e}vy process with Laplace exponent \begin{align} \label{opp28} k(z)=az+\frac{\sigma^{2} z^{2}}{2}+\int_{0}^{\infty} \left( e^{-zx}-1 +zx \bold I_{\{0<x\le1\}}\right)\Pi(dx) \end{align} we have $$ \bold E e^{-p\xi_{\nu_s}^{+}}=\frac{s}{c(s)}\,\frac{p-c(s)}{k(p)-s},\quad \Re(p)\ge0, $$ where $ c(s)>0,$ $ s>0, $ is the unique root of the equation $k(z)-s=0$ in the half-plane $\Re(z)>0. $ It follows from the latter relation and from (\ref {opp27}) that \begin{align} \label{opp29} \int_{0}^{\infty}e^{-px} \bold E \left[e^{-s\tau_{x}^{+}-zT_{x}^{+}}\right]dx= \frac1{p-z}\left(1-\frac{p-c(s)}{k(p)-s} \;\frac{k(z)-s}{z-c(s)}\right). 
\end{align} Introduce the resolvents $ R^{s}(x),$ $x\ge0, $ \cite{SuSh} of the spectrally one-sided L\'{e}vy process $ \xi(t), $ $ t\ge0, $ by means of their Laplace transforms: $$ \int_{0}^{\infty}e^{-xz}R^{s}(x)\,dx = (k(z)-s)^{-1}, \quad \Re(z)>c(s),\quad R^{s}(x)=0, \; x<0. $$ Making use of the definition of the resolvent and inverting the Laplace transform with respect to $ p $ $ (\Re(p)>c(s))$ on both sides of (\ref {opp29}), we find \begin{align*} \bold E \left[e^{-s\tau_{x}^{+}-zT_{x}^{+}}\right]= e^{xz}-R^{s}(x)\,\frac{k(z)-s}{z-c(s)} -(k(z)-s)e^{xz}\int_{0}^{x}e^{-uz}R^{s}(u)\,du, \end{align*} which is the second equality of Lemma 1.
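The second equality of Lemma 1 can be sanity-checked numerically in the simplest spectrally positive case, Brownian motion without jumps, where $R^{s}(u)=\frac{2}{\sigma\sqrt{2s}}\,\sh(u\sqrt{2s}/\sigma)$, $c(s)=\sqrt{2s}/\sigma$, and the overshoot $T_{x}^{+}$ vanishes, so the right-hand side must equal $e^{-c(s)x}$ independently of $z$. A minimal sketch (all parameter values are illustrative, not taken from the paper):

```python
import math

# Illustrative parameters: standard Brownian motion, sigma = 1, s = 0.5, barrier x = 1.
sigma, s, x = 1.0, 0.5, 1.0
s1 = math.sqrt(2 * s) / sigma        # c(s) for this process

def k(z):
    return 0.5 * sigma**2 * z**2     # Laplace exponent, no jump part

def R(u):
    # resolvent R^s(u) = (2/(sigma*sqrt(2s))) sinh(u*sqrt(2s)/sigma), u >= 0
    return 2.0 / (sigma * math.sqrt(2 * s)) * math.sinh(s1 * u)

def lemma_rhs(z):
    # e^{xz} - R^s(x)(k(z)-s)/(z-c(s)) - (k(z)-s) e^{xz} int_0^x e^{-uz} R^s(u) du,
    # with the integral evaluated by the trapezoid rule on a fine grid
    N = 200_000
    h = x / N
    integral = h * (sum(math.exp(-z * i * h) * R(i * h) for i in range(1, N))
                    + 0.5 * (R(0.0) + math.exp(-z * x) * R(x)))
    return (math.exp(x * z) - R(x) * (k(z) - s) / (z - s1)
            - (k(z) - s) * math.exp(x * z) * integral)

for z in (2.0, 3.0, 5.0):
    print(z, lemma_rhs(z))           # all close to exp(-s1 * x) = E[exp(-s tau_x^+)]
print(math.exp(-s1 * x))
```

The $z$-independence of the output is exactly the cancellation that the inversion argument above guarantees.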
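The root $c(s)$ used above, together with the scaling limit $\lim_{B\to\infty}Bc(s/B^{2})=\sqrt{2s}/\sigma$ from (\ref{opp21}), can likewise be checked in the jump-free case, where $k(z)=az+\sigma^{2}z^{2}/2$ and the root is explicit. A minimal sketch (illustrative parameters; the scaling limit is checked in the centered case $a=0$, as required by condition $(A)$):

```python
import math

sigma, s = 1.5, 2.0   # illustrative values

def k(z, a):
    # Laplace exponent of Brownian motion with drift: k(z) = a z + sigma^2 z^2 / 2
    return a * z + 0.5 * sigma**2 * z**2

def c(s, a):
    # unique root of k(z) = s in the half-plane Re z > 0
    return (-a + math.sqrt(a * a + 2 * sigma**2 * s)) / sigma**2

print(k(c(s, 0.3), 0.3))     # equals s

# centered case a = 0: B * c(s/B^2) equals sqrt(2s)/sigma for every B, cf. (opp21)
for B in (1.0, 10.0, 100.0):
    print(B * c(s / B**2, 0.0), math.sqrt(2 * s) / sigma)
```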
By Dan Heyman/Public News Service WV Just in time for the election: a free scorecard that runs down the voting record of every state senator and House delegate is available for voters in West Virginia. The Heroes and Zeros 2015/2016 scorecard was created by the West Virginia Citizen Action Group and is freely available on their website, wvcag.org. Gary Zuckett, executive director at CAG, said the guide looks at every important vote on a wide variety of progressive issues – from prevailing wage and water quality to voter ID and the so-called religious freedom restoration act – and assigns representatives a score based on their voting record. “People can really get a feel for what their individual legislator did,” Zuckett said. “Vote by vote, bill by bill, issue by issue, they can find out how they were represented during the past two years.” Groups across the spectrum endorse candidates: unions, industries and organizations focused on individual issues such as guns or abortion. But Heroes and Zeros is one of the most comprehensive scorecards available – and it’s certainly one of the most progressive. Zuckett said that based on the information collected for the guide, the Legislature seems to be moving in a very conservative direction. “I’m afraid they would have to get a failing grade,” he said of representatives’ support of progressive issues. “In the House, just under half are at 20 percent or less. They only got one out of five right. And the Senate is actually worse.” Read the full story at
\begin{document} \title{Operator equality on entropy production in quantum Markovian master equations} \author{Fei Liu} \email[Email address:]{feiliu@buaa.edu.cn} \affiliation{School of Physics and Nuclear Energy Engineering, Beihang University, Beijing 100191, China} \date{\today} \begin{abstract} {An operator equality on the entropy production for general quantum Markovian master equations is derived without resorting to quantum stochastic trajectories or an a priori quantum definition of the entropy production. We find that the equality can still be interpreted as a consequence of the time-reversal asymmetry of the nonequilibrium processes of the systems. In contrast with the classical case, however, the first-order expansion of the equality is not directly related to the mean entropy production, a fact which arises from the noncommutativity of operators in quantum physics. } \end{abstract} \pacs{05.70.Ln, 05.30.-d} \maketitle {\noindent\it Introduction} Irreversible processes can be seen almost everywhere in nature. Imagine an object falling into water: the process starts from a static state and ends with some position and velocity after a finite time interval. However, if one tries to reverse the process by simply reversing the object's velocity at the ending position, the object never returns to its initial state after the same time interval. This phenomenon obviously arises from the dissipation of energy as heat due to friction between the object and its reservoir. In modern thermodynamics, an irreversible process is always associated with nonnegative entropy production~\cite{Groot,Kondepudi}; that is, it is a manifestation of the second law of thermodynamics. 
Although the law has been rigidly established for macroscopic systems, in the past few decades interest in the entropy production or dissipated work of small nonequilibrium systems has grown intensively, owing to the discovery of various fluctuation theorems or relations~\cite{Bochkov77,Evans93,Gallavotti95,Kurchan98,Lebowitz99,Maes99,JarzynskiPRL97, Crooks99,HatanoSasa01,Seifert05}. These remarkable relations greatly deepen our understanding of the second law of thermodynamics and of the nonequilibrium physics of small systems. With the fluctuation relations clarified for classical systems, we may recently see a trend of extending the relations to quantum systems~\cite{Kurchan00,Yukawa00, Tasaki00,DeRoeck04,Talkner07,Andrieux08,Crooks08,Esposito09,Campisi11, Deffer11, Chetrite12,LiuF12}. In this work, we present an {\it operator version} of the entropy production equality for nonequilibrium systems that can be described by quantum Markovian master equations. Because the Markovian description implies that the time variation of the external sources does not affect the reservoirs and is very slow in comparison with the reservoir's relaxation time~\cite{Alicki79}, we do not claim that the equality obtained here holds even if the system is driven very far from equilibrium, unlike, e.g., the entropy production equality derived by Deffner and Lutz~\cite{Deffer11} using the sophisticated two-point energy measurement statistics~\cite{Talkner07,Campisi11}. However, we think that this price is worth paying, since we can derive an exact operator equality on the entropy production. Although various quantum fluctuation relations exist in the literature, to our knowledge very few of them are in operator form~\cite{Chetrite12,LiuF12}.\\ {\noindent\it Time-reversal of the system} We are concerned with the irreversible process of an open quantum Markovian system $L_t$ during a time interval $(0,T)$. 
The equation of motion for the system's density operator $\rho(t)$ is then \begin{eqnarray} \label{orgsystem} \partial_{t}\rho(t)&=&L_{t}\rho(t)=L_{t}^{\rm rev}\rho(t)+L_{t}^{\rm irr}\rho(t), \end{eqnarray} where \begin{eqnarray} L_{t}^{\rm rev}\rho(t)=-i[H_t ,\rho(t)], \end{eqnarray} $H_t$ is the free Hamiltonian of the system, and $L_{t}^{\rm irr}$ represents a dissipative term due to the interaction of the system with a heat reservoir; it has the general form~\cite{Davies74,Gorini76,Lindblad76,Breuer} \begin{eqnarray} L^{\rm irr}_t\rho(t)=\frac{1}{2}\sum_j{[ V_j,\rho(t) V^{\dag}_j] +[V_j\rho(t), V^{\dag}_j]}. \end{eqnarray} Here we use the subscript $t$ to indicate possible time dependence, except for $V_j$ and $V^\dag_j$, for simplicity of notation. We define an alternative quantum Markovian system $\tilde L_s$ as the time-reversal of the system~(\ref{orgsystem}) if its density operator $\tilde\rho(s)$ satisfies a master equation of the form \begin{eqnarray} \label{reversedsystem} \partial_s \tilde\rho(s)=\tilde{L}_s\tilde\rho(s)=\tilde L^{\rm rev}_s\tilde\rho(s)+\tilde L^{\rm irr}_s\tilde\rho(s), \end{eqnarray} and \begin{eqnarray} \tilde{L}_s^{\rm rev}A&=&-\Theta L_{t'}^{\rm rev}[\Theta A\Theta^{-1}]\Theta^{-1}, \\ \tilde{L}_s^{\rm irr}A&=&\Theta L_{t'}^{\rm irr}[\Theta A\Theta^{-1}]\Theta^{-1}, \end{eqnarray} where $A$ denotes an arbitrary operator, $t'$=$T$$-$$s$ is the backward time~\cite{Kolmogorov31}, $\Theta$ is the time-reversal operator, and we use a new time parameter $s$ (0$\le$s$\le$T) for the time-reversal system. Specifically, if the system coincides with its time-reversal, we say that the system is symmetric, or invariant, under time reversal. 
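As a quick numerical sanity check of this generator, the following sketch (an assumed single-qubit model with illustrative rates, not taken from the text) verifies that the dissipator of the above form preserves both the trace and the Hermiticity of the density operator:

```python
import numpy as np

# Assumed toy model: a qubit with H = (omega/2) sigma_z and jump operators
# V1 = sqrt(g1) sigma_-, V2 = sqrt(g2) sigma_+; all parameter values are illustrative.
sp = np.array([[0, 1], [0, 0]], dtype=complex)      # sigma_+
sm = sp.conj().T                                    # sigma_-
omega, g1, g2 = 1.0, 0.4, 0.1
H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
Vs = [np.sqrt(g1) * sm, np.sqrt(g2) * sp]

def L(rho):
    # full generator: -i[H,rho] + (1/2) sum_j ([V_j, rho V_j^+] + [V_j rho, V_j^+])
    out = -1j * (H @ rho - rho @ H)
    for V in Vs:
        Vd = V.conj().T
        out += 0.5 * ((V @ rho @ Vd - rho @ Vd @ V) + (V @ rho @ Vd - Vd @ V @ rho))
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho).real                           # random density matrix

Lrho = L(rho)
print(abs(np.trace(Lrho)))                          # trace preservation: ~0
print(np.linalg.norm(Lrho - Lrho.conj().T))         # Hermiticity preservation: ~0
```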
These definitions are in fact a simple quantum extension of those for classical Markovian processes~\cite{Graham71,Risken72}.\\ {\noindent\it Operator $R(t',T)$} If the dissipation term $L^{\rm irr}_t$ vanishes and the Hamiltonian $H_t$ is even under time reversal, namely, $H_s$=$\Theta H_{t'}\Theta^{-1}$, one may easily prove that the time-reversed operator, $\Theta \rho(t')\Theta^{-1}$, is the solution of the time-reversal system~(\ref{reversedsystem}) with the specified initial condition $\Theta \rho(T)\Theta^{-1}$. We call such a solution time-reversible. In addition, if the system is symmetric under time-reversal and is at a thermal equilibrium state $\rho_0$, the state is also time-reversible and, in particular, invariant, i.e., $\Theta\rho_0\Theta^{-1}$=$\rho_0$. Generally speaking, since an open quantum system has a dissipative term, if it is perturbed by time-dependent sources and/or is relaxing to its equilibrium state, the solution $\rho(t)$ is no longer reversible. This observation can be simply quantified if we introduce an operator $R(t',T)$ satisfying the relationship \begin{eqnarray} \label{Roperator} \tilde{\rho}(s)=\Theta R(t',T)\rho(t')\Theta^{-1}, \end{eqnarray} and $R(T,T)$$=$$1$. Obviously, if the solution $\rho(t)$ were reversible, $R(t',T)$ would equal the identity operator during the whole time interval; otherwise it would not. Fig.~\ref{figure1} schematically explains why we define the operator $R(t',T)$. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{figure.eps} \caption{(a) An irreversible time process of the system $L_t$. (b) The real time process of the time-reversal system $\tilde{L}_s$ with a specified initial condition $\Theta\rho(T)\Theta^{-1}$. 
(c) The imaginary process constructed from the time-reversed operator $\Theta \rho(t')\Theta^{-1}$, which is usually inconsistent with $\tilde\rho(s)$ due to dissipation.} \label{figure1} \end{center} \end{figure} Substituting (\ref{Roperator}) into Eq.~(\ref{orgsystem}), we can obtain an equation of motion for $R(t',T)$ with respect to the backward time $t'$, given by \begin{eqnarray} \label{EOMR} \partial_{t'} R(t',T)&=&-L^{*}_{t'} R(t',T) - R(t',T)[\partial_{t'}{\rho(t')} - L_{t'}{\rho(t')}]{\rho(t')}^{-1}\\ \nonumber &&-\{\sum_j [R(t',T)V_j \rho(t'),V_j^\dag]-[R(t',T)\rho(t')V_j^\dag,V_j]\}\rho(t')^{-1}\\ &=&-L^{*}_{t'} R(t',T)-{\cal O}_{t'} R(t',T),\nonumber \end{eqnarray} where the adjoint generator is $L^*_{t'}A=i[H_{t'},A]+ (1/2)\sum_j\left( [V_j^{\dag},A]V_j + V^\dag_j[A,V_j]\right)$ and, in particular, $L^{*}_{t'}1=0$. In the remaining part, we use the superscript $(\cdots)^{\star}$ to denote an adjoint super-operator with respect to the trace unless otherwise stated. Notice that this is a terminal-condition problem instead of a conventional initial-condition problem. 
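The adjoint generator can be checked in the same way. The sketch below (the same assumed qubit model with illustrative rates) verifies the defining duality ${\rm Tr}[(L^{*}_{t}A)\rho]={\rm Tr}[A\,L_{t}\rho]$ together with the property $L^{*}_{t}1=0$:

```python
import numpy as np

# Assumed toy model (illustrative parameters): qubit with H = (omega/2) sigma_z,
# jump operators V1 = sqrt(g1) sigma_-, V2 = sqrt(g2) sigma_+.
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
omega, g1, g2 = 1.0, 0.4, 0.1
H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
Vs = [np.sqrt(g1) * sm, np.sqrt(g2) * sp]

def L(rho):
    # generator in Lindblad form: -i[H,rho] + sum_j (V rho V^+ - (1/2){V^+V, rho})
    out = -1j * (H @ rho - rho @ H)
    for V in Vs:
        Vd = V.conj().T
        out += V @ rho @ Vd - 0.5 * (Vd @ V @ rho + rho @ Vd @ V)
    return out

def Lstar(A):
    # adjoint generator: i[H,A] + (1/2) sum_j ([V^+, A]V + V^+[A, V])
    out = 1j * (H @ A - A @ H)
    for V in Vs:
        Vd = V.conj().T
        out += 0.5 * ((Vd @ A - A @ Vd) @ V + Vd @ (A @ V - V @ A))
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho).real

print(abs(np.trace(Lstar(A) @ rho) - np.trace(A @ L(rho))))   # duality: ~0
print(np.linalg.norm(Lstar(np.eye(2))))                       # L* 1 = 0: ~0
```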
If we regard the term containing the super-operator ${\cal O}_{t'}$ in Eq.~(\ref{EOMR}) as a perturbation, which is reasonable because we are concerned with the deviation of $R(t',T)$ from $1$, we can obtain a formal solution for $R(t',T)$ using the Dyson series~\cite{Chetrite12,LiuF12}: \begin{eqnarray} \label{DysonexpansionR} R(t',T)=[G^\star(t',T)+\sum_{n=1}^{\infty}\int_{t'}^Tdt_1\cdots\int_{t_{n-1}}^T dt_n\prod_{i=1}^n G^\star(t_{i-1},t_i){\cal O}_{t_i} G^\star(t_n,T)] R(T,T) \end{eqnarray} where $G^\star(t',T)={\cal T}_{+}\exp [\int_{t'}^T d\tau L^*_{\tau}]$ is the adjoint propagator of the system, and ${\cal T}_{+}$ denotes the antichronological time-ordering operator~\cite{Breuer}.\\ \noindent{\it Operator equality on the entropy production} Although we have the formal expression for $R(t',T)$, it is not satisfactory, because the perturbation in Eq.~(\ref{DysonexpansionR}) involves $\rho(t')$, which makes its physical interpretation ambiguous. In order to proceed further, in this work we restrict ourselves to systems satisfying an instant detailed balance condition with respect to their instant thermal equilibrium state $\rho_0(t)$~\cite{Alicki76,Kossakowski77}: \begin{eqnarray} &&L^{\rm rev}_t\rho_0(t)=0,\\ &&L^{\rm irr}_t A\rho_0(t)={L^{\rm irr\star}_t}[A]\rho_0(t). \end{eqnarray} These conditions mean that the system always relaxes to its thermal equilibrium state $\rho_0(t)$ if the external source is fixed at its value at time $t$~\cite{Spohn78EP}. This limitation does not seem very strict from a physical point of view. Under this circumstance, we may define an auxiliary operator $R_0(t',T)$ as follows: \begin{eqnarray} \label{Eqsplitting} R(t',T)\rho(t')=R_0(t',T)\rho_0(t'). 
\end{eqnarray} One can prove that the equation of motion for $R_0(t',T)$ is analogous to Eq.~(\ref{EOMR}), except that the previous perturbation term is now replaced with \begin{eqnarray} \label{worksuperoperator} -{\cal W}_{t'} R_0(t',T) =-R_0(t',T)\partial_{t'} \rho_0(t')\rho_0(t')^{-1}, \end{eqnarray} and the terminal condition of $R_0(t',T)$ is $\rho(T)\rho_0^{-1}(T)$. The reader is reminded that the newly defined ${\cal W}_t$ is a super-operator, though its action on an operator is a simple multiplication from the operator's right-hand side. As we mentioned previously, a deviation of $R(t',T)$ from 1 indicates the time-irreversibility of the solution $\rho(t)$. Hence, selecting $t'$=0, regarding the logarithms of all the density operators as ``small'' operators, and using the relation~(\ref{worksuperoperator}) and the Dyson series for $R_0(0,T)$, we can expand $R(0,T)$ around 1 up to first order: \begin{eqnarray} \label{1storddef} R(0,T)&=& 1 + \Theta^{-1}\ln \tilde{\rho}(T)\Theta -\ln\rho(0) +\cdots\\ \label{1stordwork} &=& 1 + G^\star(0,T)\ln\rho(T) -G^\star(0,T)\ln\rho_0(T)+ \int_0^T d\tau G^\star(0,\tau)\partial_\tau\ln\rho_0(\tau)+\ln\rho_0(0)-\ln\rho(0)+\cdots \label{1stordheat} \end{eqnarray} The first equation arises from the definition~(\ref{Roperator}), and in the latter equation we have used the simple property $G^\star(0,T)1=1$. We must point out explicitly that the first-order expansion has also been applied to the super-operator ${\cal W}_\tau$. Notice that the above equation also holds if the initial time 0 is replaced with an arbitrary time point $t'$ ($\le T$). We find that equating Eqs.~(\ref{1storddef}) and~(\ref{1stordwork}) provides us with a microscopic expression for the second law of thermodynamics. 
To see this, multiplying them by $\rho(0)$ and taking the trace, we have \begin{eqnarray} \label{meanEP} &&\langle \ln \rho(0)\rangle_0-\langle \Theta^{-1}\ln\tilde\rho(T)\Theta\rangle_0\nonumber\\ =&&[-\langle \ln\rho(T)\rangle_T+\langle\ln\rho(0)\rangle_0]+[\langle \ln \rho_0(T)\rangle_T-\langle \ln \rho_0(0)\rangle_0 - \int_0^T d\tau\langle \partial_\tau\ln\rho_0(\tau)\rangle_\tau] \end{eqnarray} where $\langle A\rangle_\tau={\rm Tr}[A\rho(\tau)]$, and we have used the properties ${\rm Tr}[G^\star(t',T)(A)B]={\rm Tr}[A G(T,t')(B)]$ and $G(\tau,0)\rho(0)$=$\rho(\tau)$. Here $G(T,t')$ is the system's propagator and equals ${\cal T}_{-}\exp[ \int_{t'}^T d\tau L_{\tau}]$, where ${\cal T}_{-}$ is the chronological time-ordering operator~\cite{Breuer}. We see that, on the right-hand side of Eq.~(\ref{meanEP}), the terms in the first square bracket are the change of the von Neumann entropy of the system, $S(\rho(\tau))$=$-\langle\ln\rho(\tau)\rangle_\tau$, while the terms in the second square bracket are the mean heat transferred from the system to the heat reservoir. The latter identification is a consequence of the first law of thermodynamics~\cite{Alicki79}, which becomes obvious if the quantum master equation is obtained in the weak coupling limit and the equilibrium state $\rho_0(t)$ is a canonical ensemble~\cite{Spohn78,Gorini78,Davies74}. According to the principles of phenomenological thermodynamics~\cite{Groot}, the whole expression on the right-hand side is nothing other than the mean entropy production of the irreversible quantum process, and it is always assumed to be nonnegative. Deffner and Lutz obtained the same expression directly from this principle~\cite{Deffer11}. Compared with their argument, two interesting features are revealed here by the terms on the left-hand side of Eq.~(\ref{meanEP}). 
First, because the two sides are equal, we have a new form for the mean entropy production, which is based on the initial and terminal density operators of the original and time-reversal systems, respectively. In particular, if the initial state is time-reversal invariant, e.g., the system initially being in thermal equilibrium, the expression on the left-hand side is just a quantum relative entropy, $S(\rho(0)||\tilde\rho(T))$~\cite{Cover}. Under this circumstance, the nonnegativity of the mean entropy production has a rigorous mathematical foundation rather than a merely phenomenological one. Second, if we consider a relaxation process of a time-reversal symmetric system from a nonequilibrium initial state $\rho(0)$ to the thermal equilibrium $\rho_0$ after a time $T$ without any external perturbation~\cite{Spohn78EP}, the expression on the left-hand side becomes $S(\rho(0)||\rho_0)$. The reason is that the time-reversal system starts from the equilibrium initial state $\rho_0$, which is also the terminal state of the relaxation process. Therefore, $\tilde\rho(T)$ equals $\rho_0$. In this case, the equality between the two sides of Eq.~(\ref{meanEP}) becomes trivial. 
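The nonnegativity of the mean entropy production in Eq.~(\ref{meanEP}) can be illustrated numerically. The sketch below assumes a model not taken from the paper: a qubit with $H(t)=\omega(t)\sigma_z/2$ driven through $\omega(t)$, damped by thermal Lindblad operators with rates $g(\bar n+1)$ and $g\bar n$ (which satisfy the instant detailed balance condition with $\rho_0(t)=e^{-\beta H(t)}/Z(t)$); all parameter values and the driving protocol are illustrative. It integrates the master equation and evaluates the right-hand side of Eq.~(\ref{meanEP}):

```python
import numpy as np

beta, g, T = 1.0, 0.5, 8.0   # illustrative inverse temperature, rate, duration
steps = 8000
dt = T / steps

def omega(t):
    # slowly driven level splitting (illustrative protocol)
    return 1.0 + 0.5 * np.sin(0.5 * t)

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                 # sigma_-
sz = np.diag([1.0, -1.0]).astype(complex)

def rho0(t):
    # instant equilibrium e^{-beta H(t)}/Z(t), H(t) = omega(t) sigma_z / 2
    p = np.exp(-beta * omega(t) * np.array([0.5, -0.5]))
    return np.diag(p / p.sum()).astype(complex)

def drho(rho, t):
    # Lindblad generator with thermal rates g(nbar+1), g nbar
    w = omega(t)
    nbar = 1.0 / (np.exp(beta * w) - 1.0)
    out = -1j * 0.5 * w * (sz @ rho - rho @ sz)
    for V in (np.sqrt(g * (nbar + 1)) * sm, np.sqrt(g * nbar) * sp):
        Vd = V.conj().T
        out += V @ rho @ Vd - 0.5 * (Vd @ V @ rho + rho @ Vd @ V)
    return out

def vn_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

def ln_rho0(t):
    return np.diag(np.log(np.diag(rho0(t)).real)).astype(complex)

rho = rho0(0.0)                                  # start in equilibrium
heat = 0.0
for n in range(steps):
    t = n * dt
    dln = (ln_rho0(t + dt) - ln_rho0(t)) / dt    # finite-difference d/dt ln rho0
    heat += dt * float(np.real(np.trace(rho @ dln)))
    rho = rho + dt * drho(rho, t)                # explicit Euler step

avg = lambda A, r: float(np.real(np.trace(A @ r)))
ep = (vn_entropy(rho) - vn_entropy(rho0(0.0))) \
     + (avg(ln_rho0(T), rho) - avg(ln_rho0(0.0), rho0(0.0)) - heat)
print(ep)   # mean entropy production of the driven process: small but positive
```

Since the state lags slightly behind the instant equilibrium during the driving, the computed entropy production is strictly positive, as the second law requires.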
Now we present an operator equality on the entropy production: \begin{eqnarray} \label{EPequality} 1&=&\langle \rho(T)\rho_0(T)^{-1}{\cal T}_{-}\exp[\int_0^T d\tau \partial_{\tau}\rho_0(\tau)\rho_0^{-1}(\tau)] \rho_0(0)\rho^{-1}(0)\rangle \end{eqnarray} Proof: \begin{eqnarray} 1&=&{\rm Tr}[ \Theta^{-1}\tilde\rho(T)\Theta]={\rm Tr}[R_0(0,T)\rho_0(0)]\nonumber\\ &=&{\rm Tr}\{[G^\star(0,T)+\sum_{n=1}^{\infty}\int_{0}^Tdt_1\cdots\int_{t_{n-1}}^T dt_n\prod_{i=1}^n G^\star(t_{i-1},t_i){\cal W}_{t_i} G^\star(t_n,T)] [\rho(T)\rho_0^{-1}(T)]\rho_0(0) \}\nonumber\\ &=&{\rm Tr}[\rho(T)\rho^{-1}_0(T)G(T,0)\rho_0(0)]+\int_0^Tdt_1{\rm Tr}[\rho(T)\rho^{-1}_0(T) G(T,t_1) \partial_{t_1}\rho_0(t_1)\rho^{-1}_0(t_1)G(t_1,0)\rho_0(0)]+\cdots\nonumber\\ &=&\langle\rho(T)\rho_0(T)^{-1}\rho_0(0)\rho^{-1}(0)\rangle + \int_0^Tdt_1\langle \rho(T)\rho_0(T)^{-1} \partial_{t_1}\rho_0(t_1)\rho_0^{-1}(t_1) \rho_0(0)\rho^{-1}(0)\rangle+\cdots\nonumber \end{eqnarray} The transformation from the third line to the fourth line is based on the definition of multi-time correlation functions for operators in quantum master equations~\cite{Chetrite12,GardinerQM}. We must emphasize that expanding the time-ordered exponential term in the operator equality to first order does not simply lead to the mean entropy production equation~(\ref{meanEP}), since $\partial_\tau\rho_0(\tau)\rho_0^{-1}(\tau)$ usually does not equal $\partial_\tau \ln\rho_0(\tau)$; this feature is unique to quantum physics.\\ {\noindent\it Quantum Jarzynski equality} Chetrite and Mallick have derived an operator Jarzynski equality using a modified dynamics for the accompanying density matrix~\cite{Chetrite12}. Here we give an alternative derivation in the same spirit as the derivation of Eq.~(\ref{EPequality}). 
Following the conventions of proving the Jarzynski equality~\cite{JarzynskiPRL97,Crooks99,Chetrite12}, we assume that the system has instant equilibrium solutions $\rho_0(t)$ satisfying the detailed balance condition, and that the system is initially in the equilibrium state $\rho_0(0)$. Analogously to the previous case, we are still interested in comparing the two processes of the original system and the time-reversal system using~(\ref{Roperator}), but here the initial density operator of the latter process is replaced with $\tilde{\rho}(0)$$=$$\rho_0(T)$. Therefore, the terminal condition $R(T,T)$ becomes $\rho_0(T)\rho^{-1}(T)$ instead of the previous 1. Because of the instant detailed balance condition, we may again introduce an auxiliary $R_0(t',T)$ as in~(\ref{Eqsplitting}) and, performing the same calculation, obtain the operator Jarzynski equality \begin{eqnarray} \label{JEoperator} 1=\langle {\cal T}_{-}\exp[\int_0^T d\tau \partial_{\tau} \rho_0(\tau)\rho_0^{-1}(\tau)]\hspace{0.1cm}\rangle_0 \end{eqnarray} Here the subscript $0$ indicates that the equality holds only for an equilibrium initial condition. Obviously, the operator equalities~(\ref{EPequality}) and~(\ref{JEoperator}) are not the same unless both the initial and terminal states of the nonequilibrium process are thermal equilibrium states. \\ \iffalse \section{Weak coupling limit} It is interesting to write the above results in explicit expressions. e.g., weak coupling limit (WCL)~\cite{Davies74,Gorini78}. Time reversal invariable. $\Delta H=\sum_{k,l,\omega}s_{kl}A^\dag_l(\omega)A_k(\omega)$ \begin{eqnarray} L_t^{\rm irr}\rho=\sum_{k,l,\omega} h_{kl}([A_k(\omega),A_l^\dag(\omega)]+[A_k(\omega),\rho A_l^\dag(\omega)]) \end{eqnarray} Under this case, \begin{eqnarray} \rho_0(t')=\frac{e^{-\beta H(t')}}{{\rm Tr}[e^{-\beta H(t')}]}=e^{-\beta H(t')+\beta G(t')} \end{eqnarray} where the free energy $G(t')=\beta^{-1}\ln{\rm Tr}[e^{-\beta H(t')}]$. 
The left-hand side of Eq.~(\ref{firstlaw}) becomes $\langle H_T\rangle_T$$-$$\langle H_0\rangle_0$, namely, a change of inner energy of systems from initial to final states, and ${\cal W}_\tau(1)$$=$$\partial_\tau( e^{-\beta H_\tau})e^{\beta H_\tau}$$-$$\partial_\tau G(\tau)$ \begin{eqnarray} \label{heatworkNewdef} {\cal Q}_\tau(1)&=&\sum_{kl\omega}h_{kl}(\omega) [A_l(\omega)A_k^\dag(\omega)-A_k^\dag(\omega)A_l(\omega)]\\ &=& \sum_{kl\omega} (1-e^{-\beta\omega})h_{kl}(\omega)A^\dag_k(\omega)A_l(\omega) \end{eqnarray} We find that, the conventional definitions for work and heat operators are $\beta\partial_\tau H_\tau$ and $\beta L^\star(H_\tau)$ are usually different from those in Eq.~(\ref{heatworkNewdef}). The former has been well acknowledged in the literature as "work is not observable"~\cite{TalknerPRE07}, while the latter about heat might be pointed out here explicitly. A simple calculation shows that, only under a limit $\omega$$\rightarrow$0, ${\cal Q}_\tau(1)=L^\star_\tau(H_\tau)$ \fi {\noindent\it Discussion and conclusion} By investigating the difference between the density operators of a quantum Markovian master equation and those of its time-reversal, in this work we have presented an operator equality on the entropy production. Our discussion is based on three key assumptions. The first two are the description of the perturbed quantum system by a Markovian master equation and the instant detailed balance condition, respectively. Although these two assumptions seem to limit the validity of the operator equality in the very-far-from-equilibrium regime, we should emphasize that analogous assumptions have in fact been implied in the derivations of the various fluctuation relations in classical Markovian systems. The last assumption is the existence of $\ln\rho(t)$, or equivalently the invertibility of the system's density operator. So far, we have not found a satisfactory mathematical or physical approach to justify it. Hence, we have to leave it for future study. 
Finally, we point out that an extension of the present theory to classical Markovian processes is very straightforward.\\ {\noindent We thank Prof. Chetrite for sending us their inspiring work~\cite{Chetrite12}. This work was supported by the National Science Foundation of China under Grant No. 11174025.}