A keto diet done correctly can indeed help you lose weight. However, this diet is also thought to cause diarrhea. Is that true? Have you ever tried a keto diet? Often called ketogenic, this diet method requires the dieter to limit carbohydrate intake and replace it with fat. According to Dr. Sepriani Timurtini Limbong of KlikDokter, the keto diet is claimed to help reduce weight and lower the risk of various diseases, such as epilepsy and Alzheimer's. There are several things that must be considered before starting a keto diet. The reason is that this diet must be done according to the rules; if not, health problems will follow. "The keto diet is not entirely safe for those who have diabetes or endocrine disease, because it can cause hypoglycemia, or very low blood sugar levels," said Dr. Sepri. On the other hand, a keto diet that is not done properly can also cause diarrhea. According to Women's Health, the reason a keto diet can have this consequence is that some dieters cannot digest fat properly, especially if they do not exercise regularly. Keep in mind that the keto diet is 70 to 80 percent fat. If that fat is not fully processed by the body, it will be excreted in the form of liquid stool. The situation can get worse if keto dieters also consume artificial sweeteners and alcohol. Although diarrhea caused by the keto diet tends to be harmless, it can still interfere with, and even defeat, the purpose of your diet. Therefore, to prevent diarrhea during this diet, you are encouraged to increase your fiber intake by adding more vegetables to your daily menu.
{ "redpajama_set_name": "RedPajamaC4" }
304
Chloe Haines (born 29 February 2000) is an Australian rules footballer who last played for North Melbourne in the AFL Women's (AFLW).

Early life

From Wynyard, Tasmania, Haines grew up playing basketball with her identical twin sister Libby. The sisters were introduced to football through a clinic administered by the Burnie Dockers at their school, Hellyer College, when they were in year 10. Both were included in the 2018 AFLW Academy, which allowed them to train with AFLW clubs and attend development camps. They played for their state at the 2018 AFL Women's Under 18 Championships and later competed in the Eastern Allies team. Both sisters played in a VFL Women's match for Melbourne University, in which Chloe amassed 12 disposals, five marks and five tackles.

AFLW career

Haines was drafted by North Melbourne with pick 55 in the 2018 national draft, together with her sister Libby. She made her debut against the Western Bulldogs in round 3 of the 2019 season and was re-signed for the 2020 season. In June 2020, Haines was delisted by North Melbourne, along with Libby.
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,020
\section{Introduction and Statement of Results}\label{intro} Let $N$ be a positive integer. For an odd integer $k$, we denote by $M^{!+\cdots +}_{k/2}(N)$ the space of weakly holomorphic modular forms of weight $k/2$ on $\Gamma_{0}(4N)$ whose $n^{\text{th}}$ Fourier coefficient vanishes unless $(-1)^{(k-1)/2}\,n$ is a square modulo $4N$. For the moment, we assume that $N$ is contained in the set $$\mathfrak{S}=\{1,2,3,5,7,11,13,17,19,23,29,31,41,47,59,71\}.$$ Then the group $\Gamma_{0}^{*}(N)$, which is the group generated by $\Gamma_{0}(N)$ and all Atkin--Lehner involutions $W_{e}$ for $e\parallel N$, has genus $0$. From the correspondence between Jacobi forms and half-integral weight forms (cf. \cite[Theorem 5.6]{EiZa1}), we see that for any $D\in\mathbb{Z}_{>0}$ with $D\equiv\square\pmod{4N}$, there is a unique modular form $g_{D,N}\in M^{!+\cdots +}_{3/2}(N)$ having a Fourier expansion of the form $$g_{D,N}(\tau)=q^{-D}+\sum_{d\geq 0}B^{(N)}(D,d)\, q^{d}\quad(q=e^{2\pi i\tau},~\tau\in\mathbb{H}).$$ Here, $\mathbb{H}$ denotes the complex upper half plane. Let $\ell$ be a prime with $\ell\nmid 4N$. Then the Hecke operator $T_{k/2,4N}(\ell^{2})$, originally defined on the space of weakly holomorphic modular forms of weight $k/2$ on $\Gamma_{0}(4N)$, acts on $M_{k/2}^{!+\cdots+}(N)$. 
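The support condition defining $M^{!+\cdots+}_{k/2}(N)$ above can be tested directly by brute force; the following Python sketch (the function name is ours) checks whether $(-1)^{(k-1)/2}\,n$ is a square modulo $4N$:

```python
def plus_condition_ok(n, N, k):
    """Can a(n) be nonzero for a form in M^{!+...+}_{k/2}(N)?
    True iff (-1)^((k-1)/2) * n is a square modulo 4N."""
    target = (-1) ** ((k - 1) // 2) * n % (4 * N)
    return any((x * x) % (4 * N) == target for x in range(4 * N))

# Classical Kohnen plus space (N = 1, weight 3/2): support on -n ≡ 0, 1 (mod 4)
print(plus_condition_ok(3, 1, 3))   # True
print(plus_condition_ok(1, 1, 3))   # False
```

For $N=1$ and $k=3$ this recovers the familiar Kohnen condition that coefficients are supported on $d\equiv 0,3\pmod{4}$.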
We define $T_{k/2,4N}(\ell^{2n})$ for $n\geq 2$ recursively by $$T_{k/2,4N}(\ell^{2n}):=T_{k/2,4N}(\ell^{2n-2})T_{k/2,4N}(\ell^{2})-\ell^{k-2}T_{k/2,4N}(\ell^{2n-4}).$$ For any positive integer $m$ with $\gcd{(m,4N)}=1$, define $T_{k/2,4N}(m^{2})$ multiplicatively and set $$g_{D,N}^{(m)}:=g_{D,N}\mid T_{3/2,4N}(m^{2}).$$ We denote by $B^{(N)}_{m}(D,d)$ the $d^{\text{th}}$ Fourier coefficient of $g_{D,N}^{(m)}(\tau)$: $$g_{D,N}^{(m)}(\tau)=\mbox{(principal part)}+\sum_{d\geq 0}B^{(N)}_{m}(D,d)\, q^{d}.$$ By the works of Zagier \cite{Zag2} and Kim \cite{Kim2}, the coefficients $B^{(N)}_{m}(D,d)$ can be interpreted as traces of CM values of certain modular functions (or traces of singular moduli). Remarkably, the coefficients $B^{(N)}_{m}(D,d)$ satisfy many congruence properties, which have been studied by many authors. In 2005, Ahlgren and Ono \cite{AhO1} showed that if $p\nmid m$ is an odd prime and $\left(\frac{-d}{p}\right)=1$, then $$B^{(1)}_{m}(1,p^{2}d)\equiv 0\pmod{p}.$$ Edixhoven \cite{Edix1} used the $p$-adic geometry of modular curves to show that, for any $m$ and any $d$ with $\left(\frac{-d}{p}\right)=1$, we have \begin{equation*} B^{(1)}_{m}(1,p^{2n}d)\equiv 0\pmod{p^n}. \end{equation*} When $p$ is an odd prime, Jenkins \cite{Jen1} obtained a recursive formula for $B^{(1)}_{1}(D,p^{2n}d)$ in terms of $B^{(1)}_{1}(D,p^{2j}d)$ with $j<n$. As a corollary, he proved that if $\left(\frac{-d}{p}\right)=\left(\frac{D}{p}\right)\neq 0$, then we have \begin{equation*}\label{1.8} B^{(1)}_{1}(D,p^{2n}d)=p^{n}B^{(1)}_{1}(p^{2n}D,d). \end{equation*} Guerzhoy \cite{Guer1} showed that if $D$ and $-d$ are fundamental discriminants with $\left(\frac{-d}{p}\right)=\left(\frac{D}{p}\right)$, then, for any $m$, we have \begin{equation*} B^{(1)}_{m}(D,p^{2n}d)=p^{n}B^{(1)}_{m}(p^{2n}D,d). \end{equation*} In 2012, Ahlgren \cite{Ahl1} proved a general theorem which implies the above results as special cases. 
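The recursion defining $T_{k/2,4N}(\ell^{2n})$ expresses each such operator as a polynomial with integer coefficients in $x=T_{k/2,4N}(\ell^{2})$. A minimal Python sketch (the function name is ours) that unwinds the recursion:

```python
def hecke_power_poly(n, ell, k):
    """Coefficients (lowest degree first) of T(ell^(2n)) written as a
    polynomial in x = T(ell^2), using the recursion
    T(ell^(2n)) = T(ell^(2n-2)) * x - ell^(k-2) * T(ell^(2n-4))."""
    # Base cases: T(ell^0) is the identity, T(ell^2) = x.
    p_prev, p_curr = [1], [0, 1]          # polynomials as coefficient lists
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        shifted = [0] + p_curr            # multiply p_curr by x
        scaled = [ell ** (k - 2) * c for c in p_prev]
        length = max(len(shifted), len(scaled))
        shifted += [0] * (length - len(shifted))   # pad with zeros
        scaled += [0] * (length - len(scaled))
        p_prev, p_curr = p_curr, [a - b for a, b in zip(shifted, scaled)]
    return p_curr

# T(ell^4) = x^2 - ell^(k-2); for ell = 5, k = 3:
print(hecke_power_poly(2, ell=5, k=3))   # [-5, 0, 1]
```

For instance, one reads off $T(\ell^{6})=x^{3}-2\ell^{k-2}x$, consistent with iterating the recursion by hand.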
On the other hand, Osburn \cite{Osb1} proved that if $d$ is a positive integer such that $-d$ is congruent to a square modulo $4N$ and if $p\neq N$ is an odd prime which splits in $\mathbb{Q}(\sqrt{-d})$, then $$B^{(N)}_{1}(1,p^{2}d)\equiv 0\pmod{p}.$$ Jenkins \cite{Jen2} and Koo and Shin \cite{KS1} obtained the following generalization of Osburn's result: for a positive integer $d$ such that $-d\equiv\square\pmod{4N}$ and an odd prime $p\neq N$ which splits in $\mathbb{Q}(\sqrt{-d})$, $$B^{(N)}_{1}(1,p^{2n}d)\equiv 0\pmod{p^{n}}$$ for all $n\geq 1$. The purpose of this paper is to generalize all these congruences to more general modular forms. To be precise, from now on, we assume that $N\geq 1$ is odd and square-free. For an even Dirichlet character $\chi$ modulo $4N$, we denote by $M^{!}_{k/2}(4N,\chi)$ the space of weakly holomorphic modular forms of weight $k/2$ on $\Gamma_{0}(4N)$ with Nebentypus $\chi$. The subspace of holomorphic forms and that of cuspforms are denoted by $M_{k/2}(4N,\chi)$ and $S_{k/2}(4N,\chi)$ respectively. Let $\mathcal{D}$ be a discriminant form of level $4N$ satisfying some additional conditions which will be given in Section \ref{sec-epsilon}. (For the basics on discriminant forms, see Section \ref{disc} below.) Then $\mathcal{D}$ determines an even Dirichlet character $\chi$ modulo $4N$ and a sign vector $\epsilon=(\epsilon_{p})_{p}$ over $p=2$ or $p\mid N$ with $\chi_{p}\neq 1$, where the character $\chi$ is decomposed into $p$-components: $\chi=\prod_{p}\chi_{p}$. Set $\chi'=\chi\left(\frac{4N}{\cdot}\right)$. We define the associated modular form space $M_{k/2}^{!\epsilon}(N,\chi')$ to be the subspace of $M_{k/2}^{!}(4N,\chi')$ consisting of the forms $f\in M_{k/2}^{!}(4N,\chi')$ satisfying the so-called \emph{$\epsilon$-condition}, which will be defined in Section \ref{sec-epsilon}. 
We let $$M_{k/2}^{\epsilon}(N,\chi')=M_{k/2}^{!\epsilon}(N,\chi')\cap M_{k/2}(4N,\chi') \text{ and } S_{k/2}^{\epsilon}(N,\chi')=M_{k/2}^{!\epsilon}(N,\chi')\cap S_{k/2}(4N,\chi').$$ Let us give an example. Consider the following even lattice $$L=\left\{\left(\begin{array}{cc}a & b/N \\ c & -a\end{array}\right):a,b,c\in\mathbb{Z}\right\},$$ with $Q(\alpha)=-N\det(\alpha)$ and $(\alpha,\beta)=N\mathrm{tr}(\alpha\beta)$. We denote by $L'$ the dual lattice of $L$. Then the space $M_{k/2}^{!\epsilon}(N,\chi')$ associated with the discriminant form $L'/L$ is exactly the same as the space $M_{k/2}^{!+\cdots+}(N)$. Hence the $\epsilon$-condition can be considered as a generalization of the Kohnen plus condition. Now we further assume that $\chi_{p}\neq 1$ for each $p\mid N$, so $\chi'=1$. In \cite{Zhang2}, Zhang defined a family of forms in $M_{k/2}^{!\epsilon}(N,1)$, called \emph{reduced forms}. (For the definition, see Section \ref{sec-epsilon}.) If a reduced form $f_m$ exists for some $m \in \mathbb Z$, it must be unique and $\chi_{p}(m)\neq-\epsilon_{p}$ for each $p\mid N$. The set of reduced modular forms forms a basis for $M_{k/2}^{!\epsilon}(N,1)$. When $k=3$ and $N\in\mathfrak{S}$, the reduced form $f_{-D}$ exists for each $D>0$ which is a square modulo $4N$ (cf. Proposition \ref{prop-zhang2} below). In fact, $s(-D)f_{-D}=g_{D,N}$ for every $D$ where $s(-D)$ is a scaling constant. Thus the reduced forms are natural generalizations of the forms $g_{D,N}$. In order to generalize the congruences mentioned above to reduced forms, we first need to check integrality of the Fourier coefficients of reduced forms. We establish the following proposition which allows us to check whether a fixed reduced form has integer Fourier coefficients. \begin{proposition} \label{prop-1.1} Let $k$ be an odd integer. Assume that $f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!\epsilon}(N,\chi')\cap\mathbb{Q}(\!(q)\!)$ with bounded denominator, and that $a(n)\neq 0$ for some $n<0$. 
Furthermore, let $k'$ be the smallest positive integer which satisfies $k'\geq |\mathrm{ord}_{\infty}(f)|/4N$ and $k+12k'> 0$. If $a(n)\in\mathbb{Z}$ for $n\leq\mathrm{ord}_{\infty}(f)+\frac{k+12k'}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(4N)]$, then $a(n)\in\mathbb{Z}$ for all $n$. \end{proposition} Let $\mathcal{D}^{*}$ be the dual discriminant form of $\mathcal{D}$. It is known that the corresponding data to $\mathcal{D}^{*}$ is $(4N,\chi',\epsilon^{*})$ with $\epsilon_{p}^{*}=\chi_{p}(-1)\epsilon_{p}$. Denote by $M_{k/2}^{\epsilon^{*}}(N,\chi')$ the space of modular forms associated to $\mathcal{D}^{*}$. We denote by $a(m,n)$ the $n^{\mathrm{th}}$ Fourier coefficient of the reduced form $f_{m}$. We prove the following theorem which turns the integrality problem for reduced forms into checking finitely many of them. \begin{theorem} \label{thm-1.2} Let $m_{\epsilon}=\max\{m : f_{m}^{*}\in M_{2-k/2}^{\epsilon^{*}}(N,\chi')~\textnormal{exists}\}$. Assume that for all $n\in\mathbb{Z}$ and $m\geq -4N-m_{\epsilon}$, we have $s(m)a(m,n)\in\mathbb{Z}$. Then $s(m)a(m,n)\in\mathbb{Z}$ for all $m,n\in\mathbb{Z}$. \end{theorem} Therefore, to check the integrality of reduced forms, it suffices to show the integrality of a finite number of Fourier coefficients satisfying the conditions of both Proposition \ref{prop-1.1} and Theorem \ref{thm-1.2}. We give an example to illustrate this. \begin{example} \label{ex-int} We consider the space {$M_{1/2}^{!+\cdots +}(7,1)$}. Then we have $m_\epsilon=-1$. Define $$E_{k}(\tau)=1-\frac{2k}{B_{k}}\sum_{n=1}^{\infty}\sigma_{k-1}(n)q^{n}\qquad (2<k\in 2\mathbb{Z})$$ to be the normalized Eisenstein series, and denote by $[\cdot,\cdot]_{n}$ ($n\geq 1$) the $n^{\text{th}}$ Rankin--Cohen bracket (cf. \cite[pp.53-58]{BrGeHaZa1}). 
Set \[ \begin{array}{llll} \mathrm{RC}_{1}=\frac{[\theta,E_{10}(28\tau)]_{1}}{\Delta(28\tau)}, & \mathrm{RC}_{2}=\frac{[\theta,E_{8}(28\tau)]_{2}}{\Delta(28\tau)}, & \mathrm{RC}_{3}=\frac{[\theta,E_{6}(28\tau)]_{3}}{\Delta(28\tau)}, & \mathrm{RC}_{4}=\frac{[\theta,E_{4}(28\tau)]_{4}}{\Delta(28\tau)}, \\ \mathrm{RC}_{5}=\frac{[\mathrm{RC}_{1},E_{10}(28\tau)]_{1}}{\Delta(28\tau)}, & \mathrm{RC}_{6}=\frac{[\mathrm{RC}_{1},E_{8}(28\tau)]_{2}}{\Delta(28\tau)}, & \mathrm{RC}_{7}=\frac{[\mathrm{RC}_{1},E_{6}(28\tau)]_{3}}{\Delta(28\tau)}, & \mathrm{RC}_{8}=\frac{[\mathrm{RC}_{1},E_{4}(28\tau)]_{4}}{\Delta(28\tau)}, \\ \mathrm{RC}_{9}=\frac{[\mathrm{RC}_{2},E_{10}(28\tau)]_{1}}{\Delta(28\tau)}, & \mathrm{RC}_{10}=\frac{[\mathrm{RC}_{2},E_{8}(28\tau)]_{2}}{\Delta(28\tau)}, & \mathrm{RC}_{11}=\frac{[\mathrm{RC}_{2},E_{6}(28\tau)]_{3}}{\Delta(28\tau)}, & \mathrm{RC}_{12}=\frac{[\mathrm{RC}_{2},E_{4}(28\tau)]_{4}}{\Delta(28\tau)}, \\ \mathrm{RC}_{13}=\frac{[\mathrm{RC}_{3},E_{10}(28\tau)]_{1}}{\Delta(28\tau)}, & \mathrm{RC}_{14}=\frac{[\mathrm{RC}_{3},E_{8}(28\tau)]_{2}}{\Delta(28\tau)}, & \mathrm{RC}_{15}=\frac{[\mathrm{RC}_{3},E_{6}(28\tau)]_{3}}{\Delta(28\tau)}, & \mathrm{RC}_{16}=\frac{[\mathrm{RC}_{3},E_{4}(28\tau)]_{4}}{\Delta(28\tau)}.\\ \end{array} \] In addition, we set $$f=\tfrac{1}{5600}\mathrm{RC}_{1}+\tfrac{7}{103680}\mathrm{RC}_{2}+\tfrac{1}{80640}\mathrm{RC}_{3}+\tfrac{1}{705600}\mathrm{RC}_{4}-\tfrac{41687}{1800}\theta,$$ and define \[ \mathrm{RC}_{17}=\tfrac{[f,E_{4}(28\tau)]_{4}}{\Delta(28\tau)}.\] By taking linear combinations of these Rankin--Cohen brackets, we find \begin{align*} s(0)f_{0}&=1+2q+2q^4+2q^{9}+2q^{16}+\cdots,\\ s(-3)f_{-3}&=q^{-3}-3q-2q^{4}+6q^{8}+5q^{9}-10q^{16}+\cdots,\\ s(-7)f_{-7}&=q^{-7}-10q+4q^{4}+28q^{8}-24q^{9}+60q^{16}+\cdots,\\ s(-12)f_{-12}&=q^{-12}-10q-25q^{4}-6q^{8}+46q^{9}+152q^{16}+\cdots,\\ s(-19)f_{-19}&=q^{-19}-q-50q^{4}-50q^{8}-153q^{9}+798q^{16}+\cdots,\\ s(-20)f_{-20}&=q^{-20} - 22q + 26q^{4} - 180q^{8} - 78q^{9} - 
338q^{16}+\cdots,\\ s(-24)f_{-24}&=q^{-24} - 2q - 28q^{4} + 225q^{8} - 450q^{9} - 2976q^{16}+\cdots,\\ s(-27)f_{-27}&=q^{-27} + 12q + 52q^{4} - 468q^{8} + 156q^{9} - 1300q^{16}+\cdots. \end{align*} For example, we obtain \begin{align*} f_{-3}&=-\tfrac{92368453}{1197504000}\, \mathrm{RC}_{1}-\tfrac{1105849}{739031040}\, \mathrm{RC}_{2}-\tfrac{7775323}{804722688000}\, \mathrm{RC}_{3}+\tfrac{31109}{68584320000}\, \mathrm{RC}_{4}\\ &\quad-\tfrac{1}{49268736000}\, \mathrm{RC}_{7}+\tfrac{1}{862202880000}\, \mathrm{RC}_{8}-\tfrac{1}{86910050304000}\, \mathrm{RC}_{12}\\ &\quad+\tfrac{1}{216309458534400}\, \mathrm{RC}_{15}+\tfrac{83841213721}{1026432000}\, \theta\\ &=q^{-3} - 3q - 2q^{4} + 6q^{8} + 5q^{9} - 10q^{16}+\cdots. \end{align*} By Proposition \ref{prop-1.1}, the forms $s(0)f_{0},\ldots,s(-27)f_{-27}$ have integer Fourier coefficients. It follows from Theorem \ref{thm-1.2} that every reduced form in {$M_{1/2}^{!+\cdots+}(7,1)$} has integer Fourier coefficients. \end{example} Now we assume that, for any reduced form $$f_{m}=\sum_{n}a(m,n)\, q^{n}\in M_{k/2}^{!\epsilon}(N,1),$$ the form $s(m)f_{m}$ has integer Fourier coefficients. Furthermore, let $k\geq 3$ be an odd integer and set $\lambda=(k-1)/2$. Then the reduced form $f_{m}\in M_{k/2}^{!\epsilon}(N,1)$ exists for every $m\in\mathbb{Z}_{<0}$ with $\chi_{p}(m)\neq -\epsilon_{p}$ for all $p\mid N$. We write $$F_{m}(\tau)=s(m)f_{m}(\tau)=q^{m}+\sum\limits_{\substack{d\geq 0 \\ \chi_{p}(d)\neq -\epsilon_{p}~\textnormal{for all}~p\mid N}}B^{(N)}(m,d)\, q^{d}.$$ Note that the Hecke operator $T_{k/2,4N}(\ell^{2})$ acts on the space $M_{k/2}^{!\epsilon}(N,1)$ for each prime $\ell$ with $\gcd{(\ell,4N)}=1$. 
For any positive integer $t$ with $\gcd{(t,4N)}=1$, define $$F_{m}^{(t)}:=F_{m}\mid T_{k/2,4N}(t^{2}).$$ Then we obtain the coefficients $B^{(N)}_{t}(m,d)$ from the equation $$F_{m}^{(t)}(\tau)=\mbox{(principal part)}+\sum\limits_{\substack{d\geq 0,\\ \chi_{p}(d)\neq-\epsilon_{p}~\textnormal{for all}~p\mid N}}B^{(N)}_{t}(m,d)\, q^{d}.$$ We state our main theorem which describes various relations among the coefficients $B^{(N)}_{t}(m,d)$. \begin{theorem}\label{thm-1.4} We have the following: \begin{enumerate} \item[\textnormal{(\lowerromannumeral{1})}] $B^{(N)}_{t}(m,\ell^{2n+2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B^{(N)}_{t}(m,\ell^{2n}d)=\ell^{(k-2)n}\left \{ B^{(N)}_{t}(\ell^{2n}m,\ell^{2}d)-B^{(N)}_{t}(\ell^{2n-2}m,d)\right \}$. \item[\textnormal{(\lowerromannumeral{2})}] If $\ell\nmid d$, then \begin{align*} \ell^{(k-2)n}&B^{(N)}_{t}(\ell^{2n}m,d)=B^{(N)}_{t}(m,\ell^{2n}d)\\ &\quad+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{j=1}^{n}\ell^{(\lambda-1)j}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{j-1}B^{(N)}_{t}(m,\ell^{2n-2j}d). \end{align*} \item[\textnormal{(\lowerromannumeral{3})}] If $\ell\parallel d$, then $$\ell^{(k-2)n}B^{(N)}_{t}(\ell^{2n}m,d)=B^{(N)}_{t}(m,\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)\cdot B^{(N)}_{t}(m,\ell^{2n-2}d).$$ \end{enumerate} \end{theorem} As a corollary, we obtain the following congruences: \begin{corollary}\label{cor-1.5} Assume that $S_{k/2}^{\epsilon}(N,1)=0$. 
\begin{enumerate} \item If $\left(\frac{-d}{\ell}\right)=\left(\frac{-m}{\ell}\right)\neq 0$, or if $\ell\parallel d$ and $\ell\parallel m$, then for any positive integer $t$ with $(t,4N)=1$ and any $n\geq 1$, we have $$B^{(N)}_{t}(m,\ell^{2n}d)=\ell^{(k-2)n}\, B^{(N)}_{t}(\ell^{2n}m,d)\equiv 0\pmod{\ell^{(k-2)n}}.$$ \item If $\chi_{p}(\ell d)\neq -\epsilon_{p}$ for all $p\mid N$, then for any positive integer $t$ with $(t,4N)=1$ and any $n\geq 1$, we have $$B^{(N)}_{t}(m,\ell^{2n+1}d)\equiv\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B^{(N)}_{t}(m,\ell^{2n-1}d)\pmod{\ell^{(k-2)n}}.$$ \end{enumerate} \end{corollary} As for the condition in the above corollary, we remark that if \begin{align*} N\in\{n\mid n~\textnormal{is an odd } & \textnormal{square-free integer with}~1\leq n< 37\}\\ &\cup\{39,41,47,51,55,59,69,71,87,95,105,119\}, \end{align*} then $S_{3/2}^{+\cdots +}(N,1)=0$. (See \cite[Table 4]{BrEhFr1}.) We organize this paper as follows. In Section 2, we present preliminaries on discriminant forms and modular forms of half-integral weight, and recall the definitions of the $\epsilon$-condition and of reduced forms. In Section 3, we prove Proposition \ref{prop-1.1} and Theorem \ref{thm-1.2}, and in Section 4 we prove Theorem \ref{thm-1.4} and Corollary \ref{cor-1.5}. \section{Preliminaries} In this section, we review the basics of discriminant forms and modular forms of half-integral weight, and present a recent result of Zhang \cite{Zhang2}. \subsection{Discriminant forms.} \label{disc} A discriminant form is a finite abelian group {$\mathcal{D}$} with a quadratic form {$Q:\mathcal{D}\rightarrow\mathbb{Q}/\mathbb{Z}$}, such that the symmetric bilinear form defined by $(\beta,\gamma)=Q(\beta+\gamma)-Q(\beta)-Q(\gamma)$ is nondegenerate, namely, the map {$\mathcal{D}\rightarrow\mathrm{Hom}(\mathcal{D},\mathbb{Q}/\mathbb{Z})$} defined by $\gamma\mapsto(\gamma,\cdot)$ is an isomorphism. 
We define the level of a discriminant form {$\mathcal{D}$} to be the smallest positive integer $N$ such that $NQ(\gamma)=0$ for each {$\gamma\in\mathcal{D}$}. It is known that if $L$ is an even lattice then $L'/L$ is a discriminant form, where $L'$ is the dual lattice of $L$. Conversely, any discriminant form can be obtained in this way. The signature {$\mathrm{sign}(\mathcal{D})\in\mathbb{Z}/8\mathbb{Z}$} is defined to be the signature of $L$ modulo 8 for any even lattice $L$ such that {$L'/L=\mathcal{D}$}. Every discriminant form can be decomposed uniquely into $p$-components {$\mathcal{D}=\oplus_{p}\mathcal{D}_{p}$}. Each $p$-component can be written as a direct sum of indecomposable Jordan $q$-components with $q$ powers of $p$. Such decompositions are not unique in general. We recall the possible indecomposable Jordan $q$-components as follows. Let $p$ be an odd prime and $q>1$ be a power of $p$. The indecomposable Jordan components with exponent $q$ are denoted by $q^{\delta_{q}}$ with $\delta_{q}=\pm 1$. These discriminant forms both have level $q$. If $q>1$ is a power of 2, there are also precisely two indecomposable \emph{even} Jordan components of exponent $q$, denoted by $q^{\delta_{q}2}$ with $\delta_{q}=\pm 1$. Such components have level $q$. There are also \emph{odd} indecomposable Jordan components, denoted by $q_{t}^{\pm 1}$ with $\pm 1=\left(\frac{2}{t}\right)$ for each $t\in (\mathbb{Z}/8\mathbb{Z})^{\times}$. These discriminant forms have level $2q$. We call a discriminant form {$\mathcal{D}$} \emph{transitive} if the action of {$\mathrm{Aut}(\mathcal{D})$} is transitive on the subset of elements of norm $n$ for any $n\in\mathbb{Q}/\mathbb{Z}$. By the classification of transitive forms, the level $N=\prod_{p}N_{p}$ of a transitive form {$\mathcal{D}=\oplus_{p}\mathcal{D}_{p}$} is of the following form: $N_{p}=1$ or $p$ for an odd prime $p$ and $N_{2}=1,2,4$ or 8. In other words, $N$ is the conductor of a quadratic Dirichlet character. 
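Since the level of each indecomposable Jordan component is given above ($q$ for the odd-prime and even $2$-adic components, $2q$ for the odd $2$-adic components), the level of a discriminant form can be read off from any Jordan decomposition as a least common multiple. A small Python sketch (the tuple encoding of components is ours):

```python
from math import lcm

def discriminant_form_level(components):
    """Level of a discriminant form given a list of indecomposable
    Jordan components as tuples (q, kind), where kind is
    'odd-p'  : q a power of an odd prime, level q,
    'even-2' : q a power of 2, even component q^{±2}, level q,
    'odd-2'  : q a power of 2, odd component q_t^{±1}, level 2q."""
    levels = [2 * q if kind == 'odd-2' else q for q, kind in components]
    return lcm(*levels)

# D ≅ Z/14Z from the lattice example with N = 7: 2_t^{+1} ⊕ 7^{±1}
print(discriminant_form_level([(2, 'odd-2'), (7, 'odd-p')]))   # 28
```

For the lattice example of Section 1 this gives level $28=4N$ with $N=7$, as expected.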
Let {$\mathcal{D}$} be a transitive discriminant form of odd signature $r$ and level $N$. Then {$\mathcal{D}$} determines an even quadratic character $\chi$ modulo $N$. Explicitly, it is given as follows: Decompose $\chi=\prod_{p}\chi_{p}$ into $p$-components. If $p$ is odd, {\begin{equation*} \chi_{p}(d)= \begin{cases} 1, & \textnormal{if}~p\nmid|\mathcal{D}|~\textnormal{or}~p^{2}\mid|\mathcal{D}|,\\ \left(\frac{d}{p}\right), & \textnormal{otherwise}. \end{cases} \end{equation*} When $2\mid |\mathcal{D}|$, \begin{equation*} \chi_{2}(d)= \begin{cases} 1, & \textnormal{if}~\left(\frac{-1}{|\mathcal{D}|}\right)=+1~\textnormal{and}~\mathcal{D}_{2}=2_{\pm 3}^{+3},~2_{\pm 1}^{+1},\\ \left(\frac{-4}{d}\right), & \textnormal{if}~\left(\frac{-1}{|\mathcal{D}|}\right)=-1~\textnormal{and}~\mathcal{D}_{2}=2_{\pm 3}^{+3},~2_{\pm 1}^{+1},\\ \left(\frac{2}{d}\right), & \textnormal{if}~\left(\frac{-1}{|\mathcal{D}|}\right)=+1~\textnormal{and}~\mathcal{D}_{2}=4_{\pm 1}^{+1},~4_{\pm 3}^{-1},\\ \left(\frac{-2}{d}\right), & \textnormal{if}~\left(\frac{-1}{|\mathcal{D}|}\right)=-1~\textnormal{and}~\mathcal{D}_{2}=4_{\pm 1}^{+1},~4_{\pm 3}^{-1}. \end{cases} \end{equation*}} For more details on discriminant forms, see \cite{CS1}, \cite{Ni1}, \cite{Sch1} or \cite{Str1}. \subsection{Metaplectic covers.} \label{meta} Throughout this paper, unless otherwise stated, {$k$ is an odd integer}. Let $\mathrm{Mp}_{2}^{+}(\mathbb{R})$ be the metaplectic cover of $\mathrm{GL}_{2}^{+}(\mathbb{R})$. The elements of $\mathrm{Mp}_{2}^{+}(\mathbb{R})$ are pairs $(A,\phi)$ where $\phi$ is a holomorphic function on $\mathbb{H}$ and $$A=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right)\in\mathrm{GL}_{2}^{+}(\mathbb{R}),\quad\phi(\tau)=tj(A,\tau),~\textnormal{for some}~t\in\mathbb{C},~|t|=1.$$ Here $j(A,\tau)=\det(A)^{-\frac{1}{4}}(c\tau+d)^{\frac{1}{2}}$. 
The product of two elements $(A_{1},\phi_{1})$ and $(A_{2},\phi_{2})$ is defined by $$(A_{1}A_{2},\phi_{1}(A_{2}\tau)\phi_{2}(\tau)).$$ To introduce the theta multiplier system, we first extend the Jacobi symbol. For an integer $c$ and an odd integer $d\neq 0$, the ``extended Jacobi symbol" $\left(\frac{c}{d}\right)$ is defined as follows. \begin{enumerate} \item[(1)] $\left(\frac{c}{d}\right)=0$ if $\gcd{(c,d)}>1$; \item[(2)] $\left(\frac{0}{\pm 1}\right)=1$; \item[(3)] If $d>0$, then $\left(\frac{c}{d}\right)$ is the usual Jacobi symbol; \item[(4)] If $d<0$, then $\left(\frac{c}{d}\right)=\mathrm{sgn}(c)\left(\frac{c}{|d|}\right)$. \end{enumerate} Next we define $\varepsilon_{d}$ for odd $d$ by: \begin{equation*} \varepsilon_{d}= \begin{cases} 1 & \textnormal{if}\quad d\equiv 1\pmod{4};\\ i & \textnormal{if}\quad d\equiv 3\pmod{4}. \end{cases} \end{equation*} We finally define the theta multiplier system $\nu$ on $\Gamma_{0}(4)$: $$\nu(A)=\left(\frac{c}{d}\right)\varepsilon_{d}^{-1},\quad A=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right)\in\Gamma_{0}(4).$$ Note that $$\overline{\nu}(A)=\nu^{3}(A)=\left(\frac{-1}{d}\right)\nu(A),\quad\nu(A)\nu(A^{-1})=1,\quad A\in\Gamma_{0}(4).$$ For any $A=\left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right)\in\mathrm{GL}_{2}^{+}(\mathbb{R})$, we let $$\tilde{A}=(A,j(A,\tau))\in\mathrm{Mp}_{2}^{+}(\mathbb{R}).$$ Let $\mathrm{Mp}_{2}(\mathbb{Z})$ be the metaplectic double cover of $\mathrm{SL}_{2}(\mathbb{Z})$ inside $\mathrm{Mp}_{2}^{+}(\mathbb{R})$, consisting of pairs $(A,\phi)$ with $A=\left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right)\in\mathrm{SL}_{2}(\mathbb{Z})$ and $\phi^{2}=c\tau+d$. Let $S$ and $T$ denote the standard generators of $\mathrm{SL}_{2}(\mathbb{Z})$. Then $$\tilde{S}=\left(\left(\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right),\sqrt{\tau}\right),\quad\tilde{T}=\left(\left(\begin{array}{cc}1 & 1 \\ 0 & 1\end{array}\right),1\right)$$ generate $\mathrm{Mp}_{2}(\mathbb{Z})$. 
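The extended Jacobi symbol and $\varepsilon_{d}$ above are straightforward to compute; the following Python sketch (function names are ours) follows rules (1)--(4) literally, using the standard binary algorithm for the usual Jacobi symbol:

```python
from math import gcd

def jacobi(c, d):
    """Usual Jacobi symbol (c/d) for odd d > 0."""
    c %= d
    result = 1
    while c != 0:
        while c % 2 == 0:               # pull out factors of 2
            c //= 2
            if d % 8 in (3, 5):
                result = -result
        c, d = d, c                     # quadratic reciprocity
        if c % 4 == 3 and d % 4 == 3:
            result = -result
        c %= d
    return result if d == 1 else 0      # gcd > 1 gives 0

def extended_jacobi(c, d):
    """Extended Jacobi symbol, rules (1)-(4): d odd and nonzero."""
    if gcd(c, d) > 1:
        return 0                        # rule (1)
    if c == 0:                          # then d = ±1; rule (2)
        return 1
    if d > 0:
        return jacobi(c, d)             # rule (3)
    sign = 1 if c > 0 else -1
    return sign * jacobi(c, -d)         # rule (4)

def epsilon(d):
    """epsilon_d: 1 if d ≡ 1 (mod 4), i if d ≡ 3 (mod 4)."""
    return 1 if d % 4 == 1 else 1j

print(extended_jacobi(2, 15))   # 1
print(extended_jacobi(-1, -3))  # 1
```

Note that rule (2) is reached only when $d=\pm 1$, since $\gcd(0,d)=|d|$ triggers rule (1) otherwise.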
\subsection{Modular forms.} \label{mod} Let $(A,\phi)\in\mathrm{Mp}_{2}^{+}(\mathbb{R})$ and $f:\mathbb{H}\rightarrow\mathbb{C}$ be a function. The weight {$k/2$} slash operator is defined by {$$(f|_{k/2}(A,\phi))(\tau)=\phi^{-k}(\tau)f(A\tau),\quad A=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right).$$} Consider the Atkin--Lehner operators on the space {$M_{k/2}^{!}(4N,\chi)$}, where $N$ is odd square-free. For any odd divisor $m$ of $N$, we choose $\gamma_{m}$ in $\mathrm{SL}_{2}(\mathbb{Z})$ such that \begin{equation*} \gamma_{m}\equiv \begin{cases} S &\mod{m^{2}},\\ I &\mod{(4N/m)^{2}}, \end{cases} \end{equation*} and let \begin{equation*} \quad\gamma_{4m}:=S\gamma_{N/m}^{-1}\equiv \begin{cases} S &\mod{(4m)^{2}},\\ I &\mod{(4N/m)^{2}}. \end{cases} \end{equation*} We shall assume for simplicity that all of the entries of $\gamma_{m}$ are positive; this can be achieved by left and/or right multiplication by matrices in $\Gamma(16N^{2})$. For any nonzero integer $m$, let $$\delta_{m}=\left(\begin{array}{cc}m & 0 \\ 0 & 1\end{array}\right),\quad\widetilde{\delta_{m}}=\left(\left(\begin{array}{cc}m & 0 \\ 0 & 1\end{array}\right),m^{-\frac{1}{4}}\right).$$ For any odd positive divisor $m$ of $N$, let $W(m)=\gamma_{m}^{*}\widetilde{\delta_{m}}$. 
Define $$\tau_{4N}=(I,\sqrt{-i})\widetilde{\beta_{4N}}=\left(\left(\begin{array}{cc}0 & -1 \\ 4N & 0\end{array}\right),(4N)^{\frac{1}{4}}(-i\tau)^{\frac{1}{2}}\right),\quad\beta_{4N}=\left(\begin{array}{cc}0 & -1 \\ 4N & 0\end{array}\right).$$ For each divisor $m$, even or odd, of $4N$, define $U(m)$ as follows: {$$f|U(m)=m^{\frac{k}{4}-1}\sum_{j\ \textrm{mod} \ {m}}f|\widetilde{\delta_{m}}^{-1}\tilde{T}^{j}=m^{\frac{k}{4}-1}\sum_{j\ \textrm{mod} \ {m}}f|\widetilde{\delta_{m}^{-1}T^{j}}.$$} Finally, we define $Y(p)$ for each odd prime divisor $p$ of $N$ by {$$f|Y(p)=p^{1-\frac{k}{4}}f|U(p)W(p),$$} and $Y(4)$ by {$$f|Y(4)=4^{1-\frac{k}{4}}f|U(4)W(N)\tau_{4N}.$$} \begin{proposition}[\cite{Shi1, Ueda1, Zhang2}] Let {$f\in M_{k/2}^{!}(4N,\chi)$}. \begin{enumerate} \item[\textnormal{(1)}] {$f|\tau_{4N}\in M_{k/2}^{!}(4N,\chi\left(\frac{4N}{\cdot}\right))$} and $f|\tau_{4N}^{2}=f$. \item[\textnormal{(2)}] For each $m\mid 4N$, {$f|U(m)\in M_{k/2}^{!}(4N,\chi\left(\frac{m}{\cdot}\right))$}. \item[\textnormal{(3)}] For each $m\mid N$, {$f|W(m)\in M_{k/2}^{!}(4N,\chi\left(\frac{m}{\cdot}\right))$} and {$$f|W(m)^{2}=\varepsilon_{m}^{-k}\chi_{m}(-1)\chi_{4N/m}(m)f.$$} Moreover, if $m,m'\mid N$ and $\gcd{(m,m')}=1$, then $f|W(m)W(m')=\chi_{m'}(m)f|W(mm')$. \item[\textnormal{(4)}] For any $m,m'\mid N$ with $\gcd{(m,m')}=1$, $$f|W(m)U(m')=\chi_{m}(m')f|U(m')W(m)~\textnormal{and}~f|U(4)W(m)=f|W(m)U(4).$$ \item[\textnormal{(5)}] For any $m\mid N$, $f|\tau_{4N}U(m)W(m)=\chi_{m}(4N/m)f|W(m)U(m)\tau_{4N}$. \end{enumerate} \end{proposition} Now we consider the operators $Y(p)$ and $Y(4)$. 
\begin{proposition}[\cite{Koh1, Ueda1, Zhang2}] \hfill \begin{enumerate} \item[\textnormal{(1)}] The space {$M_{k/2}^{!}(4N,\chi)$} decomposes under $Y(4)$ into eigenspaces {$$M_{k/2}^{!}(4N,\chi)=M_{k/2}^{!}(4N,\chi)_{\mu_{2}^{+}}\oplus M_{k/2}^{!}(4N,\chi)_{\mu_{2}^{-}},$$} where the eigenvalues are {$$\mu_{2}^{+}=\chi_{2}(-1)^{\frac{k}{2}+1}(-1)^{\lfloor\frac{k+1}{4}\rfloor}2^{\frac{3}{2}} \quad \text{ and } \quad \mu_{2}^{-}=-2^{-1}\mu_{2}^{+}.$$} Moreover, {$f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!}(4N,\chi)_{\mu_{2}^{+}}$} if and only if {$$a(n)=0~\text{whenever}~\chi_{2}(-1)(-1)^{\frac{k-1}{2}}n\equiv 2,3\pmod{4}.$$} \item[\textnormal{(2)}] Assume that $p\mid N$ with $\chi_{p}=1$. Then the space {$M_{k/2}^{!}(4N,\chi)$} decomposes under $Y(p)$ into eigenspaces $$M_{k/2}^{!}(4N,\chi)=M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{+}}\oplus M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{-}},$$ where the eigenvalues are $\mu_{p}^{+}=\varepsilon_{p}^{-1}p^{\frac{1}{2}}$ and $\mu_{p}^{-}=-\mu_{p}^{+}$. Moreover, {$f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{+}}$} (resp. {$M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{-}}$}) if and only if $$a(n)=0~\text{whenever}~\left(\frac{n}{p}\right)=- 1 \quad ~(resp.~ \left(\frac{n}{p}\right)= 1~)~.$$ \item[\textnormal{(3)}] Assume that $p\mid N$ with $\chi_{p}=\left(\frac{\cdot}{p}\right)$. Then the space {$M_{k/2}^{!}(4N,\chi)$} decomposes under $Y(p)$ into eigenspaces {$$M_{k/2}^{!}(4N,\chi)=M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{+}}\oplus M_{k/2}^{!}(4N,\chi)_{\mu_{p}^{-}},$$} where the eigenvalues are $\mu_{p}^{+}=-1$ and $\mu_{p}^{-}=-p$. \item[\textnormal{(4)}] The space {$M_{k/2}^{!}(4N,\chi)$} decomposes into the direct sum of simultaneous eigenspaces for the operators $Y(4)$ and $Y(p)$. \end{enumerate} \end{proposition} \subsection{$\epsilon$-condition and reduced forms} \label{sec-epsilon} Let {$\mathcal{D}$} be a transitive discriminant form of odd signature $r$ and level $M$. Assume that {$\mathcal{D}_{2}=2_{\pm 1}^{+1}$}. 
Then $M=4N$ for some odd square-free $N$ and {$$\chi_{2}(-1)=e_{4}(r-t)=\left(\frac{-1}{|\mathcal{D}|}\right),$$} where $e_{4}(x)=e^{2\pi ix/4}$. Recall that {$\mathcal{D}$} determines an even Dirichlet character $\chi$ modulo $4N$. We define a sign vector $\epsilon=(\epsilon_{p})_{p}$ over $p=2$ or $p\mid N$ with $\chi_{p}\neq 1$ as follows: {\begin{equation*} \epsilon_{p}= \begin{cases} \chi_{p}(2N/p) & \textnormal{if}~p\mid N,~\chi_{p}\neq 1~\textnormal{and}~\mathcal{D}_{p}=p^{+1},\\ -\chi_{p}(2N/p) & \textnormal{if}~p\mid N,~\chi_{p}\neq 1~\textnormal{and}~\mathcal{D}_{p}=p^{-1},\\ \left(\frac{-1}{N}\right) & \textnormal{if}~p=2~\textnormal{and}~\mathcal{D}_{2}=2_{+1}^{+1},\\ -\left(\frac{-1}{N}\right) & \textnormal{if}~p=2~\textnormal{and}~\mathcal{D}_{2}=2_{-1}^{+1}. \end{cases} \end{equation*}} Therefore, {$\mathcal{D}$} determines $(4N,\chi\left(\frac{4N}{\cdot}\right),\epsilon)$: $\chi$ an even Dirichlet character modulo $4N$ and $\epsilon$ a sign vector. We shall denote $\chi'=\chi\left(\frac{4N}{\cdot}\right)$ from now on. Given any data $(4N,\chi',\epsilon)$ with even $\chi$ and $\epsilon_{p}=\pm 1$ for $p=2$ or $p\mid N$ with $\chi_{p}\neq 1$, we define the associated modular form space {$M_{k/2}^{!\epsilon}(N,\chi')$} to be the common eigenspace with eigenvalues $\mu_{2}^{+}$ for $Y(4)$, $\mu_{p}^{\epsilon_{p}}$ for $Y(p)$ if $\chi_{p}\neq 1$ and $\mu_{p}^{+}=-1$ for $Y(p)$ if $\chi_{p}=1$. Since \begin{align*} \epsilon_{2}\chi_{2}'(-1)(-1)^{\frac{k-1}{2}}&=t\chi_{2}(-1)e_{4}(k-1)\\ &=\chi_{2}(-1)e_{4}(k-1)e_{4}(1-t)\\ &=\chi_{2}(-1)e_{4}(r-t)=1, \end{align*} we have {$f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!\epsilon}(N,\chi')$} if and only if the following two conditions are satisfied: (1) $a(n)=0$ whenever $n\equiv 2,-\epsilon_{2}\pmod{4}$ or $\left(\frac{n}{p}\right)=-\epsilon_{p}$ for some $p\mid N$ with $\chi_{p}\neq 1$, (2) {$f|_{k/2}Y(p)=-f$} for every $p\mid N$ with $\chi_{p}=1$.\\ Recall the even lattice $L$ introduced in Section \ref{intro}. 
Then the discriminant form {$\mathcal{D}=L'/L\cong\mathbb{Z}/2N\mathbb{Z}$} with {$\mathcal{D}=\prod_{p\mid 2N}\mathcal{D}_{p}$} is given by {$$\mathcal{D}_{2}=2_{t}^{+1},\quad t=\left(\frac{-1}{N}\right),\quad \mathcal{D}_{p}=p^{\delta_{p}},\quad \delta_{p}=\left(\frac{2N/p}{p}\right)\quad\textnormal{for}\quad p\mid N.$$} It follows that for such {$\mathcal{D}$}, we have $\epsilon_{p}=+1$ for all $p\mid N$ and $\chi'=\chi\left(\frac{4N}{\cdot}\right)=1$. From the above observation, we have {$f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!\epsilon}(N,1)$ if and only if $a(n)=0$ unless $(-1)^{\frac{k-1}{2}}n$ is a square modulo $4N$}. Thus the space {$M_{k/2}^{!\epsilon}(N,1)$} is exactly the same as the space {$M_{k/2}^{!+\ldots+}(N)$}. Eichler and Zagier denote the space {$M_{k/2}^{\epsilon}(N,1)$} by $M_{k}^{+\ldots+}(N)$ in \cite[p.69]{EiZa1}. \medskip Now we shall assume that $\chi_{p}\neq 1$ for each $p\mid N$, so $\chi'=1$. A form {$f\in M_{k/2}^{!\epsilon}(N,\chi')$} is called \textit{reduced} if $f=\frac{1}{s(m)}q^{m}+ \sum_{\ell \ge m+1} a(\ell) q^\ell$ for some integer $m$ and if for each $n>m$ with $a(n)\neq 0$, there does not exist {$g\in M_{k/2}^{!\epsilon}(N,\chi')$} such that $g=q^{n}+O(q^{n+1})$. Here, {$s(m)=\displaystyle\prod_{p\mid\gcd{(N,m)}}\left(1+\frac{p}{|\mathcal{D}_p|}\right)$}. If a reduced form exists for some $m$, it is unique and $\chi_{p}(m)\neq-\epsilon_{p}$ for each $p\mid N$; we denote it by $f_{m}$. The set of reduced modular forms is a basis for {$M_{k/2}^{!\epsilon}(N,\chi')$}. The following proposition determines $m<0$ for which $f_{m}$ exists. To state it, we need some notation. Let {$\mathcal{D}^{*}$} be the dual discriminant form of {$\mathcal{D}$} given by the same abelian group with the quadratic form $-Q$. It is known that {$\mathcal{D}^{*}$} is also transitive and the corresponding data is $(4N,\chi',\epsilon^{*})$ with $\epsilon_{p}^{*}=\chi_{p}(-1)\epsilon_{p}$. 
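For concreteness, the scaling constant $s(m)=\prod_{p\mid\gcd(N,m)}\left(1+p/|\mathcal{D}_{p}|\right)$ can be evaluated once the local orders $|\mathcal{D}_{p}|$ are known; a minimal Python sketch (the dictionary encoding of the orders is ours), using exact rational arithmetic:

```python
from fractions import Fraction
from math import gcd

def scaling_constant(m, N, local_orders):
    """s(m) = product over primes p dividing gcd(N, m) of (1 + p/|D_p|).
    local_orders maps each prime p | N to the order |D_p|."""
    s = Fraction(1)
    g = gcd(N, m)
    for p, order in local_orders.items():
        if g % p == 0:
            s *= 1 + Fraction(p, order)
    return s

# Lattice example of Section 1 with N = 7: D ≅ Z/14Z, so |D_7| = 7.
print(scaling_constant(-7, 7, {7: 7}))   # 2
print(scaling_constant(-3, 7, {7: 7}))   # 1
```

In particular $s(m)=1$ whenever $\gcd(N,m)=1$, which is why most of the reduced forms in Example \ref{ex-int} need no rescaling.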
\begin{proposition}[\cite{Zhang2}, Proposition 6.1] \label{prop-zhang2} Let {$B^{*}=\{m : f_{m}^{*}\in M_{2-k/2}^{\epsilon^{*}}(N,\chi')~\textnormal{exists}\}$}. Then for any $m<0$ with $\chi_{p}(m)\neq -\epsilon_{p}$ for all $p\mid N$, the reduced form {$f_{m}\in M_{k/2}^{!\epsilon}(N,\chi')$} exists if and only if $-m\notin B^{*}$. \end{proposition} \section{Proofs of Proposition \ref{prop-1.1} and Theorem \ref{thm-1.2}} In this section, we prove the rationality of Fourier coefficients of reduced forms following the lines in \cite{BrBun1} and \cite{Zhang1}. For $f=\sum_{n}a(n)q^{n}$ and $\sigma\in\mathrm{Aut}(\mathbb{C})$, define $f^{\sigma}=\sum_{n}\sigma(a(n))q^{n}$. \begin{lemma} \label{lem-1} Let $\chi$ be a Dirichlet character modulo $N$ with values in $\mathbb{Q}$. If $f\in M_{k/2}^{!}(4N,\chi)$, so is $f^{\sigma}$. \end{lemma} \begin{proof} It is known that $\mathrm{Aut}(\mathbb{C})$ acts on the space {$M_{k/2}(\Gamma_{1}(4N),\chi)$}, the space of holomorphic modular forms of weight {$k/2$} on $\Gamma_{1}(4N)$ with Nebentypus $\chi$, and {$\sigma(M_{k/2}(4N,\chi))=M_{k/2}(4N,\chi^{\sigma})$}. (See \cite{SeSt1}.) Since $\chi$ has values in $\mathbb{Q}$, $\mathrm{Aut}(\mathbb{C})$ acts on {$M_{k/2}(4N,\chi)$}. Note that {$f\Delta^{k'}\in M_{k/2+12k'}(4N,\chi)$} for a sufficiently large positive integer $k'$. Here $\Delta$ is the unique normalized cusp form of weight $12$ for $\mathrm{SL}_{2}(\mathbb{Z})$. The above observation shows that {$(f\Delta^{k'})^{\sigma}\in M_{k/2+12k'}(4N,\chi)$}. But $\Delta$ has integral Fourier coefficients, hence {$f^{\sigma}\Delta^{k'}=(f\Delta^{k'})^{\sigma}\in M_{k/2+12k'}(4N,\chi)$} and {$f^{\sigma}\in M_{k/2}^{!}(4N,\chi)$}. \end{proof} \begin{proposition} Let $k<0$ and let {$f=\sum_{n}a(n)q^{n}\in M_{k/2}^{!\epsilon}(N,\chi')$}. Suppose that $a(n)\in\mathbb{Q}$ for $n<0$. Then all the coefficients $a(n)$ are rational with bounded denominators. \end{proposition} \begin{proof} Let $\sigma\in\mathrm{Aut}(\mathbb{C})$. 
By Lemma \ref{lem-1}, {$f^{\sigma}\in M_{k/2}^{!}(4N,\chi')$}. It is easy to check that the action of $\mathrm{Aut}(\mathbb{C})$ preserves the $\epsilon$-condition. Since $a(n)\in\mathbb{Q}$ for $n<0$, $h:=f-f^{\sigma}$ is holomorphic at $\infty$. By \cite[Corollary 5.5]{Zhang2}, {$h\in M_{k/2}^{\epsilon}(N,\chi')$}. But $k<0$, so $h=0$. It follows that $f$ has rational coefficients. We know that {$\theta f\Delta^{k'}\in S_{(k+1)/2+12k'}(4N,\chi')\subset S_{(k+1)/2+12k'}(\Gamma_{1}(4N))$} for a sufficiently large positive integer $k'$. Shimura proved that {$S_{(k+1)/2+12k'}(\Gamma_{1}(4N))$} has a basis $\mathcal{B}$ consisting of forms whose Fourier coefficients at $\infty$ are rational integers. (See \cite[Theorem 3.52]{Shi2}.) Let {$S_{(k+1)/2+12k'}^{\mathbb{Q}}(\Gamma_{1}(4N))$} be the $\mathbb{Q}$-vector space of cusp forms in {$S_{(k+1)/2+12k'}(\Gamma_{1}(4N))$} whose Fourier coefficients at $\infty$ are rational numbers. Then $\mathcal{B}$ is a $\mathbb{Q}$-basis of {$S_{(k+1)/2+12k'}^{\mathbb{Q}}(\Gamma_{1}(4N))$} and {$f\theta\Delta^{k'}\in S_{(k+1)/2+12k'}^{\mathbb{Q}}(\Gamma_{1}(4N))$}. This implies that $f\theta\Delta^{k'}$ has coefficients with bounded denominators, and we conclude that the $a(n)$ are rational with bounded denominators. \end{proof} We are interested in integrality of Fourier coefficients. So we generalize Sturm's theorem to {$M_{k/2}^{!\epsilon}(N,\chi')$}. We begin with introducing the original Sturm's theorem. \begin{theorem}[\cite{St1}] \label{thm-St} Let $\mathcal{O}_{F}$ be the ring of integers of a number field $F$, $\mathfrak{p}$ any prime ideal, $N'$ a positive integer and $k'$ a positive integer. Assume {$f=\sum_{n}a(n)q^{n}\in M_{k'}(N',\chi)\cap\mathcal{O}_{F}[\![ q]\!]$}. If $a(n)\in\mathfrak{p}$ for $n\leq\frac{k'}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(N')]$, then $a(n)\in\mathfrak{p}$ for all $n$. 
\end{theorem} Using Theorem \ref{thm-St}, Kim, Lee and Zhang proved the following:
\begin{corollary}[\cite{KLZ1}, Corollary 3.2] \label{cor-KLZ} Let $k'$ be a positive integer. Assume {$f=\sum_{n}a(n)q^{n}\in M_{k'}(4N,\chi)\cap\mathbb{Q}[\![ q]\!]$} with bounded denominators. If $a(n)\in\mathbb{Z}$ for $n\leq\frac{k'}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(4N)]$, then $a(n) \in\mathbb{Z}$ for all $n$. \end{corollary}
We extend this result to the half-integral weight case. Let $$\theta(\tau)=\sum_{n\in\mathbb{Z}}q^{n^{2}}=1+2q+2q^{4}+2q^{9}+\cdots,\quad (q=e^{2\pi i\tau},~\tau\in\mathbb{H}).$$
\begin{corollary} \label{cor-let} {Let $k>0$ be an odd integer} and assume that {$f=\sum_{n}a(n)q^{n}\in M_{k/2}(4N,\chi)\cap\mathbb{Q}[\![ q]\!]$} with bounded denominators. If $a(n)\in\mathbb{Z}$ for {$n\leq\frac{k}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(4N)]$}, then $a(n)\in\mathbb{Z}$ for all $n$. \end{corollary}
\begin{proof} By multiplying by {$\theta^{k}$}, we have {$f\theta^{k}\in M_{k}(4N,\chi)$}. It suffices to show that all the coefficients of {$f\theta^{k}$} are integers. Since $a(n)\in\mathbb{Z}$ for $n\leq\frac{k}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(4N)]$, the same holds for the coefficients of {$f\theta^{k}$}. By Corollary \ref{cor-KLZ}, every coefficient of {$f\theta^{k}$} is an integer. \end{proof}
Now we are ready to prove Proposition \ref{prop-1.1} and Theorem \ref{thm-1.2}.
\begin{proof}[Proof of Proposition \ref{prop-1.1}] {Since $k'\geq |\mathrm{ord}_{\infty}(f)|/4N$, we see that $f(\tau)\Delta(4N\tau)^{k'}\in M_{k/2+12k'}^{\epsilon}(N,\chi)$ and that every Fourier coefficient of $f(\tau)\Delta(4N\tau)^{k'}$ of index less than or equal to $\frac{k+12k'}{12}[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(4N)]$ is an integer. By Corollary \ref{cor-let}, $f(\tau)\Delta(4N\tau)^{k'}$ has integer Fourier coefficients, hence so does $f$.} \end{proof}
\begin{proof}[Proof of Theorem \ref{thm-1.2}] Consider any reduced form $f_{m'}$ with $m'<-4N-m_{\epsilon}$.
There exist integers $-4N-m_{\epsilon}\leq m_{0}'<-m_{\epsilon}$ and $l\geq 1$ such that $m'=-4Nl+m_{0}'$. By maximality of $m_{\epsilon}$, $f_{m_{0}'}$ exists. Consider now $$g=j(4N\tau)^{l}f_{m_{0}'}=\sum_{n}b(n)q^{n}\in M_{k/2}^{!\epsilon}(N,\chi'),$$ where $j(\tau)$ denotes the classical $j$-function. It is known that $j$ has integral Fourier coefficients. By the assumption on $f_{m_{0}'}$, we see that $b(n)s(m_{0}')\in\mathbb{Z}$ for each $n$. Now $s(m_{0}')g$ and $s(m')f_{m'}$ share the same lowest power term, and we must have that $$s(m')f_{m'}=s(m_{0}')g-\sum_{m>m'}s(m_{0}')b(m)s(m)f_{m}.$$ Hence $$s(m')a_{m'}(n)=s(m_{0}')b(n)-\sum_{m>m'}s(m_{0}')b(m)s(m)a_{m}(n)\in\mathbb{Z}$$ by the assumption and induction on $m'$. \end{proof}
\section{Proofs of Theorem \ref{thm-1.4} and Corollary \ref{cor-1.5}}
From now on, we shall assume that, for any reduced form $$f_{m}=\sum_{n}a(m,n)q^{n}\in M_{k/2}^{!\epsilon}(N,\chi),$$ the modular form $s(m)f_{m}$ has integral Fourier coefficients. We remark that such integrality for each fixed reduced form can be verified by Proposition \ref{prop-1.1}. {Also, as we showed in Example \ref{ex-int}, every reduced form in the space {$M_{1/2}^{!+\cdots +}(7,1)$} satisfies the assumption.
We begin with a lemma.
\begin{lemma} \label{lem-sm} Let $N\geq 1$ be a square-free integer. Then we have {$M_{3/2}^{+\cdots +}(N,1)=S_{3/2}^{+\cdots +}(N,1)$}. \end{lemma}
\begin{proof} Let {$f=\sum_{n\geq 0}a(n)q^{n}\in M_{3/2}^{+\cdots +}(N,1)$}. By Borcherds' obstruction theorem (Theorem 3.1 of \cite{Bor1}), we get $$s(0)a(0)b(0)=0$$ for each {$g=\sum_{n}b(n)q^{n}\in M_{1/2}^{\epsilon^{*}}(N,1)$}. Since $N$ is square-free, {$$M_{1/2}^{\epsilon^{*}}(N,1)=\mathbb{C}\theta.$$} Setting $g=\theta$, we obtain $$s(0)a(0)=0.$$ Since $s(0)\neq 0$, the form $f$ vanishes at $\infty$. By \cite[Proposition 5.3]{Zhang2}, the vector-valued form $\psi(f)$ is a cusp form. Here $\psi$ is the map constructed in Chapter 5 of \cite{Zhang2}.
We conclude from \cite[Corollary 5.5]{Zhang2} that {$f=\psi^{-1}(\psi(f))\in S_{3/2}^{+\cdots +}(N,1)$}. \end{proof} \begin{remark} \label{rmk-zero} If $k\geq 3$ is an odd integer and $(k,\epsilon)\neq (3,+)$, then {$$M_{2-k/2}^{\epsilon^{*}}(N,1)=0.$$} \end{remark} Let $N\geq 1$ be an odd square-free integer. Suppose that $\ell$ is a prime with $\ell\nmid 4N$. Consider a modular form $$f(\tau)=\sum_{n} a(n)q^{n}\in M_{k/2}^{!\epsilon}(N,1),$$ where the sum is over $n$ such that $\chi_{p}(n)\neq -\epsilon_{p}$ for all $p\mid N$. Then the action of the Hecke operator $T_{k/2,4N}(\ell^{2})$ on $f$ is given by \begin{equation}\label{1} f(\tau)\mid T_{k/2,4N}(\ell^{2})=\sum_{n}\left(a(\ell^{2}n)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}n}{\ell}\right)a(n)+\ell^{2\lambda-1}a(n/\ell^{2})\right)q^{n}, \end{equation} where $\lambda=(k-1)/2$ and the sum is over $n$ such that $\chi_{p}(n)\neq -\epsilon_{p}$ for all $p\mid N$. Here we set $a(n/\ell^{2})=0$ if $\ell^{2}\nmid n$. Note that {$f(\tau)\mid T_{k/2,4N}(\ell^{2})\in M_{k/2}^{!\epsilon}(N,1)$}. We define $T_{k/2,4N}(\ell^{2n})$ for $n\geq 2$ recursively by $$T_{k/2,4N}(\ell^{2n}):=T_{k/2,4N}(\ell^{2n-2})T_{k/2,4N}(\ell^{2})-\ell^{k-2}T_{k/2,4N}(\ell^{2n-4}).$$ \begin{remark} For $n\geq 2$, our $T_{k/2,4N}(\ell^{2n})$ is different from the $\ell^{2n}$-th Hecke operator given in \cite{Shi1}. See \cite[p.241]{Pur1} for details. \end{remark} By Proposition \ref{prop-zhang2}, the reduced form {$f_{m}\in M_{k/2}^{!\epsilon}(N,1)$} exists for every $m<0$ with $\chi_{p}(m)\neq -\epsilon_{p}$ for all $p\mid N$. 
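The coefficient formula \eqref{1} can be checked numerically on truncated $q$-expansions. The following Python sketch is our own illustration (the function names and the dictionary encoding of a $q$-expansion are assumptions, not part of the paper): it implements the right-hand side of \eqref{1} for an odd prime $\ell$, with the Legendre symbol computed by Euler's criterion. The higher operators $T_{k/2,4N}(\ell^{2n})$ then follow by iterating via the displayed recursion.

```python
def kronecker_odd(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def hecke_ell2(a, ell, k):
    # Coefficients of f | T_{k/2,4N}(ell^2) from those of f = sum a(n) q^n,
    # following the displayed formula: a(ell^2 n)
    #   + ell^(lam-1) * ((-1)^lam n / ell) * a(n)
    #   + ell^(2 lam - 1) * a(n / ell^2),  with lam = (k-1)/2,
    # where a(n/ell^2) is taken to be 0 unless ell^2 divides n.
    lam = (k - 1) // 2
    out = {}
    for n in a:
        t = a.get(ell * ell * n, 0)
        t += ell ** (lam - 1) * kronecker_odd((-1) ** lam * n, ell) * a.get(n, 0)
        if n % (ell * ell) == 0:
            t += ell ** (2 * lam - 1) * a.get(n // (ell * ell), 0)
        out[n] = t
    return out
```

Coefficients of index close to the truncation bound are of course unreliable, since $a(\ell^{2}n)$ falls outside the stored range there.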
We write $$F_{m}(\tau)=s(m)f_{m}(\tau)=q^{m}+\sum\limits_{\substack{d\geq 0 \\ \chi_{p}(d)\neq -\epsilon_{p}~\textnormal{for all}~p\mid N}}B(m,d)q^{d}.$$ For any positive integer $t$ with $\gcd{(t,4N)}=1$, define $$F_{m}^{(t)}:=F_{m}\mid T(t^{2}).$$ Then we obtain the coefficients $B_{t}(m,d)$ from the equation $$F_{m}^{(t)}(\tau)=\mbox{(principal part)}+\sum\limits_{\substack{d\geq 0,\\ \chi_{p}(d)\neq-\epsilon_{p}~\textnormal{for all}~p\mid N}}B_{t}(m,d)q^{d}.$$ \medskip For the rest of this section, let $k\geq 3$ be an odd integer and set $\lambda=(k-1)/2$, and assume \begin{enumerate} \item $m \in \mathbb Z_{<0}$ such that $\chi_{p}(m)\neq -\epsilon_{p}$ for all $p\mid N$, \item $\ell$ is a prime with $\ell\nmid 4N$ and $\ell^{2}\nmid m$. \end{enumerate} \begin{proposition} \label{prop-FG} Assume that {$S_{k/2}^{\epsilon}(N,1)=0$}. Then, for any positive integer $t$ with $(t,4N)=1$ and any positive integer $n$, we have $$F_{m}^{(t)}|T_{k/2,4N}(\ell^{2n})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(t)}|T_{k/2,4N}(\ell^{2n-2})=\ell^{(k-2)n}F_{\ell^{2n}m}^{(t)}.$$ \end{proposition} \begin{proof} For convenience, define $G_{0}^{(t)}:=F_{m}^{(t)}$, and, for each $n\geq 1$, \begin{equation}\label{2} G_{n}^{(t)}:=F_{m}^{(t)}\mid T_{k/2,4N}(\ell^{2n})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(t)}\mid T_{k/2,4N}(\ell^{2n-2}). \end{equation} We need to show $G_{n}^{(t)} = \ell^{(k-2)n}F_{\ell^{2n}m}^{(t)}$. Since the Hecke operators commute, it suffices to prove the proposition in the case $t=1$, which we now assume. We claim that \begin{equation}\label{3} G_{n}^{(1)}=G_{n-1}^{(1)}\mid T_{k/2,4N}(\ell^{2}) -\ell^{k-2}\cdot G_{n-2}^{(1)}\quad\textnormal{for}\quad n\geq 2. 
\end{equation} Indeed, if $n=2$, then \begin{align*} G_{2}^{(1)}-&G_{1}^{(1)}\mid T_{k/2,4N}(\ell^{2})+\ell^{k-2}\cdot G_{0}^{(1)}\\
&=\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{4})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2})\right)\\
&\quad-\left(\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2})\right)\mid T_{k/2,4N}(\ell^{2})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2})\right)+\ell^{k-2}\cdot F_{m}^{(1)}\\
&=F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{4})-\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2})\right)\mid T_{k/2,4N}(\ell^{2})+\ell^{k-2}\cdot F_{m}^{(1)} =0. \\
\end{align*} For $n\geq 3$, \begin{align*} G_{n-1}^{(1)}\mid &T_{k/2,4N}(\ell^{2})-\ell^{k-2}\cdot G_{n-2}^{(1)}\\
&=\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-2})\right)\mid T_{k/2,4N}(\ell^{2})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)\cdot\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-4})\right)\mid T_{k/2,4N}(\ell^{2})\\
&\quad-\ell^{k-2}\cdot\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-4})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-6})\right)\\
&=\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-2})T_{k/2,4N}(\ell^{2})-\ell^{k-2}\cdot F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-4})\right)\\
&\quad-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)\cdot\left(F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-4})T_{k/2,4N}(\ell^{2})-\ell^{k-2}\cdot F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-6})\right)\\
&=F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)\cdot F_{m}^{(1)}\mid T_{k/2,4N}(\ell^{2n-2}) =G_{n}^{(1)}. \\
\end{align*} Since $G_{0}^{(1)}=F_{m}^{(1)}=F_{m}$, the principal part of $G_{0}^{(1)}$ is $q^{m}$. By \eqref{1}, the principal part of $G_{1}^{(1)}$ is $\ell^{k-2} q^{m\ell^{2}}$. Moreover, we see from \eqref{3} that, for all $n\geq 0$, the principal part of $G_{n}^{(1)}$ is equal to $\ell^{(k-2)n}q^{m\ell^{2n}}$.
Since $F_{m\ell^{2n}}^{(1)}=F_{m\ell^{2n}}$ has principal part $q^{m\ell^{2n}}$, {$G_n^{(1)} - \ell^{(k-2)n}F_{\ell^{2n}m}$ is holomorphic at the cusp $\infty$. Arguing as in the proof of Lemma \ref{lem-sm}}, we have {\[ G_n^{(1)} - \ell^{(k-2)n}F_{\ell^{2n}m} \in M_{k/2}^{\epsilon}(N,1). \]} If $(k,\epsilon)=(3,+)$, then it follows from Lemma \ref{lem-sm} that {$$G_n^{(1)} - \ell^{(k-2)n}F_{\ell^{2n}m} \in S_{k/2}^{\epsilon}(N,1).$$} Since {$S_{k/2}^{\epsilon}(N,1) =\{ 0 \}$} by assumption, we have $G_n^{(1)} = \ell^{(k-2)n}F_{\ell^{2n}m}$. If $(k,\epsilon) \neq (3,+)$, then {$M_{2-k/2}^{\epsilon^{*}}(N,1)=0$} (Remark \ref{rmk-zero}). By Borcherds' obstruction theorem, there exists a reduced form $g$ such that $g=1+O(q)$. We see from the definition of reduced forms that $B(m,0)=B(\ell^{2n}m,0)=0$. Hence the constant term of $G_{n}^{(1)}-\ell^{(k-2)n}F_{\ell^{2n}m}$ is zero, and thus {$$G_n^{(1)} - \ell^{(k-2)n}F_{\ell^{2n}m} \in S_{k/2}^{\epsilon}(N,1)=\{0\}.$$} Therefore we have $G_n^{(1)} = \ell^{(k-2)n}F_{\ell^{2n}m}$ in this case too. \end{proof} Write $$G_{n}^{(t)}=\mbox{(principal part)}+\sum\limits_{\substack{d\geq 0, \\ \chi_{p}(d)\neq -\epsilon_{p}~\textnormal{for all}~p\mid N}}C_{n}(d)q^{d}.$$ Proposition \ref{prop-FG} implies that, for all $n$ and $d$, \begin{equation}\label{4} C_{n}(d)=\ell^{(k-2)n}B_{t}(\ell^{2n}m,d). 
\end{equation} \begin{lemma} \label{lem-BC} The following are true: \begin{enumerate} \item[\textnormal{(\lowerromannumeral{1})}] For any $d\geq 0$ with $\chi_{p}(d)\neq -\epsilon_{p}$ for all $p\mid N$, we have $$C_{n}(\ell^{2}d)-\ell^{k-2}\cdot C_{n-1}(d)=C_{0}(\ell^{2n+2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n}d).$$ \item[\textnormal{(\lowerromannumeral{2})}] If $\chi_{p}(d)\neq -\epsilon_{p}$ for all $p\mid N$ and $\ell\parallel d$, then $$C_{n}(d)=C_{0}(\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n-2}d).$$ \item[\textnormal{(\lowerromannumeral{3})}] If $\chi_{p}(d)\neq -\epsilon_{p}$ for all $p\mid N$ and $\ell\nmid d$, then $$C_{n}(d)=C_{0}(\ell^{2n}d)+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{k=1}^{n}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}C_{0}(\ell^{2n-2k}d).$$ \end{enumerate} \end{lemma}
\begin{proof} We first prove (\lowerromannumeral{1}). Note that \begin{align*} \ell^{2}d&\mbox{-th coefficient of }G_{1}^{(t)}=C_{1}(\ell^{2}d),\\
\ell^{2}d&\mbox{-th coefficient of }F_{m}^{(t)}\mid T_{k/2,4N}(\ell^{2})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(t)}\\
&=B_{t}(m,\ell^{4}d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}\ell^{2}d}{\ell}\right)B_{t}(m,\ell^{2}d)+\ell^{k-2}\cdot B_{t}(m,d)\\
&\quad-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2}d)\\
&=B_{t}(m,\ell^{4}d)+\ell^{k-2}\cdot B_{t}(m,d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2}d)\\
&=C_{0}(\ell^{4}d)+\ell^{k-2}\cdot C_{0}(d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2}d).
\end{align*} By \eqref{2}, we have $$C_{1}(\ell^{2}d)=C_{0}(\ell^{4}d)+\ell^{k-2}\cdot C_{0}(d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2}d).$$ Hence, $$C_{1}(\ell^{2}d)-\ell^{k-2}\cdot C_{0}(d)=C_{0}(\ell^{4}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2}d).$$ When $n\geq 2$, we use \eqref{3} to find that $$C_{n}(\ell^{2}d)-\ell^{k-2}\cdot C_{n-1}(d)=C_{n-1}(\ell^{4}d)-\ell^{k-2}\cdot C_{n-2}(\ell^{2}d)=\cdots=C_{1}(\ell^{2n}d)-\ell^{k-2}\cdot C_{0}(\ell^{2n-2}d).$$ From \eqref{2}, we see that $$C_{1}(\ell^{2n}d)=C_{0}(\ell^{2n+2}d)+\ell^{k-2}\cdot C_{0}(\ell^{2n-2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n}d).$$ Thus we obtain $$C_{n}(\ell^{2}d)-\ell^{k-2}\cdot C_{n-1}(d)=C_{0}(\ell^{2n+2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n}d).$$ We now prove (\lowerromannumeral{2}) and (\lowerromannumeral{3}). Observe that \begin{align*} d&\mbox{-th coefficient of }G_{1}^{(t)}=C_{1}(d),\\ d&\mbox{-th coefficient of }F_{m}^{(t)}\mid T_{k/2,4N}(\ell^{2})-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)F_{m}^{(t)}\\ &=B_{t}(m,\ell^{2}d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)B_{t}(m,d)+\ell^{k-2}\cdot B_{t}(m,d/\ell^{2})\\ &\quad-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,d)\\ &= \begin{cases} B_{t}(m,\ell^{2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,d) & \textnormal{if}\quad\ell\parallel d,\\ B_{t}(m,\ell^{2}d)+\ell^{\lambda-1}\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]B_{t}(m,d) & \textnormal{if}\quad\ell\nmid d, \end{cases}\\ &= \begin{cases} C_{0}(\ell^{2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(d) & \textnormal{if}\quad\ell\parallel d,\\ C_{0}(\ell^{2}d)+\ell^{\lambda-1}\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]C_{0}(d) & 
\textnormal{if}\quad\ell\nmid d. \end{cases} \end{align*} By \eqref{2}, we have $$C_{1}(d)= \begin{cases} C_{0}(\ell^{2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(d) & \textnormal{if}\quad\ell\parallel d,\\ C_{0}(\ell^{2}d)+\ell^{\lambda-1}\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]C_{0}(d) & \textnormal{if}\quad\ell\nmid d. \end{cases} $$ On the other hand, it follows from \eqref{3} that $$C_{n}(d)=C_{n-1}(\ell^{2}d)-\ell^{k-2}\, C_{n-2}(d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)C_{n-1}(d)$$ for $n\geq 2$. Applying part (\lowerromannumeral{1}) to $C_{n-1}(\ell^{2}d)-\ell^{k-2}\, C_{n-2}(d)$, we obtain $$C_{n}(d)=C_{0}(\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n-2}d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)C_{n-1}(d).$$ If $\ell\parallel d$, then we immediately obtain part (\lowerromannumeral{2}). Now assume that $\ell\nmid d$. Then by induction we have \begin{align*} C_{n}(d)&=C_{0}(\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n-2}d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)C_{n-1}(d)\\ &=C_{0}(\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n-2}d)+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)C_{0}(\ell^{2n-2}d)\\ &\quad+\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}d}{\ell}\right)\cdot\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right] \cdot \sum_{k=1}^{n-1}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}C_{0}(\ell^{2n-2k-2}d)\\ &=C_{0}(\ell^{2n}d)+\ell^{\lambda-1}\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot C_{0}(\ell^{2n-2}d)\\ 
&\quad+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{k=2}^{n}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}C_{0}(\ell^{2n-2k}d)\\ &=C_{0}(\ell^{2n}d)+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{k=1}^{n}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}C_{0}(\ell^{2n-2k}d). \end{align*} This proves the identity in part (iii). \end{proof} We now prove Theorem \ref{thm-1.4} and Corollary \ref{cor-1.5}. \begin{proof}[Proof of Theorem \ref{thm-1.4}] \hfill \noindent (\lowerromannumeral{1}) By \eqref{4} and Lemma \ref{lem-BC} (\lowerromannumeral{1}), we have \begin{align*} & B_{t}(m,\ell^{2n+2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n}d)\\ =&C_{0}(\ell^{2n+2}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n}d)=C_{n}(\ell^{2}d)-\ell^{k-2}C_{n-1}(d)\\ =&\ell^{(k-2)n}B_{t}(\ell^{2n}m,\ell^{2}d)-\ell^{k-2}\cdot\ell^{(k-2)(n-1)}B_{t}(\ell^{2n-2}m,d)\\ =&\ell^{(k-2)n} \left \{ B_{t}(\ell^{2n}m,\ell^{2}d)-B_{t}(\ell^{2n-2}m,d) \right \}. \end{align*} (\lowerromannumeral{2}) Using \eqref{4} and Lemma \ref{lem-BC} (\lowerromannumeral{3}), we obtain \begin{align*} &\ell^{(k-2)n}B_{t}(\ell^{2n}m,d)=C_{n}(d)\\ =&C_{0}(\ell^{2n}d)+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{k=1}^{n}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}C_{0}(\ell^{2n-2k}d)\\ =&B_{t}(m,\ell^{2n}d)+\left[\left(\frac{(-1)^{\lambda}d}{\ell}\right)-\left(\frac{(-1)^{\lambda}m}{\ell}\right)\right]\cdot\sum_{k=1}^{n}\ell^{(\lambda-1)k}\left(\frac{(-1)^{\lambda}d}{\ell}\right)^{k-1}B_{t}(m,\ell^{2n-2k}d). 
\end{align*} (\lowerromannumeral{3}) By \eqref{4} and Lemma \ref{lem-BC} (\lowerromannumeral{2}), \begin{align*} \ell^{(k-2)n}B_{t}(\ell^{2n}m,d)&=C_{n}(d)=C_{0}(\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)C_{0}(\ell^{2n-2}d)\\ &=B_{t}(m,\ell^{2n}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n-2}d). \end{align*} \end{proof} \begin{proof}[Proof of Corollary \ref{cor-1.5}] (1) First, suppose that $\left(\frac{-d}{\ell}\right)=\left(\frac{-m}{\ell}\right)\neq 0$. Then $\ell\nmid d$. By Theorem \ref{thm-1.4} (\lowerromannumeral{2}), $$\ell^{(k-2)n}B_{t}(\ell^{2n}m,d)=B_{t}(m,\ell^{2n}d).$$ Now assume that $\ell\parallel d$ and $\ell\parallel m$. Then by Theorem \ref{thm-1.4} (\lowerromannumeral{3}), $$\ell^{(k-2)n}B_{t}(\ell^{2n}m,d)=B_{t}(m,\ell^{2n}d).$$ (2) If $\ell\nmid d$, then $\ell\parallel\ell d$. By Theorem \ref{thm-1.4} (\lowerromannumeral{3}), \begin{align*} &B_{t}(m,\ell^{2n+1}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n-1}d)\\ =&B_{t}(m,\ell^{2n}(\ell d))-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n-2}(\ell d))\\ =&\ell^{(k-2)n}\, B_{t}(\ell^{2n}m,\ell d)\equiv 0\pmod{\ell^{(k-2)n}}. \end{align*} If $\ell\mid d$, then by Theorem \ref{thm-1.4} (\lowerromannumeral{1}), we obtain \begin{align*} &B_{t}(m,\ell^{2n+1}d)-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n-1}d)\\ =&B_{t}(m,\ell^{2n+2}(d/\ell))-\ell^{\lambda-1}\left(\frac{(-1)^{\lambda}m}{\ell}\right)B_{t}(m,\ell^{2n}(d/\ell))\\ =&\ell^{(k-2)n}\, (B_{t}(\ell^{2n}m,\ell d)-B_{t}(\ell^{2n-2}m,d/\ell)) \equiv 0\pmod{\ell^{(k-2)n}}. \end{align*} \end{proof} \section*{Acknowledgement} We would like to thank Professor Yichao Zhang for his kind and valuable comments.
The Bruggmatt nature reserve (Naturschutzgebiet) lies in the municipality of Dachsberg (Südschwarzwald) in the Waldshut district.

Key data

The nature reserve was designated by ordinance of the Regierungspräsidium Freiburg of 10 December 1969 and covers 2.1071 hectares. It is listed under protected-area number 3.078. The CDDA code of the nature reserve is 81471, which is also its WDPA ID.

Location and description

The Bruggmatt nature reserve lies in the municipality of Dachsberg (Südschwarzwald), in the Wolpadingen cadastral district, and has a total area of about 2 ha. The reserve, also referred to as "Bruggrain" in the older literature, consists of a mountain meadow and a sloping spring mire with rare plant species, set within a large forest area of the southern Black Forest (Hotzenwald).

Purpose of protection

The 1969 ordinance did not set out an explicit purpose of protection. Today the purpose of the nature reserve is defined in § 23 para. 1 nos. 1–3 BNatSchG; the grounds for designating a nature reserve are not limited to ecological or aesthetic considerations but also extend to scientific, natural-history and regional aspects.

Species inventory

The following species have been recorded in the Bruggmatt nature reserve:

Amphibians
Bufo bufo (Erdkröte), Rana temporaria (Grasfrosch)

Vascular plants and ferns
Aquilegia vulgaris agg. (Artengruppe Gewöhnliche Akelei), Arnica montana (Berg-Wohlverleih), Botrychium lunaria (Echte Mondraute), Carex davalliana (Davalls Segge), Carex echinata (Stern-Segge), Carex lasiocarpa (Faden-Segge), Carex limosa (Schlamm-Segge), Carex nigra agg. (Artengruppe Braune Segge), Carex pauciflora (Wenigblütige Segge), Carex pulicaris (Floh-Segge), Carex rostrata (Schnabel-Segge), Carlina acaulis (Stängellose Eberwurz), Coeloglossum viride (Hohlzunge), Dactylorhiza maculata agg. (Artengruppe Geflecktes Knabenkraut), Dactylorhiza majalis agg. (Artengruppe Breitblättriges Knabenkraut), Danthonia decumbens (Dreizahn), Drosera rotundifolia (Rundblättriger Sonnentau), Epilobium palustre (Sumpf-Weidenröschen), Epipactis helleborine (Breitblättrige Stendelwurz), Equisetum fluviatile (Teich-Schachtelhalm), Eriophorum angustifolium (Schmalblättriges Wollgras), Eriophorum latifolium (Breitblättriges Wollgras), Eriophorum vaginatum (Moor-Wollgras), Galium uliginosum (Moor-Labkraut), Genista sagittalis (Flügel-Ginster), Gentianella campestris (Feld-Enzian), Gymnadenia conopsea (Mücken-Händelwurz), Hieracium lactucella (Geöhrtes Habichtskraut), Juncus squarrosus (Sparrige Binse), Lilium bulbiferum (Feuer-Lilie), Listera ovata (Großes Zweiblatt), Menyanthes trifoliata (Fieberklee), Nardus stricta (Borstgras), Orchis mascula (Stattliches Knabenkraut), Orchis morio (Kleines Knabenkraut), Parnassia palustris (Herzblatt), Pedicularis sylvatica (Wald-Läusekraut), Pinguicula vulgaris (Gewöhnliches Fettkraut), Platanthera chlorantha (Berg-Waldhyazinthe), Polygala serpyllifolia (Quendel-Kreuzblume), Potentilla palustris (Blutauge), Primula veris (Arznei-Schlüsselblume), Rhynchospora alba (Weiße Schnabelsimse), Scorzonera humilis (Niedrige Schwarzwurzel), Thesium pyrenaicum (Wiesen-Leinblatt), Trichophorum alpinum (Alpen-Wollgras), Trifolium montanum (Berg-Klee), Trollius europaeus subsp. europaeus (Trollblume, Nominatsippe), Vaccinium oxycoccos agg. (Artengruppe Moosbeere), Vaccinium uliginosum (Gewöhnliche Moorbeere), Vaccinium vitis-idaea (Preiselbeere), Viola canina (Hunds-Veilchen), Viola palustris (Sumpf-Veilchen)

Reptiles
Lacerta vivipara (Waldeidechse)

Birds
Accipiter nisus (Sperber), Aegolius funereus (Rauhfusskauz), Anthus trivialis (Baumpieper), Scolopax rusticola (Waldschnepfe)

See also
Liste der Naturschutzgebiete in Baden-Württemberg
Liste der Naturschutzgebiete im Landkreis Waldshut

Literature
Regierungspräsidium Freiburg (ed.): Die Naturschutzgebiete im Regierungsbezirk Freiburg. Thorbecke, Stuttgart 2011, ISBN 978-3-7995-5177-9, pp. 653–654

External links
Map of the nature reserve
Naturschutzgebiet Bruggmatt at:
Q: Creating a delay for winning or losing picture

In my code I am trying to create a game, and when you get a certain amount of points you get either a winning or a losing sign. The issue I am having is that the losing or winning screen is not shown for long enough. If I try to use a function like pygame.time.wait(), it just stops the game on the frame where the final bullet makes you win or lose, but unfortunately only shows the "you lose" or "you win" screen for about one hundredth of a second, which is not what I want.

import pygame
import sys
import pygame.freetype
import random
import time

pygame.init()
pygame.font.init()
pygame.init()
pygame.mixer.init()

pygame.display.set_caption("this game")
screen = pygame.display.set_mode((1280, 720))
clock = pygame.time.Clock()  # A clock to limit the frame rate.

bomb = pygame.font.get_fonts()
#print(bomb)

class Background:
    picture = pygame.image.load("C:/images/space.jpg").convert()
    picture = pygame.transform.scale(picture, (1280, 720))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Loosing:
    picture = pygame.image.load("C:/images/eye.jpg").convert()
    picture = pygame.transform.scale(picture, (1280, 720))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Winning:
    picture = pygame.image.load("C:/images/lightspeed.jpeg").convert()
    picture = pygame.transform.scale(picture, (1280, 720))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Annoy:
    picture = pygame.image.load("C:/aliens/Ocram_animated_2.gif").convert()
    picture = pygame.transform.scale(picture, (200, 200))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y
        self.speed_x = 0
        self.speed_y = 0
        self.rect = self.picture.get_rect()

    def update(self):
        self.xpos += self.speed_x
        self.ypos += self.speed_y

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Player_one:
    picture = pygame.image.load("C:/aliens/ezgif.com-crop.gif")
    picture = pygame.transform.scale(picture, (200, 200))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y
        self.speed_x = 0
        self.speed_y = 0
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.ypos += self.speed_y
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Player_two:
    picture = pygame.image.load("C:/aliens/Giantmechanicalcrab2 - Copy.gif")
    picture = pygame.transform.scale(picture, (300, 200))

    def __init__(self, x, y):
        self.xpos = x
        self.ypos = y
        self.speed_x = 0
        self.speed_y = 0
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.ypos += self.speed_y
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

class Bullet_player_one(pygame.sprite.Sprite):
    picture = pygame.image.load("C:/aliens/giphy.gif").convert_alpha()
    picture = pygame.transform.scale(picture, (100, 100))

    def __init__(self, owner, start_x, start_y, speed_x):
        self.owner = owner
        self.xpos = start_x
        self.ypos = start_y
        self.speed_x = speed_x
        super().__init__()
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.rect.x = self.xpos
        self.rect.y = self.ypos
        if self.rect.right < 0 or self.rect.left > screen.get_width():
            self.kill()

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

    def is_collided_with(self, sprite):
        return self.rect.colliderect(sprite.rect)

class Bullet_player_two(pygame.sprite.Sprite):
    picture = pygame.image.load("C:/aliens/MajesticLavishBackswimmer-size_restricted.gif").convert_alpha()
    picture = pygame.transform.scale(picture, (100, 100))
    picture = pygame.transform.rotate(picture, 180)

    def __init__(self, owner, start_x, start_y, speed_x):
        self.owner = owner
        self.xpos = start_x
        self.ypos = start_y
        self.speed_x = speed_x
        super().__init__()
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.rect.x = self.xpos
        self.rect.y = self.ypos
        if self.rect.right < 0 or self.rect.left > screen.get_width():
            self.kill()

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

    def is_collided_with(self, sprite):
        return self.rect.colliderect(sprite.rect)

class Bullet_right(pygame.sprite.Sprite):
    picture = pygame.image.load("C:/aliens/big_boy.png").convert_alpha()
    picture = pygame.transform.scale(picture, (100, 100))

    def __init__(self, owner, start_x, start_y, speed_x):
        self.owner = owner
        self.xpos = start_x
        self.ypos = start_y
        self.speed_x = speed_x
        super().__init__()
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.rect.x = self.xpos
        self.rect.y = self.ypos
        if self.rect.right < 0 or self.rect.left > screen.get_width():
            self.kill()

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

    def is_collided_with(self, sprite):
        return self.rect.colliderect(sprite.rect)

class Bullet_left(pygame.sprite.Sprite):
    picture = pygame.image.load("C:/aliens/blinking_doge_by_euamodeus-d7vjq7m.gif").convert_alpha()
    picture = pygame.transform.scale(picture, (100, 100))

    def __init__(self, owner, start_x, start_y, speed_x):
        self.owner = owner
        self.xpos = start_x
        self.ypos = start_y
        self.speed_x = speed_x
        super().__init__()
        self.rect = self.picture.get_rect()
        self.rect.x = self.xpos
        self.rect.y = self.ypos

    def update(self):
        self.xpos += self.speed_x
        self.rect.x = self.xpos
        self.rect.y = self.ypos
        if self.rect.right < 0 or self.rect.left > screen.get_width():
            self.kill()

    def draw(self):
        screen.blit(self.picture, (self.xpos, self.ypos))

    def is_collided_with(self, sprite):
        return self.rect.colliderect(sprite.rect)

#class Text:
#    def __init__(self, x, y, texty, score):
#        self.ypos = y
#        self.xpos = x
#        self.text = texty
#        self.score = score
#
#    def update(self):
#
#
#    def draw(self):
#        screen.blit(self.picture, (self.xpos, self.ypos))

player_one = Player_one(0, 500)
player_two = Player_two(1000, 0)
skull = Annoy(520, -100)
cliff = Background(0, 0)

player_one_bullet_list = pygame.sprite.Group()
player_two_bullet_list = pygame.sprite.Group()
right_list = pygame.sprite.Group()
left_list = pygame.sprite.Group()

player_one_bullet = None
player_two_bullet = None
left = None
right = None

count = 100
wait = 100
locked = True
lockered = True
player_one_score = 0
player_two_score = 0
loaded = True
ready = True

lose = Loosing(0, 0)
win = Winning(0, 0)

word = ""
difficulty = input("easy, medium or hard")
locked = False
ping = False
done = False
diff = []
on_ground = False

if difficulty != "easy" and difficulty != "medium" and difficulty != "hard":
    while difficulty != "easy" and difficulty != "medium" and difficulty != "hard":
        difficulty = input("easy, medium or hard")

while True:
    basicfont = pygame.font.SysFont('impact', 48)
    text = basicfont.render(' player one Score:' + str(player_one_score), True, (255, 125, 0), (0, 0, 0))
    textrect = text.get_rect()
    textrect.centerx = screen.get_rect().centerx
    textrect.centery = screen.get_rect().centery

    basicfont_two = pygame.font.SysFont('impact', 48)
    text_two = basicfont.render('player two Score:' + str(player_two_score), True, (255, 0, 0), (0, 0, 0))
    textrect = text.get_rect()
    textrect.centerx = screen.get_rect().centerx
    textrect.centery = screen.get_rect().centery

    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_w:
                player_one.speed_y = -5
            elif event.key == pygame.K_s:
                player_one.speed_y = 5
            #elif event.key == pygame.K_UP:
                #player_two.speed_y = -5
            #elif event.key == pygame.K_DOWN:
                #player_two.speed_y
= 5 elif event.key == pygame.K_SPACE: if len(player_one_bullet_list) == 0: player_one_bullet = Bullet_player_one(player_one, player_one.xpos, player_one.ypos, 20) player_one_bullet_list.add(player_one_bullet) pygame.mixer.Channel(0).play(pygame.mixer.Sound("C:/sounds/Big_Explosion_Cut_Off.wav")) #elif event.key == pygame.K_KP0: elif event.type == pygame.KEYUP: # Stop moving when the keys are released. if event.key == pygame.K_s and player_one.speed_y > 0: player_one.speed_y = 0 elif event.key == pygame.K_w and player_one.speed_y < 0: player_one.speed_y = 0 if event.key == pygame.K_DOWN and player_two.speed_y > 0: player_two.speed_y = 0 elif event.key == pygame.K_UP and player_two.speed_y < 0: player_two.speed_y = 0 if difficulty == "easy": choice = random.randint(1, 10) choosen = random.randint(1, 10) if choice == 2 and choosen == 4 or choice == 4 and choosen == 2: if len(player_two_bullet_list) == 0: player_two_bullet = Bullet_player_two(player_two, player_two.xpos, player_two.ypos, -20) player_two_bullet_list.add(player_two_bullet) pygame.mixer.Channel(1).play(pygame.mixer.Sound("C:/sounds/Dumpster_Rattle.wav")) elif difficulty == "medium": choice = random.randint(1, 10) if choice == 2 or choice == 4 or choice == 6 or choice == 8 or choice == 10: if len(player_two_bullet_list) == 0: player_two_bullet = Bullet_player_two(player_two, player_two.xpos, player_two.ypos, -20) player_two_bullet_list.add(player_two_bullet) pygame.mixer.Channel(1).play(pygame.mixer.Sound("C:/sounds/Dumpster_Rattle.wav")) else: choice = random.randint(1, 10) if choice == 2 or choice == 4 or choice == 6 or choice == 8 or choice == 10 or choice == 5 or choice == 7 or choice == 9 or choice == 1: if len(player_two_bullet_list) == 0: player_two_bullet = Bullet_player_two(player_two, player_two.xpos, player_two.ypos, -20) player_two_bullet_list.add(player_two_bullet) pygame.mixer.Channel(1).play(pygame.mixer.Sound("C:/sounds/dumper.wav")) if difficulty == "easy": chooso = random.randint(1, 10) 
if chooso == 9 or chooso == 4: if on_ground == False: choosing = random.randint(1, 10) if choosing == 3: skull.speed_y = 10 if skull.ypos == 520: on_ground = True else: skull.speed_y = -10 if skull.ypos == -100: on_ground = False else: skull.speed_y = 0 choosa = random.randint(1, 10) locking = random.randint(1, 10) if choosa == 3 and locking == 7 and len(right_list) == 0: right = Bullet_right(skull, skull.xpos, skull.ypos, -20) #should be left right_list.add(right) elif choosa == 4 and locking == 8 and len(left_list) == 0: left = Bullet_left(skull, skull.xpos, skull.ypos, 20) #should be right left_list.add(left) if player_one.ypos == 520: player_one.speed_y = -5 if player_one.ypos == 0: player_one.speed_y = +5 if player_two.ypos == 520: player_two.speed_y = -5 if player_two.ypos == 0: player_two.speed_y = +5 # if player_one_bullet.xpos == 520: # player_one_bullet.kill() player_one.update() player_two.update() skull.update() cliff.draw() player_one.draw() player_two.draw() skull.draw() screen.blit(text, (0, 0)) screen.blit(text_two, (700, 0)) player_one_bullet_list.update() player_two_bullet_list.update() left_list.update() right_list.update() for player_one_bullet in player_one_bullet_list: player_one_bullet.draw() for player_two_bullet in player_two_bullet_list: player_two_bullet.draw() for left in left_list: left.draw() for right in right_list: right.draw() for bullet in player_one_bullet_list: if bullet.is_collided_with(player_two): player_one_bullet.kill() player_one_score +=1 pygame.mixer.Channel(2).play(pygame.mixer.Sound("C:/sounds/Beep_Short.wav")) #pygame.mixer.music.load("C:/sounds/hammond.wav") #pygame.mixer.music.play(3) for bullet in player_two_bullet_list: if bullet.is_collided_with(player_one): player_two_bullet.kill() player_two_score +=1 pygame.mixer.Channel(2).play(pygame.mixer.Sound("C:/sounds/Emergency_Siren_Short_Burst.wav")) # if bullet.is_collided_with(skull): # player_two_bullet.kill() # if player_two_score == 0: # player_two_score = 0 # 
elif player_two_score > 0: # player_two_score = player_two_score-1 # elif bullet.is_collided_with(player_one_bullet): # player_one_bullet.kill() # player_two_bullet.kill() #pygame.mixer.music.load("C:/sounds/clarkson.wav") #pygame.mixer.music.play(3) for right in right_list: if right.is_collided_with(player_one): right.kill() if player_one_score == 0: player_one_score = 0 else: player_one_score += -1 for left in left_list: if left.is_collided_with(player_two): left.kill() if player_two_score == 0: player_two_score = 0 else: player_two_score += -1 if player_one_score == 100: win.draw() basicfont = pygame.font.SysFont('sylfaen', 120) texto = basicfont.render('You Win!', True, (0, 255, 0)) #(0, 0, 0)) textrecto = text.get_rect() textrecto.centerx = screen.get_rect().centerx textrecto.centery = screen.get_rect().centery screen.blit(texto, (400, 360)) clock.tick(1) player_one_score = 0 player_two_score = 0 elif player_two_score == 100: lose.draw() basicfont = pygame.font.SysFont('chiller', 120) texte = basicfont.render('You Lose!', True, (255, 0, 0)) #(0, 0, 0)) textrecte = text.get_rect() textrecte.centerx = screen.get_rect().centerx textrecte.centery = screen.get_rect().centery screen.blit(texte, (400, 360)) clock.tick(1) player_one_score = 0 player_two_score = 0 pygame.display.flip() clock.tick(60) so far what outputs i get are thing like the screen freezes on the final attack waits and then shows the code and when shows the you win or loose sign only for about 1\100th of a second which is not enough and also when I put it below most of the code as well it does the exact thing A: In the last two if statements, you reset the scores to 0. player_one_score = 0 player_two_score = 0 This causes the if statements to not be executed in the next loop, so the "You win"/"You lose" sign is not drawn in the next loop. Maybe you can make a button to restart the game, which when clicked, resets the scores to 0 and restart the game.
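Expanding on the answer: one way to keep the sign on screen is to remember the result in a separate state and only reset the scores after the banner has been visible for a fixed number of frames. The following is a sketch of that idea, separated from pygame so the logic is clear; the names (`GameState`, `banner`, `banner_frames`, `WIN_SCORE`) are illustrative and not from the original code.

```python
# Sketch: instead of resetting the scores in the same frame that the win
# condition fires, remember the result and count down frames before restarting.
WIN_SCORE = 100
FPS = 60

class GameState:
    def __init__(self):
        self.player_one_score = 0
        self.player_two_score = 0
        self.banner = None          # "You Win!" / "You Lose!" while shown
        self.banner_frames = 0      # frames left to keep the banner on screen

    def tick(self):
        """Advance one frame of the score/banner logic."""
        if self.banner is not None:
            self.banner_frames -= 1
            if self.banner_frames <= 0:
                # Only reset once the banner has been visible long enough.
                self.banner = None
                self.player_one_score = 0
                self.player_two_score = 0
        elif self.player_one_score >= WIN_SCORE:
            self.banner = "You Win!"
            self.banner_frames = 3 * FPS   # keep it up for roughly 3 seconds
        elif self.player_two_score >= WIN_SCORE:
            self.banner = "You Lose!"
            self.banner_frames = 3 * FPS
```

In the main loop you would call `state.tick()` once per frame and blit the rendered banner text whenever `state.banner` is not `None`; this also avoids the `clock.tick(1)` call, which freezes the whole game for a second instead of displaying anything.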
Candlelight Records is an independent record label based in Europe, originally founded by Extreme Noise Terror bassist Lee Barrett, although it has had a division in the USA since January 2001. Candlelight Records specialises in black and death metal and has signed bands such as Emperor, Obituary, 1349, Theatre of Tragedy and Zyklon. The label is known for having released early albums by influential bands such as Opeth and Emperor. Candlelight Records is in a joint venture with Appease Me Records and AFM Records.

Artists

Candlelight UK: 1349, Abigail Williams, Age of Silence,* Averse Sefira,* Blut aus Nord, Carnal Forge,* Crionics, Crowbar, Dam, Daylight Dies, Emperor, Epoch of Unlight,* Forest Stream, Furze,* Grimfist, Ihsahn, Illdisposed,* Insomnium, Kaamos, Lost Eden,* Manes,* Mithras, Myrkskog, Nebelhexe,* Novembers Doom,* Octavia Sperati, October File,* Omnium Gatherum,* Onslaught,* Paganize,* Pantheon I,* Sear Bliss,* The Seventh Cross, Sigh,* Starkweather, Stonegard, Subterranean Masquerade,* Thine Eyes Bleed,* Throne of Katarsis,* To-Mera, Wolverine, Zyklon

* denotes artists not on Candlelight's roster in the USA.

Candlelight USA: Aeternus, Amoral, Audrey Horne, Bal-Sagoth, Battered, Bronx Casket Co., Capricorns, Candlemass, Centinex, Dark Funeral, Dead Man in Reno, Destruction, The Deviant, Dismember, Electric Wizard, Elvenking, Enslaved, Entombed, Firebird, Gorgoroth, Grand Magus, Hevein, Insense, Jorn, Jotunspor, Keep of Kalessin, Khold, Lord Belial, Manngard, Marduk, Masterplan, The Mighty Nimbus, Mindgrinder, Monolithe, Necrophobic, Nightmare, Of Graves and Gods, Obituary, Opeth, Overmars, P.H.O.B.O.S., Pro-Pain, The Project Hate MCMXCIX, Ram-Zet, Rob Rock, Sahg, Satariel, SCUM, Setherial, Seven Witches, Shakra, She Said Destroy, sHEAVY, Sinister, Slumber, Space Odyssey, Spektr, Susperia, Taint, Tenebre, Theatre of Tragedy, Thyrane, Thyrfing, Time Requiem, Torchbearer, Trendkill, U.D.O., Vader, Vreid, The Wake, Windir, Witchcraft, Whitechapel.

References
Candlelight in the USA
Candlelight in Europe

Notes

Record labels from the United Kingdom
Record labels from the United States
Emperor Culture Group Ltd. (491, Hong Kong)
Last quote: $0.115 HKD, -0.005 (-4.17%) at 4:00 PM HKST, 07/16/19. Sector: Media/Entertainment.

Chi Fai Wong, 61, Executive Director, Emperor Culture Group Ltd.

Mr. Chi Fai Wong is Chairman at Ulferts International Ltd. and a Member of the Association of Chartered Certified Accountants (Hong Kong). He is on the Board of Directors at Emperor International Holdings Ltd., Emperor Entertainment Hotel Ltd., Emperor Culture Group Ltd. and Emperor Watch & Jewellery Ltd. Mr. Wong was previously employed as Executive Director by Emperor Entertainment Group Ltd. and as Executive Director by New Media Group Holdings Ltd. He also served on the board at Chaoyue Group Ltd.

News, Emperor Culture Group Ltd. (491): no news for 491 in the past two years.

All Company Executives, Emperor Culture Group Ltd.:
Man Seung Fan, 56, Chairman
Chi Fai Wong, 63, Executive Director
Shirley Percy Hughes, 56, Executive Director
Ching Loong Yeung, 33, Executive Director
Suet Ying Liu, Secretary
Sau Ying Tam, 51, Independent Non-Executive Director
Tat Kuen Ho, 45, Independent Non-Executive Director
Sim Ling Chan, 57, Independent Non-Executive Director
# NORMALIZING MAPPINGS OF AN ANALYTIC GENERIC CR MANIFOLD WITH ZERO LEVI FORM

- Park, Won-K. (Department of Mathematics University)
- Published : 2000.07.01

#### Abstract

It is well-known that an analytic generic CR submanifold $M$ of codimension $m$ in $C^{n+m}$ is locally transformed by a biholomorphic mapping to a plane $C^n \times R^m \subset C^n \times C^m$ whenever the Levi form $L$ on $M$ vanishes identically. We obtain such a normalizing biholomorphic mapping of $M$ in terms of the defining function of $M$. Then it is verified without the Frobenius theorem that $M$ is locally foliated into complex manifolds of dimension $n$.

#### Keywords

CR submanifold; Levi form; biholomorphic mapping
Marathon Queen | Inquirer News
By: Haide Acuna - @inquirerdotnet
Cebu Daily News / 07:02 AM December 11, 2012

There's something about seeing Mary Grace Delos Santos hoist the Marathon Queen trophy at the National Finals of the 36th Milo Marathon that brings tears to your eyes. Especially if you're one of those who have seen her work harder than most while training in Cebu, trying to get a spot in the national team while fending off critics and naysayers when she's not on the road.

Besting her previous Milo Marathon personal best by roughly four minutes, Mary Grace bested both Filipino and Kenyan runners in the female open category last Sunday, clocking 2:49:29. Her closest competitor, Southeast Asian Games veteran and two-time Milo marathon champ Jho-An Banayag, was a mile away, finishing 2:55:56, while Kenya's Everline Atancha came in third at 3:03:39.

Mary Grace ran in only a handful of races in Cebu and one 50K ultramarathon in Tacloban in order to stay focused on her high-altitude training in Baguio City with Coach Roy Vence. "I'm just maintaining my focus in training and it showed in this (36th Milo Marathon Finals) race," Mary Grace told Inquirer Sports in a post-race interview.

2:49:29 is Mary Grace's current personal best, but if you look into her eyes, you know this 25-year-old Zamboangena is hungry for more: breaking the Milo marathon course record of 2:48:16 set by rival Jho-An Banayag, or better yet, breaking the national record of 2:38:44 set by Christabel Martes in 2005. "Maybe it's not yet the right time. But I hope that time comes," says Delos Santos.

Sports and religion, like oil and water, don't mix

Blaming Manny Pacquiao's recent switch in religion for his stunning knock-out defeat against Juan Manuel Marquez is just pure hogwash.
I wish the media would just drop it and not belabor the issue, as it gets in the way of having a cold and honest assessment of how and why the pound-for-pound king lost the way he did: the only way by which Manny and the millions of his heartbroken fans can start to regroup, pick up the pieces, move on and live to fight another day.

Lawyer and hardcore Pacman fanatic Christopher Ang says it best: "I cannot blame some people who believe that the KO loss of Pacquiao can be attributed somewhat to his abandonment of his Catholic Faith and forsaking his rosary, which was with him from the time he started his boxing career. However, from a boxing standpoint, the Marquez punch was a well-timed punch right to the chin. When you are hit by that vicious punch, you are sure to go to sleep regardless of your religious affiliation. One thing for sure, after his KO loss to Marquez, Pacquiao will find a way to rise from this defeat as he did in the past and return stronger than ever. MABUHAY KA MANNY!!!"

Philippines vs. Singapore

The Philippine Azkals and Singapore Lions will face off once more tomorrow, Wednesday Dec. 12, in the second leg of the semi-finals of the AFF Suzuki Cup, this time at the Jalan Besar Stadium in Singapore. The Azkals ended their home game at the Rizal Memorial Stadium last Saturday with a scoreless draw against the Lions.

If the Philippines scores an early goal in the away game on Wednesday, Singapore will have to score two goals in order to progress to the finals. A one-all draw would put the Philippines at an advantage since away goals merit two scores in case of a tiebreaker. If the 90-minute face-off ends with another scoreless draw, both the Azkals and the Lions will go on a 30-minute extra time, then to a penalty shootout should the overtime still produce zero goals. The best scenario would be for the Philippines to score a goal or two and shut out Singapore from the Suzuki Finals for good.
As of Monday afternoon, tickets for the away game in Singapore were already sold out. Our boys will be playing on enemy territory, but the Azkals had once proven that the Lions can still be beaten in their own den. Last September, the Philippines won against Singapore for the first time ever in a friendly match, 2-0.
I don't always agree with Al but in this case I think he's right on. Read his blog here & as always I'd love your feedback.

The other day I had the privilege of doing a memorial service for an incredible woman. Her name was Betty Williams. She lived to be 94 years old. Her life was an incredible testimony of service, perseverance and faith. One of the things I discovered in talking with her family was that she was a poet. I got the opportunity to read some of her work and the one below stood out.

Will never rank with peer or sire.
For praise of man I don't aspire.
Nor from all troubles give release.
Because the world seems not to care.
I pray that God will guide my feet.
A glimpse of God's eternal love.

This past Sunday I shared a message from our Losing My Religion series called, "Why Care?" In it I admitted that I sometimes struggle with caring. To say I don't have the gift of mercy would be an understatement. As I look at lives like Mrs. Betty's I am reminded that one of the greatest things we can leave is to be remembered as someone who truly cared. Thank you Mrs. Betty for reminding me what is important and I pray that as He did you, God would "guide my feet" to people who need a touch. I pray that I might be half as good at loving, touching and caring for those people God puts in my path as you were.

It's been a long time since I have blogged. I took a break because it seemed more like something I had to do instead of wanted to do. I think I will jump back in. I'm not sure how consistent I will be but here goes.

For the past 4 weeks BridgePoint has been in a series called "Losing My Religion." We've been looking at the difference between "religion" and the relationship God wants with us. Each week we've been answering a different "Why?" question about God, Jesus, trust and this whole journey He wants us to join Him in. This past Sunday we took a look at the question "Why Church?" Why do we need to gather together each week?
Do we even need the church anymore?

In preparing I was reminded of a time when my wife and I bought a bike for our son. It was his first "big boy" bike. We went to our local Walmart, picked out one we thought he would like and loaded up the box. It wasn't long before I was sitting on the garage floor surrounded by bike parts. Those of you who know me, know that patience isn't one of my strengths, and putting together a bike was a real test of what little I had. After what seemed like an eternity I finally got the bike together and was confident that it was safe to ride.

The time came to give him the bike and to say he was excited was an understatement. He jumped on that thing and tore out of the driveway wide-eyed, whooping and hollering all the way down the street. It was in that moment that I forgot how much frustration and irritation the assembly of that bike had caused. You see, his excitement made all the work worthwhile, and had I not been willing to put the bike together and endure a few hours of assembly, neither I nor my son would have been able to enjoy the results.

In our lives God has an incredible experience for us. It's called life with Him. He tells us he wants us to have it "to the fullest," but to do that there is one truth we often ignore. It's the same one that I discovered in putting that bike together. If we want that full life God has for us. If we want to enter into a wide-eyed, whooping and hollering relationship with him... some assembly is required.

Families are one of our greatest assets. In Genesis 2 God created family and gave us a great gift. But, as we see from the story of Adam and Eve, this masterpiece of God's creation soon faced the same problems we face today. Every family in all of history has faced, is facing and will face storms. Family is not easy, but strengthening the family and beginning the remodeling process is simpler than we think. In fact, it begins and hinges on just 11 words. The Lord is the master builder.
Remember, He is the creator and designer of family. I don't know much about building but one thing I do know is that you don't bring the builder and the designer into the process at the end. He is consulted from the beginning. One of the reasons many of us have crooked, lopsided and dysfunctional families is that we brought the Builder in at the end. We tried to construct and put together ourselves and our families on our own. We don't know a 2 by 4 from a 6 by 4, and we tried to build our own house and then we asked the Lord to bless our mess. We need to bring Him in at the beginning.

God wants to be the builder, building your life, your relationships and your family from the very start. Whether you are contemplating a family, beginning one or in need of a family makeover, there is one plan and one Planner. If you let the Lord take control from the beginning, or release control and let him take over, He will establish your family according to His will and purpose. He will build your family according to His blueprints. However, remember, He will not violate your will. God will give you what you need but you have to release him to do His work.

As I stated earlier, as with any remodel it takes work, effort, time and determination. My prayer is that you and I will stop our DIY family projects and turn them and ourselves over to the expert. You will be shocked, amazed and pleased at the before and after.

P.S. Aren't you glad God doesn't take vacation!

If you've watched the World Cup, you have no doubt been annoyed by the sound that drowns out even the TV announcers. The sound is from the vuvuzela, South Africa's answer to the Thunderstick. It's a small plastic trumpet that costs less than a dollar to make and creates no known musical notes. But when thousands of people blow them simultaneously, you get a loud, incessant hum that makes the entire stadium sound like it's being attacked by bees. It's normal to find them at any South African soccer match.
Ask just about anyone though and they will tell you they hate them!

This past Sunday I preached a message from John 10 where Jesus describes himself as the "Good Shepherd." He tells us that we need to hear, listen and follow His voice. He then, in vs 5, warns us that there will be other voices and we need to learn to not listen to them. In fact, He tells us we should "run away" from those voices. Jesus knows that, just like the vuvuzela, Satan wants to distract you and create noise that drowns your ability to hear and listen to Him. If he can do that, following Jesus becomes more difficult.

Jesus gives us a simple and somewhat unexpected way to eliminate the distraction of the stranger's voice. It's sacrificial love. In vs 10 he tells us that Satan wants to "steal and kill and destroy." In other words, distract us with the noise of this world and therefore kill our ability to hear Jesus. He wants us to have "life, and have it to the full" if we will only listen and follow His voice. In vs 11 He tells us how to hear that voice when he says the "shepherd lays down his life for the sheep." The strategy for silencing the other voices and noise of the world is in first recognizing and accepting the sacrificial love that Jesus has for us, and then practicing that same kind of love by sacrificially loving others. If I want to lead the rich and satisfying life that Jesus offers, I must let everything I think, do and say be colored by His love. In order to do that I have to silence the incessant noise of the world that wants to steal my attention and joy.

As you watch the World Cup, don't let the sound of the vuvuzela keep you from enjoying the matches, and as you live your life, don't let the noise of the stranger's voice steal your love for Jesus and the world he wants to reach.

Just got to the hotel in Santo Domingo. To say it was a busy day would be an understatement. We finally arrived in the DR at about 1:00pm after a night stuck in Miami.
We went to get our checked luggage (not really luggage - we were carrying in 4 duffel bags full of diapers), and sure enough the luggage didn't make it. Why would I expect anything else?

We headed to Casa de Luz so Frank could meet with the employees there. Before the meeting I got a chance to meet with the Dominican Director of SCORE International and some of his team. They were there to lend their support and connect with us. It was a brief meeting but I hope a fruitful one, as we discussed potential partnerships between them and Casa de Luz. They also brought along their construction guy, who assessed the needs on the 2nd floor addition, and he hopes to give an estimate to get it in the dry (roof) and/or completely finish it out soon. Looking forward to seeing those figures.

We then went in to the meeting with Casa de Luz leadership and then all of the employees. Now for those who know me well, you know my Spanish is limited to counting to 20 or so and maybe a few countrified (not sure if that's a word but my TN & KY friends are right with me) words. Well, you guessed it, the entire meeting was in Spanish. At first I was uncomfortable. Then I realized my role today was not going to be played with words (and that's strange for me). My role today was to pray and lend silent support.

But, a funny thing happened during the meeting. I began to understand. Now I don't mean I suddenly became bilingual. I didn't understand a word that was being said. What I do mean is that I began to understand what was happening in the room. God was working. God was speaking. He was calming fears, healing hurts, repairing relationships, restoring confidence and setting things right. Yes, He was using Frank's words (that I didn't understand), but it was more than that, it was His Spirit. God was making a way.
He was clearing the junk and obstacles that a few hours earlier threatened to take down a vital piece of His kingdom and a tool to reach forgotten and abandoned children, innocent children, that don't care about funding, politics and employee meetings. They just want to be touched, held, cared for and loved.

Today I got to see what I think St Francis of Assisi meant when he said "Preach the gospel always, if necessary use words." I got to see my friend Frank be a living reflection of the love, courage and strength of Christ. I got to see him be a man filled with grace & mercy. A man filled with God's Spirit. I am grateful for what I saw today even though I couldn't understand a word.

More on the DR later.
\section{Introduction}\label{sec:intro}

The $\pi$-calculus~\cite{MPW92} is a widely used process calculus, which models communications between processes using input and output actions, and allows the passing of communication links. Various operational semantics of the $\pi$-calculus have been proposed, which can be classified according to whether transitions are unlabelled or labelled. Unlabelled transitions (so-called reductions) represent completed interactions. As observed in~\cite{SWU10}, they give us the internal behaviour of complete systems, whereas to reason compositionally about the behaviour of a system in terms of its components we need labelled transitions. With labelled transitions, we can distinguish early and late semantics~\cite{MPW93}, with the difference being that early semantics allows a process to receive (free) names it already knows from the environment, while the late does not. This creates additional causation in the early case between those inputs and previous output actions making bound names free. All existing reversible versions of the $\pi$-calculus use reduction semantics~\cite{lanese2010reversing,TIEZZI2015684} or late semantics~\cite{cristescu2013compositional,DBLP:journals/corr/abs-1808-08655}. However, the early semantics of the (forward-only) $\pi$-calculus is more widely used than the late, partly because it has a sound correspondence with contextual congruences~\cite{10.5555/646246.684864,HondaKYoshida95}. We define $\pi$IH, the first reversible early $\pi$-calculus, and give it a denotational semantics in terms of reversible event structures. The new calculus is a reversible form of the internal $\pi$-calculus, or $\pi$I-calculus~\cite{SANGIORGI1996235}, which is a subset of the $\pi$-calculus where every link sent by an output is bound (private), yielding greater symmetry between inputs and outputs.
It has been shown that the asynchronous $\pi$-calculus can be encoded in the asynchronous form of the $\pi$I-calculus~\cite{BOREALE1998205}. The $\pi$-calculus has two forms of causation. \emph{Structural} causation, as one would find in CCS, comes directly from the structure of the process, e.g.\ in $\inp{a}{b}.\inp{c}{d}$ the action $\inp{a}{b}$ must happen before $\inp{c}{d}$. \emph{Link} causation, on the other hand, comes from one action making a name available for others to use, e.g.\ in the process $\inp{a}{x}\vert \outp{b}{c}$, the event $\inp{a}{c}$ will be caused by $\outp{b}{c}$ making $c$ a free name. Note that link causation as in this example is present in the early form of the $\pi$I-calculus though not the late, since it is created by the process receiving one of its free names. Restricting ourselves to the $\pi$I-calculus, rather than the full $\pi$-calculus, lets us focus on the link causation created by early semantics, since it removes the other forms of link causation present in the $\pi$-calculus.

We base $\pi$IH on the work of Hildebrandt \emph{et al.}~\cite{hildebrandt2017stable}, which used extrusion histories and locations to define a stable non-interleaving early operational semantics for the $\pi$-calculus. We extend the extrusion histories so that they contain enough information to reverse the $\pi$I-calculus, storing not only extrusions but also communications. Allowing processes to evolve, while moving past actions to a history separate from the process, is called dynamic reversibility. By contrast, static reversibility, as in CCSK~\cite{PU07}, lets processes keep their structure during the computation, and annotations are used to keep track of the current state and how actions may be reversed.

Event structures are a model of concurrency which describe causation, conflict and concurrency between events. They are `truly concurrent' in that they do not reduce concurrency of events to the different possible interleavings.
They have been used to model forward-only process calculi~\cite{Crafa2012,Boudol1989,winskel1982event}, including the $\pi$I-calculus~\cite{Crafa2007compositional}. {Describing reversible processes as event structures is useful because it gives us a simple representation of the causal relationships between actions and gives us equivalences between processes which generate isomorphic event structures. True concurrency in semantics is particularly important in reversible process calculi, as the order actions can reverse in depends on their causal relations~\cite{journals/entcs/PhillipsU07}.} Event structure semantics of dynamically reversible process calculi have the added complexity of the histories and the actions in the process being separated, obscuring the structural causation. This was an issue for Cristescu \emph{et al.}~\cite{CristescuKV16}, who used rigid families~\cite{CastellanHLW14}, related to event structures, to describe the semantics of R$\pi$~\cite{cristescu2013compositional}. Their semantics require a process to first reverse all actions to find the original process, map this process to a rigid family, and then apply each of the reversed memories in order to reach the current state of the process. Aubert and Cristescu~\cite{AUBERT201777} used a similar approach to describe the semantics of a subset of RCCS processes as configuration structures. {We use a different tactic of first mapping to a statically reversible calculus, $\pi$IK, and then obtaining the event structure.} This means that while we do have to reconstruct the original structure of the process, we avoid redoing the actions in the event structure. Our $\pi$IK is inspired by CCSK and the statically reversible $\pi$-calculus of~\cite{DBLP:journals/corr/abs-1808-08655}, which use communication keys to denote past actions. To keep track of link causation, keys are used in a number of different ways in~\cite{DBLP:journals/corr/abs-1808-08655}. 
In our case we can handle link causation by using keys purely to annotate the action which was performed using the key, and any names which were substituted during that action. Although our two reversible variants of the $\pi$I-calculus have very different syntax and originate from different ideas, we show an operational correspondence between them in Theorem~\ref{the:key-ext-eq}. We do this despite the extrusion histories containing more information than the keys, since they remember what bound names were before being substituted. The mapping from $\pi$IH to $\pi$IK bears some resemblance to the one presented from RCCS to CCSK in~\cite{Medic2016}, though with some important differences. $\pi$IH uses centralised extrusion histories more similar to rho$\pi$~\cite{LANESE201625} while RCCS uses distributed memories. Additionally, unlike CCS, $\pi$I has substitution as part of its transitions and memories are handled differently by $\pi$IK and $\pi$IH, and our mapping has to take this into account. We describe denotational structural event structure semantics of $\pi$IK, partly inspired by~\cite{Crafa2012,Crafa2007compositional}, using reversible bundle event structures~\cite{EFG2018}. Reversible event structures~\cite{journals/jlp/PhillipsU15} allow their events to reverse and include relations describing when events can reverse. Bundle event structures are more expressive than prime event structures, since they allow an event to have multiple possible conflicting causes. This allows us to model parallel composition without having a single action correspond to multiple events. While it would be possible to model $\pi$IK using reversible prime event structures, using bundle event structures not only gives us fewer events, it also lays the foundation for adding rollback to $\pi$IK and $\pi$IH, similarly to~\cite{EFG2018}, which cannot be done using reversible prime event structures. 
The structure of the paper is as follows: Section~\ref{sec:Ext-sem} describes $\pi$IH; Section~\ref{sec:piik} describes $\pi$IK; Section~\ref{sec:sem-corr} describes the mapping from $\pi$IH to $\pi$IK; Section~\ref{sec:BES} recalls labelled reversible bundle event structures; and Section~\ref{sec:Den-Ev-Sem} gives event structure semantics of $\pi$IK. Proofs of the results presented in this paper can be found in the technical report~\cite{graversen2020event}. \section{$\pi$I-calculus reversible semantics with extrusion histories}\label{sec:Ext-sem} Stable non-interleaving, early operational semantics of the $\pi$-calculus were defined by Hildebrandt \emph{et al.} in~\cite{hildebrandt2017stable}, using locations and extrusion histories to keep track of link causation. We will in this section use a similar approach to define a reversible variant of the $\pi$I-calculus, $\pi$IH, using the locations and histories to keep track of not just causation, but also past actions. The $\pi$I-calculus is a restricted variant of the $\pi$-calculus wherein output on a channel $a$, $\outp{a}{b}$, binds the name being sent, $b$, corresponding to the $\pi$-calculus process $(\nu b)\overline{a}\!\lrangles{b}\!.P$. This creates greater symmetry with the input $\inp{a}{x}$, where the variable $x$ is also bound. The syntax of $\pi$IH processes is: \vspace{3pt}$P::=\sum\limits_{i\in I} \alpha_i.P_i \;\mid\; P_0\vert P_1 \;\mid \; (\nu x) P \;\;\;\; \alpha::=\outp{a}{b}\;\mid \; \inp{a}{b}$ \vspace{3pt}The forward semantics of $\pi$IH can be seen in Table~\ref{tab:ext-sem-fwd} and the reverse semantics can be seen in Table~\ref{tab:ext-sem-rev}. We associate each transition with an action $\mu::=\alpha\;\vert\; \tau$ and a location $u$ (Definition~\ref{def:Loc}), describing where the action came from and what changes are made to the process as a result of the action. 
We store these location and action pairs in extrusion and communication histories associated with processes, so $(\overline{H},\underline{H},H)\vdash\! P$ means that if $(\mu,u)$ is an action and location pair in the output history $\overline{H}$ then $\mu$ is an output action, which $P$ previously performed at location $u$. Similarly $\underline{H}$ contains pairs of input actions and locations and $H$ contains triples of two communicating actions and the location associated with their communication. We use $\mathbf{H}$ as shorthand for $(\overline{H},\underline{H},H)$. \begin{definition}[Location \cite{hildebrandt2017stable}]\label{def:Loc} A location $u$ of an action $\mu$ is one of the following: \begin{enumerate} \item $l[P][P']$ if $\mu$ is an input or output, where $l\in \{0,1\}^*$ describes the path taken through parallel compositions to get to $\mu$'s origin, $P$ is the subprocess reached by following the path before $\mu$ has been performed, and $P'$ is the result of performing $\mu$ in $P$. \item $l\lrangles{0l_0[P_0][P_0'],1l_1[P_1][P_1']}$ if $\mu=\tau$, where $l0l_0[P_0][P_0']$ and $l1l_1[P_1][P_1']$ are the locations of the two actions communicating. \end{enumerate} The path $l$ can be empty if the action did not go through any parallel compositions. \end{definition} We also use the operations on extrusion histories from Definition~\ref{def:ExtOp}. These (1) add a branch to the path in every location, (2) isolate the extrusions whose locations begin with a specific branch, (3) isolate the extrusions whose locations begin with a specific branch and then remove the first branch from the locations, and (4) add a pair to the history it belongs in. 
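To make the shape of locations concrete, the following Python sketch (our own illustration; the string encodings of process terms are ours and not part of the calculus) models the two cases of Definition~\ref{def:Loc}:

```python
from dataclasses import dataclass

# Our illustrative encoding: a location is either an action location l[P][P']
# or a pairing of the two locations of a communication.
@dataclass(frozen=True)
class ActLoc:
    path: str      # l: branch choices (0/1) through parallel compositions
    before: str    # P: the subprocess before the action
    after: str     # P': the subprocess after the action

@dataclass(frozen=True)
class TauLoc:
    path: str      # l: shared prefix of the two paths
    left: ActLoc   # location of the communicating action whose path starts with 0
    right: ActLoc  # location of the communicating action whose path starts with 1

# A communication location in the style of the example later in this section:
u = TauLoc("0", ActLoc("0", "a(x).x(d)", "'c(d)"), ActLoc("1", "'a(c)", "0"))
```

The component locations carry their own $0$/$1$ prefixes, while the shared prefix of the two paths is factored out into the communication location.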
\begin{definition}[Operations on extrusion histories \cite{hildebrandt2017stable}]\label{def:ExtOp} Given an extrusion history $(\overline{H},\underline{H},H)$, for $H^*\in\{\overline{H},\underline{H},H\}$ we have the following operations for $i\in \{0,1\}$: \begin{enumerate} \item $iH^*=\{(\mu,iu)\mid (\mu,u)\in H^*\}$ \item $[i]H^*=\{(\mu,iu)\mid (\mu,iu)\in H^*\}$ \item $[\check{i}]H^*=\{(\mu,u)\mid (\mu,iu)\in H^*\}$ \item $\mathbf{H}+(\mu,u)=\begin{dcases*} (\overline{H}\cup \{(\mu,u)\},\underline{H},H) & if $(\mu,u)=(\outp{a}{n},u)$ \\ (\overline{H},\underline{H}\cup \{(\mu,u)\},H) & if $(\mu,u)=(\inp{a}{x},u)$ \\ (\overline{H},\underline{H},H\cup \{(\mu,u)\}) & if $(\mu,u)=(\inp{a}{x},\outp{a}{n},l\langle u_0,u_1\rangle)$ \\ \end{dcases*}$ \end{enumerate} \end{definition} \begin{table}[tb] \begin{tabular}{c} \vspace{5pt}\infer[{[\text{OUT}]}]{{\mathbf{H}\vdash\!\displaystyle\sum\limits_{i\in I} \alpha_i.P_i} \,\xrightarrow[u]{\alpha_j} {(\overline{H}\cup \{(\outp{a}{n},u)\},\underline{H},H)\vdash\! P_j}}{u=[\sum\limits_{i\in I} \alpha_i.P_i][P_j]\;\;\;\alpha_j=\outp{a}{n}\;\;\; j\in I}\\ \vspace{5pt}\infer[{[\text{IN}]}]{{\mathbf{H}\vdash\!\sum\limits_{i\in I} \alpha_i.P_i} \,\xrightarrow[u]{\inp{a}{n}} {(\overline{H},\underline{H}\cup \{(\inp{a}{n},u)\},H)\vdash\! P_j'}}{u=[\sum\limits_{i\in I} \alpha_i.P_i][P_j]\;\;\;P_j'=P_j[x:=n]\;\;\;\alpha_j=\inp{a}{x}\;\;\; j\in I } \\ \vspace{8pt}\infer[{[\text{PAR}_i]}]{{\mathbf{H}\vdash\! P_0\vert P_1}\,\xrightarrow[iu]{\mu} {((\overline{H}\setminus [i]\overline{H})\cup i\overline{H'_i},(\underline{H}\setminus [i]\underline{H})\cup i\underline{H'_i},(H\setminus [i]H)\cup iH'_i)\vdash\! P_0'\vert P_1'}}{\begin{array}{l} {([\check{i}]\overline{H},[\check{i}]\underline{H},[\check{i}]H)\vdash\! P_i} \,\xrightarrow[u]{\mu} {\mathbf{H}_i'\vdash\! P_i'}\;\;\; P_{1-i}'=P_{1-i}\;\;\;\text{if }\mu=\outp{a}{n}\text{ then } n\notin \mathsf{fn}(P_{1-i}) \end{array}} \\ \vspace{5pt}\infer[{[\text{COM}_i]}] {{\mathbf{H}\vdash\!
P_0\vert P_1} \,\xrightarrow[(0v_0,1v_1)]{\tau} {(\overline{H},\underline{H},H\cup\{(\alpha_0,\alpha_1,\langle 0v_0,1v_1\rangle)\})\vdash\! (\nu n)(P_0'\vert P_1')}}{\begin{array}{l} {([\check{i}]\overline{H},[\check{i}]\underline{H},[\check{i}]H)\vdash\! P_i} \,\xrightarrow[v_i]{\alpha_i} {\mathbf{H}_i'\vdash\! P_i'} \;\;\; \alpha_i=\outp{a}{n} \;\;\; \alpha_j=\inp{a}{n} \vspace{3pt}\\ {([\check{j}]\overline{H},[\check{j}]\underline{H},[\check{j}]H)\vdash\! P_j} \,\xrightarrow[v_j]{\alpha_j} {\mathbf{H}_j'\vdash\! P_j'}\;\;\; j=1-i\;\;\; n\notin \mathsf{fn}(P_j) \end{array}}\\ \vspace{5pt}\infer[{[\text{SCOPE}]}]{{\mathbf{H}\vdash\!(\nu x)P} \,\xrightarrow[u]{\mu} {\mathbf{H}'\vdash\!(\nu x)P'}}{{\mathbf{H}\vdash\! P} \,\xrightarrow[u]{\mu} {\mathbf{H}'\vdash\! P'}\;\; x\notin n(\mu)} \;\;\;\;\;\infer[{[\text{STR}]}] {{\mathbf{H}\vdash\! P} \,\xrightarrow[u]{\mu} {\mathbf{H}'\vdash\! Q}}{P\equiv P'\;\;\; {\mathbf{H}\vdash\! P'} \,\xrightarrow[u]{\mu} {\mathbf{H}'\vdash\! Q'}\;\;\;Q'\equiv Q}\\ \end{tabular} \caption{Semantics of $\pi$IH (forwards rules)}\label{tab:ext-sem-fwd} \end{table} The forwards semantics of $\pi$IH have six rules. In $[\text{OUT}]$ the action is an output, the location is the process before and after doing the output, and they are added to the output history. The equivalent reverse rule, $[\text{OUT}^{-1}]$, similarly removes the pair from the history and transforms the process from the second part of the location back to the first. The input rule $[\text{IN}]$ works similarly, but performs a substitution on the received name and adds the pair to the input history instead. In $[\text{PAR}_i]$ we isolate the parts of the histories whose locations start with $i$ and use those to perform an action in $P_i$, getting $\mathbf{H}_i'\vdash\! P_i'$. The rule then replaces the parts of the histories whose locations start with $i$ with $i\mathbf{H}_i'$ when propagating the action through the parallel.
A communication in $[\text{COM}_i]$ adds memory of the communication to the history. The rules $[\text{SCOPE}]$ and $[\text{STR}]$ are standard and self-explanatory. The reverse rules use the extrusion histories to find a location $l[P][P']$ such that the current state of the subprocess at $l$ is $P'$, and change it to $P$. In these semantics, structural congruence, consisting only of $\alpha$-conversion together with ${!P}\equiv {!P\vert P}$ and ${(\nu a)(\nu b)P}\equiv {(\nu b)(\nu a) P}$, is primarily used to create and remove extra copies of a replicated process when reversing the action that happened before the replication. Since we use locations in our extrusion histories, we try to avoid using structural congruence any more than necessary. However, not using it for replication would mean that we would need some other way of preventing traces such as ${\mathbf{H}\vdash!P}\,\xrightarrow[u]{\mu}\xsquigarrow{\mu}{u}{\mathbf{H}\vdash!P\vert P}$, which allows a process to reach a state it could not reach via a parabolic trace. Using structural congruence for replication does not cause any problems for the locations, as we can tell apart past actions originating in each copy of $P$ by the path in their location, with actions from the $i$th copy having a path of $i$ $0$s followed by a $1$.
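The path manipulations of Definition~\ref{def:ExtOp} can be sketched in a few lines if we simplify a history entry to an (action, path) pair, leaving out the $[P][P']$ part of locations (a simplification of ours, for illustration only):

```python
# Sketch of the operations of Definition 2.2 on a single history component,
# with a history entry simplified to (action, path).

def prefix(i, hist):                # iH*: prepend branch i to every path
    return {(mu, str(i) + u) for (mu, u) in hist}

def at_branch(i, hist):             # [i]H*: keep entries whose path starts with i
    return {(mu, u) for (mu, u) in hist if u.startswith(str(i))}

def strip_branch(i, hist):          # [i-check]H*: same entries, leading i dropped
    return {(mu, u[1:]) for (mu, u) in hist if u.startswith(str(i))}

h = {("out a(n)", "01"), ("in b(x)", "10")}
assert strip_branch(0, h) == {("out a(n)", "1")}
assert prefix(0, strip_branch(0, h)) == at_branch(0, h)
```

The last assertion checks the interplay used in $[\text{PAR}_i]$: stripping a branch and then re-prefixing it agrees with simply filtering on that branch.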
\begin{table}[tb] \small \begin{tabular}{c} \vspace{5pt} $\infer[{[\text{OUT}^{-1}]}]{\mathbf{H}\vdash P_j \xsquigarrow{\alpha_j}{u} (\overline{H}\setminus \left\{(\outp{a}{n},u)\right\},\underline{H},H)\vdash \sum_{i\in I} \alpha_i.P_i}{u=[\sum\limits_{i\in I} \alpha_i.P_i][P_j]\;\;\; \alpha_j=\outp{a}{n}\;\;\; j\in I\;\;\; (\outp{a}{n},u)\in \overline{H}} $ \\ \vspace{5pt} $\infer[{[\text{IN}^{-1}]}]{\mathbf{H}\vdash P_j' \xsquigarrow{\inp{a}{n}}{u} (\overline{H},\underline{H}\setminus \left\{(\inp{a}{n},u)\right\},H)\vdash \sum\limits_{i\in I} \alpha_i.P_i}{u=[\sum\limits_{i\in I} \alpha_i.P_i][P_j]\;\;\;P_j'= P_j[x:=n]\;\;\;\alpha_j=\inp{a}{x}\;\;\; j\in I \;\;\; (\inp{a}{n},u)\in \underline{H}}$ \\ \vspace{8pt} $\infer[{[\text{PAR}_i^{-1}]}]{\mathbf{H}\vdash P_0\vert P_1\xsquigarrow{\mu}{iu} ((\overline{H}\setminus [i]\overline{H})\cup i\overline{H'_i},(\underline{H}\setminus [i]\underline{H})\cup i\underline{H'_i},(H\setminus [i]H)\cup iH'_i)\vdash P_0'\vert P_1'}{([\check{i}]\overline{H},[\check{i}]\underline{H},[\check{i}]H)\vdash P_i\xsquigarrow{\mu}{u} \mathbf{H}_i'\vdash P_i'\;\;\; P_{1-i}'=P_{1-i}\;\;\;\text{if }\mu=\outp{a}{n}\text{ then } n\notin \mathsf{fn}(P_{1-i})}$\\ \vspace{5pt}\infer[{[\text{COM}_i^{-1}]}] {{\mathbf{H}\vdash\! (\nu n)(P_0\vert P_1)} \,\xsquigarrow{\tau}{(0v_0,1v_1)} {(\overline{H},\underline{H},H\setminus\{(\alpha_0,\alpha_1,\langle 0v_0,1v_1\rangle)\})\vdash\!
P_0'\vert P_1'}}{\begin{array}{l} ([\check{i}]\overline{H}\cup \{(\outp{a}{n},v_i)\},[\check{i}]\underline{H},[\check{i}]H)\vdash P_i\xsquigarrow{\outp{a}{n}}{v_i} \mathbf{H}_i'\vdash P_i' \;\;\; \alpha_i=\outp{a}{n} \;\;\; \alpha_j=\inp{a}{n} \vspace{3pt}\\([\check{j}]\overline{H},[\check{j}]\underline{H}\cup \{(\inp{a}{n},v_j)\},[\check{j}]H)\vdash P_j\xsquigarrow{\inp{a}{n}}{v_j} \mathbf{H}_j'\vdash P_j'\;\;\; j=1-i\;\;\; n\notin \mathsf{fn}(P_j) \end{array}}\\ \vspace{5pt} $\infer[{[\text{SCOPE}^{-1}]}]{\mathbf{H}\vdash(\nu x)P\xsquigarrow{\mu}{u} \mathbf{H}'\vdash(\nu x)P'}{\mathbf{H}\vdash P\xsquigarrow{\mu}{u} \mathbf{H}'\vdash P'\;\; x\notin n(\mu)}\;\;\;\;\; \infer[{[\text{STR}^{-1}]}]{\mathbf{H}\vdash P \xsquigarrow{\mu}{u} \mathbf{H}'\vdash Q}{P\equiv P'\;\;\; \mathbf{H}\vdash P' \xsquigarrow{\mu}{u} \mathbf{H}'\vdash Q' \;\;\; Q'\equiv Q}$ \\ \end{tabular} \caption{Semantics of $\pi$IH (reverse rules)}\label{tab:ext-sem-rev} \end{table} \begin{example} Consider the process $(\inp{a}{x}.\outp{x}{d} \vert \outp{a}{c})\vert \inp{b}{y}$. If we start with empty histories, each transition adds actions and locations: \noindent{\resizebox{\textwidth}{!}{ $\begin{array}{lr} {(\emptyset,\emptyset,\emptyset)\vdash\! (\inp{a}{x}.\outp{x}{d} \vert \outp{a}{c}) \vert \inp{b}{y}}& \hspace{-2cm}\xrightarrow[{0\langle 0[\inp{a}{x}.\outp{x}{d}][\outp{c}{d}],1[\outp{a}{c}][0]\rangle}]{\tau}\\ {(\emptyset,\emptyset,\{(\inp{a}{c},\outp{a}{c},0\lrangles{ 0[\inp{a}{x}.\outp{x}{d}][\outp{c}{d}],1[\outp{a}{c}][0]})\})\vdash\! (\nu c)(\outp{c}{d} \vert 0)\vert \inp{b}{y}} & \xrightarrow[{00[\outp{c}{d}][0]}]{\outp{c}{d}}\\ {(\{(\outp{c}{d},00[\outp{c}{d}][0])\},\emptyset,\{(\inp{a}{c},\outp{a}{c},0\lrangles{ 0[\inp{a}{x}.\outp{x}{d}][\outp{c}{d}],1[\outp{a}{c}][0]})\})\vdash\!
(\nu c)(0\vert 0)\vert \inp{b}{y}}\; & \xrightarrow[{1[\inp{b}{y}][0]}]{\inp{b}{d}}\\ \multicolumn{2}{l}{(\{(\outp{c}{d},00[\outp{c}{d}][0])\},\{(\inp{b}{d},{1[\inp{b}{y}][0]})\},\{(\inp{a}{c},\outp{a}{c},0\lrangles{ 0[\inp{a}{x}.\outp{x}{d}][\outp{c}{d}],1[\outp{a}{c}][0]})\})\vdash\! (0\vert 0)\vert 0}\\ \end{array}$ }} \end{example} We show that our forwards and reverse transitions correspond. \begin{proposition}[Loop]\label{prop:fwdtorevTrans}\mbox{} \begin{enumerate} \item Given a $\pi$IH process $P$ and an extrusion history $\mathbf{H}$, if ${\mathbf{H}\vdash\! P}\,\xrightarrow[u]{\alpha} {\mathbf{H}' \vdash\! Q}$, then ${\mathbf{H}'\vdash\! Q} \xsquigarrow{\alpha}{u} {\mathbf{H}\vdash\! P}$. \item Given a forwards-reachable $\pi$IH process $P$ and an extrusion history $\mathbf{H}$, if ${\mathbf{H}\vdash\! P} \xsquigarrow{\alpha}{u} {\mathbf{H}' \vdash\! Q}$, then ${\mathbf{H}'\vdash\! Q}\,\,\xrightarrow[u]{\alpha} {\mathbf{H}\vdash\! P}$. \end{enumerate} \end{proposition} \section{$\pi$I-calculus reversible semantics with annotations}\label{sec:piik} In order to define event structure semantics of $\pi$IH, we first map from $\pi$IH to a statically reversible variant of the $\pi$I-calculus, called $\pi$IK. $\pi$IK is based on the previous statically reversible calculi $\pi$K~\cite{DBLP:journals/corr/abs-1808-08655} and CCSK~\cite{PU07}. Both of these use \emph{communication keys} to denote past actions and which other actions they have interacted with, so ${\inp{a}{x}\vert\outp{a}{b}}\xrightarrow{\tau[n]}{\inp{a}{b}[n]\vert\outp{a}{b}[n]}$ means a communication with the key $n$ has taken place between the two actions.
We apply this idea to define early semantics of $\pi$IK, which has the following syntax: \vspace{3pt} $P::= \alpha.P\,\mid\,\alpha[n].P\,\mid\, P_0+P_1\,\mid\, P_0\vert P_1\,\mid\, (\nu x)P \;\;\; \alpha::=\outp{a}{b}\,\mid \, \inp{a}{b}$ \vspace{3pt}The primary difference between applying communication keys to CCS and the $\pi$I-calculus is the need to deal with substitution. We need to keep track of not only which actions have communicated with each other, but also which names were substituted when. We do this by giving the substituted names a key, $a_{[n]}$, but otherwise treating them the same as those without the key, except when undoing the input associated with $n$. \begin{table}[tb] \begin{center} \begin{tabular}{c} \infer{{\inp{a}{x}.P}\xrightarrow{\inp{a}{b}[n]} {\inp{a}{b}[n].P'}}{\mathsf{std}(P)\;\;\; P'=P[x:=b_{[n]}]} \;\;\;\;\; \infer{\outp{a}{b}.P\xrightarrow{\outp{a}{b}[n]} \outp{a}{b}[n].P}{\mathsf{std}(P)} \\ \infer{{\alpha[n].P}\xrightarrow{\mu[m]} {\alpha[n].P'}}{P\xrightarrow{\mu[m]} P' \;\;\; m\neq n\;\;\;\text{ if }\mu=\overline{a}(x) \text{ then } x\notin n(\alpha)} \;\;\;\; \infer{P_0 + P_1 \xrightarrow{\mu[n]} P_0'+ P_1}{P_0\xrightarrow{\mu[n]} P_0' \;\;\; \mathsf{std}(P_1)} \\ \infer{P_0\vert P_1\xrightarrow{\mu[n]} P_0'\vert P_1}{P_0\xrightarrow{\mu[n]} P_0' \;\;\; \mathsf{fsh}[n](P_1)\;\;\; \text{if }\mu=\outp{a}{b}\text{ then }b\notin \mathsf{fn}(P_1)} \;\;\;\;\; \infer{P_0\vert P_1\xrightarrow{\tau[n]} (\nu b)(P_0'\vert P_1')}{P_0\xrightarrow{\inp{a}{b}[n]}P_0'\;\;\; P_1\xrightarrow{\outp{a}{b}[n]} P_1'} \\ \vspace{5pt}\infer{(\nu a)P \xrightarrow{\mu[m]} (\nu a)P'}{P\xrightarrow{\mu[m]} P' \;\;\;\; a\notin n(\mu)} \;\;\;\;\; \infer{P\xrightarrow{\mu[n]} P'}{P\equiv Q\xrightarrow{\mu[n]}Q'\equiv P'}\\ \end{tabular} \caption{$\pi$IK forward semantics}\label{tab:pik-sem} \end{center} \end{table} \begin{table}[tb] \begin{center} \begin{tabular}{c} \infer{\inp{a}{b}[m].P\xrsquigarrow{\inp{a}{b}[m]} \inp{a}{x}.P'}{\mathsf{std}(P)\;\;\; 
x\notin n(P) \;\;\; P'=P[b_{[m]}:=x]} \;\;\;\; \infer{\outp{a}{b}[n].P\xrsquigarrow{\outp{a}{b}[n]} \outp{a}{b}.P}{\mathsf{std}(P)} \\ \infer{\alpha[n].P\xrsquigarrow{\mu[m]} \alpha[n].P'}{P\xrsquigarrow{\mu[m]} P' \;\;\; m\neq n} \;\;\;\; \infer{P_0 + P_1 \xrsquigarrow{\mu[n]} P_0'+ P_1}{P_0\xrsquigarrow{\mu[n]} P_0' \;\;\; \mathsf{std}(P_1)}\\ \infer{P_0\vert P_1\xrsquigarrow{\mu[n]} P_0'\vert P_1}{P_0\xrsquigarrow{\mu[n]} P_0' \;\;\; \mathsf{fsh}[n](P_1)\;\;\; \text{ if } \mu=\outp{a}{b} \text{ then } b\notin \mathsf{fn}(P_1)} \;\;\;\;\; \infer{(\nu b)(P_0\vert P_1)\xrsquigarrow{\tau[n]} P_0'\vert P_1'}{P_0\xrsquigarrow{\inp{a}{b}[n]}P_0' \;\;\; P_1\xrsquigarrow{\outp{a}{b}[n]} P_1'} \\ \vspace{5pt} \infer{(\nu a)P \xrsquigarrow{\mu[m]} (\nu a)P'}{P\xrsquigarrow{\mu[m]} P' \;\;\; a\notin n(\mu)} \;\;\;\;\; \infer{P\xrsquigarrow{\mu[n]} P'}{P\equiv Q\xrsquigarrow{\mu[n]}Q'\equiv P'}\\ \end{tabular} \caption{$\pi$IK reverse semantics}\label{tab:pik-rev-sem} \end{center} \end{table} Table \ref{tab:pik-sem} shows the forward semantics of $\pi$IK. The reverse semantics can be seen in Table~\ref{tab:pik-rev-sem}. We use~$\alpha$ to range over input and output actions and $\mu$ over input, output, and~$\tau$. We use $\mathsf{std}(P)$ to denote that~$P$ is a \emph{standard process}, meaning it does not contain any past actions (actions annotated with a key), and $\mathsf{fsh}[n](P)$ to denote that the key~$n$ is fresh for~$P$. Names in past actions are always free. Our semantics closely resemble those of CCSK, with the exceptions of substitution and ensuring that any name being output does not appear elsewhere in the process. The semantics use structural congruence as defined in Table~\ref{tab:str-con}.
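To see how a key both marks a past action and tags the names it substituted, the following toy Python encoding (ours; processes are tuples and continuations are lists of names, not part of the calculus) performs an early input with a key and then undoes it:

```python
# A toy encoding (ours): a prefix a(x).P is the tuple ("in", "a", "x", P);
# a past input with key n is ("in*", "a", b, n, P'), where P' is the
# continuation with the received name b annotated by the key n.

def subst(P, x, b, n):
    # replace the bound name x by b tagged with key n (continuations are name lists)
    return [(b, n) if name == x else name for name in P]

def do_input(proc, b, n):
    # early input: receive the (free) name b, recording the key n
    kind, a, x, P = proc
    assert kind == "in"
    return ("in*", a, b, n, subst(P, x, b, n))

def undo_input(proc):
    # reversing removes the key; any fresh input name will do (we fix "x" here)
    kind, a, b, n, P = proc
    assert kind == "in*"
    return ("in", a, "x", ["x" if name == (b, n) else name for name in P])

p = ("in", "a", "x", ["x", "d"])   # a(x).x(d), continuation kept as a name list
q = do_input(p, "c", "n1")         # receive c with key n1
assert q == ("in*", "a", "c", "n1", [("c", "n1"), "d"])
assert undo_input(q) == p          # doing and undoing returns the original process
```

The final assertion mirrors the loop property: a forward step followed by its reversal returns the original process, and the freedom to choose the input name when reversing corresponds to $\alpha$-conversion.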
\begin{table}[t] \begin{tabular}{lll} $P\vert 0\equiv P$ & $P_0\vert P_1\equiv P_1\vert P_0$ & $P_0\vert(P_1\vert P_2)\equiv (P_0\vert P_1)\vert P_2$ \\ $P+0\equiv P$ \hspace{1cm} & $P_0+P_1\equiv P_1+P_0$ \hspace{1cm} & $P_0+(P_1+P_2)\equiv (P_0+P_1)+P_2$ \\ \vspace{5pt} $!P\equiv {!P\vert P} $ & $(\nu x)(\nu y)P\equiv (\nu y)(\nu x) P$ & $(\nu a)(P_0\vert P_1) \equiv ((\nu a) P_0\vert P_1)$ if $a\notin n(P_1)$ \end{tabular} \caption{Structural congruence}\label{tab:str-con} \end{table} We again show a correspondence between forward and reverse transitions. \begin{proposition}[Loop]\label{the:FwdToRev}\mbox{} \begin{enumerate} \item Given a process $P$, if $P\xrightarrow{\mu[n]} Q$ then $Q\xrsquigarrow{\mu[n]} P$. \item Given a forwards reachable process $P$, if $P\xrsquigarrow{\mu[n]} Q$ then $Q\xrightarrow{\mu[n]} P$. \end{enumerate} \end{proposition} \section{Mapping from $\pi$IH to $\pi$IK}\label{sec:sem-corr} We will now define a mapping from $\pi$IH to $\pi$IK and show that we have an operational correspondence in Theorem~\ref{the:key-ext-eq}. The extrusion histories store more information than the keys, as they keep track of which names were substituted, as illustrated by Example~\ref{ex:ext-key-sub}. This means we lose some information in our mapping, but not information we need. \begin{example}\label{ex:ext-key-sub} Consider the processes $(\emptyset,\{(a(b),[a(x)][0])\},\emptyset)\vdash\! 0$ and $a(b)[n]$. These are the result of $a(x)$ receiving $b$ in the two different semantics. We can see that the extrusion history remembers that the input name was $x$ before $b$ was received, but the keys do not remember, and when reversing the action could use any name as the input name.
This does not make a great deal of difference, as after reversing $a(b)$, the process with the extrusion history can also $\alpha$-convert $x$ to any name. \end{example} Since we intend to define a mapping from processes with extrusion histories to processes with keys, we first describe how to add keys to substituted names in a process in Definition~\ref{def:S}. We have a function, $S$, which takes a process, $P_1$, and adds the key $[n]$ to all those names which were $x$ in a previous state of the process, $P_2$, before $x$ was substituted by another name in an input action with the key $[n]$. \begin{definition}[Substituting in a $\pi$IK process to correspond with processes with extrusion histories]\label{def:S} Given a $\pi$IK process $P_1$, a $\pi$I-calculus process without keys, $P_2$, a key $n$, and a name $x$, we can add the key $n$ to any names that $x$ has been substituted with, by applying $S(P_1,P_2,[n],x)$, defined as: \begin{enumerate} \item $S\left(0,0,[n],x\right)=0$ \vspace{5pt} \item $S\left(\sum\limits_{i\in I} P_{i1},\sum\limits_{i\in I} P_{i2},[n],x\right)=\sum\limits_{i\in I}S\left(P_{i1},P_{i2},[n],x\right)$ \vspace{5pt} \item $S\left(P_1\vert Q_1,P_2\vert Q_2,[n],x\right)=S\left(P_1,P_2,[n],x\right)\vert S\left(Q_1,Q_2,[n],x\right)$ \vspace{5pt} \item $S\left((\nu a) P_1, (\nu b) P_2,[n],x\right)= P_1'$ where: \\ \;\;\; if $x=b$ then $P_1'=(\nu a) P_1$ and otherwise $P_1'=(\nu a) S\left(P_1,P_2,[n],x\right)$. \vspace{5pt} \item $S\left(\alpha_1.P_1,\alpha_2.P_2,[n],x\right)=\alpha_1'.P_1'$ where: \\ \;\;\; if $\alpha_2\in \{\inp{x}{c},\outp{x}{c}\}$ then $\alpha_1'=\alpha_{1_{[n]}}$ and otherwise $\alpha_1'=\alpha_1$; \\ \;\;\; if $\alpha_2\in \{\inp{c}{x},\outp{c}{x}\}$ then $P_1'=P_1$ and otherwise $P_1'=S\left(P_1,P_2,[n],x\right)$.
\vspace{5pt} \item $S\left(\alpha_1[m].P_1,\alpha_2.P_2,[n],x\right)=\alpha_1'[m].P_1'$ where: \\ \;\;\; if $\alpha_2\in \{\inp{x}{c},\outp{x}{c}\}$ then $\alpha_1'=\alpha_{1_{[n]}}$ and otherwise $\alpha_1'=\alpha_1$; \\ \;\;\; if $\alpha_2\in \{\inp{c}{x},\outp{c}{x}\}$ then $P_1'=P_1$ and otherwise $P_1'=S\left(P_1,P_2,[n],x\right)$.\vspace{5pt} \item $S\left(!P_1, !P_2,[n],x\right)= {!S\left(P_1,P_2,[n],x\right)}$ \vspace{5pt} \item $S\left(P_1\vert P_1', !P_2,[n],x\right)= S\left(P_1,!P_2,[n],x\right)\vert S\left(P_1',P_2,[n],x\right)$ \vspace{5pt} \item $S\left(!P_1,P_2\vert P_2', [n],x\right)=S\left(!P_1,P_2,[n],x\right)\vert S\left(P_1,P_2',[n],x\right)$ \end{enumerate} \noindent where $\inp{a}{b}_{[n]}=\inp{a_{[n]}}{b}$ and $\outp{a}{b}_{[n]}=\outp{a_{[n]}}{b}$ \end{definition} Being able to annotate our names with keys, we can define a mapping, $E$, from extrusion histories to keys in Definition~\ref{def:E}. $E$ iterates over the extrusions, with one process building the $\pi$IK process, and another keeping track of which state of the original $\pi$IH process has been reached. When turning an extrusion into a keyed action, we use the locations as keys and also give each extrusion an extra copy of its location to use for determining where the action came from. This way we can use one copy to iteratively go through the process, removing splits from the path as we go through them, while still having another intact copy of the location to use as the final key. {In $E(\mathbf{H}\vdash\! P,P')$, $\mathbf{H}$ is a history of extrusions which need to be turned into keyed actions, $P$ is the process these keyed actions should be added to, and $P'$ is the state the process would have reached, had the added extrusions been reversed instead of turned into keyed actions.} If $E$ encounters a parallel composition in $P$ (case 2), it splits its extrusion histories into three.
One part, $\mathbf{H}_{\mathsf{shared}}$, contains the locations which have an empty path, and therefore belong to actions from before the process split in two. Another part contains the locations beginning with $0$, and goes to the first component of the parallel, while the third part contains the locations beginning with $1$, and goes to the second component. {$E$ can add an action -- and the choices not picked when that action was performed -- to $P$ (cases 3,4) when the associated location has an empty path and has $P'$ as its result process.} When turning an input memory from the history into a past input action in the process (case 4), we use $S$ (Definition~\ref{def:S}) to add keys to the substituted names. When $E$ encounters a restriction (case 5), it moves a memory that can be used inside the restriction under the restriction, and does this iteratively until there are no such memories left in the extrusion histories. We apply $E$ to a process in Example~\ref{ex:E}. \begin{definition} The function $\mathsf{lcopy}$ gives each member of an extrusion history an extra copy of its location: \begin{tabular}{l} $\mathsf{lcopy}(H^*)=\{(\mu,u,u)\mid (\mu,u)\in H^*\}$ \\ $\mathsf{lcopy}(\overline{H},\underline{H},H)=(\mathsf{lcopy}(\overline{H}),\mathsf{lcopy}(\underline{H}),\mathsf{lcopy}(H))$\\ \end{tabular} \end{definition} \begin{definition}\label{def:E} Given a $\pi$IH process, $\mathbf{H}\vdash\! P$, we can create an equivalent $\pi$IK process, $E(\mathsf{lcopy}(\mathbf{H})\vdash\! P,P)=P'$ defined as \begin{enumerate} \item $E((\emptyset,\emptyset,\emptyset)\vdash\! P,P')=P$ \vspace{5pt} \item $E(\mathbf{H}\vdash\! P_0\vert P_1,P_0'\vert P_1')=E(\mathbf{H}_{\mathsf{shared}}\vdash\!
P_0'' \vert P_1'' ,P_0'''\vert P_1''')$ where $\;\;\; \mathbf{H}_{\mathsf{shared}}=(\{(\alpha,u,u')\mid {(\alpha,u,u')\in \overline{H}} \text{ and }u\neq iu''\},\{(\alpha,u,u')\mid {(\alpha,u,u')\in \underline{H}}$ $\;\;\; {\text{and }} {u\neq iu''}\},\emptyset)$ $\;\;\; P_0''=E((\overline{H_0},\underline{H_0},H_0)\vdash\! P_0,P_0')$ where: $\;\;\;\ind \overline{H_0}=\{(\outp{a}{b},u_0,u_0')\mid (\outp{a}{b},0u_0,u_0')\in \overline{H} \text{ or }{(\outp{a}{b},\alpha_1,\lrangles{0u_0,1u_1},u_0')}\in H\}$ $\;\;\;\ind \underline{H_0}=\{ (\inp{a}{b},u_0,u_0')\mid (\inp{a}{b},0u_0,u_0')\in \underline{H} \text{ or }{(\inp{a}{b},\alpha_1,\lrangles{0u_0,1u_1},u_0')}\in H\}$ $\;\;\;\ind H_0=\{(\alpha,\alpha',u,u')\mid (\alpha,\alpha',0u,u')\in H\}$ $\;\;\; P_1''=E((\overline{H_1},\underline{H_1},H_1)\vdash\! P_1,P_1')$ where: $\;\;\;\ind \overline{H_1}=\{(\outp{a}{b},u_1,u_1')\mid (\outp{a}{b},1u_1,u_1')\in \overline{H} \text{ or }{(\alpha_0,\outp{a}{b},\lrangles{0u_0,1u_1},u_1')}\in H\}$ $\;\;\;\ind \underline{H_1}=\{(\inp{a}{b},u_1,u_1')\mid (\inp{a}{b},1u_1,u_1')\in \underline{H} \text{ or }{(\alpha_0,\inp{a}{b},\lrangles{0u_0,1u_1},u_1')}\in H\}$ $\;\;\;\ind H_1=\{(\alpha,\alpha',u,u')\mid (\alpha,\alpha',1u,u')\in H\}$ $\mathbf{H}_i\vdash\! P_i' \xsquigarrow{\alpha_{i,0}}{u_{i,0}} \dots \xsquigarrow{\alpha_{i,n}}{u_{i,n}} (\emptyset,\emptyset,\emptyset)\vdash\! P_i'''$ for $i\in \{0,1\}$ \vspace{5pt} \item $E((\overline{H}\cup \{(\outp{a}{b},[Q][P'],u)\},\underline{H},H)\vdash\! P,P')=E(\mathbf{H}\vdash\! \outp{a}{b}\left[u\right].P+\sum\limits_{i\in I\setminus \{j\}} \alpha_i.P_i,Q)$ $\;\;\;$ if $Q=\sum_{i\in I} \alpha_i.P_i$, $\outp{a}{b}=\alpha_j$, and $P'=P_j$ \vspace{5pt} \item $E((\overline{H},\underline{H}\cup\{(\inp{a}{b},[Q][P'],u)\},H)\vdash\! P,P')=$\\ \hspace*{\fill} $E(\mathbf{H}\vdash\!
\inp{a}{b}\left[u\right].S(P,P_j,[u],x)+\sum\limits_{i\in I\setminus \{j\}} \alpha_i.P_i,Q)$ $\;\;\;$ if $Q=\sum_{i\in I} \alpha_i.P_i$, $\inp{a}{x}=\alpha_j$, and $P'=P_j[x:=b]$ \vspace{5pt} \item $E(\mathbf{H}\vdash\! (\nu x)P, (\nu x)P')=E(\mathbf{H}-(\alpha,u,u')\vdash\! P'',(\nu x)Q')$ $\;\;\;$ where $P''=(\nu x)E((\emptyset,\emptyset,\emptyset)+(\alpha,u,u')\vdash\! P,P')$ $\;\;\;$ if $(\alpha,u,u')\in \overline{H}\cup\underline{H}$ and $(\emptyset,\emptyset,\emptyset)+(\alpha,u,u')\vdash\! P \xsquigarrow{\alpha}{u} (\emptyset,\emptyset,\emptyset)\vdash\! Q'$ \vspace{5pt} \item $E(\mathbf{H}\vdash\! !P,!P')=E(\mathbf{H}\vdash\! !P\vert P,!P'\vert P')$ if there exists $(\alpha,u,u')\in \overline{H}\cup\underline{H}\cup H$ such that $u\neq [Q][Q']$. \end{enumerate} \end{definition} \begin{example}\label{ex:E} We will now apply $E$ to the process $$(\{(\outp{b}{c},u_2)\},\emptyset,\{(\inp{b}{a},\outp{b}{a},\langle{0u_0,1u_1}\rangle )\})\vdash\! \inp{a}{x}\mid 0$$ with locations $u_0=[\inp{b}{y}.\inp{y}{x}][\inp{a}{x}]$, $u_1=[\outp{b}{a}][0]$, and $u_2=[\outp{b}{c}.(\inp{b}{y}.\inp{y}{x}\mid\outp{b}{a})][\inp{b}{y}.\inp{y}{x}\mid\outp{b}{a}]$. We perform $$E(\mathsf{lcopy}((\{(\outp{b}{c},u_2)\},\emptyset,\{(\inp{b}{a},\outp{b}{a},\langle{0u_0,1u_1}\rangle )\}))\vdash \inp{a}{x}\mid 0,\inp{a}{x}\mid 0)$$ Since we are at a parallel, we use Case 2 of Definition~\ref{def:E} to split the extrusion histories into three to get $E((\{(\outp{b}{c},u_2,u_2)\},\emptyset,\emptyset)\vdash\!P_0\mid P_1,\inp{b}{y}.\inp{y}{x}\mid\outp{b}{a})$ where $P_0=E((\emptyset,\{(\inp{b}{a},u_0,\langle{0u_0,1u_1}\rangle)\},\emptyset)\vdash\! \inp{a}{x},\inp{a}{x})$ and $P_1= E((\{(\outp{b}{a},u_1,\langle{0u_0,1u_1}\rangle)\},\emptyset,\emptyset)\vdash\! 0,0)$. To find $P_0$, we look at $u_0$, and find that it has $\inp{a}{x}$ as its result, meaning we can apply Case 4 to obtain $E((\emptyset,\emptyset,\emptyset)\vdash\!
\inp{b}{a}[\langle{0u_0,1u_1}\rangle].S(\inp{a}{x},\inp{y}{x},[\langle{0u_0,1u_1}\rangle],y), \inp{b}{y}.\inp{y}{x})$. By applying Case 5 of Definition~\ref{def:S}, $S(\inp{a}{x},\inp{y}{x},[\langle{0u_0,1u_1}\rangle],y)=\inp{a_{{[\langle{0u_0,1u_1}\rangle]}}}{x}$. Since we have no more extrusions to add, we apply Case 1 to get our process $P_0=\inp{b}{a}[\langle{0u_0,1u_1}\rangle].\inp{a_{{[\langle{0u_0,1u_1}\rangle]}}}{x}$. To find $P_1$, we similarly look at $u_1$ and find that we can apply Case 3. This gives us $P_1=\outp{b}{a}[\langle{0u_0,1u_1}\rangle].0$. We can then apply Case 3 to $E((\{(\outp{b}{c},u_2,u_2)\},\emptyset,\emptyset)\vdash\! P_0 \mid P_1,\inp{b}{y}.\inp{y}{x}\mid\outp{b}{a})$. This gives us our final process, $$\outp{b}{c}[k']. \inp{b}{a}[k].\inp{a_{{[k]}}}{x} \mid \outp{b}{a}[k].0$$ where $k=\langle{0u_0,1u_1}\rangle$ and $k'=u_2$. \end{example} We can then show, in Theorem~\ref{the:key-ext-eq}, that we have an operational correspondence between our two calculi and that $E$ preserves transitions. Item 1 states that every transition in $\pi$IH corresponds to one in the $\pi$IK process generated by $E$, and Item 2 vice versa. \begin{theorem}\label{the:key-ext-eq} Given a reachable $\pi$IH process, $\mathbf{H}\vdash\! P$, and an action, $\mu$, \begin{enumerate} \item if there exists a location $u$ such that $\mathbf{H}\vdash\! P\xarrowtail{\mu}{u} \mathbf{H}'\vdash\! P'$ then there exists a key, $m$, such that $E(\mathsf{lcopy}(\mathbf{H})\vdash\! P,P)\xarrowtail{\mu[m]}{} E(\mathsf{lcopy}(\mathbf{H}')\vdash\! P',P')$; \item if there exists a key, $m$, such that $E(\mathsf{lcopy}(\mathbf{H})\vdash\! P,P)\xarrowtail{\mu[m]}{}P''$, then there exists a location, $u$, and a $\pi$IH process, $\mathbf{H}'\vdash\! P'$, such that $\mathbf{H}\vdash\! P\xarrowtail{\mu}{u} \mathbf{H}'\vdash\! P'$ and $P''\equiv E(\mathsf{lcopy}(\mathbf{H}')\vdash\! P',P')$.
\end{enumerate} \end{theorem} \section{Bundle event structures}\label{sec:BES} In this section we recall the definition of \emph{labelled reversible bundle event structures} (LRBESs), which we intend to use later to define the event structure semantics of $\pi$IK and, through that, $\pi$IH. We also describe some operations on LRBESs, which our semantics will make use of. This section is primarily a review of definitions from~\cite{EFG2018}. We use bundle event structures, rather than the more common prime event structures, because LRBESs yield more compact event structures with fewer events and simplify parallel composition. An LRBES consists of a set of events, $E$, a subset of which, $F$, are reversible, and three relations on them. The bundle relation, $\mapsto$, says that if $X\mapsto e$ then one of the events of $X$ must have happened before $e$ can, and all events in $X$ are in conflict with each other. The conflict relation, $\mathrel{\sharp}$, says that if $e\mathrel{\sharp} e'$ then $e$ and $e'$ cannot occur in the same configuration. The prevention relation, $\rhd$, says that if $e\rhd \underline{e'}$ then $e'$ cannot reverse after $e$ has happened. Since the event structure is labelled, we also have a set of labels $\mathsf{Act}$, and a labelling function $\lambda$ from events to labels. We use $\underline{e}$ to denote $e$ being reversed, and $e^*$ to denote either $e$ or $\underline{e}$.
\begin{definition}[Labelled Reversible Bundle Event Structure \cite{EFG2018}] A labelled reversible bundle event structure is a 7-tuple $\M{E}=(E,F,\mapsto,\mathrel{\sharp} ,\rhd,\lambda,\mathsf{Act})$ where: \begin{enumerate} \item $E$ is the set of events; \item $F\subseteq E$ is the set of reversible events; \item the bundle set, ${\mapsto}\subseteq 2^E\times (E\cup \underline{F})$, satisfies $X\mapsto e^*\Rightarrow \forall e_1,e_2\in X.e_1\neq e_2\Rightarrow e_1\mathrel{\sharp} e_2$ and for all $e\in F$, $\{e\}\mapsto \underline{e}$; \item the conflict relation, ${\mathrel{\sharp}} \subseteq E\times E$, is symmetric and irreflexive; \item $\rhd \subseteq E\times \underline{F}$ is the prevention relation; \item $\lambda:E\rightarrow \mathsf{Act}$ is a labelling function. \end{enumerate} \end{definition} An event in an LRBES can have multiple possible causes, as defined in Definition~\ref{def:Pos-Cause}. A possible cause $X$ of an event $e$ is a conflict-free set of events which contains a member of each bundle associated with $e$ and contains possible causes of all events in $X$. \begin{definition}[Possible Cause]\label{def:Pos-Cause} Given an LRBES, $\M{E}=\LRBES{}$ and an event $e\in E$, $X\subseteq E$ is a possible cause of $e$ if \begin{itemize} \item $e\notin X$, $X$ is finite, and whenever $X'\mapsto e$ we have $X'\cap X\neq \emptyset$; \item for any $ e',e''\in\{e\}\cup X$, we have $e'\not\mathrel{\sharp} e''$ ($X\cup \{e\}$ is conflict-free); \item for all $e'\in X$, there exists $X''\subseteq X$, such that $X''$ is a possible cause of $e'$; \item there does not exist any $X'''\subset X$, such that $X'''$ is a possible cause of $e$. \end{itemize} \end{definition} Since we want to compare the event structures generated by a process to the operational semantics, we need a notion of transitions on event structures. For this purpose we use configuration systems (CSs), which event structures can be translated into.
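Since all the sets involved are finite, the possible-cause conditions of Definition~\ref{def:Pos-Cause} can be checked by brute force. The following Python sketch is an illustration only, not part of the calculus; the encoding is ours: events are strings, bundles are pairs $(X',e)$ meaning $X'\mapsto e$, and conflict is a symmetric set of ordered pairs.

```python
from itertools import chain, combinations

def powerset(s):
    # all subsets of s, as tuples
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def conflict_free(events, conflict):
    # conflict is given as a symmetric set of ordered pairs
    return all((a, b) not in conflict for a in events for b in events)

def is_possible_cause(X, e, bundles, conflict):
    """Brute-force check of the possible-cause conditions for a finite LRBES."""
    X = set(X)
    if e in X or not conflict_free(X | {e}, conflict):
        return False
    # every bundle X' with X' |-> e must intersect X
    if not all(X & Xp for (Xp, tgt) in bundles if tgt == e):
        return False
    # every event in X needs a possible cause inside X
    if not all(any(is_possible_cause(sub, ep, bundles, conflict)
                   for sub in powerset(X - {ep}))
               for ep in X):
        return False
    # minimality: no proper subset of X is itself a possible cause of e
    return not any(is_possible_cause(sub, e, bundles, conflict)
                   for sub in powerset(X) if set(sub) != X)
```

For a single bundle $\{o\}\mapsto i$, the set $\{o\}$ is a possible cause of $i$, while the empty set fails the bundle condition and any strict superset of $\{o\}$ fails minimality.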
\begin{definition}[Configuration system \cite{journals/jlp/PhillipsU15}]\label{def:CS} A \emph{configuration system} (CS) is a quadruple $\M{C} = \CS{}$ where $E$ is a set of events, $F\subseteq E$ is a set of reversible events, $\mathsf{C}\subseteq 2^E$ is the set of configurations, and $\rightarrow\subseteq \mathsf{C}\times 2^{E\cup\underline{F}} \times \mathsf{C}$ is a labelled transition relation such that if $X\CStrans{A}{B} Y$ then: \begin{itemize} \item $X,Y\in \mathsf{C}$; $A\cap X=\emptyset$; $B\subseteq X\cap F$; and $Y=(X\setminus B)\cup A$; \item for all $A'\subseteq A$ and $B'\subseteq B$, we have $X\CStrans{A'}{B'} Z \CStrans{(A\setminus A')}{(B\setminus B')}Y$, meaning $Z=(X\setminus B')\cup A'\in \mathsf{C}$. \end{itemize} \end{definition} \begin{definition}[From LRBES to CS \cite{EFG2018}]\label{def:RBEStoCS} We define a mapping $C_{br}$ from LRBESs to CSs as: $C_{br}(\LRBES{})=\CS{}$ where: \begin{enumerate} \item $X\in\textsf{C}$ if $X$ is conflict-free; \item \vspace{-.2cm}For $X,Y\in \textsf{C}$, $A\subseteq E$, and $B\subseteq F$, there exists a transition $X\CStrans{A}{B} Y$ if: \begin{enumerate} \item $Y=(X\setminus B)\cup A$; $X\cap A=\emptyset$; $B\subseteq X$; and $X\cup A$ conflict-free; \item for all $e\in B$, if $e'\rhd \underline{e}$ then $e'\notin X\cup A$; \item for all $e\in A$ and $X'\subseteq E$, if $X'\mapsto e$ then $X'\cap (X\setminus B)\neq \emptyset$; \item for all $e\in B$ and $X'\subseteq E$, if $X'\mapsto \underline{e}$ then $X'\cap (X\setminus (B\setminus \{e\}))\neq \emptyset$. \end{enumerate} \end{enumerate} \end{definition} For our semantics we need to define prefixing, restriction, parallel composition, and choice. Causal prefixing takes a label, $\mu$, an event, $e$, and an LRBES, $\M{E}$, and adds $e$ to $\M{E}$ with the label $\mu$, associating every other event in $\M{E}$ with a bundle containing only $e$. Restriction removes a set of events from an LRBES.
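The enabling conditions (a)--(d) of Definition~\ref{def:RBEStoCS} translate directly into a finite membership test. The sketch below is our own illustration, not the paper's implementation: forward bundles target an event $e$, reversal bundles target $\underline{e}$, and a prevention pair $(e',e)$ encodes $e'\rhd\underline{e}$.

```python
def cs_step(X, A, B, bundles, rev_bundles, conflict, prevention):
    """Return the target of X --A,B--> Y if conditions (a)-(d) of the
    LRBES-to-CS mapping hold, else None."""
    # (a) disjointness, B inside X, and X ∪ A conflict-free
    if X & A or not B <= X:
        return None
    if any((d, f) in conflict for d in X | A for f in X | A):
        return None
    # (b) no event in X ∪ A prevents reversing an event of B
    if any((ep, e) in prevention for e in B for ep in X | A):
        return None
    # (c) every bundle of an occurring event is met in X \ B
    if not all((X - B) & Xp for e in A for (Xp, t) in bundles if t == e):
        return None
    # (d) every reversal bundle of e in B is met in X \ (B \ {e})
    for e in B:
        if not all((X - (B - {e})) & Xp for (Xp, t) in rev_bundles if t == e):
            return None
    return (X - B) | A
```

With a single bundle $\{o\}\mapsto i$ and prevention $i\rhd\underline{o}$, the empty configuration can do $o$ but not $i$, and $\{o,i\}$ can reverse $i$ but not $o$, matching the intended causal reading.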
\begin{definition}[Causal Prefixes \cite{EFG2018}]\label{def:CausPref} Given an LRBES $\M{E}$, a label $\mu$, and an event $e$, $(\mu)(e).\M{E}=(E',F',\mapsto',\mathrel{\sharp}',\rhd',\lambda',\mathsf{Act}')$ where: \vspace{-5pt} \begin{multicols}{2} \begin{enumerate} \item $E'=E\cup \{e\}$ \item $F'=F\cup \{e\}$ \item ${\mapsto'}={\mapsto\cup (\{\{e\}\}\times (E\cup \{\underline{e}\}))}$ \item ${\mathrel{\sharp}'}={\mathrel{\sharp}}$ \item $\rhd'=\rhd \cup (E\times \{ \underline{e} \})$ \item $\lambda'=\lambda[e\mapsto \mu]$ \item $\mathsf{Act}'=\mathsf{Act} \cup \{ \mu \}$ \end{enumerate} \end{multicols} \end{definition} Removing a set of labels $L$ from an LRBES removes not just events with labels in~$L$ but also events dependent on events with labels in $L$. \begin{definition}[Removing labels and their dependants] Given an event structure $\M{E}=\LRBES{}$ and a set of labels $L\subseteq \mathsf{Act}$, we define $\rho_{\M{E}}(L)=X$ as the maximum subset of $E$ such that \begin{enumerate} \item if $e\in X$ then $\lambda(e)\notin L$; \item if $e\in X$ then there exists a possible cause of $e$, $x$, such that $x\subseteq X$. \end{enumerate} \end{definition} A choice between LRBESs puts all the events of one event structure in conflict with the events of the others.
\begin{definition}[Choice \cite{EFG2018}] Given LRBESs $\M{E}_0,\M{E}_1,\dots,\M{E}_n$, the choice between them is $\sum\limits_{0\leq i\leq n}\M{E}_i=\LRBES{}$ where: \vspace{-5pt} \begin{multicols}{2} \begin{enumerate} \item $E=\bigcup\limits_{0\leq i\leq n} \{i\}\times E_i$ \item $F=\bigcup\limits_{0\leq i\leq n} \{i\}\times F_i$ \item $X\mapsto e^*$ if $e=(i,e_i)$, $X_i\mapsto_i e_i^*$, and $X=\{i\}\times X_i$ \item $(i,e)\mathrel{\sharp} (j,e')$ if $i\neq j$ or $e\mathrel{\sharp}_i e'$ \item $(i,e)\rhd (j,e')$ if $i\neq j$ or $e\mathrel{\sharp}_i e'$ \item $\lambda(j,e)=\lambda_j(e)$ \item $\mathsf{Act}=\bigcup\limits_{0\leq i\leq n} \mathsf{Act}_i$ \end{enumerate} \end{multicols} \end{definition} \begin{definition}[Restriction \cite{EFG2018}] Given an LRBES, $\M{E}=\LRBES{}$, restricting $\M{E}$ to $E'\subseteq E$ creates $\M{E}\upharpoonright E'=(E',F',\mapsto',\mathrel{\sharp}',\rhd',\lambda',\mathsf{Act}')$ where: \vspace{-5pt} \begin{multicols}{2} \begin{enumerate} \item $F'=F\cap E'$; \item ${\mapsto'}={\mapsto\cap(\mathcal{P}(E')\times(E'\cup\underline{F'}))}$; \item ${\mathrel{\sharp}'}={\mathrel{\sharp}\cap(E'\times E')}$; \item ${\rhd'}={\rhd\cap (E'\times\underline{F'})}$; \item $\lambda'=\lambda\upharpoonright_{E'}$; \item $\mathsf{Act}'=\mathsf{ran}(\lambda')$. \end{enumerate} \end{multicols} \end{definition} For parallel composition we construct a product of event structures, which consists of events corresponding to synchronisations between the two event structures. The possible causes of an event $(e_0,e_1)$ contain a possible cause of $e_0$ and a possible cause of $e_1$.
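On the event sets, the choice construction is simply disjoint tagging plus cross-component conflict. A sketch of the binary case on events and conflict only (the representation and names are ours, for illustration):

```python
def choice(events0, conflict0, events1, conflict1):
    """Binary choice on LRBES event sets: tag each event with its component
    and put events of different components in conflict with each other."""
    events = {(0, e) for e in events0} | {(1, e) for e in events1}
    conflict = {((i, e), (j, f))
                for (i, e) in events for (j, f) in events
                if i != j                                   # cross-component conflict
                or (i == 0 and (e, f) in conflict0)         # inherited from E_0
                or (i == 1 and (e, f) in conflict1)}        # inherited from E_1
    return events, conflict
```

Note that the result stays irreflexive: a tagged event never conflicts with itself unless its component already had a reflexive conflict, which the definition forbids.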
\begin{definition}[Parallel \cite{EFG2018}]\label{def:RBESpro} Given two LRBESs $\M{E}_0=\LRBES{0}$ and $\M{E}_1=\LRBES{1}$, their parallel composition $\M{E}_0\times \M{E}_1=\LRBES{}$ with projections $\pi_0$ and $\pi_1$ where: \begin{enumerate} \item $E=E_0\times_* E_1=\{(e,*) \mid e\in E_0\} \cup \{(*,e) \mid e\in E_1\} \cup \{(e,e') \mid e\in E_0\text{ and } e'\in E_1\}$; \item $F=F_0\times_* F_1=\{(e,*) \mid e\in F_0\} \cup \{(*,e) \mid e\in F_1\} \cup \{(e,e') \mid e\in F_0\text{ and } e'\in F_1\}$; \item for $i \in \{0,1\}$ and $(e_0,e_1)\in E$, $\pi_i((e_0,e_1))=e_i$; \item for any $e^*\in E\cup \underline{F}$, $X\subseteq E$, $X\mapsto e^*$ iff there exists $i\in \{0,1\}$ and $X_i\subseteq E_i$ such that $X_i\mapsto \pi_i(e)^*$ and $X=\{e'\in E\mid \pi_i(e')\in X_i\}$; \item for any $e,e'\in E$, $e\mathrel{\sharp} e'$ iff there exists $i\in \{0,1\}$ such that $\pi_i(e)\mathrel{\sharp}_i\pi_i(e')$, or $\pi_i(e)=\pi_i(e')\neq *$ and $\pi_{1-i}(e)\neq \pi_{1-i}(e')$; \item for any $e\in E$, $e'\in F$, $e\rhd \underline{e'}$ iff there exists $i\in \{0,1\}$ such that $\pi_i(e)\rhd_i\underline{\pi_i(e')}$. \item $\lambda(e)=\begin{dcases*} \lambda_0(e_0) & if $e=(e_0,*)$\\ \lambda_1(e_1) & if $e=(*,e_1)$\\ \tau & if $e=(e_0,e_1)$ and either $\lambda_0(e_0)=\inp{a}{x}$ and $\lambda_1(e_1)=\outp{a}{x}$\\ & or $\lambda_0(e_0)=\outp{a}{x}$ and $\lambda_1(e_1)=\inp{a}{x}$\\ 0 & otherwise \end{dcases*}$ \vspace{5pt} \item $\mathsf{Act}=\{\tau\}\cup \mathsf{Act}_0\cup \mathsf{Act}_1$ \end{enumerate} \end{definition} \section{Event structure semantics of $\pi$IK}\label{sec:Den-Ev-Sem} In this section we define the event structure semantics of $\pi$IK using the LRBESs and operations defined in Section~\ref{sec:BES}. Theorems~\ref{the:PtoLRBEStrans} and~\ref{the:PtoLRBEStrans2} give us an operational correspondence between a $\pi$IK process and the generated event structure.
Together with Theorem~\ref{the:key-ext-eq}, this gives us a correspondence between a $\pi$IH process and the event structure it generates by going via a $\pi$IK process. As we want to ensure that all free and bound names in our process are distinct, we modify our syntax for replication, assigning each replication an infinite set, $\mathbf{x}$, of names to substitute into the place of bound names in each created copy of the process, so that $$!_{\mathbf{x}} P\equiv {!_{\mathbf{x}\setminus \{x_0,\dots, x_k\}} P}\vert {P\{\rfrac{x_0,\dots,x_k}{a_0,\dots,a_k}\}}\text{ if }\{x_0,\dots, x_k\}\subseteq \mathbf{x}\text{ and }\mathsf{bn}(P)=\{a_0,\dots,a_k\}$$ Before proceeding to the semantics we also define the standard bound names of a process $P$, $\mathsf{sbn}(P)$, meaning the names that would be bound in $P$ if every action was reversed, in Definition~\ref{def:sbn}. \begin{definition}\label{def:sbn} The standard bound names of a process $P$, $\mathsf{sbn}(P)$, are defined as:\vspace{5pt} \begin{tabular}{ll} $\mathsf{sbn}(a(x).P')=\{x\}\cup \mathsf{sbn}(P')$ & $\mathsf{sbn}(a(x)[m].P')=\{x\}\cup \mathsf{sbn}(P')$ \\ $\mathsf{sbn}(\overline{a}(x).P')=\{x\}\cup \mathsf{sbn}(P')$ & $\mathsf{sbn}(\overline{a}(x)[m].P')=\{x\}\cup \mathsf{sbn}(P')$ \\ $\mathsf{sbn}(P_0\vert P_1)=\mathsf{sbn}(P_0) \cup \mathsf{sbn}(P_1)$ \hspace{10pt} & $\mathsf{sbn}(P_0 + P_1)=\mathsf{sbn}(P_0) \cup \mathsf{sbn}(P_1)$ \\ $\mathsf{sbn}((\nu x) P') =\{x\} \cup \mathsf{sbn}(P')$ & $\mathsf{sbn}(!_{\mathbf{x}}P)= \mathbf{x}$ \\ \end{tabular} \end{definition} We can now define the event structure semantics in Table~\ref{tab:pik-den-sem}.
We do this using rules of the form $\lrBraces{P}_{(\M{N},l)}=\lrangles{\M{E},\mathsf{Init},k}$ where $l$ is the level of unfolding of replication, $\M{E}$ is an LRBES, $\mathsf{Init}$ is the initial configuration, $\M{N}\supseteq n(P)$ is a set of names, which any input in the process could receive, and $k:\mathsf{Init}\rightarrow\mathcal{K}$ is a function assigning communication keys to the past actions, which we use in parallel composition to determine which synchronisations of past actions to put in $\mathsf{Init}$. We define $\lrBraces{P}_{\M{N}}=\sup_{l\in \mathbb{N}} \lrBraces{P}_{(\M{N},l)}$. The denotational semantics in Table~\ref{tab:pik-den-sem} make use of the LRBES operators defined in Section~\ref{sec:BES}. The choice and output cases are straightforward uses of the choice and causal prefix operators. The input creates one case for each name in $\M{N}$, prefixing the continuation with an input of that name, and a choice between the cases. We have two cases for restriction, one for restriction originating from a past communication and another for restriction originating from the original process. If the restriction does not originate from the original process, then we ignore it; otherwise we remove events which would use the restricted channel and their causes. The parallel composition uses the parallel operator, but additionally needs to consider link causation caused by the early semantics. Each event labelled with an input of a name in the standard bound names gets a bundle consisting of the event labelled with the output on that name, and each output event is prevented from reversing by the inputs receiving that name. This way, inputs on extruded names are caused by the output that made the name free. Replication substitutes the names and counts down the level of replication. Note that the only difference between a future and a past action is that the event corresponding to a past action is put in the initial state and given a communication key.
\begin{table}{\small\begin{tabular}{ll} \hspace{-0cm}$\lrBraces{0}_{(\M{N},l)}=$ & $\lrangles{(\emptyset,\emptyset,\emptyset,\emptyset,\emptyset, \emptyset, \emptyset),\emptyset,\emptyset}$ \vspace{5pt}\\ \hspace{-0cm}$\lrBraces{P_0+P_1}_{(\M{N},l)}=$ & $\lrangles{ \M{E}_0+\M{E}_1,\{0\}\times\mathsf{Init}_0\cup \{1\}\times \mathsf{Init}_1, k((i,e))=k_i(e)}$ where \\ \hspace{-0cm} & $\lrBraces{P_i}_{(\M{N},l)}=\lrangles{\M{E}_i,\mathsf{Init}_i,k_i}$ for $i\in \{0,1\}$ \vspace{5pt}\\ \hspace{-0cm}$\lrBraces{\outp{a}{n}.P}_{(\M{N},l)}=$ & $\lrangles{\outp{a}{n}(e).\M{E}_{P},\mathsf{Init}_{P},k_{P}}$ for some fresh $e\notin E$ where \\ \hspace{-0cm} & $\lBrace P \rBrace_{(\M{N},l)}= \lrangles{\M{E}_{P},\mathsf{Init}_{P},k_{P}}$ \vspace{5pt}\\ \hspace{-0cm}$\lrBraces{\inp{a}{x}.P}_{(\M{N},l)}=$ & $\lrangles{\sum\limits_{n\in (\M{N}\setminus \mathsf{sbn}(P))} \inp{a}{n}(e_n).\M{E}_{P_{n}},\bigcup\limits_{n\in (\M{N}\setminus \mathsf{sbn}(P))}\{n\}\times\mathsf{Init}_{P_{n}},(n,e)\mapsto k_{P_{n}}(e)}$ \\ & for some fresh $e_n\notin E_n$ where \\ \hspace{-0cm} & $\lBrace P[x:= n] \rBrace_{(\M{N},l)}= \lrangles{\M{E}_{P_{n}},\mathsf{Init}_{P_{n}},k_{P_{n}}}$ \vspace{5pt}\\ \hspace{-0cm}$\lrBraces{\outp{a}{n}[m].P}_{(\M{N},l)}=$\hspace{-0cm} & $\lrangles{\outp{a}{n}(e).\M{E}_{P},\mathsf{Init}_{P}\cup \{e\},k_{P}[e\mapsto m]}$ for some fresh $e\notin E$ where \\ \hspace{-0cm} & $\lBrace P \rBrace_{(\M{N},l)}= \lrangles{\M{E}_{P},\mathsf{Init}_{P},k_{P}}$ \vspace{5pt}\\ \hspace{-0cm}$\lrBraces{\inp{a}{b}[m].P}_{(\M{N},l)}=$\hspace{-0cm} & $\lrangles{\sum\limits_{n\in (\M{N}\setminus \mathsf{sbn}(P))} \inp{a}{n}(e_n).\M{E}_{P_{n}},(\bigcup\limits_{n\in (\M{N}\setminus \mathsf{sbn}(P))}\{n\}\times\mathsf{Init}_{P_{n}})\cup \{(b,e_b)\},k}$ \\ & for some fresh $e_n\notin E_n$ where \\ \hspace{-0cm} & $\lBrace P[b_{[m]}:= n] \rBrace_{(\M{N},l)}= \lrangles{\M{E}_{P_{n}},\mathsf{Init}_{P_{n}},k_{P_{n}}}$ \\ & $k((n,e))=\begin{dcases*} m & if $e=e_b$ and $n=b$\\ k_{P_{n}}(e) & otherwise \\
\end{dcases*}$ \vspace{5pt}\\ \hspace{-0cm} $\lrBraces{(\nu a)P}_{(\M{N},l)}=$ & $\lrangles{\M{E}\upharpoonright E_{\alpha},\mathsf{Init}\cap E_{\alpha}, k\upharpoonright E_{\alpha}}$ where: \\ \hspace{-0cm} & $\lBrace P \rBrace_{(\M{N},l)}= \lrangles{\M{E},\mathsf{Init},k}$ \\ \hspace{-0cm} & $E_{\alpha}=\rho_{\M{E}}(\{\alpha\mid a\in n(\alpha)\})$ \\ \hspace{-0cm} & if whenever there exist past actions $\inp{b}{a}[m]$ and $\outp{b}{a}[m]$ in $P$ then \\ & they are guarded by a restriction $(\nu a)$ in $P$\vspace{5pt}\\ \hspace{-0cm} $\lrBraces{(\nu a)P}_{(\M{N},l)}=$ & $\lrangles{\M{E},\mathsf{Init}, k}$ where: \\ \hspace{-0cm} & $\lBrace P \rBrace_{(\M{N},l)}= \lrangles{\M{E},\mathsf{Init},k}$ \\ \hspace{-0cm} & if there exist past actions $\inp{b}{a}[m]$ and $\outp{b}{a}[m]$ in $P$ which\\ & are not guarded by a restriction $(\nu a)$ in $P$\vspace{5pt}\\ \hspace{-0cm} $\lrBraces{P_0\vert P_1}_{(\M{N},l)}=$ & $\lrangles{\LRBES{}\upharpoonright \{e\mid \lambda(e)\neq 0\},\mathsf{Init},k}$ where\\ \hspace{-0cm} & for $i\in \{0,1\}$, $\lBrace P_i \rBrace_{(\M{N},l)}= \lrangles{\M{E}_{i} ,\mathsf{Init}_i,k_i}$ \\ \hspace{-0cm} & $(E_0,F_0,\mapsto_0,\mathrel{\sharp}_0,\rhd_0)\times (E_1,F_1,\mapsto_1,\mathrel{\sharp}_1,\rhd_1)=(E,F,\mapsto',\mathrel{\sharp},\rhd')$ \\ & $\mathsf{Init}=\{(e_0,*)\vert e_0\in \mathsf{Init}_0\text{ and } \nexists e_1\in \mathsf{Init}_1.k_1(e_1)=k_0(e_0)\} \cup$ \\ & $ \{(*,e_1)\vert e_1\in \mathsf{Init}_1\text{ and } \nexists e_0\in \mathsf{Init}_0.k_1(e_1)=k_0(e_0)\} \cup$ \\ & $\{(e_0,e_1)\vert e_0\in \mathsf{Init}_0\text{ and } e_1\in \mathsf{Init}_1 \text{ and } k_1(e_1)=k_0(e_0)\}$ \\ & $X\mapsto e$ if $X\mapsto' e$ or there exists $x\in \mathsf{no}(\lambda(e))$ such that \\ &$X=\{e'\mid\exists a.
\lambda(e')=\outp{a}{x} \}$ and $x\in \mathsf{sbn}(P)$\\ & $e\rhd \underline{e'}$ if either $e\rhd' \underline{e'}$ or there exists $x\in \mathsf{no}(\lambda(e))$ and $a$ such that $\lambda(e')=\overline{a}(x)$ \\ \hspace{-0cm} & $k(e)=\begin{dcases*} k_0(e_0) & if $e=(e_0,*)$\\ k_1(e_1) & if $e=(*,e_1)$\\ k_0(e_0) & if $e=(e_0,e_1)$\\ \end{dcases*}$ \vspace{5pt}\\ \hspace{-0cm} $\lrBraces{!_{\mathbf{x}}P}_{(\M{N},0)}=$ & $\lrangles{(\emptyset,\emptyset,\emptyset, \emptyset, \emptyset, \emptyset, \emptyset),\emptyset,\emptyset}$ \vspace{5pt}\\ \hspace{-0cm} $\lrBraces{!_{\mathbf{x}}P}_{(\M{N},l)}=$ & $\lrBraces{!_{\mathbf{x}\setminus \{x_0,\dots, x_k\}} P\vert P\{\rfrac{x_0,\dots,x_k}{a_0,\dots,a_k}\}}_{(\M{N},l-1)}$ if $\{x_0,\dots, x_k\}\subseteq \mathbf{x}$ \\ & and $\mathsf{bn}(P)=\{a_0,\dots,a_k\}$\vspace{5pt}\\ \end{tabular}} \caption{Denotational event structure semantics of $\pi$IK}\label{tab:pik-den-sem} \end{table} \begin{example} Consider the process $\inp{a}{b}[n]\mid \outp{a}{b}[n]$. 
Our event structure semantics generate an LRBES $\lrBraces{\inp{a}{b}[n]\mid \outp{a}{b}[n]}_{\{a,b,x\}}=\lrangles{\LRBES{},\mathsf{Init},k}$ where: \[ \begin{array}{lrl} E=F = \{\inp{a}{b},\inp{a}{a},\inp{a}{x},\outp{a}{b},\tau\} &\lambda(e)=& e\\ \{\outp{a}{b}\}\mapsto \inp{a}{b} & \mathsf{Act}=& \{\inp{a}{b},\inp{a}{a},\inp{a}{x},\outp{a}{b},\tau\}\\ \inp{a}{b}\mathrel{\sharp} \inp{a}{a},~\inp{a}{b}\mathrel{\sharp}\inp{a}{x},~\inp{a}{a}\mathrel{\sharp}\inp{a}{x}, & \mathsf{Init}=&\{\tau\} \\ \inp{a}{b}\mathrel{\sharp}\tau,~\inp{a}{a}\mathrel{\sharp}\tau,~\inp{a}{x}\mathrel{\sharp}\tau,~\outp{a}{b}\mathrel{\sharp}\tau \;\;\;\;\;\; & k(\tau)=&n\\ \inp{a}{b} \rhd \underline{\outp{a}{b}} \\ \end{array}\] From this we see that (1) receiving $b$ is causally dependent on sending $b$, (2) all the possible inputs on $a$ are in conflict with one another, (3) the synchronisation between the input and the output is in conflict with either happening on their own, and (4) since the two past actions have the same key, the initial state contains their synchronisation. \end{example} We show in Theorems~\ref{the:PtoLRBEStrans} and~\ref{the:PtoLRBEStrans2} that given a process $P$ with a conflict-free initial state, including any reachable process, performing a transition $P\xrightarrow{\mu[m]} P'$ does not affect the event structure, as $\lrBraces{P}_{\M{N}}$ and $\lrBraces{P'}_{\M{N}}$ are isomorphic. It also means we have an event $e$ labelled $\mu$ such that $e$ is available in $P$'s initial state, and $P'$'s initial state is $P$'s initial state with $e$ added. A similar event can be removed to correspond to a reverse action.
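For the example above, conflict-freeness of the initial state, and the fact that no further forward event can join it, can be checked concretely. The encoding below is ours, purely for illustration, with the listed conflicts closed under symmetry:

```python
# events of the example, with tau the past synchronisation
events = {'in_ab', 'in_aa', 'in_ax', 'out_ab', 'tau'}
# the conflicts listed in the example, closed under symmetry
pairs = {('in_ab', 'in_aa'), ('in_ab', 'in_ax'), ('in_aa', 'in_ax'),
         ('in_ab', 'tau'), ('in_aa', 'tau'), ('in_ax', 'tau'),
         ('out_ab', 'tau')}
conflict = pairs | {(b, a) for (a, b) in pairs}

def conflict_free(xs):
    return all((a, b) not in conflict for a in xs for b in xs)

init = {'tau'}
assert conflict_free(init)
# every other event conflicts with tau, so none can join the initial state
blocked = {e for e in events - init if not conflict_free(init | {e})}
assert blocked == events - init
```

This matches the intuition that after the past communication $\tau$, neither the standalone input nor the standalone output can occur without first reversing $\tau$.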
If (1) $\lrBraces{P}_{\M{N}}=\lrangles{\M{E},\mathsf{Init},k}$ where $\M{E}=\LRBES{}$, and $\mathsf{Init}$ is conflict-free, and (2) there exists a transition $P\xrightarrow{\mu[m]} P'$ such that $\lrBraces{P'}_{\M{N}}=\lrangles{\M{E}',\mathsf{Init}',k'}$, then there exists an isomorphism $f:\M{E}\rightarrow \M{E}'$ and a transition in $C_{br}(\M{E})$, $\mathsf{Init}\xrightarrow{\{e\}} X$, such that $\lambda(e)=\mu$, $f\circ k'=k[e\mapsto m]$, and $f(X)=\mathsf{Init}'$. \end{theorem} \begin{theorem}\label{the:PtoLRBEStrans2} Let $P$ be a forwards reachable process wherein all bound and free names are different and let $\M{N}\supseteq n(P)$ be a set of names. If (1) $\lrBraces{P}_{\M{N}}=\lrangles{\M{E},\mathsf{Init},k}$ where $\M{E}=\LRBES{}$, and (2) there exists a transition $\mathsf{Init}\xrightarrow{\{e\}} X$ in $C_{br}(\M{E})$, then there exists a transition $P\xrightarrow{\mu[m]} P'$ such that $\lrBraces{P'}_{\M{N}}=\lrangles{\M{E}',\mathsf{Init}',k'}$ and an isomorphism $f:\M{E}\rightarrow \M{E}'$ such that $\lambda(e)=\mu$, $f\circ k'=k[e\mapsto m]$, and $f(X)=\mathsf{Init}'$. \end{theorem} By Theorems~\ref{the:key-ext-eq},~\ref{the:PtoLRBEStrans}, and~\ref{the:PtoLRBEStrans2} we can combine the event structure semantics of $\pi$IK and mapping $E$ (Definition~\ref{def:E}) and get an operational correspondence between $\mathbf{H}\vdash\! P$ and the event structure $\lrBraces{E(\mathsf{lcopy}(\mathbf{H})\vdash\! P,P)}_{n(E(\mathsf{lcopy}(\mathbf{H})\vdash\! P,P))}$. \section{Conclusion and future work} All existing reversible versions of the $\pi$-calculus use reduction semantics~\cite{lanese2010reversing,TIEZZI2015684} or late semantics~\cite{cristescu2013compositional,DBLP:journals/corr/abs-1808-08655}, despite the early semantics being used more widely than the late in the forward-only setting. We have introduced $\pi$IH, the first reversible early $\pi$-calculus. 
It is a reversible form of the \emph{internal} $\pi$-calculus, where names being sent in output actions are always bound. As well as structural causation, as in CCS, the early form of the internal $\pi$-calculus also has a form of link causation created by the semantics being early, which is not present in other reversible $\pi$-calculi. In $\pi$IH past actions are tracked by using extrusion histories adapted from~\cite{hildebrandt2017stable}, which move past actions and their locations into separate histories for dynamic reversibility. We mediate the event structure semantics of $\pi$IH via a statically reversible version of the internal $\pi$-calculus, $\pi$IK, which keeps the structure of the process intact but annotates past actions with keys, similarly to $\pi$K~\cite{DBLP:journals/corr/abs-1808-08655} and CCSK~\cite{PU07}. We showed that a $\pi$IH process with extrusion histories can be mapped to a $\pi$IK process with keys, creating an operational correspondence (Theorem~\ref{the:key-ext-eq}). The event structure semantics of $\pi$IK, and by extension $\pi$IH, are defined inductively on the syntax of the process. We use labelled reversible bundle event structures~\cite{EFG2018}, rather than prime event structures, to get a more compact representation where each action in the calculus has only one corresponding event. While causation in the internal $\pi$-calculus is simpler than in the full $\pi$-calculus, our early semantics means that we still have to handle link causation, in the form of an input receiving a free name being caused by a previous output of that free name. We show an operational correspondence between $\pi$IK processes and their event structure representations in Theorems~\ref{the:PtoLRBEStrans} and~\ref{the:PtoLRBEStrans2}. Cristescu \emph{et al.}~\cite{CristescuKV16} have used rigid families~\cite{CastellanHLW14}, related to event structures, to describe the semantics of R$\pi$~\cite{cristescu2013compositional}.
However, unlike our denotational event structure semantics, their semantics require one to reverse every action in the process before applying the mapping to a rigid family, and then redo every reversed action in the rigid family. Our approach of using a static calculus as an intermediate step means we get the current state of the event structure immediately, and do not need to redo the past steps. \paragraph{Future work:} We could expand the event structure semantics of $\pi$IK to $\pi$K. This would entail significantly more link causation, but would give us event structure semantics of a full $\pi$-calculus. Another possibility is to expand $\pi$IH to get a full reversible early $\pi$-calculus. \paragraph{Acknowledgements:} We thank Thomas Hildebrandt and H{\aa}kon Normann for discussions on how to translate their work on $\pi$-calculus with extrusion histories to a reversible setting. We thank the anonymous reviewers of RC 2020 for their helpful comments. This work was partially supported by an EPSRC DTP award; also by the following EPSRC projects: EP/K034413/1, EP/K011715/1, EP/L00058X/1, EP/N027833/1, EP/T006544/1, EP/N028201/1 and EP/T014709/1; and by EU COST Action IC1405 on Reversible Computation.
package akka.contrib.persistence.mongodb import akka.actor.ActorSystem import akka.persistence.PersistentRepr import akka.serialization.{SerializationExtension, Serialization} import com.mongodb.util.JSON import com.mongodb._ import com.typesafe.config.ConfigFactory import org.scalatest.BeforeAndAfterAll import collection.JavaConverters._ import scala.util.Try abstract class JournalUpgradeSpec[D <: MongoPersistenceDriver, X <: MongoPersistenceExtension](extensionClass: Class[X], toDriver: ActorSystem => D) extends BaseUnitTest with EmbeddedMongo with BeforeAndAfterAll { import ConfigLoanFixture._ override def embedDB = "upgrade-test" override def beforeAll(): Unit = { doBefore() } override def afterAll(): Unit = { doAfter() } def config(extensionClass: Class[_]) = ConfigFactory.parseString(s""" |akka.contrib.persistence.mongodb.mongo.driver = "${extensionClass.getName}" |akka.contrib.persistence.mongodb.mongo.mongouri = "mongodb://localhost:$embedConnectionPort/$embedDB" akka.contrib.persistence.mongodb.mongo.journal-automatic-upgrade = true |akka.persistence.journal.plugin = "akka-contrib-mongodb-persistence-journal" |akka-contrib-mongodb-persistence-journal { | # Class name of the plugin. | class = "akka.contrib.persistence.mongodb.MongoJournal" |} |akka.persistence.snapshot-store.plugin = "akka-contrib-mongodb-persistence-snapshot" |akka-contrib-mongodb-persistence-snapshot { | # Class name of the plugin. 
| class = "akka.contrib.persistence.mongodb.MongoSnapshots" |} |""".stripMargin) def configured[A](testCode: D => A) = withConfig(config(extensionClass), "upgrade-test")(toDriver andThen testCode) "A mongo persistence driver" should "do nothing on a new installation" in configured { as => mongoClient.getDB(embedDB).getCollectionNames shouldNot contain ("akka_persistence_journal") } import JournallingFieldNames._ def buildLegacyObject[A](pid: String, sn: Long, payload: A)(implicit serEv: Serialization): DBObject = { val builder = new BasicDBObjectBuilder() builder .add(PROCESSOR_ID, pid) .add(SEQUENCE_NUMBER, sn) .add(SERIALIZED, serEv.serialize(PersistentRepr(payload, sn, pid)).get ).get() } def buildLegacyDocument[A](pid: String, sn: Long)(implicit serEv: Serialization): DBObject = { val builder = new BasicDBObjectBuilder() val serBuilder = new BasicDBObjectBuilder() val plBuilder = new BasicDBObjectBuilder() val subdoc = serBuilder.add(PayloadKey, plBuilder.add("abc",1).add("def",2.0).add("ghi",true).get()).get() builder.add(PROCESSOR_ID, pid).add(SEQUENCE_NUMBER, sn).add(SERIALIZED, subdoc).get() } def queryByProcessorId(pid: String): DBObject = { new BasicDBObjectBuilder().add(PROCESSOR_ID,pid).get() } def createLegacyIndex(coll: DBCollection): Unit = { val idxSpec = new BasicDBObjectBuilder() .add(PROCESSOR_ID, 1) .add(SEQUENCE_NUMBER, 1) .add(DELETED, 1) .get() Try(coll.createIndex(idxSpec)).getOrElse(()) } it should "upgrade an existing journal" in configured { as => implicit val serialization = SerializationExtension.get(as.actorSystem) val coll = mongoClient.getDB(embedDB).getCollection("akka_persistence_journal") createLegacyIndex(coll) coll.insert(buildLegacyObject("foo",1,"bar")) coll.insert(buildLegacyObject("foo",2,"bar")) coll.insert(buildLegacyDocument("foo",3)) println(s"before = ${coll.find().toArray.asScala.toList}") as.upgradeJournalIfNeeded() val records = coll.find(queryByProcessorId("foo")).toArray.asScala.toList println(records) records 
should have size 3
      records.zipWithIndex.foreach { case (dbo, idx) =>
        dbo.get(PROCESSOR_ID) should be ("foo")
        dbo.get(TO) should be (idx + 1)
        dbo.get(FROM) should be (dbo.get(TO))
        val event = dbo.get(EVENTS).asInstanceOf[BasicDBList].get(0).asInstanceOf[DBObject]
        event.get(SEQUENCE_NUMBER) should be (idx + 1)
        if (idx < 2) {
          event.get(TYPE) should be ("s")
          event.get(PayloadKey) should be ("bar")
        } else {
          event.get(TYPE) should be ("bson")
          val bson = event.get(PayloadKey).asInstanceOf[DBObject]
          bson.get("abc") should be (1)
          bson.get("def") should be (2.0)
          bson.get("ghi") shouldBe true
        }
      }
  }

  it should "upgrade a more complicated journal" in configured { as =>
    implicit val serialization = SerializationExtension.get(as.actorSystem)
    val coll = mongoClient.getDB(embedDB).getCollection("akka_persistence_journal")
    coll.remove(new BasicDBObject())
    createLegacyIndex(coll)
    val doc =
      """
        |{
        |  "_id" : { "$oid" : "55deeae33de20e69f33b748b" },
        |  "pid" : "foo",
        |  "sn" : { "$numberLong" : "1" },
        |  "dl" : false,
        |  "cs" : [ ],
        |  "pr" : {
        |    "p" : {
        |      "order-created" : {
        |        "id" : "alsonotarealguid",
        |        "seqNr" : 232,
        |        "userId" : "notarealguid",
        |        "cartId" : "notarealcartid",
        |        "phoneNumber" : "+15555005555",
        |        "from" : {
        |          "country" : "US"
        |        },
        |        "to" : {
        |          "country" : "RU",
        |          "region" : "MOW",
        |          "city" : "Moscow"
        |        },
        |        "dateCreated" : { "$date": "2015-08-27T10:48:03.101Z" },
        |        "timestamp" : { "$date": "2015-08-27T10:48:03.101Z" },
        |        "addressId" : "not-a-real-addressid"
        |      },
        |      "_timestamp" : { "$date": "2015-08-27T10:48:03.102Z" }
        |    }
        |  }
        |}""".stripMargin
    coll.insert(JSON.parse(doc).asInstanceOf[DBObject])
    as.upgradeJournalIfNeeded()

    val records = coll.find(queryByProcessorId("foo")).toArray.asScala.toList
    records should have size 1
    records.zipWithIndex.foreach { case (dbo, idx) =>
      dbo.get(PROCESSOR_ID) should be ("foo")
      dbo.get(TO) should be (idx + 1)
      dbo.get(FROM) should be (dbo.get(TO))
      val event = dbo.get(EVENTS).asInstanceOf[BasicDBList].get(0).asInstanceOf[DBObject]
      event.get(SEQUENCE_NUMBER) should be (idx + 1)
      event.get(TYPE) should be ("bson")
      val bson = event.get(PayloadKey).asInstanceOf[DBObject]
      val payload = bson.get("order-created").asInstanceOf[DBObject]
      payload.get("cartId") should be ("notarealcartid")
      payload.get("seqNr") should be (232)
    }
  }
}
Q: Show value into combo box using ajax?

My problem is how to append data to the combo box when there are two values coming from the controller.

<script type='text/javascript'>
$(document).ready(function(){
    $('#bill').change(function(){
        var bill = $(this).val();
        $.ajax({
            url:'<?=$url?>/bill',
            method: 'get',
            data: {bill: bill},
            success: function(data) {
                $("#period").html(data."#test");
                $("#period").val(data);
                $("#appen").html(data);
            }
        });
    });
});
</script>

<select name="periode" required class="form-control select2 txtx" >
    <option value="">Select Period</option>
    <option value="" id="period" selected></option>
</select>
<div id="appen"></div>

In my controller:

echo "<a id='test'>test</a>"; // I need this to append in the combo box
echo "<a>test number 2 </a>"; // I need this to append in the <div> tag

A: You should return data to AJAX as JSON, like this:

In your controller:

$return['comboBox'] = "<option value='test'>test</option>";
$return['divBox'] = '<a>test number 2 </a>';
return json_encode($return);

In the AJAX success function:

<script type='text/javascript'>
$(document).ready(function(){
    $('#bill').change(function(){
        var bill = $(this).val();
        $.ajax({
            url:'<?=$url?>/bill',
            method: 'get',
            dataType: 'JSON',
            data: {bill: bill},
            success: function(data) {
                $("select").append(data.comboBox);
                $("#appen").html(data.divBox);
            }
        });
    });
});
</script>

A: Try this; I made some changes according to your needs.

Controller: here I created one array which will be echoed as JSON, and to terminate the script I wrote exit();

function yourMethodName($bill){
    // do something with $bill
    $response_array['comboBox'] = "<a id='test'>test</a>";
    $response_array['divBox'] = '<a>test number 2 </a>';
    echo json_encode($response_array);
    exit();
}

Ajax: if you are using GET you can send the value either in the data option or as a URL parameter, but you passed both, so keep only one of them. Here I removed data and passed it as a parameter:

<script type='text/javascript'>
$(document).ready(function(){
    $('#bill').change(function(){
        var bill = $(this).val();
        $.ajax({
            url:'<?php echo $url;?>/'+bill,
            method: 'GET',
            dataType: 'json',
            success: function(data) {
                $("#period").html(data.comboBox);
                $("#appen").html(data.divBox);
            }
        });
    });
});
</script>
## Pacific Northwest Probability Seminar

The Nineteenth Northwest Probability Seminar
November 4, 2017
Supported by the Pacific Institute for the Mathematical Sciences (PIMS) and Microsoft Research.

The Birnbaum Lecture in Probability will be delivered by Michel Ledoux (University of Toulouse) in 2017.

The 19th Northwest Probability Seminar, a one-day mini-conference organized by the University of Washington, the Oregon State University, the University of British Columbia, the University of Oregon, and Microsoft Research, will be held on November 4, 2017. The conference will be hosted at Microsoft, supported by Microsoft Research and the Pacific Institute for the Mathematical Sciences (PIMS).

There is no registration fee. Breakfast, lunch, and coffee will be free.

The talks will take place in Building 99 at Microsoft. Parking at Microsoft is free.

## Schedule

- 10:00 — 11:00 Coffee and muffins
- 11:00 — 11:40 Gourab Ray (University of Victoria): A characterization theorem for the Gaussian free field
- 11:50 — 12:30 Yuval Peres (Microsoft Research): Gravitational allocation to uniform points on the sphere
- 12:30 — 1:40 Lunch (catered)
- 1:10 — 2:15 Probability demos and open problems (overlaps with lunch)
- 2:20 — 3:10 Michel Ledoux (University of Toulouse; Birnbaum Lecture): Optimal matching of Gaussian samples
- 3:20 — 4:00 Yevgeniy Kovchegov (Oregon State University): Random self-similar trees: dynamical pruning and its applications to inviscid Burgers equations
- 4:00 — 4:30 Tea and snacks
- 4:30 — 5:10 Persi Diaconis (Stanford University): An Analysis of Spatial Mixing
- 6:00 — No-host dinner at Haiku sushi & seafood buffet, downtown Redmond

## Titles and abstracts

### Persi Diaconis (Stanford University)

Title: An Analysis of Spatial Mixing

Abstract: In joint work with Soumik Pal, we study natural mixing processes where cards (or dominoes or mahjong tiles) are 'smushed' around on a table with two hands. How long should mixing continue? If things are not well mixed, what patterns remain? We study this in practice (!): experiments indicate that about 30 seconds of smushing suffice to mix 52 cards. We also study it in theory, introducing a variety of models which permit analysis. Part of the analysis passes to a reflecting, jump-diffusion limit and uses this and a novel 'shadow coupling' to give reasonably precise bounds on the mixing time.

### Yevgeniy Kovchegov (Oregon State University)

Title: Random self-similar trees: dynamical pruning and its applications to inviscid Burgers equations

Abstract: We introduce generalized dynamical pruning on rooted binary trees with edge lengths. The pruning removes parts of a tree $T$, starting from the leaves, according to a pruning function defined on subtrees within $T$. The generalized pruning encompasses a number of previously studied discrete and continuous pruning operations, including the tree erasure and Horton pruning. For example, a finite critical binary Galton-Watson tree with exponential edge lengths is invariant with respect to the generalized dynamical pruning for an arbitrary admissible pruning function. We will discuss an application in which we examine a one-dimensional inviscid Burgers equation with a piece-wise linear initial potential with unit slopes. The Burgers dynamics in this case is equivalent to a generalized pruning of the level set tree of the potential, with the pruning function equal to the total tree length. We give a complete description of the Burgers dynamics for the Harris path of a critical binary Galton-Watson tree with i.i.d. exponential edge lengths.
This work was done in collaboration with Ilya Zaliapin (University of Nevada Reno) and Maxim Arnold (University of Texas at Dallas).

### Michel Ledoux (University of Toulouse)

Title: Optimal matching of Gaussian samples

Abstract: Optimal matching problems are random variational problems widely investigated in the mathematics and physics literature. We discuss here the optimal matching problem of an empirical measure on a sample of iid random variables to the common law in Kantorovich-Wasserstein distances, which is a classical topic in probability and statistics. Two-dimensional matching of uniform samples gave rise to deep results investigated from various viewpoints (combinatorial, generic chaining). We study here the case of Gaussian samples, first in dimension one on the basis of explicit representations of Kantorovich metrics and a sharp analysis of more general log-concave distributions in terms of their isoperimetric profile (joint work with S. Bobkov), and then in dimension two (and higher) following the PDE and transportation approach recently put forward by L. Ambrosio, F. Stra and D. Trevisan.

### Yuval Peres (Microsoft Research)

Title: Gravitational allocation to uniform points on the sphere

Abstract: Given $n$ uniform points on the surface of a two-dimensional sphere, how can we partition the sphere fairly among them? "Fairly" means that each region has the same area. It turns out that if the given points apply a two-dimensional gravity force to the rest of the sphere, then the basins of attraction for the resulting gradient flow yield such a partition - with exactly equal areas, no matter how the points are distributed. (See the cover of the AMS Notices at http://www.ams.org/publications/journals/notices/201705/rnoti-cvr1.pdf.) Our main result is that this partition minimizes, up to a bounded factor, the average distance between points in the same cell. I will also present an application to almost optimal matching of $n$ uniform blue points to $n$ uniform red points on the sphere, connecting to a classical result of Ajtai, Komlos and Tusnady (Combinatorica 1984).
Joint work with Nina Holden and Alex Zhai.

### Gourab Ray (University of Victoria)

Title: A characterization theorem for the Gaussian free field

Abstract: We prove that any random distribution satisfying conformal invariance and a form of domain Markov property and having a finite moment condition must be the Gaussian free field. We also present some open problems regarding what happens beyond the Gaussian free field. Ongoing joint work with Nathanael Berestycki and Ellen Powell.

## Dinner info

No-host dinner (6:00 onwards) at Haiku sushi & seafood buffet, downtown Redmond
INTERVIEW: Halie Loren (new album From the Wild Sky, UK debut at Pizza Express Live, Holborn 31 July)

By ljazzn on 17 July 2018

Halie Loren. Photo credit: Bob Williams

Alaska-born vocalist HALIE LOREN will be making her UK debut at Pizza Express High Holborn on 31 July. Whereas most of her previous albums had a strong bias towards the American Songbook, this new album, From the Wild Sky (Justin Time / Nettwerk), consists almost entirely of her own songs. The new album was produced here in London by Troy Miller, and London guitarist Femi Temowo is also featured. Interview by Lauren Bush:

LondonJazzNews: From the Wild Sky is very different from your previous work…

Halie Loren: This new album definitely takes a different direction. It diverges from being something that can be categorised more strictly within the jazz genre and it definitely delves more into the pop and folk influences in my music. Also, it's all original with the exception of the last track, and so it's the first time that I have done an album that focuses on my original material since my very first CD that I released independently about 12 years ago. A lot of my roots are in songwriting, so I felt really called to do this as an artist for a few years now, and finally the moment arrived where I felt like I really needed to do it. The whole idea of crowd-funding it is what kind of made this possible in terms of the way that I did it. In every way, it's a new adventure.

LJN: How were you first guided into the world of jazz and the American Songbook after your first original album?

HL: Jazz was always a part of my path, growing up. My mom listened to a lot of jazz albums when I was growing up, so I knew a lot of the American Songbook just through listening to it my whole life and singing along. I loved Nat King Cole, and Etta James was one of my favourite vocalists, and Ella Fitzgerald and all that stuff is just part of my experience with being a young person interested in music. I knew and performed a lot of these songs from forever ago, but when I started making albums, the first one that I embarked on was original music.

LJN: Perhaps you could tell how your songwriting process worked for some of the songs on this album?

HL: The songs I wrote for this album took shape in many different ways. For the song Well-Loved Woman, I came up with the chorus of that song while I was on a road trip and it stuck in my brain for forever. Then I wrote the rest of the song maybe a year or two later. A lot of songs come to me that way. It was also more of an a cappella writing style. With Noah, I came up with the melody of the chorus and immediately went home and sat at the piano for hours to figure out where it was supposed to go. For me, I have to have my hands on the instrument and be simultaneous for it to connect for me. Then the melody comes through as this message that is informed by where my hands are. Then my hands listen in this weird way to what my voice is doing and it's like this hide-and-seek game. It's very fun and whimsical. It's like these two sides of my brain are speaking different languages, trying to figure out what the other one is saying.

LJN: You've spent the last two years composing these personal songs. What was it like taking them to your producer to help you bring it all together?

HL: I've never done that before, so I didn't know how the process would work exactly, but every process and every collaboration is different. I did know, because I had specifically sought out to work with Troy Miller, that I resonated with his work. I knew that we would probably work well together. I already trusted that he would do a great job because I had heard proof of that. It was definitely a moment of vulnerability to say "here are these songs in really rough form, is there anything that you think you could uncover and polish up to a brilliant shine?"

LJN: What was it like traveling to London to work in Miller's Spark Studio?

HL: It was wonderful. We ended up spending less than two weeks together to record this album from start to finish. The first five days were actually spent doing pre-production, plotting out what we were going to do to arrange these songs, what elements we wanted to have on them, trimming sections of instrumentals or tidying up these songs. We then spent two of those days tracking the songs that didn't require the entire band to be present, because the rest of the musicians were all in Brooklyn. We recorded all of the a cappella stuff and everything that Troy and I played in London, and then we went to the studio in Brooklyn to record everything else.

LJN: Tell me about this stellar band …

HL: Troy Miller was a very instrumental part in assembling the cast of musicians who appear on the album. He's done a lot of work with all of these musicians I admire, including, of course, Becca Stevens. We've still never met in person, but I've long admired her artistry, and when she came through town, Troy was able to bring her in to collaborate on my song Wild Birds. These were all connections that I'd never met before, but they are all world-class musicians. Femi Temowo was a big part of the collaboration musically from start to finish. He and Troy work together a lot. The music world is so small in so many ways and there are so many connections to be made.

LJN: It must have been really exciting working with all new people?

HL: They're all so amazing. I was floored. I had a hard time not laughing, thinking: this is so delightful!

LJN: This is going to be your London debut. What are you most looking forward to?

HL: I've wanted to play in London for years and it's always fallen just shy of coming to fruition until now, for whatever reason. It's felt like two ships passing in the night, London and me! It's going to be really exciting because quite a few fans of mine are located in or near London and I'm really hoping I get to meet some of them. That's one of my favourite things about social media: that you can connect with fans who are located halfway across the world and then sometimes, you get to put a face to an online profile. The fact that it's going to be under these circumstances, with this band, too, is so beyond me. I'm so excited to be able to reunite with my studio buddies! (pp)

Halie's gig at Pizza Express Holborn is on 31 July.

LINKS: BOOKING
Full list of tour dates
Nara Revisited: Day 2 I spent the second day outside the city, to the southwest. My primary destination and first stop for the day was Hōryū-ji, a temple founded in 607. Hōryū-ji was built at the command of the imperial regent, Prince Shōtoku, who was instrumental in the spread of Buddhism in Japan. Prince Shōtoku also ordered the adoption of the Chinese calendar and carried out significant governmental reforms. But while he actively sought out and implemented the best aspects of Chinese culture, he also asserted Japan as being equal to China, putting an end to the previous subordinate relationship. (He famously addressed a letter, "From the Son of Heaven in the Land of the Rising Sun to the Son of Heaven in the Land of the Setting Sun." The Chinese emperor was not pleased.) A fire is said to have leveled Hōryū-ji in 670, but even so, the complex contains the longest-standing wooden buildings in the world. And while they're around a century younger than the temple itself, the gate guardians are the oldest in Japan. From Hōryū-ji, I moved on to Yakushi-ji, a temple located just within Nara's city limits, which is still a mile or two outside of the city proper. Yakushi-ji was established in 680 and moved to its present location in 718. Over the years, nearly all of its buildings have burned down and been rebuilt. The eastern pagoda, built in 730, is the sole remaining original construction. This round hall, meanwhile, is a totally new addition. It was built in 1991 to hold a portion of the cremated remains of the famous 7th century Chinese monk, Xuanzang (Jp: Genjō Sanzō). Another portion exists in a museum in India. That one was a gift from the Chinese government, but the remains at Yakushi-ji were taken by Japanese soldiers during World War II. I wonder how much that bothers China. On the one hand, the Party holds religion in contempt, but on the other hand, Journey to the West, the novel loosely based on Xuanzang's travels, is a beloved classic. 
I guess it's likely that most people simply don't know about the remains being kept in Japan. I certainly had no idea. After poking around Yakushi-ji, I had lunch at a small restaurant called Shūraku Ichihashi. I can't remember exactly what I had, but I do remember being struck by how low the price was for such good food. I've had better meals and I've had cheaper meals, but their ¥1,000 (~$10) lunch set was an outstanding bargain for the quality. Here's a map. I should note that their dinner prices seemed much higher, so the restaurant is probably best for lunch. After my somewhat late lunch, my last stop before heading home to Kobe was at Tōshōdai-ji, a short walk north of Yakushi-ji. The main hall was completely walled off due to repair work, but this is the grave of Ganjin, the Chinese monk who founded the temple in 759. Ganjin (Ch: Jianzhen) was invited to come to Japan to share his knowledge of Buddhism. It took him six tries over the course of a dozen years before he finally made it across the ocean, and he had gone completely blind in the meantime. When Ganjin at last made it to the capital, Nara, he served for five years as the abbot of Tōdai-ji (the temple with the giant statue of Buddha), before retiring to a plot of land granted by the emperor. Ganjin then used the land to build Tōshōdai-ji. He died four years later. There's a beautiful mossy grove between the grave and the rest of the temple. Even disregarding their cultural and historical value, Japan's many temples and shrines are priceless just for all the green space they protect from encroaching concrete. Not that there's no countryside left in Japan. The walk to the nearest train station was quite nice. It was harvest time in the rice fields. Houses pressed in at points… …but then I came upon a particularly novel bit of protected greenery. This island is a giant, key-shaped burial mound. Scores of these were built as tombs for nobility from the 3rd century to the early 7th century. 
This one is officially designated as the tomb of Emperor Suinin, but I don't know if there's any evidence supporting that claim. Japan's Imperial Household Agency lists some 740 burial mounds as being imperial tombs, but excavations are forbidden and it's widely thought that most of the designations – made in the 19th century – are spurious. A few actually are supported by historical and archeological evidence though, so they aren't all made up. In any case, the mound is a literal island of greenery, and a sacrosanct one at that. So rather than being all for the sake of one dead man, the enormous labor that must have been expended to build the tomb ended up producing something that will benefit a great many people for a long, long time.

Tags: Burial Mounds, Food, Nara, Temples. Posted in Japan, Travel.

When I first visited Nara, I only had enough time to see a fraction of what I wanted to. I resolved to make another trip, and so I did. I revisited Nara at the end of October 2007, and this time I stayed at a hotel and made two days of it. On the first day, I visited two sub-temples of Tōdai-ji – the temple with the giant statue of Buddha – as well as a major shrine and Nara National Museum.

From the northeast corner of the Great Buddha Hall at Tōdai-ji, a path leads up the hillside to Nigatsu-dō, the larger of the two sub-temples. Nigatsu-dō means "Hall of the 2nd Month," and while there are several buildings in the complex, only the eponymous hall itself is open to the public. Nigatsu-dō dates from the 8th century, like the rest of Tōdai-ji, but the hall was reconstructed in 1669 after being destroyed in a fire. "2nd Month" refers to a group of ceremonies held here during the 2nd month of the old lunar calendar, which equates to around March. These ceremonies have been held every year since 752.

Along the stairs to the hall, there is a fountain for ritually purifying yourself by rinsing your hands.
That's a Shinto tradition, not a Buddhist one, but it sometimes shows up at Japanese Buddhist temples.

Up at the hall, you can't actually enter the building, but you can walk along the terrace. In addition to lanterns in a variety of shapes and sizes, there are placards mounted all along the eaves. Some have writing and others have pictures, and some are fairly new while others are very old. These two are nameplates (Nigatsu-dō is written "二月堂"), but as you can see, only the one on the left is still legible.

On two neighboring buildings: Gargoyle tiles! They're called onigawara (鬼瓦) in Japanese. I love these things.

Back on the ground, I encountered one of Nara's many free-roaming sacred deer. They get rounded up every October to have their antlers removed, but this guy must have evaded capture.

From Nigatsu-dō, I headed south along the hillside. A short distance away is a modest building known as Sangatsu-dō, meaning "Hall of the 3rd Month." Its name comes from a ceremony held here during the 3rd lunar month. Sangatsu-dō isn't as well known as its neighbor, but it is said to be the oldest building at Tōdai-ji. It houses 16 statues, 14 of which date from between 729 and 749. The statues are in very good condition given their age, and 12 are designated national treasures. No photography allowed, alas.

After taking a look, I continued south. The hillside is wooded, but some spots allow for views over Nara. This is the Great Buddha Hall. And here you can see the pagoda at Kōfuku-ji, the other temple I stopped by on my first visit.

About 15 minutes farther south, in denser forest, is Kasuga Grand Shrine. This is a side entrance. Kasuga Grand Shrine was founded in 768 as the tutelary shrine of the powerful Fujiwara clan. It's home to some 3,000 lanterns. You can buy a paper to write your name and a wish, and then put it in one of the stone lanterns. This person is praying for the well-being of his family.
For a more permanent prayer object, you can have a bronze lantern made. Here's a close look at one. These aren't cheap, I'd imagine, but they're hanging everywhere inside the shrine. At least they're probably more affordable than a torii gate at Fushimi Inari Grand Shrine. And hey, here's a wooden lantern. On the left. I didn't see any writing on it, so it probably wasn't a prayer lantern. I wonder if there's always a wooden lantern there or if it was filling the spot for a bronze prayer lantern. Hmm. At any rate, as for the shrine itself, this inner gate is as far as the public is allowed to go. You can, however, see a picture of the inner sanctuary at the shrine's website, here. There are four kami enshrined in the sanctuary, hence four shrines. I left Kasuga Grand Shrine from its south gate and headed back into town. On the way, I happened upon the shrine's Treasure Hall, a small museum that truly deserves its name. They had some outstanding artifacts. There are a few pictures here (click on the images for a better view). Back in town, my last stop for the day was at Nara National Museum, which was holding its annual exhibition of treasures from Shōsō-in, a storehouse belonging to Tōdai-ji (although the treasures are now administered by the Imperial Household Agency). The dedication of the giant statue of Buddha at Tōdai-ji was attended by monks and dignitaries from as far away as India, and the collection includes some fascinating Silk Road artifacts in addition to Japanese works. You can see a handful of the repository's 8,874 items here. Dinner was noteworthy. I ate at Miyako Kozuchi (京小づち), a restaurant that serves Japanese style Chinese medicinal cuisine, made from organic and mostly locally grown ingredients. The restaurant doesn't have a standard website, but they do have a blog. This post shows what I ordered. The soup is made from the traditional Japanese stock based on kombu seaweed, shiitake mushrooms, and katsuobushi. 
To this is added egg, shredded nori (the dried seaweed used to wrap sushi), and green onion, as well as the very unusual ingredients of red rice and Silkie chicken. The chicken is called "crow-bone chicken" in Japanese (烏骨鶏, "ukokkei"), due to the inky color of its skin, flesh, and bones.

To the right of the soup is, I believe, sesame pudding with wolfberries on top. Next is an assortment of Japanese pickles. Below that is a row of medicinal food to add to the soup – mostly seeds and berries, with pickled garlic and shiso leaf being the only things I could identify. The contents of the large plate may have been a little different for my meal, but as far as what is pictured, on the right is egg, green beans, taro root, and wheat gluten (the pink and green thing); in the middle is fish with citrus-doused sweet potato; and on the left is a lightly sweetened mix of soy beans, seaweed, shiitake, and konnyaku.

The meal was delicious, satisfying, healthy, and novel. You can't ask for much more.

Miyako Kozuchi is located in a shopping arcade near the Nara-machi neighborhood. From the southwest corner of Sarusawa Pond (south of Kōfuku-ji), head south one block and then west one block. (The streets in this area are all narrow and there are many side streets, but I'm defining a block as ending at a four-way intersection. And if you've left the narrow streets and hit a main road, you've gone too far.) You should be at the shopping arcade. Head south and the restaurant will be on your right, just a few doors down. You can recognize it by the picture of a short-handled mallet on the shop curtain.

Tags: Food, Museums, Nara, Sacred Deer, Shrines, Temples
The Cyprus national rugby team represents Cyprus in international tournaments. The team competes in European Division 2C.

Rugby World Cup 1987-2011: did not qualify
Simi Valley Schools Blog
From here to anywhere

Artistic Bonding: Parents Share Class with Students

Margarita and Nicolas Torres look at the charcoal still life drawing he worked on in Susan Selvaggio's art class at Sinaloa Middle School Tuesday.

Nicolas Torres really likes art. The 7th grader from Sinaloa Middle School said that drawing is his favorite part of Susan Selvaggio's art class. The love of art is something he shares with his mom, Margarita Torres. And it's the reason why she came to visit his art class this week, at Selvaggio's invitation.

"He likes art very much," Margarita Torres said. "I wanted to see the teacher. And I miss school."

Sitting next to each other, mother and son each worked on their own still life charcoal drawing of the arrangement sitting in the middle of the classroom. They stopped often to compare work and share a smile.

Each year, Selvaggio invites parents to join a class or two with their children. "I started it a few years ago because I was so impressed by what the kids were doing that I didn't want to be the only one watching," she said. "And the kids get so excited about having their parents come."

On Monday, a handful of parents came to class. Parents had the choice of attending one of several scheduled times.

Jeanette Daghestanian (left) is helped with a drawing technique by daughter Olivia Daghestanian as friend Maggie Sidway looks on.

Jeanette Daghestanian is an old pro at Selvaggio's art class. With two older sons now attending Royal High School, Daghestanian has visited art before. With daughter Olivia, she got to come once more. "I'm involved with the school and I wanted to come and do art with her," she said. "I came here with my son and I loved it. And I wanted to be here with her."

While the students sketch and draw, Selvaggio dims the lights so the focal light is on the still life arrangement. There is a cow's skull, a couple of pots and vases and branches. Students focused on one or two objects in their drawings.
Selvaggio stops the class near the end to demonstrate techniques for how to draw the branches. The rolling-stop motion of the pencil creates breaks in the drawn line that help "build" the branches in a realistic way. Around the class, the students and parents sit fascinated by her demo.

Isabel Lawrence and her mom, Dana, take their art very seriously. Both said art was a favorite class for them. Working side-by-side, Dana Lawrence said her daughter is crazy about art and this opportunity was too much fun to pass up.

Selvaggio walks around the room, offering tips and encouragement to students and parents. Nicolas calls her over to check on his progress.

"First, let me tell you, mom, how happy I am to have you in my class," she said to Margarita Torres. "He's a wonderful student and talented too!"

Posted on October 22, 2014 by editorjake
"A Nightmare on Elm Street" is yet another remake that suffers from unoriginality, but such is the lot of remaking something that is well established in pop culture. The production company Platinum Dunes has chosen its position in film to be the re-maker of classic horror films, and found early success with "The Texas Chainsaw Massacre", giving them the green light to run wild with remakes including "The Amityville Horror", "The Hitcher", "Friday the 13th" and of course "A Nightmare on Elm Street".

"A Nightmare on Elm Street" stars Jackie Earle Haley as Freddy Krueger, a scarred killer sporting a wicked glove with knives attached atop the fingers who invades your dreams to kill you. In this rebirth we kind of follow Nancy. I say "kind of" because the narrative jumps quite a bit, but I think she's the main character since that's how the original went down. Anyway, she's played by Rooney Mara (younger sister of Kate Mara). Well, Nancy and her friends are being tracked down by Freddy and they aim to get to the bottom of it, preferably before they die.

This film is fraught with issues. The CGI is decidedly awful and, seeing as how the original managed to do it in-camera, rather unnecessary. The makeup of Freddy himself is inspired in contrast to the Robert Englund version, going for a more burn victim effect rather than the demonically burned, but it was an epic fail in that he could hardly move his lips, which steals away the suspension of disbelief. Movie logic was never really a big factor in any of the original films, so I'm willing to overlook "nitpicky" flaws inherent when mixing dreams with reality. However, the sheer lack of originality left much to be desired. I can understand why you would want to pay homage to memorable scenes, but homage doesn't mean it needs to be a shot-for-shot recreation.

That's not to say I didn't find something good lurking in the shadows.
Despite makeup setbacks, I thought Jackie Earle Haley still turned in a solid performance, striking that balance of dark creepy mixed with wit, teetering on the line of campy without fully committing to it. The non-CGI effects are also decent enough, though hardly unique and worthy of more than a passing mention. This entry was posted in Movie Reviews on May 28, 2010 by wes.
{ "redpajama_set_name": "RedPajamaC4" }
6,073
\section{Introduction} \label{sec:intro} The sieve of Eratosthenes is the oldest and best-known of the integer sieves, and is used to find all the primes up to a given limit $N$. The sieve begins with the list of integers $L=(2, 3, \dots, N)$ and proceeds iteratively by marking the smallest number on the list as prime and removing it along with its multiples from the list. The smallest number still left on the list is marked as prime and the procedure continues until the list is empty. Algorithmically, the sieve of Eratosthenes both identifies the prime numbers in the list and yields a unique prime factorization for the composite numbers through multiple presentations of each polynomial value as a product of two integers. In other words, each value $F(n)=n$ in the sequence $L=(F(2),F(3),\dots,F(N))$ admits a factorization presentation $F(n)=p \, q$ for each $p \mid F(n)$. If, however, $F$ is an arbitrary polynomial with integer coefficients and $p \mid F(n)$, then $p \mid F(n+ k \, p)$ for each $k \in \mathbb{Z}$ too. Hence, the algorithm can be generalized to include other polynomials at the cost of missing some of the factorization presentations. Fortunately, the situation can be improved by taking both factors of each composite $F(n)$ into consideration, i.e., if $F(n)= p \, q$ is marked as being divisible by $p$ then all $F(n + k q)$ where $k \in \mathbb{Z}$ can be marked as being divisible by $q$ as well. To keep track of all the factorization presentations, it suffices to record the initial value along with the sequence of quotients for the multiples of the factors, e.g., if $F(x_1) = 1 \cdot p_1$, $F(x_1 + x_2 p_1) = p_1 \, p_2$ and $F(x_1 + x_2 p_1 + x_3 p_2) = p_2 \, p_3$ then the factorization presentation can be reconstructed from the sequence $(x_1, x_2, x_3)$. This method of sieving the polynomial values for integer factorizations is expressed in Theorem \ref{main_recurrence}, and holds in the context of multivariate polynomials as well.
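The marking procedure just described can be sketched computationally. The following Python fragment is purely illustrative (the fixed window $0 \le n \le N$ and the set-based bookkeeping are implementation choices made here, not part of the formal development): it sieves the values $F(0),\dots,F(N)$, records the trivial presentation at each $n$, and propagates every discovered divisor together with its cofactor.

```python
from collections import defaultdict

def polynomial_sieve(F, N):
    """Sieve the values F(0), ..., F(N) in the generalized sense:
    whenever d | F(n) is known, mark d | F(n + k*d), and record the
    cofactor q = |F(n)| / d of every discovered divisor d."""
    divisors = defaultdict(set)
    for n in range(N + 1):
        v = abs(F(n))
        if v > 1:
            divisors[n].add(v)            # trivial presentation F(n) = 1 * F(n)
        for p in sorted(divisors[n]):     # snapshot of divisors known at n
            q = v // p                    # cofactor in the presentation |F(n)| = p * q
            if q > 1:
                divisors[n].add(q)
            for d in {p, q} - {0, 1}:
                for m in range(n + d, N + 1, d):
                    divisors[m].add(d)    # d | F(m) since m ≡ n (mod d)
    return divisors
```

For $F(n)=n^2+1$ and $N=10$, for instance, the sieve collects the divisors $\{2, 5, 10, 25, 50\}$ of $F(7)=50$.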
Section \ref{sec:recursively-factorable} introduces a family of polynomials called {\it recursively-factorable polynomials} for which the collection of factorization presentations corresponding to the sequences $\{(x_1,\dots,x_m) \in \mathbb{Z}^m\}_{m=1}^{\infty}$ yields the unique prime factorization for each value of $F$ via presentations $F(n)=p \, q$ for each $p \mid F(n)$. In general, recursively-factorable polynomials are rare, but there are some noteworthy instances. In particular, the {\it Euler-like} and {\it Legendre-like} prime-producing polynomials of the form $n^2+n+c$ for $c \in \{2,3,5,11,17,41\}$ and $2n^2 + c$ for $c \in \{3,5,11,29\}$, respectively, and Landau's $n^2+1$ are recursively-factorable. The sieve of Eratosthenes verifies that the line $n$ is also recursively-factorable, but we presently focus on recursively-factorable quadratic polynomials. In Section \ref{sec:quad_dio}, we introduce an identity which presents the factorization of a quadratic polynomial value as the product of two binary quadratic forms (Theorem \ref{thm:dio_main}) and show that this identity associates all the factorization presentations of the aforementioned polynomial-value sieving integer sequences with the set $\Gamma_a := \left\{ \left. \begin{pmatrix}\alpha & \beta \\ \gamma & \delta \end{pmatrix} \in M_2(\mathbb{Z}) \, \right| \alpha \delta - a \, \beta \gamma = 1 \right\}$. For monic quadratics, $a=1$ and the factorization presentations correspond to the transvection generators of $\Gamma_1 = \mbox{SL}_2(\mathbb{Z})$ (Corollary \ref{cor:transvections}). In Section \ref{sec:lattice_conic}, a bijection is established (Theorem \ref{thm:conic_bijection}) between $\Gamma_a$ and the set $\mathcal{L}_a$ of lattice point solutions $(X,Y) \in \mathbb{Z}^2$ for the conic sections $a \, X^2 + b \, X Y + c \, Y^2 + X - n Y = 0$ with $a, b, c, n \in \mathbb{Z}$, showing that $\mathcal{L}_a$ does not depend on $b$, $c$, or $n$.
Following the mappings in Figure \ref{fig:outline}, each lattice point $(X,Y)$ of the conic section is associated with an element of $\Gamma_a$ and gives a factorization presentation for $F(n)=a n^2+b n+c$. If a factorization presentation $F(n)=p \, q$ has a corresponding integer sequence $(x_1,\dots,x_m)$ then there is a matching element of $\Gamma_a$ which corresponds to a lattice point solution of the conic section. \begin{figure}[h] \begin{tikzpicture} \matrix (magic) [matrix of nodes,ampersand replacement=\&, column sep=3.5em] { $F(n)=p \, q$ \& \& \\ \& $\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \in \Gamma_a$ \& $(X,Y)\in \mathcal{L}_a$ \\ $(x_1,\dots,x_m)$ \& \& \\ }; \path[->] (magic-2-2) edge [bend left=15] node[auto] {\scriptsize Thm. \ref{thm:conic_bijection}} (magic-2-3) (magic-2-3) edge [bend left=15] node[auto] {\scriptsize Thm. \ref{thm:conic_bijection}} (magic-2-2) (magic-2-2) edge [bend right=15] node[above] {\hspace{1.5em} \scriptsize Thm. \ref{thm:dio_main}} (magic-1-1) (magic-3-1) edge [bend right=15] node[below] {\hspace{1.5em} \scriptsize Thm. \ref{thm:dio_quad}} (magic-2-2) (magic-3-1) edge [bend left=15] node[auto] {\scriptsize Thm. \ref{main_recurrence}} (magic-1-1); \path[->,loosely dashed] (magic-1-1) edge [bend left=15] node[auto] {\scriptsize Thm. \ref{thm:all_factors}} (magic-3-1); \end{tikzpicture} \caption{Relationships between factorization presentations $a n^2+bn+c=p\, q$, the polynomial-value sieving sequence $(x_1,\dots,x_m)$, the set of $2 \times 2$ integers matrices $\Gamma_a$, and the set of lattice point solutions $\mathcal{L}_a$ to the conic section $a X^2+b X Y+cY^2+X-nY=0$.} \label{fig:outline} \end{figure} \section{Polynomial-Value Sieving} \label{sec:poly-value} \begin{theorem} \label{main_recurrence} Let $\mathcal{R}$ be a commutative ring with identity. 
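The sequence-of-quotients encoding sketched above can be evaluated by iterated division, since each quotient satisfies $f_k = F\big(\sum_{i=1}^{k} x_i f_{i-1}\big)/f_{k-1}$. The short Python sketch below is illustrative only (the function name and representation are ours); it reproduces the computation carried out by hand in Example \ref{recur_ex}.

```python
def presentation(F, xs):
    """Evaluate the factorization presentation encoded by xs = (x_1, ..., x_m):
    returns (n, f_{m-1}, f_m) with n = sum_k x_k * f_{k-1} and F(n) = f_{m-1} * f_m."""
    fs = [1]                        # f_0 = 1
    n = 0
    for x in xs:
        n += x * fs[-1]             # n_k = x_1 f_0 + ... + x_k f_{k-1}
        q, r = divmod(F(n), fs[-1])
        assert r == 0               # f_{k-1} divides F(n_k) for integer sequences
        fs.append(q)                # f_k = F(n_k) / f_{k-1}
    return n, fs[-2], fs[-1]

F = lambda n: 3*n**2 + 5*n + 11
print(presentation(F, (2, -1, 4)))   # (301, 83, 3293): F(301) = 83 * 3293
```

The assertion documents the divisibility guaranteed by Theorem \ref{main_recurrence} for integer input sequences.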
For any polynomial $F \in \mathcal{R}[x]$ of degree $d$, there exists a sequence of multivariate polynomials $\{f_m (x_1,\dots,x_m)\}_{m=0}^{\infty}$ such that $f_m (x_1, \dots, x_m) \in \mathcal{R}[x_1,\dots,x_m]$ and \begin{equation} \label{gen_factor} F \left( \sum\limits_{k=1}^m x_k \, f_{k-1}(x_1,\dots,x_{k-1}) \right) = f_{m-1}(x_1,\dots,x_{m-1}) \, f_m (x_1,\dots,x_m) \end{equation} where $f_0 = 1$, $f_1(x_1)=F(x_1)$, and $$f_m = f_{m-2} + x_m \sum\limits_{j=1}^d \frac{1}{j!} \left(x_m \, \frac{f_{m-1}}{f_{m-2}}\right)^{j-1} \frac{\partial^j f_{m-1}}{\partial x_{m-1}^j}$$ for $m \ge 2$ with the convention that $f_m$ is shorthand for $f_m(x_1, \dots, x_m)$. \end{theorem} \begin{proof} Since $F(x_1 f_0) = f_0 \, f_1(x_1)$ represents the trivial factorization, the statement is initially true and we proceed by induction on $m$. Let $D^{(j)}$ be the $j$th order Hasse derivative and $D^{(j)}_{x} =\frac{1}{j!} \frac{d^j}{dx^j}$ be the $j$th order Hasse derivative with respect to the indeterminate $x$. Applying $D^{(j)}_{x_{m-1}}$ to both sides of $F \left( \sum \limits_{k=1}^{m-1} x_k f_{k-1} \right)=f_{m-2} \, f_{m-1}$ gives \begin{equation} \label{partial_f} (D^{(j)} F) \left(\sum\limits_{k=1}^{m-1} x_k f_{k-1} \right) \cdot f_{m-2}^j = f_{m-2} \cdot D^{(j)}_{x_{m-1}} f_{m-1} .
\end{equation} Using the Taylor series expansion for $F$, \begin{align} F\left(\sum\limits_{k=1}^{m} x_k f_{k-1} \right) & = \sum\limits_{j=0}^{d} (D^{(j)} F) \left(\sum\limits_{k=1}^{m-1} x_k f_{k-1}\right) \cdot (x_m f_{m-1})^{j} \notag \\ & = F\left( \sum\limits_{k=1}^{m-1} x_k f_{k-1} \right) + (x_m f_{m-1}) \sum\limits_{j=1}^{d} (D^{(j)} F) \left(\sum\limits_{k=1}^{m-1} x_k f_{k-1}\right) \cdot (x_m f_{m-1})^{j-1} \notag \\ & = f_{m-1} \cdot \left(f_{m-2} + x_m \sum\limits_{j=1}^{d} (D^{(j)} F) \left(\sum\limits_{k=1}^{m-1} x_k f_{k-1}\right) \cdot (x_m f_{m-1})^{j-1} \right) \label{hasse_taylor} \end{align} which gives a definition for $f_m (x_1,\dots,x_m) \in \mathcal{R} [x_1,\dots, x_m]$. Substituting (\ref{partial_f}) into (\ref{hasse_taylor}) yields \begin{equation} \label{f_m} f_m = f_{m-2} + x_m \sum\limits_{j=1}^{d} \left(x_m \frac{f_{m-1}}{f_{m-2}}\right)^{j-1} D^{(j)}_{x_{m-1}} f_{m-1} \, . \qedhere \end{equation} \end{proof} \medskip \begin{remark} \label{quad_recurrence} For $F(z)=\sum\limits_{i=0}^d a_i z^i$, taking $j=d$ in equation (\ref{partial_f}) gives $$\frac{D^{(d)}_{x_{m-1}} f_{m-1}}{(f_{m-2})^{d-1}} = a_d$$ for all $d \ge 1$. So for $d=2$, Theorem \ref{main_recurrence} expresses $f_m$ as \begin{equation} f_m = f_{m-2} + x_m \frac{\partial f_{m-1}}{\partial x_{m-1}} + a_2 \, x_m^2 f_{m-1} \, . \end{equation} \end{remark} \begin{remark} For each sequence $(x_1, \dots, x_m)$, if $x_i = x_{i_a} + x_{i_b}$ then \begin{equation} \label{binary_sequence} f_m (x_1, x_2, \dots, x_{i-1}, x_i, x_{i+1}, \dots, x_m) = f_{m+2} (x_1, x_2, \dots, x_{i-1}, x_{i_a}, 0, x_{i_b}, x_{i+1}, \dots, x_m) \, . \end{equation} Moreover if $(x_1, \dots, x_m) \in \mathbb{Z}^m$, then there exists $f_{M}$ such that $$f_{m} (x_1, \dots, x_{m}) = f_{M} (z_1, \dots, z_{M})$$ where $z_i \in \{ -1,0,1 \}$ and $M = \sum_{j=1}^{m} \left( 2|x_j|-1 \right)$. \end{remark} \begin{example} \label{recur_ex} Let $F(x)=3x^2+5x+11$.
We compute $f_3 (2,-1,4)$ as follows: \begin{align*} f_0 & = 1 \\ f_1 (2) & = \frac{F(2 \cdot 1)}{1} = \frac{F(2)}{1} = 33 \\ f_2 (2,-1) & = \frac{F(2 + (-1) \cdot 33)}{33}=\frac{F(-31)}{33} = 83 \\ f_3 (2,-1,4) & = \frac{F(-31 + 4 \cdot 83)}{83}=\frac{F(301)}{83} = 3293\\ \end{align*} This gives $F(301) = 273319 = 83 \times 3293$. One can also verify that $$f_3 (2,-1,4) = f_{11} (1,0,1,-1,1,0,1,0,1,0,1).$$ \end{example} \smallskip \section{Recursively-Factorable Polynomials} \label{sec:recursively-factorable} Theorem \ref{main_recurrence} provides a means of factoring the values of a polynomial $F$ into two integers, but these presentations may not represent the full solution set $\{(n,p,q) \in \mathbb{Z}^3 : F(n) = p \, q \}$. For example, when $F(n)=n^2+n+7$, the integer factorization $F(1)= 3 \cdot 3$ cannot be presented via Theorem \ref{main_recurrence}, i.e., there does not exist a finite sequence of integers $(x_1, x_2, \dots, x_m)$ for which $f_m = 3$, $f_{m-1} = 3$, and $\sum_{k=1}^m x_k f_{k-1} = 1$. Proof of this fact is shown in Remark \ref{non-recur-fac}. By contrast, Lemma \ref{lem:recur_exist} provides the existence of a family of polynomials $\mathcal{F}$ for which the prime integer factorization of each value of $F \in \mathcal{F}$ can be reconstructed from the presentations of Theorem \ref{main_recurrence}. Theorem \ref{thm:all_factors} shows that this family of polynomials contains the recursively-factorable polynomials characterized by the following property. \begin{definition} Let $F$ be a polynomial with integer coefficients. If for each non-trivial integer factorization presentation $F(n)=p \, q$ there exists an $r \in \mathbb{Z}$ such that $|F(r)|<|F(n)|$ and $r \equiv n \pmod{|p|}$ or $r \equiv n \pmod{|q|}$, then $n$ is said to satisfy the {\it recursively-factorable criterion} for $F$. If each $n \in \mathbb{Z}$ satisfies the recursively-factorable criterion for $F$, then the polynomial $F$ is said to be {\it recursively-factorable}.
\end{definition} \begin{remark} Recursively-factorable polynomials are irreducible over $\mathbb{Z}$. If not then $F(n)=0$ for some $n \in \mathbb{Z}$, but the non-trivial factorization $0 = 0 \cdot p_0$ has no associated $r \equiv n \pmod{|p_0|}$ such that $|F(r)|<|F(n)|=0$ for any $p_0 \in \mathbb{Z}$. \end{remark} \begin{lemma} \label{lem:shift_still_recursive} Let $F$ be a polynomial and $G(n)= \pm F(n-h)$ for some $h \in \mathbb{Z}$. If $F$ is recursively-factorable, then so is $G$. \end{lemma} \begin{proof} Suppose that $G(n)=\pm F(n-h)=p_0 \, p_1$ is a non-trivial factorization. Since $F$ is recursively-factorable, we may assume without loss of generality that there exists $q \in \mathbb{Z}$ such that $|F(r)|<|F(n-h)|$ where $r = (n - h) - q \, p_0$. Thus $|G(r+h)|<|G(n)|$ and $r+h = n- q \, p_0 \equiv n \pmod{|p_0|}$, so we may conclude that $G$ is recursively-factorable. \end{proof} \begin{theorem} \label{thm:all_factors} If $F$ is recursively-factorable then, for each $n\in \mathbb{Z}$ and $p \in \mathbb{N}$ such that $p \mid F(n)$, there exists a finite sequence of integers $(x_1, x_2, \dots, x_m)$ such that \begin{equation} n= \sum_{k=1}^m x_k f_{k-1} (x_1,\dots,x_{k-1}) \qquad \mbox{ and } \qquad p= |f_{m} (x_1,\dots, x_{m-1}, x_{m})|. \end{equation} \end{theorem} \begin{proof} Fix $n \in \mathbb{Z}$. If $p=1 \mbox{ or } |F(n)|$ then the sequence $(n)$ gives the presentation $F(n) = F(n \cdot f_0) = f_0 \, f_1 (n) = 1 \cdot F(n)$. Thus it is sufficient to consider the case where $F(n)$ is a composite integer with a non-trivial factorization $F(n) = p_1 \, p_0$ such that $p=|p_0|$. Let $R=\{r \in \mathbb{Z} : r \equiv n \pmod{|p_0|} \mbox{ or } r \equiv n \pmod{|p_1|} \}$. Since $F$ is recursively-factorable, there exists an $r \in R$ such that $|F(r)|<|F(n)|$. Moreover there is an $r_1 \in R$ such that $|F(r_1)| \le |F(r)|$ for all $r \in R$. Set $p_* = p_0 \mbox{ or } p_1$ so that $r_1 \equiv n \pmod{|p_*|}$. 
It follows that $n=q_1 \, p_* + r_1$ and $F(r_1)=p_2 \, p_*$ for some $q_1, p_2 \in \mathbb{Z}$. If $|p_2| = 1$, then $F(r_1)=p_2 \, p_*$ represents a trivial factorization and the sequence $(r_1,q_1)$ yields the presentation \begin{equation} F(n) = F(r_1 \, p_2 + q_1 \, p_*) = f_1 (r_1) \, f_2 (r_1,q_1). \end{equation} If $|p_2| \not= 1$, then $F(r_1) = p_* \, p_2$ represents a non-trivial factorization, and by the minimality of our choice of $r_1$ relative to all other $r \in R$ there exists an $r_2$ which minimizes $|F(r_2)|<|F(r_1)|$ over all $r_2 \equiv r_1 \pmod{|p_2|}$, i.e., $r_1 = q_2 \, p_2 + r_2$ for some $q_2 \in \mathbb{Z}$. We may continue in this fashion until we obtain the trivial integer factorization $F(r_{m-1}) = p_{m-1} \, p_{m}$ where $|p_m| = 1$, produced from a finite sequence of factors $(p^*, p_2, \dots, p_{m-1}, p_{m})$, quotients $(q_1, q_2, \dots, q_{m-1})$ and remainders $(r_1, r_2, \dots, r_{m-1})$ such that $r_{k} = q_{k+1} \, p_{k+1} + r_{k+1}$ and $F(r_k) = p_{k} \, p_{k+1}$ for each $2 \le k \le m-1$. Starting with $p_{m} = 1$ and $F(r_{m-1}) = p_{m-1} \, p_{m}$ we may reverse this sequence to obtain $n$ and $p$ as follows: \begin{align*} p_{m-1}& = \frac{F(r_{m-1})}{p_{m}} = \frac{F(r_{m-1})}{f_{0}} = f_1 (r_{m-1}), \\[10pt] p_{m-2} &= \frac{F(r_{m-2})}{p_{m-1}} = \frac{F(r_{m-1} + q_{m-1} \, p_{m-1})}{p_{m-1}} = \frac{F(r_{m-1} \, f_0 + q_{m-1} \, f_1 (r_{m-1}))}{f_1 (r_{m-1})} = f_2 (r_{m-1}, q_{m-1}). \end{align*} More generally $$p_k = f_{m-k} (r_{m-1}, q_{m-1}, q_{m-2}, \dots,q_{k+1})$$ for $2 \le k \le m-2$ and $p_*=f_{m-1} (r_{m-1},q_{m-1},\dots,q_2)$. 
Therefore the integer sequence $(r_{m-1},q_{m-1},\dots,q_1)$ gives the presentation \begin{align*} F(n) & = F \left( r_{m-1} \, f_0 + q_{m-1} f_1 (r_{m-1}) + \sum\limits_{k=3}^{m} q_{m-k+1} f_{k-1} (r_{m-1},q_{m-1}, \dots, q_{m-k+2}) \right) \\ & = f_{m-1} (r_{m-1},q_{m-1}, \dots, q_{2}) \, f_{m} (r_{m-1},q_{m-1}, \dots, q_2, q_{1}), \end{align*} and $p=|f_m (r_{m-1},q_{m-1},\dots,q_1)|$. \end{proof} \begin{figure}[h] \begin{center} \begin{overpic}[unit=1mm,width=4in]{smaller_factors.pdf} \put(99,67){$F(n)=p_1 \, p_0$} \put(-8,48){$F(r_1)=p_2 \, p_1$} \put(29,37.5){$F(r_2)=p_3 \, p_2$} \put(68,22){$F(r_k)=p_{k+1} \, p_k$} \put(55,13){\smaller $q_k \, p_k$} \put(18,10){\smaller $q_2 \, p_2$} \put(70,4){\smaller $q_1 \, p_1$} \end{overpic} \end{center} \medskip \caption{Sequence of decreasing values of $F$ used to compute $x_1, x_2, \dots, x_m$ in $f_m (x_1, x_2, \dots, x_m)$.} \label{smaller_values} \end{figure} The proof of Theorem \ref{thm:all_factors} starts with an integer factorization $F(n_0)=p_1 \, p_0$ and constructs a sequence of factorizations $F(n_1) = p_1 \, p_2$, $F(n_2) = p_2 \, p_3, \dots$ such that $|F(n_0)| > |F(n_1)| > |F(n_2)| \dots$ until a prime number $F(n_m)$ with the trivial factorization $F(n_m) \cdot 1$ is reached. In this way prime-producing polynomials, which contain a large interval of consecutive prime values, make good candidates for having the recursively-factorable property. In 1772, Euler \cite{Euler2} discovered that the polynomial $n^2-n+41$ produces prime numbers for $n \in [-39,40]$, and later Legendre \cite{Legendre} noted that both $n^2+n+17$ and $n^2+n+41$ are prime for $n \in [-16,15]$ and $n=[-40,39]$, respectively. Le Lionnais considered polynomials of the type $n^2+n+\varepsilon$ in general, which he called Euler-like polynomials \cite{LeLionnais}, and integers $\varepsilon$ for which $n^2+n+\varepsilon$ is prime for $n=0,1,\dots,\varepsilon-2$ have come to be known as {\it lucky numbers of Euler}. 
Rabinowitz \cite{Rabinowitz} proved that $\varepsilon$ is a lucky number of Euler if and only if the field $\mathbb{Q}(\sqrt{1-4 \varepsilon})$ has class number 1. From this, Heegner \cite{Heegner} and Stark \cite{Stark1} showed that there are exactly six lucky numbers of Euler, namely 2, 3, 5, 11, 17, and 41. Legendre \cite{Legendre} explored other types of prime-producing quadratics such as $2n^2+\lambda$ which is prime when $\lambda = 29$ for $n = 0, 1, \dots ,28$. Akin to the Euler-like polynomials, these quadratics give primes for $n = 0, 1, \dots, \lambda - 1$ for prime $\lambda$ if and only if $\mathbb{Q}(\sqrt{-2 \lambda})$ has class number 2 \cite{Frobenius, Louboutin}. Baker \cite{Baker} and Stark \cite{Stark2} found that the only such $\lambda$ are 3, 5, 11, and 29. As seen in Lemma \ref{lem:recur_exist}, Euler-like and Legendre-like prime-producing quadratics are indeed recursively-factorable. Further discussion of prime-producing quadratics can be found in \cite{Mollin,Ribenboim}. \begin{lemma} \label{lem:recur_exist} \renewcommand{\labelenumi}{(\roman{enumi})} The following quadratics (and their horizontal shifts) are recursively-factorable: \begin{enumerate} \item $n^2 + c$ \, where \, $c \in \{1,2\}$, \item $n^2 + n + c$ \, where \, $c \in \{1, 2, 3, 5, 11, 17, 41\}$ \label{Euler-like} \item $2n^2 + c$ \, where \, $c \in \{1, 3, 5, 11, 29 \}$, \label{Legendre-like} \item $2n^2 + 2n + c$ \, where \, $c \in \{1, 2, 3, 7, 19 \}$, \label{2n^2+2n-like} \item $3n^2 + c$ \, where \, $c =2$, \item $3n^2 + 3n + c$ \, where \, $c \in \{1, 2, 5, 11, 23\}$, \item $4n^2 + c$ \, where \, $c \in \{1, 3, 7\}$, and \item $4n^2 + 4n + c$ \, where \, $c \in \{2, 3, 5\}$. 
\end{enumerate} \end{lemma} \begin{proof} We claim that if $F$ is one of these polynomials and all the values within a suitably large interval $I_{\hat{n}}$ are known to satisfy the recursively-factorable criterion for $F$, then the remaining values outside of $I_{\hat{n}}$ also satisfy the recursively-factorable criterion. Supposing $F(n)=a n^2+b n+c$ is one of the polynomials in cases (i)-(viii), $F$ is a positive parabola having a minimum at either $n=0$ or $n=-\frac{1}{2}$. Furthermore the values $F(n) = F\left(-n-\frac{b}{a}\right)$ for all $n \in \mathbb{Z}$, so if $n$ satisfies the recursively-factorable criterion then so does $-n-\frac{b}{a}$. Also note that $|F(m)|<|F(n)|$ for $m \in I_n = \left(\min\{-n-\frac{b}{a}, \, n\},\max\{-n-\frac{b}{a}, \, n\}\right)$. For cases (i)-(vi), define $\hat{n}$ such that $|2 \, n+\frac{b}{a}| > \lfloor \sqrt{F(n)} \rfloor$ for each $n \ge \hat{n}$. Given that for each factorization presentation $F(n) = p \, q$ either $p \le \lfloor \sqrt{F(n)} \rfloor$ or $q \le \lfloor\sqrt{F(n)} \rfloor$, for $n \ge \hat{n}$ there exists a $k \in \mathbb{Z}$ such that either $n - k \, p \in I_{\hat{n}}$ or $n - k \, q \in I_{\hat{n}}$. Thus if we can verify that the values within $I_{\hat{n}}$ satisfy the recursively-factorable criterion, then so do the values greater than $\hat{n}$ (and symmetrically the values less than $-\hat{n}-\frac{b}{a}$), i.e., $F$ is recursively-factorable. In cases (vii) and (viii) we use a sharper approximation of $\min\{p,q\}$ than $\lfloor \sqrt{F(n)} \rfloor$ to determine $\hat{n}$, but the idea is the same. In cases (i), (iii), (v), and (vii), $F(n)$ is prime (or 1) for $n \in [1-c,c-1]$ and $c \mid F(\pm c)$ which means $c \mid F(0)=c$, so the recursively-factorable condition is satisfied for $n \in [-c,c]$. Similarly, $F(n)$ is prime (or 1) for $n \in [1-c,c-2]$ in cases (ii), (iv), (vi), and (viii). 
The recursively-factorable condition is satisfied for $-c$, $c-1$, and $c$ since $c \mid F(-c),F(c-1),F(c)$ and $F(0)=F(-1)=c$. Hence for all cases (i)-(viii) the recursively-factorable criterion is satisfied for $n \in [-c,c]$. \medskip \noindent \underline{Case (i)}: For $F(n)=n^2+c$ with $c \in \{1,2\}$, $\hat{n}=\lfloor\sqrt{\frac{c}{3}} \rfloor = 0$ and $I_{\hat{n}} = [0] \subset [-c,c]$. \medskip \noindent \underline{Case (ii)}: For $n^2 + n + c$ with $c \in \{1,2,3,5,11,17,41\}$, $I_{\hat{n}} = \left[-\left\lfloor -\frac{1}{2} + \sqrt{\frac{4c-1}{12}} \right\rfloor -1, \left\lfloor -\frac{1}{2} + \sqrt{\frac{4c-1}{12}} \right\rfloor \right]$ and yields the respective $I_{\hat{n}}$ intervals corresponding to each $c$: $[-1, 0] \subseteq [-1,1]$, $[-1,0]\subseteq [-2,2]$, $[-1,0]\subseteq [-3,3]$, $[-1,0]\subseteq [-5,5]$, $[-2, 1]\subseteq [-11,11]$, $[-2, 1]\subseteq [-17,17]$, and $[-4, 3]\subseteq [-41,41]$. \medskip \noindent \underline{Case (iii)}: For $F(n)=2n^2 + c$ with $c \in \{1,3,5,11,29\}$, $I_{\hat{n}}=[-\lfloor \sqrt{\frac{c}{2}} \rfloor,\lfloor \sqrt{\frac{c}{2}} \rfloor]$ which gives the respective intervals: $[0] \subseteq [-1,1]$, $[-1,1] \subseteq [-3,3]$, $[-1,1] \subseteq [-5,5]$, $[-2,2] \subseteq [-11,11]$, and $[-3,3] \subseteq [-29,29]$. \medskip \noindent \underline{Case (iv)}: Let $F(n) = 2n^2 + 2 n + c$ with $c \in \{1, 2, 3, 7, 19 \}$, $I_{\hat{n}} = \left[ - \lfloor \frac{\sqrt{2c -1} - 1}{2} \rfloor -1 , \lfloor \frac{\sqrt{2c -1} - 1}{2} \rfloor \right]$ which gives the respective intervals: $[-1, 0] \subseteq [-1, 1]$, $[-1, 0] \subseteq [-2, 2]$, $[-1, 0] \subseteq [-3, 3]$, $[-2,1] \subseteq [-7, 7]$, and $[-3, 2] \subseteq [-19, 19]$. \medskip \noindent \underline{Case (v)}: Let $F(n) = 3n^2 + 2$, then $I_{\hat{n}}=[0] \subseteq [-2,2]$.
\medskip \noindent \underline{Case (vi)}: Let $F(n)=3n^2 +3n + c$ with $c \in \{1, 2, 5, 11, 23\}$, $I_{\hat{n}} = \left[ - \lfloor \frac{\sqrt{4c -3} - 1}{2} \rfloor -1 , \lfloor \frac{\sqrt{4c -3} - 1}{2} \rfloor \right]$ which gives the respective intervals: $[-1, 0] \subseteq [-1, 1]$, $[-1, 0] \subseteq [-2, 2]$, $[-2, 1] \subseteq [-5, 5]$, $[-3, 2] \subseteq [-11, 11]$, and $[-5, 4] \subseteq [-23, 23]$. \medskip \noindent \underline{Case (vii)}: Let $F$ be of the form $4n^2 + c$ with $c \in \{1, 3, 7\}$. We claim that if $F(n)=p \, q$ where $p \le q$ is an integer factorization presentation, then $p<2n$. Observe that $p=2n+1$ implies that $q\ge 2n+1$ and $$4n^2+4n+1 = (2n+1)^2 \le p \, q = F(n) =4n^2 + c \quad \Longrightarrow \quad 4n+1 \le c$$ and is a contradiction for $n > c$. Similarly, for $p=2n$ and $q \ge 2n+1$, $$4n^2+2n = 2n \, (2n+1) \le p \, q = F(n) =4n^2 + c \quad \Longrightarrow \quad 2n \le c$$ and is also a contradiction for $n > c$. Clearly $q \not= 2n$ since $4n^2 + c = F(n) \not= p \, q = (2n)^2 = 4 n^2$. Thus we are guaranteed that $2n > p$ and there exists an $r \in (1-n,n-1)$ such that $r \equiv n \pmod{p}$. \noindent \underline{Case (viii)}: Let $F$ be of the form $4n^2 + 4n + c$ with $c \in \{2, 3, 5\}$. As in case (vii), we show that $p < 2n$ for each integer factorization presentation $F(n) = p \, q$ where $p \le q$. First notice that taking $p=2n+2$ and $q \ge 2n+2$ leads to $$4n^2 +8n+4 = (2n+2)^2 \le p \, q = F(n) = 4 n^2 + 4 n + c \quad \Longrightarrow \quad 4n+4 \le c$$ and is a contradiction for $n > c$. Likewise, taking $p=2n+1$ and $q \ge 2n+2$ gives $$4n^2 +6n+2 = (2n+1)(2n+2) \le p \, q = F(n) = 4 n^2 + 4 n + c \quad \Longrightarrow \quad 2n+2 \le c$$ and again is a contradiction for $n > c$. With $p =2n+1$ and $q=2n+1$, $4n^2 + 4n +c = F(n) \not= p \, q = 4n^2+4n+1$ as $c \not=1$.
Finally assume that $p=2n$ and $q \ge 2n+3$, $$4n^2 + 6n = (2n) (2n+3) \le p \, q = F(n) = 4 n^2 + 4 n + c \quad \Longrightarrow \quad 2n \le c$$ and is a contradiction for $n > c$. Then take $q=2n+2$ to get the contradiction $4n^2+4n = (2n)(2n+2) = p \, q \not= F(n) = 4n^2 +4n +c$. Therefore if the recursively-factorable criterion holds for the values in the interval $[-c,c]$, then $2n > p$ and the criterion holds for the values outside of the interval also. \end{proof} \begin{table} \begin{tabular}{|l|l|} \hline & $c \le 5000$ \\ \hline \hline $n^2-c$ & 2, 3, 6, 7, 11, 14, 23, 38, 47, 62, 83, 167, 227, 398 \\ \hline $n^2+n-c$ & 1, 3, 4, 5, 7, 8, 9, 10, 13, 14, 15, 17, 18, 19, 22, \\ & 23, 25, 27, 28, 33, 37, 39, 43, 45, 49, 53, 59, 67, \\ & 69, 73, 75, 79, 85, 87, 93, 103, 109, 113, 115, \\ & 127, 129, 139, 153, 163, 169, 179, 193, 199, 205, \\ & 213, 235, 269, 283, 313, 337, 349, 373, 385, 409, \\ & 469, 499, 619, 643, 655, 763, 829, 865, 883, 997, \\ & 1063, 1555 \\ \hline $2n^2-c$ & 1, 3, 5, 7, 11, 13, 15, 19, 21, 29, 31, 35, 37, 47, \\ & 55, 61, 67, 69, 79, 91, 101, 103, 133, 139, 157, \\ & 159, 181, 199, 229, 283, 439, 571, 643, 661, 1069 \\ \hline $2n^2+2n-c$ & 1, 2, 3, 5, 6, 7, 9, 10, 11, 14, 15, 17, 21, 23, 26, \\ & 27, 29, 35, 38, 41, 43, 53, 63, 65, 71, 81, 83, 86, \\ & 107, 113, 146, 149, 173, 185, 191, 215, 218, 223, \\ & 251, 317, 323, 371, 413, 491, 743, 833 \\ \hline $3n^2-c$ & 1, 2, 5, 10, 14, 29, 46, 106, 149 \\ \hline $3n^2+3n-c$ & 1, 2, 3, 4, 5, 7, 8, 11, 13, 17, 19, 23, 29, 31, 37, \\ & 41, 47, 55, 59, 65, 67, 79, 89, 95, 97, 107, 119, \\ & 131, 157, 163, 173, 199, 229, 257, 275, 317, 325, \\ & 457, 479, 635, 637, 1379 \\ \hline $4n^2-c$ & 1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 33, 41, 47, 59, 83, \\ & 107, 167, 227, 563 \\ \hline $4n^2+4n-c$ & 1, 2, 3, 5, 6, 7, 10, 11, 13, 19, 21, 22, 27, 31, 37, \\ & 43, 46, 51, 61, 67, 82, 85, 115, 127, 163, 166, 226, \\ & 277, 397 \\ \hline \end{tabular} \caption{Recursively-factorable polynomials with real
roots.} \label{tab:real_roots_recur_fac} \end{table} \begin{remark} With some additional casework to show that the values over a suitably large interval satisfy the recursively-factorable criterion, it can also be shown that the polynomials in Table \ref{tab:real_roots_recur_fac} are recursively-factorable. Some of these quadratics are prime-producing polynomials, or a horizontal shift of one, listed in \cite{Mollin} and \cite{Weisstein}. For these real-root quadratics, the condition $|F(m)| < |F(n)|$ for $m \in [2-n,n-1]$ no longer holds as it did in Lemma \ref{lem:recur_exist}. However for $n > \max \left\{ \frac{-b-\sqrt{b^2+8a c}}{2a}, \frac{-b+\sqrt{b^2+8a c}}{2a} \right\}$, $|F(m)|<|F(n)|$ for all $m \in \left( -n-\frac{b}{a}, n \right)$. Hence $\hat{n}$ can be chosen to be sufficiently large so that, for all $n>\hat{n}$, both $|F(m)|<|F(n)|$ for $m \in \left(-n-\frac{b}{a}, n \right)$ and $\lfloor \sqrt{|F(n)|} \rfloor < |2n+\frac{b}{a}|$. \end{remark} \section{Presentation as the Product of Binary Quadratic Forms} \label{sec:quad_dio} We show in this section that, for quadratic polynomials, the factorization presentations of Theorem \ref{main_recurrence}, defined recursively as $F \left( \sum_{k=1}^m x_k f_{k-1} \right) = f_{m-1} f_m$, may be expressed in a closed form as the product of two binary quadratic forms. Theorem \ref{thm:dio_quad} establishes that, in this context, each factorization presentation sequence $(x_1,\dots, x_m)$ corresponds with a particular $A_m \in M_2(\mathbb{Z})$. \begin{definition} {\rm Fix $F(n) = a \, n^2 + b \, n + c$. 
Let $\Delta_F$, $\eta_F$, $\phi_{F,0}$, and $\phi_{F,1}$ be functions from $M_2 (\mathbb{Z}) \rightarrow \mathbb{Z}$ defined such that for $A=\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$, \begin{equation} \begin{aligned} \Delta_F[A] & = \alpha \, \delta - a \, \beta \, \gamma, \\ \eta_F [A] & = \alpha \, \gamma + b \, \beta \, \gamma + c \, \beta \, \delta, \\ \phi_{F,0} [A] & = {\alpha}^2 + b \, \alpha \, \beta + a \, c \, {\beta}^2,\\ \phi_{F,1} [A] & = a {\gamma}^2 + b \, \gamma \, \delta + c \, {\delta}^2, \end{aligned} \end{equation} } and for natural $m$, \begin{equation} \phi_{F,m} [A] = \begin{cases}\phi_{F,0}[A] & \mbox{for even } m \\ \phi_{F,1}[A] & \mbox{for odd } m \end{cases} . \end{equation} We suppress the $F$ when it is clear from context, favoring the notation $\Delta[A]$, $\eta[A]$, $\phi_0[A]$, $\phi_1[A]$, and $\phi_m[A]$. \end{definition} \begin{definition} For $a \in \mathbb{Z}$, let \begin{equation} \Gamma_a:=\left\{ A \in M_2(\mathbb{Z}) : \Delta[A] = 1 \right\} . \end{equation} In general, the set $\Gamma_a$ is not closed under matrix multiplication and does not contain the inverses of its elements. However, the case when $a=1$ is particularly noteworthy as $\Gamma_1 = \mbox{SL}_2(\mathbb{Z})$ is the special linear group. \end{definition} \begin{theorem} \label{thm:dio_main} Let $F:\mathbb{Z} \rightarrow \mathbb{Z}$ be given by $F(x) = a \, x^2 + b \, x + c$. For $\alpha, \beta, \gamma, \delta \in \mathbb{Z}$, $$F( \alpha \, \gamma + b \, \beta \, \gamma + c \, \beta \, \delta ) = (\alpha^2 + b \, \alpha \, \beta + a \, c \, \beta^2)(a \, \gamma^2 + b \, \gamma \, \delta + c \, \delta^2) $$ if and only if $\alpha \, \delta - a \, \beta \, \gamma = 1$ or $-1-\frac{b \, ( \alpha \, \gamma + b \, \beta \, \gamma + c \, \beta \, \delta )}{c}$, i.e., for $A \in M_2 (\mathbb{Z})$, \begin{equation} F( \eta [A] ) = \phi_0 [A] \, \phi_1 [A] \end{equation} if and only if $\Delta [A] = 1$ or $-1-\frac{b}{c} \, \eta [A]$.
\end{theorem} \begin{proof} By expanding both sides, one can verify that: \begin{align*} \lefteqn{F( \alpha \, \gamma + b \, \beta \, \gamma + c \, \beta \, \delta ) - (\alpha^2 + b \, \alpha \, \beta + a \, c \, \beta^2) \, (a \, \gamma^2 + b \, \gamma \, \delta + c \, \delta^2)} \\ & = ( 1 - (\alpha \, \delta - a \, \beta \, \gamma ) ) \, (c \, (\alpha \, \delta - a \, \beta \, \gamma ) + (c + b \, ( \alpha \, \gamma + b \, \beta \, \gamma + c \, \beta \, \delta ) )). \qedhere \end{align*} \end{proof} \begin{remark} The set of matrices $\mathcal{K}_1 \subset \Gamma_a$ given by \begin{equation} \label{K1} \mathcal{K}_1 = \left\{ \begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ s & -1 \end{pmatrix} \right\}, \end{equation} $\mathcal{K}_2 \subset \Gamma_1$ and $\mathcal{K}_3 \subset \Gamma_{-1}$ given by \begin{equation} \label{K2_and_K3} \mathcal{K}_2 = \left\{ \begin{pmatrix} s & 1 \\ -1 & 0 \end{pmatrix}, \begin{pmatrix} s & -1 \\ 1 & 0 \end{pmatrix} \right\} \quad \mbox{and} \quad \mathcal{K}_3 = \left\{ \begin{pmatrix} s & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} s & -1 \\ -1 & 0 \end{pmatrix} \right\}, \end{equation} respectively, correspond to the trivial factorization in Theorem \ref{thm:dio_main} for each $s \in \mathbb{Z}$. \end{remark} The Fibonacci-Brahmagupta identity has a long history in mathematics beginning with its first appearance in Diophantus' {\it Arithmetica} (III, 19) \cite{Diophantus} c.250 in the form of $(p^2 + q^2)(r^2+ s^2) = (p r + q s)^2 + (p s - q r)^2$. Later in c.628, Brahmagupta generalized Diophantus' identity by showing that numbers of the form $p^2 + c \, q^2$ are closed under multiplication. Brahmagupta's identity was popularized in 1225 upon its reprinting in Fibonacci's {\it Liber Quadratorum} \cite{Fibonacci} where the first rigorous proof of the identity appeared. 
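As a numerical aside, the product formula of Theorem \ref{thm:dio_main} is easy to spot-check; the following Python sketch is illustrative only (the sampled coefficients and the parametrization $\alpha = 1$, $\delta = 1 + a\beta\gamma$, which enforces $\Delta[A]=1$, are arbitrary choices).

```python
import itertools

def holds(a, b, c, alpha, beta, gamma, delta):
    """Check F(eta[A]) == phi_0[A] * phi_1[A] for F(x) = a x^2 + b x + c."""
    F = lambda x: a*x*x + b*x + c
    eta  = alpha*gamma + b*beta*gamma + c*beta*delta
    phi0 = alpha**2 + b*alpha*beta + a*c*beta**2
    phi1 = a*gamma**2 + b*gamma*delta + c*delta**2
    return F(eta) == phi0 * phi1

# matrices with Delta[A] = alpha*delta - a*beta*gamma = 1 satisfy the identity
for a, b, c in [(3, 5, 11), (1, 1, 41), (2, 0, 29)]:
    for beta, gamma in itertools.product(range(-3, 4), repeat=2):
        alpha, delta = 1, 1 + a*beta*gamma      # so Delta[A] = 1
        assert holds(a, b, c, alpha, beta, gamma, delta)
```

Matrices violating both determinant conditions fail the identity, e.g., $\alpha=\delta=2$, $\beta=\gamma=0$ for $F(x)=x^2+x+41$.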
Finally in 1770, Euler \cite{Euler1} further generalized Brahmagupta's identity by providing the parametric solution \begin{equation} \label{Euler_Identity} (a d \, p^2 + c e \, q^2) (d e \, r^2 + a c \, s^2) = a e (d \, p r \pm c \, q s)^2 + c d (a \, p s \mp e \, q r)^2 \end{equation} for the Diophantine equation $A x^2 + B y^2 = C$ with composite $C$. In Corollary \ref{Fib-Brahm_Identity} we show that the case $b=0$ in Theorem \ref{thm:dio_main} corresponds to the case $d=e=1$ in Euler's Identity (\ref{Euler_Identity}). \begin{corollary} \label{Fib-Brahm_Identity} $$ a \, (\alpha \, \gamma + c \, \beta \, \delta)^2 + c \, (\alpha \, \delta - a \, \beta \, \gamma)^2 = (\alpha^2 + a \, c \, \beta^2) \, (a \, \gamma^2 + c \, \delta^2) $$ \end{corollary} \begin{proof} When $b=0$, $F(x)= a \, x^2 + c$ and \begin{align*} a \, (\alpha \, \gamma + c \, \beta \, \delta)^2 + c \cdot 1^2 & = F(\alpha \, \gamma + c \, \beta \, \delta) \\ & = (\alpha^2 + a \, c \, \beta^2) \, (a \, \gamma^2 + c \, \delta^2) \end{align*} where $\alpha \, \delta - a \, \beta \, \gamma = 1$. Hence \begin{equation*} a \, (\alpha \, \gamma + c \, \beta \, \delta)^2 + c \, (\alpha \, \delta - a \, \beta \, \gamma)^2 = (\alpha^2 + a \, c \, \beta^2) \, (a \, \gamma^2 + c \, \delta^2) \, . 
\qedhere \end{equation*} \end{proof} \begin{theorem} \label{thm:dio_quad} For $F(n) = a \, n^2 + b \, n + c$ and $m \geq 0$, $$f_{m} (x_1, \dots ,x_m) = \phi_{m} [A_m]$$ where $A_m \in \Gamma_a$ is defined recursively by $$A_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \quad \mbox{ and } \quad \quad A_{k+1} = \begin{pmatrix} \alpha_{k+1} & \beta_{k+1} \\ \gamma_{k+1} & \delta_{k+1} \end{pmatrix} = A_{k}+x_{k+1} B_{k}$$ for $0 \le k \le m-1$ such that $$B_{k} = \begin{cases} \begin{pmatrix} a \, \gamma_k & \delta_k \\ 0 & 0 \end{pmatrix} & \mbox{ for odd } k\\ \\ \begin{pmatrix} 0 & 0 \\ \alpha_k & a \, \beta_k \end{pmatrix} & \mbox{ for even } k\\ \end{cases}.$$ \end{theorem} \begin{proof} We shall proceed by induction on $m$. For each $0 \le k \le m$, define $A_{k} \in \Gamma_a$ and $B_k$ recursively as stated in the hypothesis. Initially we see that $f_0 = 1 = \phi_{0} [A_0]$ and $f_1 = F(x_1) = \phi_{1} [A_1]$ satisfy the hypothesis. Now assume that $f_{2j}=\phi_0 [A_{2j}]$ and $f_{2j+1}=\phi_1[A_{2j+1}]$ hold for all indices $2j, 2j+1 < m$. Suppose $m=2j$ for some $j \ge 1$. Remark \ref{quad_recurrence} gives \begin{equation} \label{quad_partial_2j} f_{2j}=f_{2j-2} + x_{2j} \pderiv{x_{2j-1}} \big[ f_{2j-1} \big] + a \, x_{2j}^2 f_{2j-1}. \end{equation} By the induction hypothesis \begin{equation} \label{quad_fac_2j-2} f_{2j-2} = \phi_0[A_{2j-2}] = \alpha_{2j-2}^2 + b \, \alpha_{2j-2} \, \beta_{2j-2} + a c \, \beta_{2j-2}^2 \end{equation} and \begin{equation} \label{quad_fac_2j-1} f_{2j-1} = \phi_1[A_{2j-1}] = a \, \gamma_{2j-1}^2 + b \, \gamma_{2j-1} \, \delta_{2j-1} + c \, \delta_{2j-1}^2. \end{equation} The partial derivative $\pderiv{x_{2j-1}} \big[ \phi_1[A_{2j-1}] \big]$ may be evaluated through the equation $A_{2j-1} = A_{2j-2} + x_{2j-1} B_{2j-2}$.
In particular $$\pderiv{x_{2j-1}} \big[\gamma_{2j-1}\big] = \alpha_{2j-2} \quad \mbox{and} \quad \pderiv{x_{2j-1}} \big[\delta_{2j-1}\big] = a \, \beta_{2j-2}$$ which yields \begin{equation} \label{quad_der_2j-1} \begin{aligned} \pderiv{x_{2j-1}} \big[ f_{2j-1} \big] & = \pderiv{x_{2j-1}} \big[\phi_1 [A_{2j-1}] \big] \\ & = \pderiv{x_{2j-1}} \left[ a \, \gamma_{2j-1}^2+ b \, \gamma_{2j-1} \delta_{2j-1} + c \, \delta_{2j-1}^2 \right] \\ & = 2 \, a \, \gamma_{2j-1} \, \alpha_{2j-2} + b \left( a \, \gamma_{2j-1} \, \beta_{2j-2} + \delta_{2j-1} \, \alpha_{2j-2} \right) + 2 \, a c \, \delta_{2j-1} \, \beta_{2j-2} \end{aligned} \end{equation} Substituting (\ref{quad_fac_2j-2}), (\ref{quad_fac_2j-1}), and (\ref{quad_der_2j-1}) into (\ref{quad_partial_2j}) gives \begin{equation} \label{fac_comp_2j} \begin{aligned} f_{2j} & = (\alpha_{2j-2}^2 + b \, \alpha_{2j-2} \, \beta_{2j-2} + a c \, \beta_{2j-2}^2) \\ & \quad +x_{2j} (2 \, a \, \gamma_{2j-1} \, \alpha_{2j-2} + a b \, \gamma_{2j-1} \, \beta_{2j-2} + b \, \delta_{2j-1} \, \alpha_{2j-2} \\ & \quad + 2 \, a c \, \delta_{2j-1} \, \beta_{2j-2}) + x_{2j}^2 \, (a) ( a \, \gamma_{2j-1}^2 + b \, \gamma_{2j-1} \, \delta_{2j-1} + c \, \delta_{2j-1}^2).
\end{aligned} \end{equation} As defined in the hypothesis, \begin{equation} \begin{aligned} A_{2j} & = A_{2j-1} + x_{2j} B_{2j-1} \\ & = (A_{2j-2} + x_{2j-1} B_{2j-2}) + x_{2j} B_{2j-1} \\ & = \begin{pmatrix} \alpha_{2j-2} & \beta_{2j-2} \\ \gamma_{2j-2} & \delta_{2j-2} \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ x_{2j-1} \alpha_{2j-2} & a \, x_{2j-1}\beta_{2j-2} \end{pmatrix} + \begin{pmatrix} a \, x_{2j} \gamma_{2j-1} & x_{2j} \delta_{2j-1} \\ 0 & 0 \end{pmatrix}\\ & = \begin{pmatrix} \alpha_{2j-2} + a \, x_{2j} \, \gamma_{2j-1} & \beta_{2j-2} + x_{2j} \, \delta_{2j-1} \\ \gamma_{2j-2} + x_{2j-1} \, \alpha_{2j-2} & \delta_{2j-2} + a \, x_{2j-1} \, \beta_{2j-2} \end{pmatrix} \end{aligned} \end{equation} so \begin{equation} \label{phi_comp_2j} \begin{aligned} \phi_{2j}[A_{2j}] & = \phi_0 [A_{2j}]\\ & = (\alpha_{2j-2} + a \, x_{2j} \, \gamma_{2j-1})^2 + ac \, (\beta_{2j-2} + x_{2j} \, \delta_{2j-1})^2 \\ & \quad + b \, (\alpha_{2j-2} + a \, x_{2j} \, \gamma_{2j-1})(\beta_{2j-2} + x_{2j} \, \delta_{2j-1}) . \end{aligned} \end{equation} Comparing (\ref{fac_comp_2j}) and (\ref{phi_comp_2j}) shows that $f_{2j} = \phi_{2j}[A_{2j}]$. Initially $\Delta[A_0] = 1$ and by the induction hypothesis $\Delta [A_k] = 1$ for $1 \le k\le m-1$, so we check that $A_m \in \Gamma_a$: \begin{align*} \Delta[A_m] & = \Delta \left[\begin{pmatrix}\alpha_{m-1} + x_m \, a \, \gamma_{m-1} & \beta_{m-1} + x_m \, \delta_{m-1} \\ \gamma_{m-1} & \delta_{m-1} \end{pmatrix} \right] \\ & = (\alpha_{m-1}+x_m \, a \, \gamma_{m-1}) \, \delta_{m-1} - a \, (\beta_{m-1} + x_m \, \delta_{m-1}) \, \gamma_{m-1} \\ & = (\alpha_{m-1} \, \delta_{m-1} - a \, \beta_{m-1} \, \gamma_{m-1}) = \Delta[A_{m-1}] = 1 \, . \end{align*} Similarly when $m=2j+1$, Remark \ref{quad_recurrence} says that \begin{equation} \label{quad_partial_2j+1} f_{2j+1}=f_{2j-1} + x_{2j+1} \pderiv{x_{2j}} \big[ f_{2j} \big] + a \, x_{2j+1}^2 f_{2j}. 
\end{equation} whose partial derivative $\pderiv{x_{2j}} \big[ f_{2j} \big] = \pderiv{x_{2j}} \big[ \phi_{2j} [A_{2j}] \big]$ may be computed through (\ref{phi_comp_2j}) as \begin{equation} \label{quad_der_2j} \begin{aligned} \pderiv{x_{2j}} \big[ f_{2j} \big] & = 2 \, a \, \alpha_{2j} \, \gamma_{2j-1} + b \, \alpha_{2j} \, \delta_{2j-1} + a \, b \, \gamma_{2j-1} \, \beta_{2j} + 2 \, a \, c \, \beta_{2j} \, \delta_{2j-1} \end{aligned} \end{equation} since $\alpha_{2j} = \alpha_{2j-2}+a \, x_{2j} \, \gamma_{2j-1}$ and $\beta_{2j} = \beta_{2j-2}+x_{2j} \, \delta_{2j-1}$. Putting (\ref{quad_fac_2j-1}), (\ref{quad_partial_2j+1}), and (\ref{quad_der_2j}) together with the fact that $f_{2j} = \phi_0[A_{2j}]$ gives \begin{equation} \label{fac_comp_2j+1} \begin{aligned} f_{2j+1} & = (a \, \gamma_{2j-1}^2 + b \, \gamma_{2j-1} \delta_{2j-1} + c \, \delta_{2j-1}^2) \\ & \quad + x_{2j+1} (2 \, a \, \alpha_{2j} \, \gamma_{2j-1} + b \, \alpha_{2j} \, \delta_{2j-1} + a \, b \, \gamma_{2j-1} \, \beta_{2j} \\ & \quad + 2 \, a \, c \, \beta_{2j} \, \delta_{2j-1}) + x_{2j+1}^2 \, (a) (\alpha_{2j}^2 + b \, \alpha_{2j} \, \beta_{2j} + a \, c \, \beta_{2j}^2) \end{aligned} \end{equation} and may be compared with $\phi_{2j+1} [A_{2j+1}]$ which is computed thusly: \begin{equation} \label{phi_comp_2j+1} \begin{aligned} \phi_{2j+1} \big[ A_{2j+1} \big] & = \phi_1 \left[ \begin{pmatrix} \alpha_{2j-1} + a \, x_{2j} \, \gamma_{2j-1} & \beta_{2j-1} + x_{2j} \, \delta_{2j-1} \\ \gamma_{2j-1} + x_{2j+1} \, \alpha_{2j} & \delta_{2j-1} + a \, x_{2j+1} \, \beta_{2j} \end{pmatrix} \right] \\ & = a \, (\gamma_{2j-1} + x_{2j+1} \, \alpha_{2j})^2 + c \, (\delta_{2j-1} + a \, x_{2j+1} \, \beta_{2j})^2 \\ & \quad + b\, (\gamma_{2j-1} + x_{2j+1} \, \alpha_{2j})(\delta_{2j-1} + a \, x_{2j+1} \, \beta_{2j}). \end{aligned} \end{equation} Checking that (\ref{fac_comp_2j+1}) is equal to (\ref{phi_comp_2j+1}) shows $f_m = \phi_m[A_m]$.
We have that $\Delta[A_k] = 1$ for $1 \le k \le m-1$, so \begin{align*} \Delta[A_m] & = \Delta \left[\begin{pmatrix}\alpha_{m-1} & \beta_{m-1} \\ \gamma_{m-1} + x_m \, \alpha_{m-1} & \delta_{m-1} + x_m \, a \, \beta_{m-1} \end{pmatrix} \right] \\ & = \alpha_{m-1} \, (\delta_{m-1}+x_m \, a \, \beta_{m-1}) - a \, \beta_{m-1} (\gamma_{m-1}+ x_m \, \alpha_{m-1}) \\ & = (\alpha_{m-1} \, \delta_{m-1} - a \, \beta_{m-1} \, \gamma_{m-1}) = \Delta[A_{m-1}] = 1 \, , \end{align*} which completes the proof. \end{proof} Combining Theorems \ref{thm:all_factors} and \ref{thm:dio_quad} implies that for a recursively-factorable polynomial $F$, each non-trivial factorization presentation $(n, p, q \in \mathbb{Z}: |F(n)| = p \, q)$ is represented by some $A_m \in \Gamma_a$ via the identity $F(\eta[A_m])=\phi_0 [A_m] \, \phi_1 [A_m]$ from Theorem \ref{thm:dio_main}. \begin{example} Returning to Example \ref{recur_ex}, for $F(n)=3n^2+5n+11$ we can compute $f_3 (2,-1,4)$ using Theorem \ref{thm:dio_quad}: \begin{align*} A_1 & = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \\ A_2 & = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} + (-1) \begin{pmatrix} 3 \cdot 2 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} -5 & -1 \\ 2 & 1 \end{pmatrix} \\ A_3 & = \begin{pmatrix} -5 & -1 \\ 2 & 1 \end{pmatrix} + (4) \begin{pmatrix} 0 & 0 \\ -5 & 3 \cdot (-1) \end{pmatrix} = \begin{pmatrix} -5 & -1 \\ -18 & -11 \end{pmatrix} \end{align*} and $$f_3 (2,-1,4) = \phi_1 [A_3] = 3 \, (-18)^2 + 5 \, (-18) (-11) + 11 \, (-11)^2 = 3293.$$ It is readily checked that $\Delta [A_3] = 1$, so $A_3$ meets the conditions of Theorem \ref{thm:dio_main}. Since $\eta [A_3] = 301$ and $\phi_0 [A_3] = 83$, it follows that $$F(301) = 3293 \times 83.$$ \end{example} \begin{remark} \label{non-recur-fac} For $F(n) = n^2 + n + 7$, the non-trivial factorization $F(1)= 3 \cdot 3$ cannot be obtained recursively: $F(0)=7$ is the only value less than $F(1)$ and $1 \not\equiv 0 \pmod{3}$.
Likewise $F(1) = 3 \cdot 3$ cannot be represented by Theorem \ref{thm:dio_main}, since $3$ cannot be represented by the binary form $\phi_0 [A] = \alpha^2 + \alpha \, \beta + 7 \, \beta^2$, see \cite{Conway} for more details. \end{remark} \begin{remark} Recall that the special linear group may be generated by its transvections \cite{Hahn}. In particular, $\mbox{SL}_2(\mathbb{Z})=\langle T, U \rangle$ where $T=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} $ and $U=\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$. It follows that $$T^{i}=\begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix} \quad \mbox{ and } \quad U^{i}=\begin{pmatrix} 1 & 0 \\ i & 1 \end{pmatrix}$$ for all $i \in \mathbb{Z}$. \end{remark} \begin{corollary} \label{cor:transvections} For $F(n) = n^2 + b \, n + c$, \begin{equation} f_m (x_1, x_2, \dots, x_{2i-1}, x_{2i}, \dots, x_m) = \phi_m [W^{x_m} \dots T^{x_{2i}} U^{x_{2i-1}} \dots T^{x_2} U^{x_1}] \end{equation} where $W=\left\{ \begin{array}{ll}U, & \mbox{if } m \mbox{ is odd} \\ T, & \mbox{if } m \mbox{ is even.} \end{array} \right.$ \end{corollary} \begin{proof} From Theorem \ref{thm:dio_quad}, $f_m = \phi_m[A_m]$ where $A_0 = I$ and \begin{equation} A_{k} = \begin{cases} \begin{pmatrix} \alpha_{k-1} & \beta_{k-1} \\ \gamma_{k-1} + x_k \alpha_{k-1} & \delta_{k-1} + x_k \beta_{k-1} \end{pmatrix} = U^{x_k} A_{k-1} & \mbox{for odd } k \\ \begin{pmatrix} \alpha_{k-1}+ x_{k} \gamma_{k-1} & \beta_{k-1} + x_{k} \delta_{k-1} \\ \gamma_{k-1} & \delta_{k-1} \end{pmatrix} = T^{x_k} A_{k-1} & \mbox{for even } k \end{cases} \end{equation} for each $1 \le k \le m$. \end{proof} It stands to reason that shifting a polynomial horizontally does not change the integer factorization of its values. In the case of quadratics, the specific correspondence between a parabola and its shift is expressed by the following proposition. 
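As a quick numerical illustration of Corollary \ref{cor:transvections} (the script below is our own sketch with our own variable names, not part of the original text), one can multiply the transvections for sample inputs $x_1, \dots, x_m$ and verify that the resulting matrix lies in $\Gamma_1$ and yields a factorization presentation $F(\eta[A_m]) = \phi_0[A_m] \, \phi_1[A_m]$ for a monic quadratic:

```python
# Check of the transvection form of the recursion for monic F(n) = n^2 + b n + c:
# A_m = W^{x_m} ... T^{x_2} U^{x_1} with T = [[1,1],[0,1]] and U = [[1,0],[1,1]].
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transvection_product(xs):
    A = [[1, 0], [0, 1]]                      # A_0 = identity
    for k, x in enumerate(xs, start=1):
        # odd step applies U^x on the left, even step applies T^x
        G = [[1, 0], [x, 1]] if k % 2 == 1 else [[1, x], [0, 1]]
        A = matmul(G, A)
    return A

def eta_and_factors(b, c, A):
    # eta, phi_0, phi_1 specialized to a = 1
    (alpha, beta), (gamma, delta) = A
    eta = alpha * gamma + b * beta * gamma + c * beta * delta
    phi0 = alpha ** 2 + b * alpha * beta + c * beta ** 2
    phi1 = gamma ** 2 + b * gamma * delta + c * delta ** 2
    return eta, phi0, phi1
```

For instance, with $b = -1$, $c = 5$ and inputs $(3, 1, 1)$ this produces $A = \begin{psmallmatrix} 4 & 1 \\ 7 & 2 \end{psmallmatrix}$ and the presentation $F(31) = 17 \cdot 55$.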
\begin{figure}[b] \begin{center} \begin{overpic}[unit=1mm,width=4in]{parabola_shifted.pdf} \put(73,55){\smaller $F$} \put(93.5,55){\smaller $G$} \put(73,42){\smaller $h$} \put(39,39){\smaller $\phi_{F,0}[A] \, \phi_{F,1}[A]$} \put(88,39){\smaller $\phi_{G,0}[B] \, \phi_{G,1}[B]$} \put(58,1){\smaller $\eta_F[A]$} \put(78,1){\smaller $\eta_G[B]$} \end{overpic} \end{center} \medskip \caption{Correspondence between integer factorizations for shifted parabolas.} \label{smaller_values} \end{figure} \begin{proposition} \label{horz_shift} Let $F(n) = a \, n^2 + b \, n + c$ and set $G(n) = F(n-h)$ for some $h \in \mathbb{Z}$. For each $A= \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \in \Gamma_a$ there is a corresponding $$B = A + h \, \begin{pmatrix} a \, \beta & 0 \\ \delta & 0 \\ \end{pmatrix}$$ for which the following conditions hold: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $B \in \Gamma_a$, \item $\eta_G [B] = \eta_F [A] + h$, \item $\phi_{G,1} [B] = \phi_{F,1} [A]$, and \item $\phi_{G,0} [B] = \phi_{F,0} [A]$. \end{enumerate} \end{proposition} \begin{proof} Let $B = \begin{pmatrix} \alpha + h \, a \beta & \beta \\ \gamma + h \, \delta & \delta \end{pmatrix}$ such that $\alpha \delta - a \beta \gamma = 1$. Noting that $$G(n) = F(n-h) = a \, n^2 + (b - 2 a h) \, n + (c - b h + a h^2) :$$ \begin{align*} \Delta_G [B] & = (\alpha + h \, a \beta) \, \delta - a \, \beta (\gamma + h \, \delta) \tag{i} \\ & = \alpha \delta - a \, \beta \gamma = 1. \\[6pt] \eta_G [B] & = (\alpha + h \, a \beta) (\gamma + h \, \delta) + (b - 2 a h) \, \beta (\gamma + h \, \delta) + (c - b h + a h^2) \, \beta \delta \tag{ii} \\ & = (\alpha \gamma + b \, \beta \gamma + c \, \beta \delta) + h \, (\alpha \delta - a \beta \gamma) \\ & = \eta_F [A] + h . \\[6pt] \phi_{G,1} [B] & = a \, (\gamma + h \, \delta)^2 + (b - 2 a h) (\gamma + h \, \delta) \delta + (c - b h + a h^2) \, \delta^2 \tag{iii} \\ & = a \, \gamma^2 + b \, \gamma \delta + c \, \delta^2.
\\[6pt] \phi_{G,0} [B] & = (\alpha + h \, a \beta)^2 + (b - 2 a h) (\alpha + h \, a \beta) \beta + a (c - b h + a h^2) \, \beta^2 \tag{iv} \\ & = \alpha^2 + b \, \alpha \beta + a c \, \beta^2 . \qedhere \end{align*} \end{proof} \section{Lattice Points on the Conic Section $a X^2 + b X Y + c Y^2 + X - n Y =0$} \label{sec:lattice_conic} Lastly, Theorem \ref{thm:conic_bijection} relates the set $\Gamma_a$ with the lattice point solutions of the conic sections $a X^2 + b X Y+ c Y^2+ X -n Y = 0$. From Theorem \ref{thm:dio_main}, each $A_m \in \Gamma_a$ corresponds to an integer factorization presentation of a value of $F(n)=a n^2+b n +c$, i.e., the problem of finding lattice point solutions to these conic sections is equivalent to factoring the value of an associated quadratic polynomial. \begin{theorem} \label{thm:conic_bijection} For $a,b,c \in \mathbb{Z}$, let $$\mathcal{L}_a = \{(X,Y) \in \mathbb{Z}^2 \mid a X^2 + b X Y + c Y^2 + X - n Y = 0 \mbox{ for some } n \in \mathbb{N} \} \, .$$ The map $\psi : \Gamma_a / \mathcal{K}_1 \bigcup \mathcal{K}_2 \bigcup \mathcal{K}_3 \rightarrow \mathcal{L}_a/\{(0,0),(-1,0),(1,0)\}$ defined by $$\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \mapsto \begin{pmatrix} \beta \gamma \\ \beta \delta \end{pmatrix}$$ is a bijection. \end{theorem} \begin{proof} Fix $a,b,c \in \mathbb{Z}$ and consider $A= \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \in \Gamma_a$. Set $n = \eta [A]$, $X = \beta \gamma$, $Y = \beta \delta$, and $Z = \alpha \gamma$. Direct substitution shows that \begin{equation} Z + b \, X + c \, Y = \alpha \gamma + b \, \beta \gamma + c \, \beta \delta = \eta [A] = n. \label{ellipse_1} \end{equation} Since $A \in \Gamma_a$, it follows that $\Delta[A]=1$ and $\beta \gamma \: (\alpha \delta-a \, \beta \gamma) = \beta \gamma (1)$, i.e., \begin{equation} Z Y = X + a X^2.
\label{ellipse_2} \end{equation} Solving for $Z$ in (\ref{ellipse_1}) and substituting it into (\ref{ellipse_2}) shows that $(X,Y)$ is a solution to \begin{equation} a X^2 + b X Y + c Y^2 + X - n Y = 0. \label{ellipse_3} \end{equation} Now consider the inverse map $\psi^{-1} : \mathcal{L}_a/\{(0,0),(-1,0),(1,0)\} \rightarrow \Gamma_a / \mathcal{K}_1 \bigcup \mathcal{K}_2 \bigcup \mathcal{K}_3$ defined by \begin{equation} \begin{pmatrix} X \\ Y \end{pmatrix} \mapsto \begin{pmatrix} \frac{\gcd(X,Y)}{Y}(1 + a X) & \gcd(X,Y) \\ \frac{X}{\gcd(X,Y)} & \frac{Y}{\gcd(X,Y)} \end{pmatrix} \, . \end{equation} For each $L=(X,Y) \in \mathcal{L}_a$, $\Delta\left[ \psi^{-1}(L) \right] = 1$ and from (\ref{ellipse_3}) $$X (1 + a X) = Y (n - b X - c Y)$$ so $\frac{\gcd(X,Y)}{Y}(1+a X) \in \mathbb{Z}$. Hence $\psi^{-1} (L) \in \Gamma_a$. We show that $\psi$ is injective by verifying that $\psi^{-1} \circ \psi (A) = A$ for each $A \in \Gamma_a$. Indeed, since $\Delta[A]=1$, we have $\gcd(\alpha \delta, a \, \beta \gamma)=1$, implying that $\gcd(\gamma,\delta)=1$, i.e., $\gcd(\beta \gamma, \beta \delta)=\beta$. Thus, $$\psi^{-1} \psi [A] = \psi^{-1} \left[ \begin{pmatrix} \beta \gamma \\ \beta \delta \end{pmatrix} \right] = \begin{pmatrix} \frac{\beta}{\beta \delta}(1 + a \beta \gamma) & \beta \\ \frac{\beta \gamma}{\beta} & \frac{\beta \delta}{\beta} \end{pmatrix} = A$$ since $\Delta[A]=1$ implies that $\alpha = \frac{1}{\delta}(1+ a \beta \gamma)$. Likewise, for each $(X,Y) \in \mathcal{L}_a$, $$\psi \circ \psi^{-1} \left[ \begin{pmatrix} X \\ Y \end{pmatrix} \right] = \psi \left[ \begin{pmatrix} \frac{G}{Y}(1 + a X) & G \\ \frac{X}{G} & \frac{Y}{G} \\ \end{pmatrix} \right] = \begin{pmatrix} X \\ Y \end{pmatrix}$$ where $G = \gcd(X,Y)$, meaning $\psi$ is surjective.
\end{proof} The mapping $\psi:\mathcal{K}_1 \mapsto \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ defined by $\psi \left[ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \right] = \begin{pmatrix} \beta \gamma \\ \beta \delta \end{pmatrix}$ is well-defined and onto, but is not one-to-one. Similarly, when $a=1$ or $-1$ the respective mappings $\psi:\mathcal{K}_2 \mapsto \begin{pmatrix} -1 \\ 0 \end{pmatrix}$ and $\psi: \mathcal{K}_3 \mapsto \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ are onto but not one-to-one. Therefore the image of $\Gamma_a$ under $\psi$ is $\mathcal{L}_a$. \begin{figure}[h] \begin{center} \includegraphics[width=5in]{ellipse_lattice.pdf} \end{center} \caption{Plot of $X^2 - X Y + 5 Y^2 + X - n Y = 0$ for $n=0,\dots,25$. The case $n=20$ is highlighted in blue and lattice points $(X,Y) \in \mathcal{L}_1$ intersecting the ellipses are indicated.} \label{ellipse_plot} \end{figure} \begin{example} Consider the Euler-like polynomial $F(n) = n^2 - n + 5$. It is easy to verify that $(X,Y)=(3,4)$ is a solution of \begin{equation} \label{conic_example} X^2 - X Y + 5 Y^2 + X - 20 Y = 0. \end{equation} By Theorem \ref{thm:conic_bijection}, the point $(3,4)$ corresponds to the element $A \in \Gamma_1$ given by $$A = \psi^{-1} \left[ \begin{pmatrix} 3 \\ 4 \end{pmatrix} \right] = \begin{pmatrix} 1 & 1\\ 3 & 4 \end{pmatrix}.$$ Thus $F(\eta[A]) = F(20) = 5 \cdot 77 = \phi_0 [A] \, \phi_1 [A]$. Similarly $(0,0)$, $(5,2)$, $(5,3)$, $(0,4)$, $(-3,3)$, $(-4,2)$ and $(-1,0)$ are also lattice point solutions (see Figure \ref{ellipse_plot}) to (\ref{conic_example}) corresponding to the integer factorizations $1 \cdot 385$, $11 \cdot 35$, $7 \cdot 55$, $77 \cdot 5$, $55 \cdot 7$, $35 \cdot 11$, and $385 \cdot 1$, respectively. \end{example} \begin{remark} Gauss \cite{Mordell,Gauss} showed that the general binary quadratic Diophantine equation can be reduced to a special case of the Pell equation.
In particular, (\ref{ellipse_3}) can be reduced to \begin{equation} U^2-(b^2-4 a c) V^2 = 4 a (a n^2+b n+c) \end{equation} where $U=(b^2-4 a c)Y+(b+2 a n)$ and $V=2 a X+b Y+1$ provided that $b^2-4 a c \not= 0$. The trivial factorization $F(n) = 1 \cdot F(n)$ corresponds to the solution $U=\pm(2 a n + b)$ and $V=\pm1$. \end{remark} \section{Acknowledgements} I would like to thank John Quintanilla and Nata\u{s}a Jonoska for their useful discussions.
\section{Introduction} \label{sec:intro} As power systems shift towards renewable energy sources, system operators are facing emerging operational challenges. Human operators may find it challenging to manage the intrinsic stochasticity of renewable energy generation, motivating research into autonomous power system control schemes. In addition to such operation challenges, for active distribution networks, designing a model-based control algorithm usually requires extensive domain knowledge and a great amount of work on identifying the grid topology and network parameters~\cite{deka2016estimating}, while model misspecification can cause significant impacts on system operation. These difficulties spurred the development of model-free data-driven control pipelines, specifically deep reinforcement learning (RL) algorithms~\cite{mnih2013playing, mnih2016asynchronous, marot2020whitepaper}. As a result, several works apply RL algorithms to a variety of control and operation tasks in power systems \cite{kelly2020pn}, ranging from topology control~\cite{grid2op,yoon2021winning,liu2020parl} to PV control~\cite{cao_multi-agent_2020} and Volt-VAR control~\cite{liu2020two, wang_safe_2020,duan_deep-reinforcement-learning-based_2020}. \begin{figure}[!tb] \centering \includegraphics[scale=0.39]{adv_mdp_color.pdf} \caption{We use RL to both accomplish the power system control task and find the adversarial action. The adversary learns to attack the agent by disconnecting lines and observing their effect. We require only black-box access to the agent. Attack actions and states are denoted as $a_t'$ and $s_t'$ respectively. We also make use of the adversary agent for adversarial training to provide better robustness (Section \ref{sec:advtraining_algo}).
} \label{fig:method} \end{figure} However, in order to deploy such data-driven agents to safety-critical power system tasks, there is an urgent need to investigate whether the learned agents can be \emph{robust} under various operating scenarios, i.e., maintaining feasible power flow even when a power line is temporarily disconnected due to maintenance, environmental hazards, or cyber-physical attacks~\cite{rushe_borger_2021}. Previous works in machine learning have demonstrated that standard RL agents are vulnerable to observation-noise attacks. With small perturbations of the state space or minor alterations of the underlying dynamics, even fully-optimized RL agents will output non-optimal actions~\cite{zhang2020robustdrl, chen2021attack}. Meanwhile, power grids have always been regarded as safety-critical infrastructures~\cite{usenergy2016}, and it is of top priority to validate the reliability of proposed algorithms under either system state uncertainties (e.g., renewable forecasting~\cite{douglas1998risk}) or system uncertainties (e.g., N-1 security criterion~\cite{vrakopoulou2013probabilistic, marot2020whitepaper}). Therefore, it is crucial for learning-based control schemes to be robust against exogenous events, whether or not they are malicious in origin. In this work, we focus on a typical power grid topology control task introduced by the Learning To Run a Power Network (L2RPN) challenge~\cite{yoon2021winning,liu2020parl} as a standard testbed for grid operation RL algorithm development. The designed controllers observe the underlying state of the power grid (network topology and parameters, load injections, etc.), and take economical actions (modifying the network topology, deciding generator power setpoints, etc.) to ensure feasible power flow. Robustness is difficult to achieve in this task due to the nonlinear, complex spatiotemporal dynamics within the power grids.
In one of our test cases on the IEEE 118 bus system~\cite{christie2000power}, there are around $3.88 \cdot 10^{76}$ network topologies and $9.81 \cdot 10^{55}$ power line actions. When power lines are disconnected, RL agents struggle to adapt; in real life such a failure would lead to a blackout. In this paper, we study the impacts of adversarial attacks on RL agents and develop an effective defense method to enhance their robustness. We affirmatively answer two questions: 1) \emph{Is it possible to learn a harmful adversary?} 2) \emph{Does adversarial training improve agents' robustness?} At a high level, we first use RL to train an adversary to disconnect vital lines, by rewarding it for reducing the reward of the grid operation RL agent. Then, we use our learned adversary, in turn, to increase the robustness of the grid operation RL agent, through adversarial training. The adversary generates training scenarios that are more difficult than normally encountered, thus boosting the robustness of the resulting RL algorithm. An overview of our approach is described in Figure~\ref{fig:method}. We instantiate our RL agents in the Learning to Run a Power Network (L2RPN) environment~\cite{marot2020whitepaper}. The environment, similar to OpenAI Gym~\cite{openaigym}, simulates realistic power networks and provides an interface to train RL agents to operate them. An agent deployed on the L2RPN environment manages the power network topology while subjected to maintenance and environmental hazards. These hazards temporarily decommission power lines and force the agent to modify the network to prevent blackouts. \emph{We make use of the winning algorithms from recent L2RPN competitions~\cite{liu2020parl, yoon2021winning, yan2020wcci} to show the vulnerability of normal RL agents, and show the improved robustness provided by adversarial training.} Our contributions are summarized as follows.
\begin{itemize}[leftmargin=*] \item We propose an agent-specific adversary MDP to learn an adversarial policy for a given agent. We demonstrate the effectiveness of our adversaries by conducting both white-box and black-box transfer attacks against the \textsc{Kaist}-agent, \textsc{PARL}-agent, and \textsc{Nanyang}-agent~(winning agents from previous L2RPN challenges), which lead to over a 90\% performance drop for the winning agents and greatly outperform other baseline attack methods. \item We use our learned adversary to improve the robustness of the RL algorithm via adversarial training. Adversarial training exposes the grid operation RL agent to more difficult scenarios than normally seen. As a result, when being attacked, it is able to quickly respond and re-balance the grid. We instantiate the proposed method in the \textsc{Kaist}-agent, and show significant improvement on both the clean (no adversary) and robustness performance (with different adversaries). \end{itemize} Our work is the first attempt to improve robustness for a general class of RL algorithms, and takes a step towards safe deployment of RL controllers for power grids. Compared to existing optimization-based robust power system operation methods, which require heavy modeling and specific solution techniques~\cite{bertsimas2012adaptive, roald2017chance}, our proposed framework directly learns a mapping from power system states to operation actions. Our paper is organized as follows. Section II describes related work. Section III describes how to frame power systems operations as a Markov Decision Process (MDP). Section IV describes our proposed adversary MDP for learning a strong adversary and adversarial training to improve robustness. Section V describes our experimental results. Section VI provides a discussion and concluding remarks.
\section{Power Network Operation as a Markov Decision Process}\label{sec:mdp} In this section, we first introduce the power system operation model considered in this paper. Then, we show how reinforcement learning can serve as an ideal framework for efficiently solving such an operation problem. \subsection{Power System Operation Problem} We consider a power system where $\mathcal{N}, \mathcal{L}$ denote the sets of nodes and lines, and $|\mathcal{N}| = n, |\mathcal{L}| = l$. We define the set of nodes with generators as $\mathcal{G}$, and without loss of generality, we consider $|\mathcal{N}| = |\mathcal{G}| = n$. We denote the active/reactive power outputs of the generator at node $i \in \mathcal{N}$ as $p_{G, i}, q_{G, i}$ (which is controllable by the system operator). If there is no physical generator at node $i$, we simply restrict $p_{G, i} = q_{G, i} = 0$. Define the demand at node $i$ as $p_{D, i}, q_{D, i}$, which is given by the environment. We study a power grid operational setting where the topology can be reconfigured to minimize the system costs. Define the topology choice $\Omega \in \mathcal{S}_{\Omega}$ where $\mathcal{S}_{\Omega}$ is the set of all possible topologies by line switching, and define $\mathcal{L}_{\Omega}$ as all lines connected under topology $\Omega$. $G_{\Omega}$ and $B_{\Omega}$ are the conductance and susceptance matrices associated with topology $\Omega$. If line $l_{ij} \notin \mathcal{L}_{\Omega}$, $G_{\Omega, i, j} = B_{\Omega, i, j} = 0$.
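As an illustrative sketch (our own notation and sign convention, not taken from the paper), the dependence of the susceptance matrix $B_{\Omega}$ on the line-switching topology can be expressed as follows: a disconnected line contributes zero off-diagonal entries, while connected lines accumulate their susceptances on the diagonal.

```python
# Hypothetical sketch: build a bus susceptance matrix B_Omega from per-line
# susceptances and the set L_Omega of connected lines. Sign conventions for
# B-bus matrices vary; this follows the common (diagonal-dominant) choice.
def susceptance_matrix(n, lines, connected):
    """lines: dict {(i, j): b_ij}; connected: set of lines in L_Omega."""
    B = [[0.0] * n for _ in range(n)]
    for (i, j), b in lines.items():
        if (i, j) not in connected:
            continue                  # l_ij not in L_Omega => B[i][j] = B[j][i] = 0
        B[i][j] -= b
        B[j][i] -= b
        B[i][i] += b
        B[j][j] += b
    return B
```

Switching a line out of `connected` zeroes its off-diagonal entries and removes its contribution from the two incident diagonal entries, which is exactly how the topology action reshapes the network model.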
We follow the power grid operational cost definition in the L2RPN challenge~\cite{marot2020whitepaper}, where the total system cost over horizon $T$ is defined as \begin{equation}\label{eq:opt_obj} c(e) = \sum_{t=1}^{T} c_{\text{operations}} (t) \end{equation} and $c_{\text{operations}}(t)$ tracks the system operation cost at each time step, \[c_\text{operations}(t) = \mathcal{E}_\text{loss}(t) + \alpha \cdot \mathcal{E}_\text{redispatch}(t),\] where $\mathcal{E}_\text{loss}(t)$ is the total energy loss (due to transmission line resistance) and \[\mathcal{E}_\text{redispatch}(t) \propto \sum_{i \in \mathcal{N}} |p_{G, i}(t)-p_{G, i}(t-1)|\] is the redispatching cost. Here $\alpha$ is a parameter trading off the two operational cost terms, and we set $\alpha = 1$ in experiments. The goal of the system operator is to reduce the total system operation costs, subject to system dynamics represented by the power flow equations, \begin{subequations} \label{eqn:main} \begin{align} \min_{p_G, q_G, \Omega} \quad & c(e) \\ \text { s.t. } \quad & G(t+1), B(t+1) = f(G(t), B(t), \Omega(t))\,, \forall t \label{eq:rl_topology}\\ & g(\theta(t), v(t), p(t), q(t); G(t), B(t)) = 0\,, \forall t \label{eq:rl_pf}\\ & \theta(t) \in \Theta, v(t) \in \mathcal{V}\,, \forall t \label{eq:state_constr1}\\ & p_G(t) \in \mathcal{P}_G, q_G(t) \in \mathcal{Q}_G\,, \forall t \label{eq:gen_constr}\\ & p_{ij}(t) \in \mathcal{P}, q_{ij}(t) \in \mathcal{Q}\,, \forall t \label{eq:state_constr2} \end{align} \end{subequations} where~\eqref{eq:rl_topology} represents how the system dynamics change due to topology control action $\Omega(t)$. \eqref{eq:rl_pf} is the concise representation of the power flow equations \cite{roald2017chance}, in which $\theta$ is the voltage angle and $v$ is the voltage magnitude across all nodes. $p$ is the real power injection vector where $p_i = p_{G, i} -p_{D, i}$, and $q$ is the reactive power injection vector where $q_i = q_{G, i} -q_{D, i}$.
Finally, $p_{ij}$ and $q_{ij}$ are the real and reactive power flow on branch $\{i, j\}$. \eqref{eq:state_constr1}, \eqref{eq:gen_constr} and \eqref{eq:state_constr2} summarize the power system operation constraints on voltage angle and magnitude, generator capacity, and line flow, respectively. Note that the optimization variables include both the topology choice $\Omega(t)$ and power dispatch $p_G(t), q_G(t)$ for all time steps $t = 1, ..., T$. The optimal topology control and generation dispatch problem defined in \eqref{eqn:main} is a nonlinear, mixed-integer optimization problem, which is challenging to solve via conventional optimization techniques. Indeed, it includes both the continuous optimization variables $p_G, q_G$ and the discrete optimization variable $\Omega(t)$ (topology choice), which leads to further complexity. In addition, even if sub-optimal solutions are acceptable, one needs the exact system model to solve the optimization, e.g., $G_{\Omega(t)}, B_{\Omega(t)}$, which are often unknown or hard to estimate in real systems~\cite{chen2020data}. \subsection{Reinforcement Learning for Power System Operation} Reinforcement learning (RL) provides a powerful paradigm for solving \eqref{eqn:main}, by training a policy that maps the states to actions, so as to minimize the loss function defined as~\eqref{eq:opt_obj}. For the remainder of this section, we outline how the power network operation problem defined in \eqref{eqn:main} can be modeled as an RL problem. First, we define a Markov Decision Process (MDP) of 4-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r)$ to represent the power network operation model.
$\mathcal{S}$ is the set of states (which include network topology, generation and load values, and line flows), $\mathcal{A}$ is the set of agent actions (described below), $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the transition probability (determined by the power flow equations), and $r$ is the reward function (described below). \begin{figure}[t] \centering \includegraphics[scale=0.425]{pn2.pdf} \caption{An example of a power network and a topology change. Power lines connect substations (rectangles), which either have loads (houses) or generators (wind or solar) attached. In state $s_t$, the dashed line is disconnected, causing the orange line to carry too much power. Action $a_t$ changes the network topology, connecting the bold line from the blue bus to the red bus. The grid then evolves to state $s_{t+1}$, where the (formerly) orange line's power is within its thermal limits.} \label{fig:bus} \end{figure} \noindent\textbf{Agent Action Space.} For our RL agents, action $a = (a_{\Omega}, a_{\mathcal{G}})$ consists of two parts: \begin{enumerate} \item \textit{Topological changes $a_{\Omega}$.} These actions alter the network topology $\Omega$ by changing the connections between power lines. Each substation has two buses, which can connect incoming power lines. A power line can be attached to either bus 1, bus 2, or neither bus. By switching lines on and off bus 1 and bus 2, an operator can effectively control the topology of the network (see Figure~\ref{fig:bus}). These are discrete actions. \item \textit{Redispatching $a_{\mathcal{G}}$.} An operator can also modify the generators' power setpoints $p_{G, i}, q_{G, i}$ for all $i \in \mathcal{N}$ (subject to physical constraints) across the network. These are continuous actions.
\end{enumerate} \noindent\textbf{Reward function.} At each timestep, the reward function is defined following the L2RPN environment~\cite{marot2020whitepaper}, \begin{equation}\label{eq:rl_reward} r(s_t, a_t) = \begin{cases} C-c_{\text{operations}} (t) & \text{normal operation},\\ 0 & \text{blackout}. \end{cases} \end{equation} The reward function is designed such that, if the grid is under normal operating conditions, $C-c_{\text{operations}} (t) > 0$, and a higher reward is given to encourage feasible, more economic power dispatch. If a blackout happens, it incurs a very high operation cost (i.e., proportional to the total network load that failed to be served). In particular, we define $C=c_{\text{operations}}$ during a blackout, obtaining the reward function form in \eqref{eq:rl_reward}. To summarize, the RL formulation of the optimal power network operation problem is as follows, \begin{subequations} \label{eqn:rl_operation} \begin{align} \max_{\theta} \quad & J(\theta):= E_{\mathcal{P}, \pi} \left[\sum_{t=1}^{T} r(s_t, a_t)\right] \\ \text{s.t.} \quad & a_t = \pi_{\theta}(s_t),\\ & a \in \mathcal{A}, s \in \mathcal{S}. \end{align} \end{subequations} To solve \eqref{eqn:rl_operation}, typical RL algorithms either estimate the expected reward of a state-action pair (value-based methods) or directly update the policy to maximize the expected reward (policy gradient)~\cite{sutton2018rl}. Yet directly estimating the value is difficult to scale to a large number of states and actions, due to the combinatorial nature of state-action pairs. Indeed, we tried D3QN~\cite{wang2016d3qn}, a value-based method, and PPO~\cite{schulman2017ppo}, a policy gradient algorithm, and found PPO to be the better optimizer. Thus, we use policy gradient estimation to solve our optimization. We also note that existing RL agents for power system operation are trained only on clean measurements and safe operating conditions.
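As a concrete illustration, the piecewise reward in \eqref{eq:rl_reward} can be sketched in Python; the function names and the default value of the constant $C$ are ours, not part of the L2RPN specification:

```python
def operation_cost(energy_loss: float, redispatch: float, alpha: float = 1.0) -> float:
    """c_operations(t) = E_loss(t) + alpha * E_redispatch(t), with alpha = 1."""
    return energy_loss + alpha * redispatch

def reward(energy_loss: float, redispatch: float, blackout: bool, C: float = 100.0) -> float:
    """Piecewise reward: C - c_operations(t) under normal operation, 0 during
    a blackout. C is chosen large enough that feasible, economic dispatch
    yields a positive reward."""
    if blackout:
        return 0.0
    return C - operation_cost(energy_loss, redispatch)
```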
In the next section, we will show that such trained agents are vulnerable to adversaries, under which the learned policy can easily output actions that cause infeasible power flow or even blackouts. \section{Proposed Attacks and Defense} In this section, we first formulate the attack problem as an adversary MDP, and propose two training methods under different information settings. Then we discuss how to use the learned adversaries to enhance the robustness of the grid operation RL agent via adversarial training. \vspace{-3pt} \subsection{Learning to attack} \label{sec:adv_mdp} Given an agent policy $\theta$, we wish to learn an adversarial policy $\theta'$ such that the normally trained policy will output ``unsafe'' actions. We do so by solving the \emph{adversary MDP} $(\mathcal{S}, \mathcal{A}', \mathcal{P}_{\theta}, r')$. As in the grid operating agent's MDP, $\mathcal{S}$ is the set of all network states, which include network topology, generation and load values, and line flows. $\mathcal{A}'$ is the set of all available adversary actions. Specifically, the adversary's action space consists of the set of available power lines $\mathcal{L}_{adv} \subset \mathcal{L}$ to disconnect. In addition, we restrict the adversary to attack at most once every $k$ steps, both to avoid incessant blackouts during training and to reflect practical constraints on the adversary's ability. $\mathcal{P}_\theta : \mathcal{S} \times \mathcal{A}' \times \mathcal{S} \rightarrow \mathbb{R}$ is the transition probability under grid control policy $\pi_{\theta}$ and adversary policy $\pi_{\theta'}$, and $r'$ is the adversary reward function, which is the negative of the operation agent's reward. The adversary seeks to minimize the expected reward of the grid operating agent, \begin{subequations} \label{eqn:adv_rl} \begin{align} \min_{\theta'} \quad & J(\theta, \theta') \\ \text { s.t.
} \quad & s_{t+1} \sim \mathcal{P}(s_{t+1}|s_t, a_t, a_t')\\ & a_t' = \pi_{\theta'}(s_t)\,, a_t = \pi_{\theta}(s_t) \text{ fixed $\theta$}\\ & a' \in \mathcal{A}', a \in \mathcal{A}, s \in \mathcal{S} \end{align} \end{subequations} We consider two attack setups: \textit{white-box attacks} and \textit{black-box attacks}. Both assume the attack agent can interact with the power system environment, but only white-box attacks assume access to the agent's policy parameters. \noindent\textbf{White-box attacks.} In this setup, we assume the attack agent has access to the grid operation RL agent's policy parameters. While this setting might not be realistic in practice, it represents an upper bound on the strength of learned adversaries. \noindent\textbf{Black-box transfer attacks.} In this setup, we train our own copy of the grid operation RL agent and then proceed as in the white-box attack. Because the adversary does not know the exact policy of the target agent, it is not as strong as a white-box attacker. However, we found that an adversary trained against a strong agent becomes strong itself and is able to \emph{transfer} its attacking ability across different agents. We demonstrate our adversary's transfer performance by training on one type of agent and attacking another type of agent (agents trained with different RL algorithms). This shows that our adversaries are not learning pathological behavior specific to a single agent, and are instead learning strong attacks with malicious physical consequences for power grids. \subsection{Learning to defend}\label{sec:advtraining_algo} Given that these adversarial attacks are real threats to power grid operations, a natural goal is to train the RL agent to be as robust and reliable as possible in the face of malicious behavior.
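The adversary's interaction rules above can be sketched in a few lines of Python; the $k$-step attack budget and the negated reward $r' = -r$ follow the adversary MDP definition, while the class and function names are ours for illustration:

```python
class KStepAdversary:
    """Wraps an adversary policy so it may act at most once every k steps;
    between attacks it returns None (a 'do nothing' action)."""

    def __init__(self, policy, k: int):
        self.policy = policy   # maps state -> id of the power line to disconnect
        self.k = k
        self.cooldown = 0      # steps remaining before the next allowed attack

    def act(self, state):
        if self.cooldown > 0:
            self.cooldown -= 1
            return None        # attack budget exhausted for now
        self.cooldown = self.k - 1
        return self.policy(state)

def adversary_reward(agent_reward: float) -> float:
    """r' is the negative of the grid operating agent's reward."""
    return -agent_reward
```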
Essentially, we want our data-driven agents to interact with such adversarial scenarios during the training process, so that the resulting agents are robust against possible attacks. Mathematically, the robust RL training problem is defined as \begin{equation}\label{eqn:minmax} \max_{\theta} \min_{\theta'} J(\theta, \theta')\,, \end{equation} subject to the constraints given in Equation~\eqref{eqn:adv_rl}. To solve the robust reinforcement learning problem \eqref{eqn:minmax}, one can iteratively train the adversary policy $\theta'$ and the agent policy $\theta$. This method is known as robust adversarial reinforcement learning (RARL)~\cite{pinto2017robust}. However, there is empirical evidence in the deep reinforcement learning literature~\cite{gleave2020Adversarial} showing that iteratively training the agent and adversary converges slowly and does not confer greater robustness. An alternative is to fix an adversary policy $\theta'$ and learn an agent policy $\theta$ that maximizes the expected total reward under the given adversary, \begin{equation} \max_{\theta} J(\theta, \theta'). \end{equation} We demonstrate that with a fixed adversary perturbing the environment during training, the robustness of the RL agent can be greatly improved. We leave solving the full max-min problem via iterative agent and adversary learning as future work. Algorithm~\ref{alg:adv} describes our adversarial training procedure.
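In code, the fixed-adversary procedure reduces to a short loop; `collect_rollouts` and `ppo_update` below are hypothetical placeholders for the rollout collection and PPO-style policy update, not the actual implementation:

```python
def adversarial_training(collect_rollouts, ppo_update, agent, adversary, epochs: int):
    """Fine-tune `agent` against a *fixed* adversary.

    collect_rollouts(agent, adversary) is assumed to return a list of
    (state, action, reward) transitions gathered while the adversary
    perturbs the environment; ppo_update(agent, traj) is assumed to return
    the agent after one gradient step maximizing expected total reward.
    """
    for _ in range(epochs):
        traj = collect_rollouts(agent, adversary)  # rollouts under attack
        agent = ppo_update(agent, traj)            # policy-gradient step
    return agent
```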
\begin{algorithm} \SetAlgoLined \KwInput{Env $\mathcal{E}$; Trained adversary $\theta'$; Epochs $N$} \KwInit{Grid operation agent parameters $\theta_0$;} \For{$i = 1, \ldots, N$}{ Store $\theta_i \leftarrow \theta_{i-1}$\; Collect trajectories $\{(s_t^i, a_t^i, r_t^i)\}$ by attacking the agent ${\theta_{i}}$ with the fixed adversary ${\theta'}$ in the environment $\mathcal{E}$\; Estimate the gradient $\nabla_{\theta_i} J(\theta_i)$ from $\{(s_t^i, a_t^i, r_t^i)\}$\; Update $\theta_i$ with gradient $\nabla_{\theta_i} J(\theta_i)$\; } \KwReturn{$\theta_N$} \caption{Adversarial Training}\label{alg:adv} \end{algorithm} \begin{table*}[!htbp] \caption{Mean reward and the mean number of steps before blackout of the \textsc{Kaist}-agent~and the \textsc{PARL}-agent~across $10$ test scenarios. Each row corresponds to an adversary and each column corresponds to an agent. Standard errors across three trials are reported. Steps is defined as the episode length of safe grid operation before a blackout happens. Lower reward and steps values indicate stronger performance of the adversary.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccc} \toprule \textbf{Grid Operation RL Agent} & \multicolumn{2}{c}{\textsc{Kaist}-agent~- \textit{IEEE 14-bus}} & \multicolumn{2}{c}{\textsc{Kaist}-agent~- \textit{IEEE 118-bus}} & \multicolumn{2}{c}{\textsc{PARL}-agent~- \textit{IEEE 118-bus}} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7} Adversary & Reward & Steps & Reward & Steps & Reward (thousands) & Steps \\ \midrule None & $73.8 \pm 3.1$ & $813 \pm 25$ & $41.2 \pm 3.4$ & $738 \pm 26$ & $772.9 \pm 2.3$ & $864 \pm 0$ \\ Random & $19.0 \pm 1.9$ & $343 \pm 19$ & $-67.6 \pm 0.6$ & $55 \pm 8$ & $94.7 \pm 2.6$ & $104 \pm 4$ \\ Weighted Random & $18.9 \pm 4.6$ & $322 \pm 39$ & $-69.7 \pm 1.7$ & $44 \pm 4$ & $139.9 \pm 8.2$ & $155 \pm 11$ \\ \midrule \textbf{White-box attack (ours)} & $\mathbf{-15.4 \pm 0.8}$ & $\mathbf{160 \pm 4}$ & $\mathbf{-80.8 \pm 1.8}$ & $\mathbf{35 \pm 6}$ &
$\mathbf{6.0 \pm 0.8}$ & $\mathbf{7 \pm 1}$ \\ \bottomrule \end{tabular}% } \label{tab:baselinecompare}% \end{table*} We found it extremely helpful to initialize the agent parameters $\theta$ using a pretrained agent. Thus our adversarial training procedure can be viewed as fine-tuning the parameters of the pretrained agent. We found that randomly initializing the parameters made training difficult. The agent follows a similar procedure to standard RL training, except for an additional interaction with the trained adversary at each step. More specifically, we start with an L2RPN environment $\mathcal{E}$, initial agent parameters $\theta_0$, and a pre-trained adversary $\theta'$. From the agent's perspective, one step in the environment is as follows: \begin{enumerate} \item Adversary observes $s_{t-1}$ and chooses action $a'_{t-1}$. \item Environment updates to $s_t$. \item Agent observes $s_t$ and chooses action $a_t$. \item Environment updates to $s_{t+1}$ and stores $(s_t, a_t, r_t)$. \end{enumerate} These rollouts are then collected and used to estimate the gradient following PPO~\cite{schulman2017ppo}. We then update $\theta_i$ in the direction of the gradient, maximizing the expected total reward. \section{Evaluation} We conclude with experiments demonstrating the effectiveness of our proposed approach. We first describe our evaluation setup, then results from our attacks and our defense. Our code, which relies on the L2RPN environment~\cite{marot2020whitepaper} and the grid2op framework~\cite{grid2op}, will be made publicly available. \subsection{Environment and Evaluation} \textbf{Experimental Setup} We evaluate our approach on two networks. For demonstration and visualization purposes, we use the IEEE 14-bus grid.
For our main results (Tables~\ref{tab:baselinecompare},~\ref{tab:transfer}, and~\ref{tab:advcompare}) we use a subset of lines from the IEEE 118 grid~\cite{christie2000power}, directly provided in the L2RPN package~\cite{marot2020whitepaper}. This grid is much larger, with $36$ substations and $59$ power lines, resulting in around $1.88 \cdot 10^{21}$ topologies and $5.76 \cdot 10^{17}$ power line actions. Because our grids approach the size of real-world power grids and there is research towards scaling deep RL algorithms~\cite{stooke2018accelerated}, it is reasonable to expect that our results will hold in the real-world regime. Each environment has its own set of test scenarios, which specify the load/generation every five minutes. Scenarios run for a maximum of 864 steps in the WCCI L2RPN challenge, or 3 days in the NeurIPS L2RPN challenge. In each environment, we allow only \emph{a subset of the lines (around $1/6$) to be attacked}, reflecting practical constraints on the adversary's power. We found that some of the lines cause an immediate blackout when decommissioned, and we removed these lines from the adversary action set. For ease of comparison, we normalize rewards to the range $[-100, 100]$ (except for the NeurIPS L2RPN environment, where we did not have the necessary data to scale scores). For reference, an agent which takes no actions under no adversarial attack would receive a score of $0$, and an agent which fully optimizes the power flow would receive a score of $100$. \textbf{Our Approach and Baselines} To study the robustness of RL agents and demonstrate the effectiveness of the proposed adversarial attacks, we use the \textsc{Kaist}-agent~(the winning agent from the 2020 WCCI L2RPN challenge~\cite{yoon2021winning}), the \textsc{PARL}-agent~(the winning agent from the 2020 NeurIPS L2RPN challenge~\cite{liu2020parl}), and the \textsc{Nanyang}-agent~(the third-place agent from the 2020 WCCI L2RPN challenge~\cite{yan2020wcci}).
We also use a \textsc{D3QN}-agent~as a baseline, which is provided by~\cite{grid2op}. We evaluate our learned adversary against both the \textsc{Kaist}-agent~\cite{yoon2021winning} and the \textsc{PARL}-agent~\cite{liu2020parl} using the provided winning policies. These agents are the strongest available, as they won their respective competitions. Each adversary is allowed to inject attacks every $k = 50$ steps (adversaries cannot attack immediately). We use the following three baselines. \begin{enumerate} \item \emph{No adversary;} \item \emph{Random adversary, proposed by~\cite{marot2020whitepaper}.} Whenever the adversary attacks, it disconnects a power line chosen uniformly at random; \item \emph{Weighted-random adversary, proposed by~\cite{omnes2021adversarial}.} Whenever the adversary attacks, it disconnects each power line with probability proportional to its maximum power flow. \end{enumerate} When training our adversary, we use the same state representation as the \textsc{Kaist}-agent, which essentially normalizes the L2RPN state observation to a standard normal distribution. A full list of hyperparameters for the agent and adversary can be found in the appendix in Table~\ref{tab:hyperparams}. Furthermore, to demonstrate the performance of our defense method, we implement adversarial training on the \textsc{Kaist}-agent. We did not use the other agents since their training code was either not available or their performance did not match the \textsc{Kaist}-agent. We compare the performance of the baseline RL agent and the RL agent with adversarial training against four different adversaries: the learned adversary and the three baseline adversaries. \subsection{Learning to attack}\label{sec:attacks} We present results for both the white-box attack and the black-box transfer attack proposed in Section~\ref{sec:adv_mdp}. \textbf{White-box attacks.} Table~\ref{tab:baselinecompare} shows our results for the proposed white-box attack.
Note that each of the three columns corresponds to a different power grid, so results should not be compared across columns. As shown in Table~\ref{tab:baselinecompare}, though the winning agents achieve high reward under the no-adversary setting, their performance suffers a significant drop under attacks. Notice that even a random attack causes a more than 70\% performance drop across all test scenarios. This highlights the fragility of grid control RL algorithms. In addition, our proposed attack method is much stronger than the baseline attackers. For most runs, a trained PPO adversary is able to cause a blackout \emph{with a single attack}. In contrast, the baseline adversaries are unable to cause a blackout as effectively. An example attack is illustrated in Figure~\ref{fig:attack}. The learned adversary disconnects a critical line (highlighted by the red cross), causing three lines to exceed their thermal limits. A random adversary, on the other hand, typically causes no lines to overflow. This example further illustrates that an RL agent without adversarial training is not robust to adversarial attacks on power grid operation. \begin{table*}[htbp] \caption{The transferability of our learned adversaries across different agents on the same power grid. The row corresponds to the fixed agent used to train the adversary (see Figure~\ref{fig:method}). The column corresponds to the agent attacked by the adversary. We train three adversaries; for each one, we attack each of the agents three times and take the lowest score to measure robustness. Because we only have one agent, the clean performance has no error bars.
Otherwise, we report the standard error.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccccc} \toprule \textbf{Black-box transfer attacks} & \multicolumn{2}{c}{\textsc{Kaist}-agent~} & \multicolumn{2}{c}{\textsc{Nanyang}-agent~} & \multicolumn{2}{c}{\textsc{D3QN}-agent~} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5} \cmidrule(lr){6-7} Adversary & Reward & Steps & Reward & Steps & Reward & Steps \\ \midrule None & $47.9$ & $790$& $2.7$& $451$& $-80.5$& $128$ \\ Adv. trained using~\textsc{Kaist}-agent & $\mathbf{-81.7 \pm 0.9}$ & $\mathbf{18 \pm 0}$ & $\mathbf{-64.9 \pm 3.1}$ & $\mathbf{98 \pm 20}$ & $-91.2 \pm 0.5$ & $27 \pm 3$\\ Adv. trained using~\textsc{Nanyang}-agent & $-49.2 \pm 3.2$ & $118 \pm 6$ & $-50.6 \pm 1.1$ & $186 \pm 17$ & $-92.2 \pm 1.1$ & $27 \pm 5$ \\ Adv. trained using~\textsc{D3QN}-agent & $-23.1 \pm 0$ & $444 \pm 0$ & $-6.8 \pm 0.2$ & $409 \pm 2$ & $\mathbf{-94.1 \pm 0}$ & $\mathbf{25 \pm 0}$ \\ \bottomrule \end{tabular} } \label{tab:transfer}% \end{table*}% \begin{figure}[htbp] \centering \includegraphics[scale=0.89]{Attacks.pdf} \caption{The learned adversary disconnects line 2-3, causing three lines (marked in red) to exceed their thermal limits. } \label{fig:attack} \end{figure} \begin{figure*}[htbp] \centering \centering \includegraphics[scale=0.8]{Adv_training.pdf} \caption{We compare the \textsc{Kaist}-agent~with and without adversarial training. At $t=0$, the line between stations $9$ and $10$ was cut in both agents. At $t=1$, the model with adversarial training was able to better distribute the load, in contrast to the model without adversarial training, where the line between $4$ and $5$ is overflowing. The overflowing line causes the model without adversarial training to destabilize at $t=6$, and afterwards it soon experiences a blackout. 
On the other hand, the model with adversarial training continues operating the network.} \label{fig:defense} \end{figure*} \begin{table*}[htbp] \caption{A comparison of adversarial training using different adversaries. The row indicates the adversary used to adversarially train the agent, as described in Algorithm~\ref{alg:adv}. The column indicates the adversary used to attack the agent. For each trained model, we evaluate three times and take the lowest score to measure robustness. We also report the standard error across three trained models.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccccc} \toprule \textbf{Defending attacks}& \multicolumn{2}{c}{No Adv.} & \multicolumn{2}{c}{Random Adv.} & \multicolumn{2}{c}{Weighted Random Adv.} & \multicolumn{2}{c}{Learned Adv.} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} Adv. used in adversarial training & Reward & Steps & Reward & Steps & Reward & Steps & Reward & Steps\\ \midrule None & $41.2 \pm 3.4$ & $738 \pm 26$ & $-72.1 \pm 1.2$ & $65 \pm 13$ & $-72.9 \pm 5.1$ & $44 \pm 4$ & $-76.5 \pm 2.6$ & $46 \pm 10$\\ Random & $42.2 \pm 6.9$ & $746 \pm 59$ & $-60.2 \pm 7.8$ & $102 \pm 32$ & $-51.4 \pm 0.3$ & $141 \pm 7$ & $-85.7 \pm 0.8$& $24 \pm 8$\\ Weighted Random & $44.9 \pm 1.1$ & $776 \pm 9$ & $-69.3 \pm 7.6$ & $91 \pm 40$ & $-54.9 \pm 5.3$ & $108 \pm 22$ & $-82.8 \pm 4.5$ & $39 \pm 16$\\ Learned & $\mathbf{56.3 \pm 0.1}$ & $\mathbf{864 \pm 0}$ & $\mathbf{-24.1 \pm 3.6}$ & $\mathbf{308 \pm 32}$ & $\mathbf{-16.3 \pm 1.7}$ & $\mathbf{419 \pm 6.7}$ & $\mathbf{-39.0 \pm 4.9}$ & $\mathbf{333 \pm 16}$ \\ \bottomrule \end{tabular}% } \label{tab:advcompare}% \end{table*}% \textbf{Black-box transfer attacks.} Table~\ref{tab:transfer} shows our results on black-box transfer attacks. First of all, as shown in Table~\ref{tab:transfer}, training against a stronger agent produces a stronger adversary. Moreover, \emph{stronger adversaries are able to transfer their attacking ability}. 
As evidenced by the second row, the adversary trained against the~\textsc{Kaist}-agent produces nearly the strongest attack performance across all agents. The high transferability highlights that the adversary is not merely exploiting agent pathologies. Instead, the learned black-box adversary acquires some notion of ``critical lines'' to attack, which leads to consistently strong attack performance. In addition, because adversaries transfer their attacking ability across agents, obfuscating the code or protecting the training algorithm of the grid operation RL agent is not a sufficient defense for power system operators. We also point out that the \textsc{Kaist}-agent~performs worse than the \textsc{Nanyang}-agent~when faced with the adversary trained using either the \textsc{Kaist}-agent~or the \textsc{D3QN}-agent \ (see rows 2 and 4 in Table~\ref{tab:transfer}). This indicates that the off-the-shelf \textsc{Kaist}-agent~is likely more brittle under adversarial attacks, though it achieves higher reward in the clean environment (no adversary). This observation is important: an RL agent that appears to have \emph{stronger performance} in the absence of an adversary may actually be \emph{less robust} to potential adversarial attacks. Therefore, the grid operator must take robustness into consideration; it does not come for free with the standard RL training objective. As a side note, one potential reason that the \textsc{Nanyang}-agent \ achieves better robustness is that it is composed of two agents, and its action is chosen between them depending on the state to maximize the expected reward. Indeed, there is evidence that ensembling improves robustness in other domains~\cite{pang2019ensemblerobustness, kettunen2019lpips}. It would be an interesting future direction to study how ensemble methods can help enhance RL robustness for power system operations.
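One simple form of such an ensemble picks, at each state, the action of the member whose value estimate is highest. The sketch below illustrates this idea only; the member policies and value functions are placeholders, not the \textsc{Nanyang}-agent's actual design:

```python
def ensemble_act(state, members):
    """Select the action proposed by the ensemble member with the highest
    estimated value at this state. Each member is a (policy, value_fn) pair,
    where policy(state) -> action and value_fn(state, action) -> float."""
    best_action, best_value = None, float("-inf")
    for policy, value_fn in members:
        action = policy(state)
        value = value_fn(state, action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```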
\subsection{Learning to defend} \label{sec:defense} Finally, we demonstrate how adversarial training can help improve the robustness of grid operation RL algorithms. We train using the hyperparameters provided in~\cite{yoon2021winning}. Specifically, to stabilize the adversarial training process, we first train the \textsc{Kaist}-agent~for $100$ epochs without the adversary agent. Afterwards, we perform the adversarial training outlined in Algorithm~\ref{alg:adv} for an additional $100$ epochs. Table~\ref{tab:advcompare} shows our results. We find that a stronger adversary, in turn, helps find a more robust grid operation RL agent, so our learned adversary from Section~\ref{sec:attacks} is particularly useful. As a benefit of adversarial training, the clean performance (no adversary) of the \textsc{Kaist}-agent~also increases. It did not suffer a blackout on any of the $10$ evaluation episodes, whereas all the other models experienced at least one. Furthermore, adversarial training allows the agent to save energy costs, as evidenced by its higher mean reward across all episodes. Adversarial training exposes the RL agent to more difficult scenarios than normally seen. As a result, when attacked, it is able to quickly respond and re-balance the grid. In contrast, the agent without adversarial training struggles to do so. Eventually, these failures compound and cause more frequent blackouts. As a demonstration, we compare in Figure~\ref{fig:defense} how a standard \textsc{Kaist}-agent~and an adversarially trained \textsc{Kaist}-agent~behave under the same attack. When an adversary disconnects a critical line, the effect quickly propagates to multiple lines after 6 time steps, which soon leads to a blackout. In contrast, the RL agent with adversarial training is able to maintain safe grid operation. \section{Conclusion} In this work, we look into the robustness issues of learning-based controllers in power system operation tasks.
We first demonstrate that an adversarial RL policy can generate strong attacks that pose serious operational threats to the power grid control task. By further using adversarial training to train the RL controller, we show possible routes toward realizing safe RL controllers for safety-critical power grid operations. Furthermore, we provide a realistic use-case of adversarial training, which suggests that this defense technique has the potential to be leveraged in real-world applications. We hope to encourage future work on robustness for power networks, a crucial challenge given the growing demand for electricity. \bibliographystyle{unsrt} \section{Introduction} \label{sec:intro} As power systems shift towards renewable energy sources, system operators are facing emerging operational challenges. Human operators may find it challenging to manage the intrinsic stochasticity of renewable energy generation, motivating research into autonomous power system control schemes. In addition to such operational challenges, for active distribution networks, designing a model-based control algorithm usually requires extensive domain knowledge and a great amount of work identifying the grid topology and network parameters~\cite{deka2016estimating}, while model misspecification can significantly impact system operation. These difficulties spurred the development of model-free, data-driven control pipelines, specifically deep reinforcement learning (RL) algorithms~\cite{mnih2013playing, mnih2016asynchronous, marot2020whitepaper}. As a result, several works apply RL algorithms to various control and operation tasks in power systems \cite{kelly2020pn}, ranging from topology control~\cite{grid2op,yoon2021winning,liu2020parl} and PV control~\cite{cao_multi-agent_2020} to Volt-VAR control~\cite{liu2020two, wang_safe_2020,duan_deep-reinforcement-learning-based_2020}.
\begin{figure}[!tb] \centering \includegraphics[scale=0.39]{adv_mdp_color.pdf} \caption{We use RL to both accomplish the power system control task and find the adversarial action. The adversary learns to attack the agent by disconnecting lines and observing their effect. We require only black-box access to the agent. Attack actions and states are denoted as $a_t'$ and $s_t'$ respectively. We also make use of the adversary agent for adversarial training to provide better robustness (Section \ref{sec:advtraining_algo}). } \label{fig:method} \end{figure} However, in order to deploy such data-driven agents to safety-critical power system tasks, there is an urgent need to investigate whether the learned agents can be \emph{robust} under various operating scenarios, e.g., maintaining feasible power flow even when a power line is temporarily disconnected due to maintenance, environmental hazards, or cyber-physical attacks~\cite{rushe_borger_2021}. Previous works in machine learning have demonstrated that standard RL agents are vulnerable to observation-noise attacks. With small perturbations of the state space or minor alterations of the underlying dynamics, even fully optimized RL agents can output non-optimal actions~\cite{zhang2020robustdrl, chen2021attack}. Meanwhile, power grids have always been regarded as safety-critical infrastructure~\cite{usenergy2016}, and it is of top priority to validate the reliability of proposed algorithms under either system state uncertainties (e.g., renewable forecasting~\cite{douglas1998risk}) or system uncertainties (e.g., the N-1 security criterion~\cite{vrakopoulou2013probabilistic, marot2020whitepaper}). Therefore, it is crucial for learning-based control schemes to be robust against exogenous events, whether or not they are malicious in origin.
In this work, we focus on a typical power grid topology control task introduced by the Learning To Run a Power Network (L2RPN) challenge~\cite{yoon2021winning,liu2020parl}, a standard testbed for grid operation RL algorithm development. The designed controllers observe the underlying state of the power grid (network topology and parameters, load injections, etc.) and take economical actions (modifying the network topology, deciding generator power setpoints, etc.) to ensure feasible power flow. Robustness is difficult to achieve in this task due to the nonlinear, complex spatiotemporal dynamics within power grids. In one of our test cases on the IEEE 118-bus system~\cite{christie2000power}, there are around $3.88 \cdot 10^{76}$ network topologies and $9.81 \cdot 10^{55}$ power line actions. When power lines are disconnected, RL agents struggle to adapt; in real life such a failure would lead to a blackout. In this paper, we study the impacts of adversarial attacks on the RL agent and develop an effective defense method to enhance its robustness. We affirmatively answer two questions: 1) \emph{Is it possible to learn a harmful adversary?} 2) \emph{Does adversarial training improve agents' robustness?} At a high level, we first use RL to train an adversary to disconnect vital lines, by rewarding it for reducing the reward of the grid operation RL agent. Then, we use our learned adversary, in turn, to increase the robustness of the grid operation RL agent through adversarial training. The adversary generates training scenarios that are more difficult than those normally encountered, thus boosting the robustness of the resulting RL algorithm. An overview of our approach is given in Figure~\ref{fig:method}. We instantiate our RL agents in the Learning to Run a Power Network (L2RPN) environment~\cite{marot2020whitepaper}. The environment, similar to OpenAI Gym~\cite{openaigym}, simulates realistic power networks and provides an interface to train RL agents to operate them.
An agent deployed on the L2RPN environment manages the power network topology while subject to maintenance and environmental hazards. These hazards temporarily decommission power lines and force the agent to modify the network to prevent blackouts. \emph{We make use of the winning algorithms from recent L2RPN competitions~\cite{liu2020parl, yoon2021winning, yan2020wcci} to show the vulnerability of normal RL agents, and show the improved robustness provided by adversarial training.} Our contributions are summarized as follows. \begin{itemize}[leftmargin=*] \item We propose an agent-specific adversary MDP to learn an adversarial policy for a given agent. We demonstrate the effectiveness of our adversaries by conducting both white-box and black-box transfer attacks against the \textsc{Kaist}-agent, \textsc{PARL}-agent, and \textsc{Nanyang}-agent~(winning agents from previous L2RPN challenges), which lead to a performance drop of over 90\% for the winning agents and greatly outperform baseline attack methods. \item We use our learned adversary to improve the robustness of the RL algorithm via adversarial training. Adversarial training exposes the grid operation RL agent to more difficult scenarios than normally seen. As a result, when attacked, it is able to quickly respond and re-balance the grid. We instantiate the proposed method on the \textsc{Kaist}-agent, and show significant improvements in both clean performance (no adversary) and robust performance (under different adversaries). \end{itemize} Our work is the first attempt to improve robustness for a general class of RL algorithms, and takes a step towards safe deployment of RL controllers for power grids. Compared to existing optimization-based robust power system operation methods, which require heavy modeling and specific solution techniques~\cite{bertsimas2012adaptive, roald2017chance}, our proposed framework directly learns a mapping from power system states to operation actions.
Our paper is organized as follows. Section II describes related work. Section III describes how to frame power system operation as a Markov Decision Process (MDP). Section IV describes our proposed adversary MDP for learning a strong adversary and adversarial training to improve robustness. Section V describes our experimental results. Section VI provides a discussion and concluding remarks. \section{Power Network Operation as a Markov Decision Process}\label{sec:mdp} In this section, we first introduce the power system operation model considered in this paper. Then, we show how reinforcement learning can serve as an ideal framework for efficiently solving such operation problems. \subsection{Power System Operation Problem} We consider a power system where $\mathcal{N}, \mathcal{L}$ denote the sets of nodes and lines, with $|\mathcal{N}| = n, |\mathcal{L}| = l$. We define the set of nodes with generators as $\mathcal{G}$, and without loss of generality, we consider $|\mathcal{N}| = |\mathcal{G}| = n$. We denote the active/reactive power outputs of the generator at node $i \in \mathcal{N}$ as $p_{G, i}, q_{G, i}$ (which are controllable by the system operator). If there is no physical generator at node $i$, we simply restrict $p_{G, i} = q_{G, i} = 0$. We define the demand at node $i$ as $p_{D, i}, q_{D, i}$, which is given by the environment. We consider a power grid operational setting where the topology can be reconfigured to minimize system costs. Define the topology choice $\Omega \in \mathcal{S}_{\Omega}$, where $\mathcal{S}_{\Omega}$ is the set of all possible topologies obtainable by line switching, and define $\mathcal{L}_{\Omega}$ as the set of lines connected under topology $\Omega$. $G_{\Omega}$ and $B_{\Omega}$ are the conductance and susceptance matrices associated with topology $\Omega$. If line $l_{ij} \notin \mathcal{L}_{\Omega}$, then $G_{\Omega, i, j} = B_{\Omega, i, j} = 0$.
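To make the topology-dependent matrices concrete, the sketch below builds a susceptance matrix $B_\Omega$ for a chosen set of switched-in lines, following the standard bus-admittance construction (disconnected lines contribute zero off-diagonal entries, as in the text); the line data are made up for illustration:

```python
def susceptance_matrix(n, lines, connected):
    """Build B_Omega for the lines in L_Omega under topology Omega.

    `lines` maps a pair (i, j) to the line susceptance b_ij; `connected` is
    the set of line pairs that are switched in. Entries for disconnected
    lines stay zero, matching B_{Omega,i,j} = 0 for l_ij not in L_Omega.
    """
    B = [[0.0] * n for _ in range(n)]
    for (i, j), b in lines.items():
        if (i, j) not in connected:
            continue
        B[i][j] -= b          # off-diagonal: -b_ij
        B[j][i] -= b
        B[i][i] += b          # diagonal: sum of incident susceptances
        B[j][j] += b
    return B
```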
We follow the power grid operational cost definition in the L2RPN challenge~\cite{marot2020whitepaper}, where the total system cost over horizon $T$ is defined as \begin{equation}\label{eq:opt_obj} c(e) = \sum_{t=1}^{T} c_{\text{operations}} (t) \end{equation} and $c_{\text{operations}}(t)$ tracks the system operation cost at each time step, \[c_\text{operations}(t) = \mathcal{E}_\text{loss}(t) + \alpha \cdot \mathcal{E}_\text{redispatch}(t),\] where $\mathcal{E}_\text{loss}(t)$ is the sum of the energy loss (due to transmission line resistance) and \[\mathcal{E}_\text{redispatch}(t) \propto \sum_{i \in \mathcal{N}} |p_{G, i}(t)-p_{G, i}(t-1)|\] is the redispatching cost. Here $\alpha$ is a parameter trading off the two operational cost terms, and we set $\alpha = 1$ in experiments. The goal of the system operator is to reduce the total system operation cost, subject to the system dynamics represented by the power flow equations, \begin{subequations} \label{eqn:main} \begin{align} \min_{p_G, q_G, \Omega} \quad & c(e) \\ \text { s.t. } \quad & G(t+1), B(t+1) = f(G(t), B(t), \Omega(t))\,, \forall t \label{eq:rl_topology}\\ & g(\theta(t), v(t), p(t), q(t); G(t), B(t)) = 0\,, \forall t \label{eq:rl_pf}\\ & \theta(t) \in \Theta, v(t) \in \mathcal{V}\,, \forall t \label{eq:state_constr1}\\ & p_G(t) \in \mathcal{P}_G, q_G(t) \in \mathcal{Q}_G\,, \forall t \label{eq:gen_constr}\\ & p_{ij}(t) \in \mathcal{P}, q_{ij}(t) \in \mathcal{Q}\,, \forall t \label{eq:state_constr2} \end{align} \end{subequations} where~\eqref{eq:rl_topology} represents how the system dynamics change due to the topology control action $\Omega(t)$. \eqref{eq:rl_pf} is a concise representation of the power flow equations \cite{roald2017chance}, in which $\theta$ is the voltage angle and $v$ is the voltage magnitude across all nodes. $p$ is the real power injection vector, where $p_i = p_{G, i} -p_{D, i}$, and $q$ is the reactive power injection vector, where $q_i = q_{G, i} -q_{D, i}$. 
Finally, $p_{ij}$ and $q_{ij}$ are the real and reactive power flow on branch $\{i, j\}$. \eqref{eq:state_constr1}, \eqref{eq:state_constr2} and \eqref{eq:gen_constr} summarize the power system operation constraints on voltage angle and magnitude, line flow, and generator capacity, respectively. Note that the optimization variables include both the topology choice $\Omega(t)$ and the power dispatch $p_G(t), q_G(t)$ for all time steps $t = 1, \ldots, T$. The optimal topology control and generation dispatch problem defined in \eqref{eqn:main} is a nonlinear, mixed-integer optimization problem, which is challenging to solve via conventional optimization techniques. In particular, it includes both continuous optimization variables $p_G, q_G$ and a discrete optimization variable $\Omega(t)$ (the topology choice), which adds further complexity. In addition, even if sub-optimal solutions are acceptable, one needs the exact system model to solve the optimization, e.g., $G_{\Omega(t)}, B_{\Omega(t)}$, which is often unknown or hard to estimate in real systems~\cite{chen2020data}. \subsection{Reinforcement Learning for Power System Operation} Reinforcement learning (RL) provides a powerful paradigm for solving \eqref{eqn:main} by training a policy that maps states to actions, so as to minimize the loss function defined in~\eqref{eq:opt_obj}. For the remainder of this section, we outline how the power network operation problem defined in \eqref{eqn:main} can be modeled as an RL problem. First, we define a Markov Decision Process (MDP) as a 4-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r)$ to represent the power network operation model. 
$\mathcal{S}$ is the set of states (which include network topology, generation and load values, and line flows), $\mathcal{A}$ is the set of agent actions (described below), $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the transition probability (determined by the power flow equations), and $r$ is the reward function (described below). \begin{figure}[t] \centering \includegraphics[scale=0.425]{pn2.pdf} \caption{An example of a power network and a topology change. Power lines connect substations (rectangles), which either have loads (houses) or generators (wind or solar) attached. In state $s_t$, the dashed line is disconnected, causing the orange line to carry too much power. Action $a_t$ changes the network topology, connecting the bold line from the blue bus to the red bus. The grid then evolves to state $s_{t+1}$, where the (formerly) orange line's power is within its thermal limits.} \label{fig:bus} \end{figure} \noindent\textbf{Agent Action Space.} For our RL agents, action $a = (a_{\Omega}, a_{\mathcal{G}})$ consists of two parts: \begin{enumerate} \item \textit{Topological changes $a_{\Omega}$.} These actions alter the network topology $\Omega$ by changing the connections between power lines. Each substation has two buses, which can connect incoming power lines. A power line can be attached to either bus 1, bus 2, or neither bus. By switching lines on and off bus 1 and bus 2, an operator can effectively control the topology of the network (see Figure~\ref{fig:bus}). These are discrete actions. \item \textit{Redispatching $a_{\mathcal{G}}$.} An operator can also modify the generators' power setpoints $p_{G, i}, q_{G, i} \forall i \in \mathcal{N}$ (subject to physical constraints) across the network. These are continuous actions. 
\end{enumerate} \noindent\textbf{Reward function.} At each timestep, the reward function is defined following the L2RPN environment~\cite{marot2020whitepaper}, \begin{equation}\label{eq:rl_reward} r(s_t, a_t) = \begin{cases} C-c_{\text{operations}} (t) & \text{normal operation},\\ 0 & \text{blackout}. \end{cases} \end{equation} The reward function is designed such that, if the grid is under normal operating conditions, $C-c_{\text{operations}} (t) > 0$, and a higher reward is given to encourage feasible, more economic power dispatch. If a blackout happens, it incurs a very high operation cost (i.e., proportional to the total network load that failed to be served). In particular, we define $C=c_{\text{operations}}$ during a blackout, obtaining the reward function form in \eqref{eq:rl_reward}. To summarize, the RL formulation of the optimal power network operation problem is as follows, \begin{subequations} \label{eqn:rl_operation} \begin{align} \max_{\theta} \quad & J(\theta):= E_{\mathcal{P}, \pi} \left[\sum_{t=1}^{T} r(s_t, a_t)\right] \\ \text{s.t.} \quad & a_t = \pi_{\theta}(s_t),\\ & a \in \mathcal{A}, s \in \mathcal{S}. \end{align} \end{subequations} To solve \eqref{eqn:rl_operation}, typical RL algorithms either estimate the expected reward of a state-action pair (value-based methods) or directly update the policy to maximize the reward (policy gradient methods)~\cite{sutton2018rl}. Yet directly estimating the value is difficult to scale to a large number of states and actions, due to the combinatorial nature of state-action pairs. Indeed, we tried D3QN~\cite{wang2016d3qn}, a value-based method, and PPO~\cite{schulman2017ppo}, a policy gradient algorithm, and found PPO to be the better optimizer. Thus, we use policy gradient estimation to solve our optimization. We also note that current RL agents considered for power system operations only consider clean measurements and safe operating conditions. 
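For concreteness, the per-step reward of \eqref{eq:rl_reward} can be sketched with a toy stand-in (the cost values and the constant $C$ here are made up for illustration, not taken from the actual L2RPN scoring code):

```python
def operations_cost(energy_loss, redispatch, alpha=1.0):
    # c_operations(t) = E_loss(t) + alpha * E_redispatch(t), with alpha = 1
    return energy_loss + alpha * redispatch

def step_reward(energy_loss, redispatch, blackout, C=100.0):
    """Reward of eq. (eq:rl_reward): C - c_operations(t) in normal
    operation, and 0 on blackout (equivalently, C = c_operations then)."""
    if blackout:
        return 0.0
    return C - operations_cost(energy_loss, redispatch)

# Cheaper dispatch earns more reward; a blackout earns the minimum possible.
r_good = step_reward(energy_loss=5.0, redispatch=2.0, blackout=False)
r_costly = step_reward(energy_loss=20.0, redispatch=15.0, blackout=False)
r_blackout = step_reward(energy_loss=0.0, redispatch=0.0, blackout=True)
```

The ordering $r_\text{good} > r_\text{costly} > r_\text{blackout}$ is exactly what steers the policy toward economic, feasible dispatch.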
In the next section, we will show that such trained agents are vulnerable to adversaries, and that a compromised policy can easily output actions that cause infeasible power flow or even blackouts. \section{Proposed Attacks and Defense} In this section, we first formulate the attack problem as an adversary MDP and propose two training methods under different information settings. Then we discuss how to use the learned adversaries to enhance the robustness of the grid operation RL agent via adversarial training. \vspace{-3pt} \subsection{Learning to attack} \label{sec:adv_mdp} Given an agent policy $\theta$, we wish to learn an adversarial policy $\theta'$ such that the normally trained policy will output ``unsafe'' actions. We do so by solving the \emph{adversary MDP} $(\mathcal{S}, \mathcal{A}', \mathcal{P}_{\theta}, r')$. Similar to the grid operating agent's MDP, $\mathcal{S}$ is the set of all network states, which include network topology, generation and load values, and line flows. $\mathcal{A}'$ is the set of all available adversary actions. Specifically, the adversary's action space consists of the set of available power lines $\mathcal{L}_{adv} \subset \mathcal{L}$ to disconnect. In addition, we restrict the adversary to attacking only once every $k$ steps, to avoid incessant blackouts during training as well as to reflect practical constraints on the adversary's ability. $\mathcal{P}_\theta : \mathcal{S} \times \mathcal{A}' \times \mathcal{S} \rightarrow \mathbb{R}$ is the transition probability under grid control policy $\pi_{\theta}$ and adversary policy $\pi_{\theta'}$, and $r'$ is the adversary reward function, which is the negative of the operation agent's reward. The adversary seeks to minimize the expected reward of the grid operating agent, \begin{subequations} \label{eqn:adv_rl} \begin{align} \min_{\theta'} \quad & J(\theta, \theta') \\ \text { s.t. 
} \quad & s_{t+1} \sim \mathcal{P}(s_{t+1}|s_t, a_t, a_t')\\ & a_t' = \pi_{\theta'}(s_t)\,, a_t = \pi_{\theta}(s_t) \text{ fixed $\theta$}\\ & a' \in \mathcal{A}', a \in \mathcal{A}, s \in \mathcal{S} \end{align} \end{subequations} We consider two attack setups: \textit{white-box attacks} and \textit{black-box attacks}. Both assume the attacker can interact with the power system environment, but only white-box attacks assume access to the agent's policy parameters. \noindent\textbf{White-box attacks.} In this setup, the attacker has access to the grid operation RL agent's policy parameters. While this setting might not be realistic in practice, it represents an upper bound on the strength of learned adversaries. \noindent\textbf{Black-box transfer attacks.} In this setup, we train our own copy of the grid operation RL agent and then proceed as in the white-box attack. Because the adversary does not know the exact agent's policy, it is not as strong as a white-box attacker. However, we found that as long as the adversary is trained against a strong agent, it becomes a strong adversary and is able to \emph{transfer} its attacking ability across different agents. We demonstrate our adversary's transfer performance by training on one type of agent and attacking another type of agent (agents trained with different RL algorithms). This shows that our adversaries are not learning pathological behavior specific to a single agent, and are instead learning strong attacks with malicious physical consequences for power grids. \subsection{Learning to defend}\label{sec:advtraining_algo} Given that these adversarial attacks are real threats to power grid operations, a natural goal is to train the RL agent to be as robust and reliable as possible subject to malicious behaviors. 
Essentially, we want our data-driven agents to interact with such adversarial scenarios during the training process, so that the resulting agents are robust against possible attacks. Mathematically, the robust RL training problem is defined as \begin{equation}\label{eqn:minmax} \max_{\theta} \min_{\theta'} J(\theta, \theta')\,, \end{equation} subject to the constraints given in Equation~\eqref{eqn:adv_rl}. To solve the robust reinforcement learning problem in Eq.~\eqref{eqn:minmax}, one can iteratively train the adversary policy $\theta'$ and the agent policy $\theta$. This method is known as robust adversarial reinforcement learning (RARL)~\cite{pinto2017robust}. However, there has been empirical evidence in the deep reinforcement learning literature~\cite{gleave2020Adversarial} showing that iteratively training the agent and adversary converges slowly and does not confer greater robustness. An alternative is to fix an adversary policy $\theta'$ and learn an agent policy $\theta$ that maximizes the expected reward under the given adversary, \begin{equation} \max_{\theta} J(\theta, \theta'). \end{equation} We demonstrate that with a fixed adversary perturbing the environment during training, the robustness of the RL agent can be greatly improved. We leave solving the full max-min problem via iterative agent and adversary learning for future work. Algorithm~\ref{alg:adv} describes our adversarial training procedure. 
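In code, the fixed-adversary fine-tuning loop has the following shape. The sketch below uses a deliberately simplified toy grid (a scalar ``load'', a constant agent action) rather than the real L2RPN environment or the PPO update, purely to show the interaction pattern:

```python
class ToyGrid:
    """Minimal stand-in for the L2RPN environment (not the grid2op API)."""
    def __init__(self, blackout_threshold=10.0):
        self.load = 0.0
        self.blackout_threshold = blackout_threshold

    def reset(self):
        self.load = 0.0
        return self.load

    def apply_attack(self, extra_load):
        self.load += extra_load  # adversary stresses the grid, e.g. a line loss
        return self.load

    def step(self, relief):
        self.load = max(0.0, self.load - relief)  # agent re-balances the grid
        blackout = self.load > self.blackout_threshold
        return self.load, (0.0 if blackout else 1.0), blackout

def adversarial_training_returns(env, agent_relief, attack,
                                 epochs=3, horizon=20, k=5):
    """Roll out `epochs` episodes against a fixed adversary that attacks
    every k steps; the per-episode returns are what would drive the
    policy-gradient update of the agent."""
    returns = []
    for _ in range(epochs):
        rewards, s = [], env.reset()
        for t in range(horizon):
            if t % k == 0:
                s = env.apply_attack(attack)     # fixed adversary acts
            s, r, done = env.step(agent_relief)  # agent acts on the new state
            rewards.append(r)
            if done:                             # blackout ends the episode
                break
        returns.append(sum(rewards))             # stand-in for the PPO step
    return returns
```

An agent whose actions keep pace with the attacks survives the full horizon, while an under-powered one quickly blacks out; exposing the agent to the latter scenarios is precisely the point of adversarial training.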
\begin{algorithm} \SetAlgoLined \KwInput{Env $\mathcal{E}$; Trained adversary $\theta'$; Epochs $N$} \KwInit{Grid operation agent parameters $\theta$;} \For{$i = 1, \ldots, N$}{ Store $\theta_i \leftarrow \theta_{i-1}$ Collect trajectories $\{(s_t^i, a_t^i, r_t^i)\}$ by attacking the agent ${\theta_{i}}$ with a fixed adversary ${\theta'}$ in the environment $\mathcal{E}$ Estimate gradient $\nabla_{\theta_i} J(\theta)$ from $\{(s_t^i, a_t^i, r_t^i)\}$ Update $\theta_i$ with gradient $\nabla_{\theta_i} J(\theta_i)$ } \KwReturn{$\theta_N$} \caption{Adversarial Training}\label{alg:adv} \end{algorithm} \begin{table*}[!htbp] \caption{Mean reward and the mean number of steps before blackout of the \textsc{Kaist}-agent~and the \textsc{PARL}-agent~across $10$ test scenarios. Each row corresponds to an adversary and each column corresponds to an agent. Standard errors across three trials are reported. Steps is defined as the episode length of safe grid operation before a blackout happens. Lower reward and steps values indicate stronger performance of the adversary.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccc} \toprule \textbf{Grid Operation RL Agent} & \multicolumn{2}{c}{\textsc{Kaist}-agent~- \textit{IEEE 14-bus}} & \multicolumn{2}{c}{\textsc{Kaist}-agent~- \textit{IEEE 118-bus}} & \multicolumn{2}{c}{\textsc{PARL}-agent~- \textit{IEEE 118-bus}} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7} Adversary & Reward & Steps & Reward & Steps & Reward (thousands) & Steps \\ \midrule None & $73.8 \pm 3.1$ & $813 \pm 25$ & $41.2 \pm 3.4$ & $738 \pm 26$ & $772.9 \pm 2.3$ & $864 \pm 0$ \\ Random & $19.0 \pm 1.9$ & $343 \pm 19$ & $-67.6 \pm 0.6$ & $55 \pm 8$ & $94.7 \pm 2.6$ & $104 \pm 4$ \\ Weighted Random & $18.9 \pm 4.6$ & $322 \pm 39$ & $-69.7 \pm 1.7$ & $44 \pm 4$ & $139.9 \pm 8.2$ & $155 \pm 11$ \\ \midrule \textbf{Whitebox attack (ours)} & $\mathbf{-15.4 \pm 0.8}$ & $\mathbf{160 \pm 4}$ & $\mathbf{-80.8 \pm 1.8}$ & $\mathbf{35 \pm 6}$ & 
$\mathbf{6.0 \pm 0.8}$ & $\mathbf{7 \pm 1}$ \\ \bottomrule \end{tabular}% } \label{tab:baselinecompare}% \end{table*} We found it extremely helpful to initialize the agent parameters $\theta$ using a pretrained agent. Thus, our adversarial training procedure can be viewed as fine-tuning the parameters of the pretrained agent. We found that randomly initializing the parameters made training difficult. The agent follows a similar procedure as standard RL training, except for an additional interaction with the trained adversary at each step. More specifically, we start with an L2RPN environment $\mathcal{E}$, initial parameters of the agent $\theta_0$, and a pre-trained adversary $\theta'$. From the agent's perspective, one step in the environment is as follows: \begin{enumerate} \item Adversary observes $s_{t-1}$ and chooses action $a'_{t-1}$. \item Environment updates to $s_t$. \item Agent observes $s_t$ and chooses action $a_t$. \item Environment updates to $s_{t+1}$ and stores $(s_t, a_t, r_t)$. \end{enumerate} These rollouts are then collected and used to estimate the gradient following PPO~\cite{schulman2017ppo}. We then update $\theta_i$ in the direction of the gradient, maximizing the expected total reward. \section{Evaluation} We end the paper with experiments demonstrating the effectiveness of our proposed approach. We first describe our evaluation setup, followed by results from our attacks and our defense. Our code, which relies on the L2RPN environment~\cite{marot2020whitepaper} and the grid2op framework~\cite{grid2op}, will be made publicly available. \subsection{Environment and Evaluation} \textbf{Experimental Setup} We use two networks to evaluate our results. For demonstration and visualization purposes, we use the IEEE 14 grid. 
For our main results (Tables~\ref{tab:baselinecompare},~\ref{tab:transfer}, and~\ref{tab:advcompare}) we use a subset of lines from the IEEE 118 grid~\cite{christie2000power}, directly provided in the L2RPN package~\cite{marot2020whitepaper}. This grid is much larger, with $36$ substations and $59$ power lines, resulting in around $1.88 \cdot 10^{21}$ topologies and $5.76 \cdot 10^{17}$ power line actions. Because our grids are approaching the size of real-world power grids and there is active research on scaling deep RL algorithms~\cite{stooke2018accelerated}, it is reasonable to expect that our results will hold in the real-world regime. Each environment has its own set of test scenarios, which specify the load/generation every five minutes. Scenarios run for a maximum of 864 steps in the WCCI L2RPN challenge, or 3 days in the NeurIPS L2RPN challenge. In each environment, we allow only \emph{a subset of the lines (around $1/6$) to be attacked}, reflecting practical constraints on the adversary's power. We found that some of the lines cause an immediate blackout when decommissioned, and we removed these lines from the adversary action set. For ease of comparison, we normalized reward to the range $[-100, 100]$ (except for the NeurIPS L2RPN environment, where we did not have the necessary data to scale scores). For reference, an agent which takes no actions under no adversarial attack would receive a score of $0$, and an agent which fully optimizes the power flow would receive a score of $100$. \textbf{Our Approach and Baselines} To study the robustness of RL agents and demonstrate the effectiveness of the proposed adversarial attacks, we use the \textsc{Kaist}-agent~(the winning agent from the 2020 WCCI L2RPN challenge~\cite{yoon2021winning}), the \textsc{PARL}-agent~(the winning agent from the 2020 NeurIPS L2RPN challenge~\cite{liu2020parl}), and the \textsc{Nanyang}-agent~(the third-place agent from the 2020 WCCI L2RPN challenge~\cite{yan2020wcci}). 
We also use a \textsc{D3QN}-agent~as a baseline, which is provided by~\cite{grid2op}. We evaluate our learned adversary against both the \textsc{Kaist}-agent~\cite{yoon2021winning} and the \textsc{PARL}-agent~\cite{liu2020parl} using the provided winning policies. These agents are the strongest available, as they won their respective competitions. Each adversary is allowed to inject attacks every $k = 50$ steps (adversaries cannot immediately attack). We use the following three baselines. \begin{enumerate} \item \emph{No adversary;} \item \emph{Random adversary, proposed by~\cite{marot2020whitepaper}.} Whenever the adversary attacks, it disconnects a power line in the network uniformly at random; \item \emph{Weighted-random adversary, proposed by~\cite{omnes2021adversarial}.} Whenever the adversary attacks, it disconnects each power line with probability proportional to the maximum line power flow. \end{enumerate} When training our adversary, we use the same state representation as the \textsc{Kaist}-agent, which essentially normalizes the L2RPN state observation to a standard normal distribution. A full list of hyperparameters for the agent and adversary can be found in the appendix in Table~\ref{tab:hyperparams}. Furthermore, to demonstrate the performance of our defense method, we implement adversarial training on the \textsc{Kaist}-agent. We did not use the other agents since their training code was either not available or their performance was not comparable to the \textsc{Kaist}-agent. We compare the performance of the baseline RL agent and the RL agent with adversarial training against four different adversaries: the learned adversary and the three baseline adversaries. \subsection{Learning to attack}\label{sec:attacks} We present results for both the white-box attack and the black-box transfer attack proposed in Section~\ref{sec:adv_mdp}. \textbf{White-box attacks.} Table~\ref{tab:baselinecompare} shows our results for the proposed white-box attack. 
Note that each of the three columns corresponds to a different power grid, so results should not be compared across columns. As shown in Table~\ref{tab:baselinecompare}, though the winning agents achieve high reward under the no-adversary setting, their performance suffers a significant drop under attack. Notice that even a random attack can cause more than a 70\% performance drop across all test scenarios. This highlights the fragility of grid control RL algorithms. In addition, our proposed attack method is much stronger than the baseline attackers. For most runs, a trained PPO adversary is able to cause a blackout \emph{with a single attack}. In contrast, the baseline adversaries are unable to cause a blackout as effectively. An example attack is illustrated in Figure~\ref{fig:attack}. The learned adversary is able to disconnect the critical line (highlighted by the red cross), causing three lines to exceed their thermal limits. A random adversary, on the other hand, typically causes no lines to overflow. Again, this example further illustrates that an RL agent without adversarial training is not robust to adversarial attacks on power grid operation. \begin{table*}[htbp] \caption{The transferability of our learned adversaries across different agents on the same power grid. The row corresponds to the fixed agent used to train the adversary (see Figure~\ref{fig:method}). The column corresponds to the agent attacked by the adversary. We train three adversaries; for each one, we attack each of the agents three times and take the lowest score to measure robustness. Because we only have one agent, the clean performance has no error bars. 
Otherwise, we report the standard error.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccccc} \toprule \textbf{Black-box transfer attacks} & \multicolumn{2}{c}{\textsc{Kaist}-agent~} & \multicolumn{2}{c}{\textsc{Nanyang}-agent~} & \multicolumn{2}{c}{\textsc{D3QN}-agent~} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5} \cmidrule(lr){6-7} Adversary & Reward & Steps & Reward & Steps & Reward & Steps \\ \midrule None & $47.9$ & $790$& $2.7$& $451$& $-80.5$& $128$ \\ Adv. trained using~\textsc{Kaist}-agent & $\mathbf{-81.7 \pm 0.9}$ & $\mathbf{18 \pm 0}$ & $\mathbf{-64.9 \pm 3.1}$ & $\mathbf{98 \pm 20}$ & $-91.2 \pm 0.5$ & $27 \pm 3$\\ Adv. trained using~\textsc{Nanyang}-agent & $-49.2 \pm 3.2$ & $118 \pm 6$ & $-50.6 \pm 1.1$ & $186 \pm 17$ & $-92.2 \pm 1.1$ & $27 \pm 5$ \\ Adv. trained using~\textsc{D3QN}-agent & $-23.1 \pm 0$ & $444 \pm 0$ & $-6.8 \pm 0.2$ & $409 \pm 2$ & $\mathbf{-94.1 \pm 0}$ & $\mathbf{25 \pm 0}$ \\ \bottomrule \end{tabular} } \label{tab:transfer}% \end{table*}% \begin{figure}[htbp] \centering \includegraphics[scale=0.89]{Attacks.pdf} \caption{The learned adversary disconnects line 2-3, causing three lines (marked in red) to exceed their thermal limits. } \label{fig:attack} \end{figure} \begin{figure*}[htbp] \centering \centering \includegraphics[scale=0.8]{Adv_training.pdf} \caption{We compare the \textsc{Kaist}-agent~with and without adversarial training. At $t=0$, the line between stations $9$ and $10$ was cut in both agents. At $t=1$, the model with adversarial training was able to better distribute the load, in contrast to the model without adversarial training, where the line between $4$ and $5$ is overflowing. The overflowing line causes the model without adversarial training to destabilize at $t=6$, and afterwards it soon experiences a blackout. 
On the other hand, the model with adversarial training continues operating the network.} \label{fig:defense} \end{figure*} \begin{table*}[htbp] \caption{A comparison of adversarial training using different adversaries. The row indicates the adversary used to adversarially train the agent, as described in Algorithm~\ref{alg:adv}. The column indicates the adversary used to attack the agent. For each trained model, we evaluate three times and take the lowest score to measure robustness. We also report the standard error across three trained models.} \makebox[\textwidth][c]{ \begin{tabular}{ccccccccc} \toprule \textbf{Defending attacks}& \multicolumn{2}{c}{No Adv.} & \multicolumn{2}{c}{Random Adv.} & \multicolumn{2}{c}{Weighted Random Adv.} & \multicolumn{2}{c}{Learned Adv.} \\ \cmidrule(lr){1-1}\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9} Adv. used in adversarial training & Reward & Steps & Reward & Steps & Reward & Steps & Reward & Steps\\ \midrule None & $41.2 \pm 3.4$ & $738 \pm 26$ & $-72.1 \pm 1.2$ & $65 \pm 13$ & $-72.9 \pm 5.1$ & $44 \pm 4$ & $-76.5 \pm 2.6$ & $46 \pm 10$\\ Random & $42.2 \pm 6.9$ & $746 \pm 59$ & $-60.2 \pm 7.8$ & $102 \pm 32$ & $-51.4 \pm 0.3$ & $141 \pm 7$ & $-85.7 \pm 0.8$& $24 \pm 8$\\ Weighted Random & $44.9 \pm 1.1$ & $776 \pm 9$ & $-69.3 \pm 7.6$ & $91 \pm 40$ & $-54.9 \pm 5.3$ & $108 \pm 22$ & $-82.8 \pm 4.5$ & $39 \pm 16$\\ Learned & $\mathbf{56.3 \pm 0.1}$ & $\mathbf{864 \pm 0}$ & $\mathbf{-24.1 \pm 3.6}$ & $\mathbf{308 \pm 32}$ & $\mathbf{-16.3 \pm 1.7}$ & $\mathbf{419 \pm 6.7}$ & $\mathbf{-39.0 \pm 4.9}$ & $\mathbf{333 \pm 16}$ \\ \bottomrule \end{tabular}% } \label{tab:advcompare}% \end{table*}% \textbf{Black-box transfer attacks.} Table~\ref{tab:transfer} shows our results on black-box transfer attacks. First of all, as shown in Table~\ref{tab:transfer}, training against a stronger agent produces a stronger adversary. Moreover, \emph{stronger adversaries are able to transfer their attacking ability}. 
As evidenced by the second row, the adversary trained against the~\textsc{Kaist}-agent produces nearly the strongest attack performance across all agents. This high transferability highlights that the adversary is not merely exploiting agent pathologies. Instead, the learned black-box adversary learns some concept of ``critical lines'' to attack, and thus achieves consistently strong attack performance. In addition, because adversaries are able to transfer their attacking ability across agents, obfuscating the code or protecting the training algorithm of the grid operation RL agent is not a sufficient defense for power system operators. We also point out that the \textsc{Kaist}-agent~performs worse than the \textsc{Nanyang}-agent~when faced with the adversaries trained using the \textsc{Kaist}-agent~and the \textsc{D3QN}-agent~(see rows 2 and 4 in Table~\ref{tab:transfer}). This indicates that the off-the-shelf \textsc{Kaist}-agent~is likely more brittle under adversarial attacks, even though it achieves higher reward in the clean environment (no adversary). This observation is important: it highlights that an RL agent which appears to have \emph{stronger performance} in the absence of an adversary may actually be \emph{less robust} to potential adversary attacks. Therefore, the grid operator must take robustness into consideration, as it does not come for free with the standard RL training objective. As a side note, one potential reason that the \textsc{Nanyang}-agent~achieves better robustness is that it is actually composed of two agents, and its action is chosen from one of them under different states to maximize the expected reward. Indeed, there has been evidence showing that ensembling improves performance in other domains~\cite{pang2019ensemblerobustness, kettunen2019lpips}. It would be an interesting future direction to study how ensemble methods can help enhance RL robustness for power system operations. 
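The transfer study above boils down to a cross-evaluation over (adversary, agent) pairs, keeping the worst of several trials as the robustness score, as in Table~\ref{tab:transfer}. A schematic sketch, with a made-up deterministic rollout in place of the real episode runner:

```python
def cross_evaluate(adversaries, agents, rollout, n_trials=3):
    """Attack every agent with every adversary `n_trials` times; keep the
    lowest episode reward per pair as the robustness measure."""
    scores = {}
    for adv_name, adv in adversaries.items():
        for ag_name, ag in agents.items():
            scores[(adv_name, ag_name)] = min(
                rollout(adv, ag) for _ in range(n_trials)
            )
    return scores

# Toy deterministic rollout: reward falls with the adversary's "strength";
# the names and numbers below are illustrative, not the paper's results.
def toy_rollout(adv_strength, agent_baseline):
    return agent_baseline - adv_strength

table = cross_evaluate(
    adversaries={"random": 10.0, "learned": 60.0},
    agents={"kaist": 48.0, "d3qn": -80.0},
    rollout=toy_rollout,
)
```

Reading the resulting dictionary row by row reproduces the structure of the transfer table: a stronger adversary drags every agent's worst-case score down.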
\subsection{Learning to defend} \label{sec:defense} Finally, we demonstrate how adversarial training can help improve the robustness of grid operation RL algorithms. We train using the hyperparameters provided in~\cite{yoon2021winning}. Specifically, to stabilize the adversarial training process, we first train the \textsc{Kaist}-agent~for $100$ epochs without the adversary agent. Afterwards, we perform the adversarial training outlined in Algorithm~\ref{alg:adv} for an additional $100$ epochs. Table~\ref{tab:advcompare} shows our results. We find that a stronger adversary, in turn, helps find a more robust grid operation RL agent, so our learned adversary from Section~\ref{sec:attacks} is particularly useful. As a benefit of adversarial training, the clean performance (no adversary) of the \textsc{Kaist}-agent~also increases. It did not suffer a blackout on any of the $10$ evaluation episodes, whereas all the other models experienced at least one. Furthermore, adversarial training allows the agent to save energy costs, as evidenced by its higher mean reward across all episodes. Adversarial training exposes the RL agent to more difficult scenarios than normally seen. As a result, when attacked, it is able to quickly respond and re-balance the grid. In contrast, the agent without adversarial training struggles to do so. Eventually, these failures compound and cause more frequent blackouts. As a demonstration, we compare in Figure~\ref{fig:defense} how a standard \textsc{Kaist}-agent~and an adversarially trained \textsc{Kaist}-agent~behave under the same attack. When an adversary disconnects a critical line, the effect quickly propagates to multiple lines after 6 time steps, which soon leads to a blackout. In contrast, the RL agent with adversarial training is able to maintain safe grid operation. \section{Conclusion} In this work, we investigate the robustness of learning-based controllers in power system operation tasks. 
We first demonstrate that an adversarial RL policy can generate strong attacks which pose a series of operational threats to the power grid control task. By further using adversarial training to train the RL controller, we show possible routes for realizing safe RL controllers for safety-critical power grid operations. Furthermore, we provide a realistic use-case of adversarial training, which suggests that this defense technique has the potential to be leveraged in real-world applications. We hope to encourage future work on robustness for power networks, a crucial challenge given the growing demand for electricity. \bibliographystyle{unsrt}
Guerra is a song by the Italian rock band Litfiba. Three studio versions of the track exist: the original, which appeared on the group's first EP in 1982; the second, entirely unreleased elsewhere and completely different from the first, which appears only in the 1985 music video and closely resembles the version on the album Live in Berlin; and the third, also from 1985, a rearrangement of the second, which appears as the closing track of the studio album Desaparecido. Official versions Guerra (original version) - 4:23 - version that appeared only on the 1982 EP Litfiba. Der Krieg (Guerra) (1985 video version) - 5:44 - version that appeared only in the 1985 music video. Guerra (new version) - 5:30 - version completely rearranged in 1985 and included on the group's first album, Desaparecido. Music video Two music videos were shot for the track: the first, in 1982, shows the band in its early days playing live at the Manila in Campi Bisenzio, near Florence; the second, shot in 1985, is titled Der Krieg (an alternative title for the track that the band often used, for example on the album Live in Berlin). Notably, this new version of the video was made by director Corso Salani, at the time part of a group of young up-and-coming figures in Italian cinema.
Leeds United handed huge cup incentive after draw
Monday, 7th January 2013, 7:42 am
LEEDS United have been handed a big incentive to win their FA Cup third round replay at Birmingham with the winners being drawn out at home to Premiership big boys Tottenham. The Whites had to settle for a replay after failing to make the most of home advantage on Saturday when a Luciano Becchio goal earned them a 1-1 draw and now face the prospect of a trip to the Midlands on Tuesday, January 15 if they are to earn the lucrative fourth round tie. Birmingham were the better side for long spells with their pacy attack causing the hosts problems, but United hung in and could have snatched victory if Becchio had made the most of another great chance and if Jason Pearce's first half header had not struck the post. It was no surprise that Leeds were a little under par with a virus sweeping through the club and denying them the use of several first team players as well as manager Neil Warnock, who was not at the ground and had to stay in touch with the game via radio. Birmingham were without a number of first teamers also through suspension and injury and with only 11,447 spectators the magic of the cup was far away with a flat atmosphere present throughout. United's assistant manager Mick Jones admitted he was pleased that the Whites were still in the competition. He said: "The fact that we turned it around showed a bit of character. To come back from one down, it is a good result. "Neil was in constant touch with us during the game. He made the decision to change the team at half-time and it worked. "We hadn't created anything from wide positions. Becchio and Ross McCormack needed the ball in the box and they got it in the second half. "Luciano missed a good chance before he scored, but he doesn't freeze and went on to take his next chance so well." 
Birmingham started the sharper with Wade Elliott, Nathan Redmond and Ravel Morrison all getting in shots early on before Pearce met Ross McCormack's free-kick to send a diving header against the woodwork. Midfielder David Norris got in behind the visitors' defence twice in a good spell that followed for Leeds only for a cross to evade everyone in the area and for his angled shot soon after to be kept out by keeper Colin Doyle. Redmond sliced a shot wide when well placed as City continued to threaten on the break, but the visitors took the lead on 33 minutes as Elliott dribbled past Norris on half way and was allowed to run unchecked to the edge of the box where he then unleashed an unstoppable shot into the top corner of the net. With Sam Byram and El Hadji Diouf sent on as half-time substitutes Leeds started the second period strongly with Becchio sent clean through only to fire his volley too close to keeper Doyle. Birmingham responded with Redmond and Chris Burke going close with strikes from distance and Redmond forcing a good save from home keeper Jamie Ashdown. But it was the Whites who scored on the hour as Norris put Becchio clear again with a clever pass into the box and the Argentine striker finished confidently for his 19th goal of the season. Leeds could not capitalise on their comeback, however, as they failed to create any more chances and their opponents continued to break out well only to fail to make the most of several shooting opportunities. Birmingham also had to contend with the loss of 19-year-old defender Will Packwood, who was taken from the ground straight to hospital after breaking his leg in two places in a horror injury that followed an accidental aerial clash with Becchio. Leeds United 1 (Becchio 60) Birmingham City 1 (Elliott 33) FA Cup, round three Att: 11,447 Leeds: Ashdown, Lees, Tate (Somma 79), Pearce, Drury, Hall (Byram 45), Brown, Norris, White (Diouf 45), McCormack, Becchio. 
Birmingham: Doyle, Packwood (Hancox 67), Caldwell, Davies, Robinson, Burke, Gomis, Reilly, Elliott, Morrison (Hales 85), Redmond. Referee: Chris Foy.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,438
\section{Introduction} Future cellular communication networks are expected to support a myriad of new applications and services conceived for both traditional human-type devices and for the growing number of machine-type devices \cite{7397856}. To meet the exponential growth in connectivity and mobile traffic, new technologies are needed. Among these new technologies, the deployment of different types of access points (AP), e.g., small base-stations (SBS), pico-cells, femto-cells, relays, etc., is of particular importance, since APs can offload mobile traffic from highly congested macro-base stations (MBS) \cite{6525592}. To limit human intervention and reduce planning and maintenance costs, APs can be equipped with self-organizing capabilities \cite{sonReference}, allowing them to optimize their resource use in a distributed manner. APs normally have a lower transmit power budget and a smaller coverage range when compared to traditional MBSs. However, thanks to their denser deployment, APs benefit from the ability to consume less transmit power, leading to significant gains in power consumption as was shown in \cite{6215539,JA_VehTech}. That said, by introducing APs into the network, the problem of inter-cell-interference (ICI) is aggravated, necessitating the application of adequate resource allocation algorithms to limit the interference \cite{5441362}. The problem of ICI in self-organizing networks (SON) was extensively studied in the literature. In \cite{6945909}, the weighted sum-rate of the system is optimized through ICI coordination between SBSs. The authors adopt a blanking method where at the level of each SBS, some wireless channels are not used to mitigate the ICI. In \cite{7070660}, an algorithm for ICI coordination between SBSs based on asynchronous inter-cell signaling is proposed. The authors of \cite{7343514} propose an algorithm based on a semi-static frequency allocation to mitigate ICI and enhance the performance of cell-edge users. 
The proposed solutions of \cite{6945909,7070660,7343514} rely on explicit communication between the distributed SBSs to mitigate ICI, resulting in excessive signaling among SBSs. To limit signaling overhead, decentralized algorithms, based on reinforcement learning, are preferred. \revised{The use of reinforcement learning in wireless communications has recently garnered significant attention \cite{7476897}.} The related framework of multi-player multi-armed bandits (MAB) \cite{lattimore} has also been widely used to study multiple problems in wireless communication systems, ranging from SON \cite{6140047,6240019,7248837}, to uncoordinated spectrum access \cite{5738217,5535151,magesh,meghana}, to fast uplink grant allocation \cite{9075198}, to unmanned-aerial-vehicle positioning and path-planning \cite{8669870}. In the context of SON, in \cite{6140047,6240019}, a solution is proposed based on the stochastic MAB framework to allow SBSs to efficiently partition the available frequency resources in an effort to mitigate ICI. In \cite{7000863}, a method based on learning automata is proposed where femto-cells adjust their resource use based on the feedback received from users. In \cite{7248837}, the authors resort to the EXP3 algorithm from the adversarial MAB framework to mitigate the ICI while allowing each base-station (BS) to access multiple frequency bands. \revised{The work in \cite{8336932} proposes a data-driven approach based on the MAB framework to address the ICI problem in heterogeneous networks (HetNets).} The MAB framework was also widely used to study the opportunistic and the uncoordinated spectrum access problems. For example, in \cite{5738217}, \cite{5535151} and \cite{8676344}, the MAB model is used to study the opportunistic spectrum access problem in cognitive radio networks, where secondary users compete to access the part of the spectrum not occupied by primary users.
In addition to studying the opportunistic channel access problem, in \cite{8676344}, the authors also solve the distributed power allocation problem. In contrast to opportunistic channel access, the authors of \cite{magesh}, \cite{meghana} and \cite{8902878} employ MAB to study the uncoordinated spectrum access problem without distinguishing between the users. The distributed power control problem is studied in \cite{8902878}, and solutions are proposed based on the upper-confidence-bound (UCB) algorithm and on the $\epsilon$-greedy algorithm. In \cite{6953296}, the channel and power allocation problem in a device-to-device system is modeled using the MAB framework. A game-theoretic solution based on the potential game framework is proposed to minimize the regret of the users. With the exception of \cite{7248837}, all previous work on wireless communications solutions based on MABs assumes that each player chooses one channel at each timeslot. However, removing this assumption is expected to improve performance for the players if a suitable algorithm is formulated, especially for the case of a SON. Indeed, when an AP can access multiple channels simultaneously, both the probability of a successful transmission and the achieved reward or rate increase, allowing the AP to serve more end-users. Moreover, with the exception of \cite{magesh,6953296,meghana}, all previous work based on MABs considered a zero reward for multiple players accessing the same channel. By relaxing this assumption and adopting non-orthogonal multiple access (NOMA), system performance is expected to further improve. From an information-theoretical point of view, it is well-known that non-orthogonal user multiplexing using superposition coding at the transmitter and proper decoding techniques at the receiver not only outperforms orthogonal multiplexing, but is also optimal in the sense of achieving the capacity region of the downlink broadcast channel \cite{tse}.
As a result, NOMA emerged as a promising multiple access technology for 5G systems \cite{6666209,8869799,tvt_mjy}. NOMA allows multiple users to be scheduled on the same time-frequency resource by multiplexing them in the power domain. At the receiver side, successive interference cancellation (SIC) is performed to retrieve superimposed signals. {To limit the ICI in a SON, studying the resource allocation in the fronthaul portion of the network is of utmost importance \cite{6140047,6240019,7248837}. When coupled with optimizing the resource allocation in the backhaul link, optimizing the fronthaul portion leads to significant performance gains \cite{8891385, tvt_mjy}.} In this paper, we consider the fronthaul part of a self-organizing wireless network where multiple APs aim at organizing their uplink transmissions with a central unit in a distributed manner. Both the uncoordinated channel access and the distributed power control problems are studied. A solution based on the MAB framework, which does not necessitate any coordination or communication between APs, is proposed. The considered setting is closest to the ones studied in \cite{magesh} and \cite{got}, where a game-theoretic approach is used to solve the uncoordinated channel access problem. Our study extends that of \cite{magesh} and \cite{got} by allowing each AP to access multiple channels simultaneously and by proposing a model for the distributed power control problem. The main contributions of this paper can be summarized as follows: \begin{itemize} \item A two-phase algorithm based on the MAB framework, extending the work in \cite{got, magesh}, is proposed for the uncoordinated channel access and distributed power control problems. \item For the first phase, i.e., the uncoordinated channel access phase, in addition to considering varying channel rewards between APs, each AP is allowed to simultaneously access multiple channels. 
This is in contrast to the work in \cite{magesh} and \cite{got} where each player accesses one channel in a timeslot. Moreover, each channel can accommodate multiple APs at once using NOMA, leading to a multi-player MAB problem with varying player rewards, multiple plays and non-zero reward on collision. \item For the power control phase, varying power level rewards between APs are considered and an algorithm to solve the power control problem on each channel is proposed. \item The proposed technique is shown to achieve a sublinear regret of $\mathcal{O}(\log^2 T)$. In addition, simulation results validating the theoretical results and the performance of the proposed technique are presented. \item To the best of our knowledge, this is the first work that studies the uncoordinated channel access and the distributed power control problems in a SON network, using both NOMA and the multi-player MAB framework with varying channel rewards across users, multiple plays, and non-zero reward on collision. \end{itemize} The rest of this paper is organized as follows. The system model is presented in section \ref{sec:systemModel}. In sections \ref{sec:solution}, \ref{sec:regret}, \ref{sec:expPhase} and \ref{sec:matPhase}, the proposed algorithm is presented along with an analysis of the system-wide regret. Simulation results are presented in section \ref{sec:results} and conclusions in section \ref{sec:conc}. \section{System Model}\label{sec:systemModel} Consider the uplink of a cellular system as shown in Fig.\ \ref{fig:sysModel} where $K$ APs aim to organize their communications with an MBS serving as gateway to the core network, over $M$ available wireless channels, in an uncoordinated manner. The communication occurs over a finite time horizon $T$ that may not be known in advance to the APs. At each timeslot $t$, every AP $k$ chooses $N$ channels, adjusts its transmission power, and transmits over the chosen channels. 
\revised{Note that the proposed solution can be easily extended to the case where each AP $k \in \mathcal{K}$ chooses $N_k$ channels at each timeslot, where $1\leq N_k \leq M$.} We assume that NOMA is employed, enabling multiple APs to choose the same channel for communication and achieve a non-zero rate. \begin{figure}[!h] \begin{center} \vspace{-0.3cm} \includegraphics[height=5.5cm,keepaspectratio]{imgN/SystemModelMAB.pdf} \vspace{-0.2cm} \caption{\label{fig:sysModel} {System Model.}} \end{center}\vspace{-0.8cm} \end{figure} That said, if two or more APs choose the same channel, the received power levels of these APs must be different at the receiving BS level in the core network, to enable SIC decoding at the receiver side. To ensure the reception of different received power levels for the signals transmitted by the APs, we generalize the uplink NOMA power allocation model introduced in \cite{8085106}, where for a constant SINR requirement, $L$ received power levels, ensuring the SINR requirement for $L$ users scheduled on the same channel, are calculated. In this work, we extend the study of \cite{8085106} to allow for $L$ distinct SINR requirements per channel, $\boldsymbol{\Gamma}=\{\Gamma_1,\ldots,\Gamma_L\}$, sorted by decreasing order. { Note that allowing for distinct SINR levels inherently encompasses the special case of constant SINR levels.} An AP $k$ choosing SINR requirement $\Gamma_l$ over channel $m$ achieves the following uplink data rate: \begin{equation} R_{k,m,l}=\log_2\left(1+\Gamma_l\right), \end{equation} where $\Gamma_l$ is given by: \begin{equation}\label{eq:gammaLevel} \Gamma_l=\frac{v_l}{V_l+N_0B_c}. \end{equation} In Eq.\ (\ref{eq:gammaLevel}), $v_l$ is the received power level of AP $k$, the expression of which is given in Section \ref{subsec:powerModel}, $N_0$ is the noise power spectral density and $B_c$ the channel bandwidth. 
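As a quick numerical check of the rate and SINR expressions above (Eq.\ (\ref{eq:gammaLevel})), the following Python sketch computes the per-level SINRs and uplink rates under descending-order SIC decoding. The power and noise values are purely illustrative and are not taken from the system under study.

```python
import math

def sinr_levels(v, n0_bc):
    """Per-level SINR of Eq. (2) under descending SIC decoding:
    level l sees interference V_l, the sum of all weaker (not yet
    decoded, hence not cancelled) levels."""
    gammas = []
    for l, v_l in enumerate(v):
        interference = sum(v[l + 1:])  # V_l = sum_{l'>l} v_{l'}
        gammas.append(v_l / (interference + n0_bc))
    return gammas

def rates(gammas):
    """Uplink rate per level, R = log2(1 + Gamma_l) (Eq. (1))."""
    return [math.log2(1.0 + g) for g in gammas]

# Illustrative numbers (not from the paper): three received power
# levels in watts, sorted in decreasing order, and noise power
# N0 * Bc = 1e-3 W.
v = [8e-3, 3e-3, 1e-3]
g = sinr_levels(v, 1e-3)
r = rates(g)
```

The weakest level is decoded last and therefore sees no residual interference, which is why its SINR reduces to $v_L/(N_0B_c)$.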
At the receiver side, when the AP transmissions are received with different power levels, SIC is employed to decode the received messages in a descending order. In other words, the AP choosing the highest SINR requirement $\Gamma_1$, and consequently the highest received power level $v_1$, suffers interference from all APs choosing a lower SINR requirement. Once decoded, the signal of the AP choosing $\Gamma_1$ is removed using SIC before decoding the remaining messages. Hence, variable $V_l$ of Eq.\ (\ref{eq:gammaLevel}) is the power level of the interfering transmissions, not canceled with SIC, expressed as: $V_l=\sum_{l'=l+1}^L v_{l'}$. To limit the decoding complexity at the receiving BS in the core network, as well as the error propagation in SIC, the number of APs allowed to access a channel and achieve a non-zero rate is limited to $\beta$, such that $\beta M\geq K N$. \revised{Note that in the case of a varying number of chosen channels across users, this last condition becomes $\beta M\geq \sum_{k\in\mathcal{K}} N_k$.} It is assumed that when an AP $k$ accesses a channel $m$, $k$ knows the total number of APs currently accessing channel $m$. No \emph{a priori} knowledge of the channel gain experienced over each channel is assumed. Moreover, these channel gains are distinct for each AP. To solve the channel and power allocation problems in an uncoordinated manner, we proceed in two steps, the first, of length $T_C$, dedicated to channel allocation and the second, of length $T_P$, dedicated to power allocation. Note that both $T_C$ and $T_P$ may not be known to the APs. \subsection{Uncoordinated channel allocation} \label{subsec:channelModel} To allow each AP to access $N$ channels simultaneously in a NOMA manner, the problem of uncoordinated multiple access is modeled as a stochastic multi-player MAB problem with multiple plays and non-zero reward on collision. 
The set of players is the set of APs $\mathcal{K}$ and the set of arms is the set of channels $\mathcal{M}$. The action of each AP $k$ at each timeslot $t$ is $\boldsymbol{a}_k^t\in\{0,1\}^{M\times 1}$ such that $a_k^t (m)=1 $ if AP $k$ pulls channel $m$ at timeslot $t$. Moreover, $\sum_{m=1}^M a_k^t (m)=N,~\forall k\in\mathcal{K}.$ The action space of each AP $k$, $\mathcal{A}_k$, consists of all possible combinations of $N$ channels, hence $|\mathcal{A}_k|=\binom{M}{N}$. Let $\boldsymbol{a}^t=\{\boldsymbol{a}_1^t,\ldots,\boldsymbol{a}_K^t\}$ denote the strategy profile of all APs in timeslot $t$. \revised{Upon choosing an action $\boldsymbol{a}_k^t \in \boldsymbol{a}^t$, AP $k$ receives the following average reward:} \begin{equation} g_k^t(\boldsymbol{a^t})=\sum_{m=1}^M a_k^t (m) \mu_M(k,m, k_m), \end{equation} where $k_m$ is the number of APs choosing channel $m$ at timeslot $t$. Variable $\mu_M(k,m, k_m)$ is the mean reward of AP $k$ over channel $m$ when $k_m$ APs access it. \revised{Note that the actual value of the received reward by AP $k$ when choosing channel $m$ at timeslot $t$ is drawn from a uniform distribution with mean $\mu_M(k,m, k_m)$.} We assume that the mean reward of AP $k$ when accessing channel $m$ alone is equal to the normalized \revised{average} channel gain of AP $k$ over channel $m$, i.e., \begin{equation}\label{eq:rewardNorm} \mu_M(k,m,1)=h_{k,m}/\mu_M^{max}, \end{equation} where $h_{k,m}$ is the \revised{average} channel gain of AP $k$ over channel $m$ and $\mu_M^{max}=\max\limits_{k\in\mathcal{K}, m \in \mathcal{M}}h_{k,m}$. {Note that it is assumed that the BS at the core network performs channel estimation on the received signals from all APs. 
Hence, the average channel gains $h_{k,m}, \forall k \in \mathcal{K}, \forall m \in \mathcal{M}$ are assumed to be perfectly known by the receiving BS.} For $1<k_m\leq \beta$, the mean reward of an AP must account for the added interference brought by the $(k_m-1)$ other APs scheduled on the same channel $m$. Ideally, the mean reward should take into account the interference brought by each particular AP. However, that would result in a prohibitive complexity since any channel, for each $1<k_m\leq \beta$, would have $\binom{K-1}{k_m}$ distinct reward values. To simplify the analysis, in this work, we assume that the mean reward for $1<k_m\leq \beta$ is a decreasing function of the number of interfering APs on the same channel. In other words, \begin{equation} \mu_M(k,m,k_m)={\mu_M(k,m,1)}/{k_m}. \end{equation} When $k_m>\beta$, $\mu_M(k,m, k_m)=0$. The normalization in Eq.\ (\ref{eq:rewardNorm}) leads to: $\mu_M(k,m, k_m) \in [0,1]$ for every AP $k \in \mathcal{K}$, on every channel $m\in\mathcal{M}$ and for every number of APs $k_m \in [\beta]$. Hence, $g_k^t(\boldsymbol{a^t}) \in [0,N]$. \revised{In addition to receiving the achieved rewards, we assume that the feedback received by each AP $k$ from the MBS includes the total number of APs simultaneously accessing its chosen channels. In other words, for all channels $m$ such that $a_k^t(m)=1$, AP $k$ receives the total number of APs accessing channel $m$, i.e., receives $k_m=\sum_{k' \in \mathcal{K}}a_{k'}^t(m)$. {Note that this assumption is necessary for the correct estimation of the mean rewards, allowing APs to learn and settle on the optimal allocation. Moreover, since $\beta$ is normally kept small, feeding back to each AP $k$ the total number of APs simultaneously accessing its chosen channels requires only a few bits.}} APs make their decisions in a distributed manner, observing neither the channels chosen by other APs nor the rewards received by other APs.
Each AP $k$ can only observe the reward it gets on each of its chosen channels. Our aim is to propose a distributed algorithm allowing APs to organize their transmissions on the available channels, without communicating with each other, in such a way as to maximize the sum reward of the system. By definition, the action profile yielding the highest sum reward $\boldsymbol{a^*}$ is given by: \begin{equation} \boldsymbol{a^*}=\argmax_{\boldsymbol{a}\in \mathcal{A}} \sum_{k=1}^K\sum_{m=1}^M a_k (m) \mu_M(k,m, k_m), \end{equation} \revised{where $\mathcal{A}$ is the action space of all APs, i.e., $\mathcal{A}=\prod_{k\in \mathcal{K}}\mathcal{A}_k$.} The expected regret incurred during $T_C$ is the difference between the achieved reward when playing $\boldsymbol{a^*}$ at all timeslots, and the reward actually achieved by the learning players during the $T_C$ timeslots \cite{lattimore}. In our case, it is given by: \begin{equation} \revised{\bar R= T_C\sum_{k,m} a^*_k (m) \mu_M(k,m, k^*_m) -\mathbb{E}\left(\sum\limits_{t=1}^{T_C}\sum_{k,m} a_k^t (m) \mu_M(k,m, k_m^t)\right),} \end{equation} where $k^*_m$ is the optimal number of APs scheduled over channel $m$ under $\boldsymbol{a^*}$, and $a_k^t(m)$ and $k_m^t$ denote, respectively, the action of AP $k$ and the number of APs on channel $m$ at timeslot $t$. After $T_C$ timeslots, the APs receive a signal from the core network to terminate the channel allocation phase. At the end of the channel allocation phase, at most $\beta$ APs are scheduled over each channel $m\in \mathcal{M}$. Moreover, as an outcome of this first phase, each AP $k$ computes an estimate of its average channel gain over each channel $m$, denoted by $\hat{h}_{k,m}$. \subsection{Distributed Power Allocation}\label{subsec:powerModel} Once settled over their chosen channels, the APs receive a signal from the core network to move to the power allocation stage. Since different frequency bands are allocated to different channels, power allocation over each channel $m$ can be done independently of other channels $m' \in \mathcal{M}\setminus{\{m\}}$.
In the following, we will focus on the power allocation over channel $m\in \mathcal{M}$, where the set of scheduled APs is $\mathcal{K}_m$. To simplify the distributed power allocation, we assume that each AP chooses, for each of its allocated channels, one SINR level among a fixed set of $L\geq \beta$ available SINR levels, with $\boldsymbol \Gamma$ being the set of pre-determined available SINR levels. The AP then calculates the necessary power level $v_l$ for the chosen SINR level $\Gamma_l$. For successful SIC decoding, each power level can support one AP only. In other words, if multiple APs choose the same power level, SIC fails and the signals of all $K_m$ APs are not decodable. Inspired by \cite{8085106}, it can be shown that, to satisfy Eq.\ (\ref{eq:gammaLevel}), the power level $v_l$ must be set as: \begin{equation}\label{eq:powerLevel} v_l=\Gamma_l N_0 B_c \prod\limits_{l'=l+1}^L\left(\Gamma_{l'}+1\right). \end{equation} \revised{Note that the expression of $v_l$ is obtained by proceeding backwards and by induction from $v_L=\Gamma_L N_0B_c$.} {The expression of $v_l$ ensures the SINR requirement $\Gamma_l$ when considering that an AP chooses each subsequent SINR requirement, hence the worst case scenario. Note that our setting allows for similar SINR levels. However, for similar or distinct SINR levels, the power levels chosen by APs need to be distinct to allow for SIC decoding.} To ensure SIC stability, i.e., successful decoding of the received signals in descending order \cite{7982784}, the distributed power control scheme must ensure that the power of each signal scheduled for decoding at the BS is larger than the received power of the interference generated by the combination of the remaining signals, i.e., $v_l> V_l$. From Eq.\ (\ref{eq:powerLevel}), the power level $v_l$ depends on the associated SINR level $\Gamma_l$ as well as on the interfering SINR levels $\Gamma_{l'}, l'=l+1,\ldots,L$. 
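The recursion of Eq.\ (\ref{eq:powerLevel}) can be verified numerically. The sketch below, using illustrative SINR targets and noise power (not values prescribed by the paper), computes the received power levels and checks both that each level attains its SINR target under SIC, as in Eq.\ (\ref{eq:gammaLevel}), and that SIC stability $v_l > V_l$ holds for this particular choice of targets.

```python
def power_levels(gammas, n0_bc):
    """Received power levels of Eq. (8):
    v_l = Gamma_l * N0*Bc * prod_{l'>l} (Gamma_{l'} + 1),
    with `gammas` sorted in decreasing order (Gamma_1, ..., Gamma_L)."""
    L = len(gammas)
    v = []
    for l in range(L):
        prod = 1.0
        for lp in range(l + 1, L):
            prod *= gammas[lp] + 1.0
        v.append(gammas[l] * n0_bc * prod)
    return v

# Illustrative SINR targets (decreasing) and noise power N0*Bc in watts;
# these are assumptions for the sketch, not values from the paper.
gammas = [4.0, 2.0, 1.0]
n0_bc = 1e-3
v = power_levels(gammas, n0_bc)

# Each level must recover its SINR target under SIC (Eq. (2)) ...
for l, g in enumerate(gammas):
    V_l = sum(v[l + 1:])  # residual interference from weaker levels
    assert abs(v[l] / (V_l + n0_bc) - g) < 1e-9
# ... and SIC stability requires v_l > V_l at every level.
assert all(v[l] > sum(v[l + 1:]) for l in range(len(v)))
```

For these targets the recursion gives $v_3=\Gamma_3 N_0B_c$, $v_2=\Gamma_2 N_0B_c(\Gamma_3+1)$ and $v_1=\Gamma_1 N_0B_c(\Gamma_2+1)(\Gamma_3+1)$, i.e., strictly decreasing received power across decoding order.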
\begin{proposition}To ensure SIC stability, the available SINR levels must satisfy: \begin{equation}\label{eq:sicStability} \Gamma_l>\frac{2^{(L-l-1)}\times\Gamma_L}{\prod\limits_{l'=l+1}^L\left(\Gamma_{l'}+1\right)}. \end{equation} \end{proposition} \begin{proof} By proceeding backwards, to get $v_{L-1}>v_L$, the following must hold: \begin{equation}\label{eq:sicStabL-1} \Gamma_{L-1}>\frac{\Gamma_L}{\Gamma_L+1}=\frac{2^{(L-(L-1)-1)}\Gamma_L}{\Gamma_L+1}. \end{equation} Similarly, to get $v_{L-2}>v_{L-1}+v_L$, the following must hold: \revised{ \begin{equation} \Gamma_{L-2}>\frac{\Gamma_{L-1}(\Gamma_L+1)+\Gamma_L}{(\Gamma_{L-1}+1)(\Gamma_L+1)}\overset{\text{(a)}}>\frac{\frac{\Gamma_L}{\Gamma_L+1} (\Gamma_L+1)+\Gamma_L}{(\Gamma_{L-1}+1)(\Gamma_L+1)}>\frac{2\Gamma_L}{(\Gamma_{L-1}+1)(\Gamma_L+1)}=\frac{2^{(L-(L-2)-1)}\Gamma_L}{\prod\limits_{l'=L-1}^L(\Gamma_{l'}+1)}, \end{equation} where (a) follows from Eq.\ (\ref{eq:sicStabL-1}).} To get $v_l>V_l=\sum\limits_{l'=l+1}^L v_{l'}$, assume that Eq.\ (\ref{eq:sicStability}) holds. By induction, to get $v_{l-1}>\sum\limits_{l'=l}^L v_{l'}$, we must have: \begin{equation} \Gamma_{l-1}>\frac{2^{(L-(l-1)-1)}\Gamma_L}{\prod\limits_{l'=l}^L(\Gamma_{l'}+1)}. \end{equation} \end{proof} Knowing the available SINR levels, each AP $k \in \mathcal{K}_m$ calculates the associated received power levels using Eq.\ (\ref{eq:powerLevel}). Then, using the estimated average channel gain over $m$, $\hat{h}_{k,m}$, AP $k \in \mathcal{K}_m$ calculates the necessary transmit power for each power level $v_l$, $p_{k,m,l}$, according to: \begin{equation} p_{k,m,l}={v_l}/{\hat{h}_{k,m}^2}. \end{equation} Each AP is assumed to have a power budget per channel $P_k^{m}$. Hence, AP $k$ can transmit over channel $m$ using power level $v_l$ if $p_{k,m,l}\leq P_k^m$. AP $k \in \mathcal{K}_m$ builds the set of possible power levels, $\mathcal{P}_{k,m}^a$, where $ \mathcal{P}_{k,m}^a=\{v_l | ~ p_{k,m,l}\leq P_k^m, l\in[L]\}$. 
Note that the sets of possible power levels are AP-dependent because of their dependence on the estimated average channel gain of each AP, $\hat{h}_{k,m}$, and on the AP power budget. The power allocation among APs on the same channel consists of APs choosing SINR levels, and hence received power levels, in a distributed manner and without any inter-AP coordination. Since APs choosing the same SINR level result in unsuccessful SIC decoding, the APs must aim at organizing their transmissions using different SINR levels. For this purpose, the power allocation on each channel is modeled using the MAB framework with single play and zero reward on collision. Over channel $m$, the set of players is $\mathcal{K}_m$ and the set of arms is the set of power levels \revised{$\mathcal{VL}=\{v_l, l=1,\ldots,L\}$}. Since $L=|\mathcal{VL}|\geq \beta \geq K_m=|\mathcal{K}_m|$, a solution where each AP accesses one power level, without collision, is achievable. At each timeslot, each AP $k \in \mathcal{K}_m$ chooses an action $a_{k,m}^t$, i.e., a power level $v_l \in \mathcal{P}_{k,m}^a$, and transmits using $p_{k,m,l}$. The action space of AP $k$ is $\mathcal{P}_{k,m}^a$. Let $\boldsymbol{a}_m^t$ denote the strategy chosen by all APs in $\mathcal{K}_m$ over channel $m$ at timeslot $t$. \revised{Upon choosing action $a_{k,m}^t \in \boldsymbol{a}_m^t$, AP $k$ receives the following average reward on channel $m$:} \revised{\begin{equation} g_{k,m}^t(\boldsymbol{a}_m^t)=\mu_P(k, m, a_{k,m}^t)\eta(\boldsymbol{a}_m^t), \end{equation}} where $\mu_P(k, m, a_{k,m}^t)$ is the reward of AP $k$ when choosing $a_{k,m}^t$. \revised{Note that the actual value of the received reward by AP $k$ when choosing action $a_{k,m}^t$ on channel $m$ at timeslot $t$ is drawn from a uniform distribution with mean $\mu_P(k, m, a_{k,m}^t)$.} The mean reward $\mu_P(k, m, a_{k,m}^t)$ is chosen in a way to strike a trade-off between SINR maximization and transmit power minimization.
Therefore, it is set as: \begin{equation} \mu_P(k,m, a_{k,m}^t=v_l)= w_k^1 \frac{\Gamma_l}{\Gamma_{max}}+w_k^2\frac{1}{p_{k,m,l}~\max\limits_{k,m,l}(\frac{1}{p_{k,m,l}})}, \end{equation} where $w_k^1$ and $w_k^2$ are weight parameters relative to AP $k \in \mathcal{K}_m$ satisfying $w_k^1+w_k^2=1$. \revised{The variable $\Gamma_{max}$ is the highest available SINR, i.e., $\Gamma_{max}=\Gamma_1$.} Note that $\mu_P(k,m, a_{k,m}^t) \in [0,1]$ and is not known by the AP in advance. \revised{Let $\mathcal{N}^m_{v_l}(\boldsymbol{a}_m^t)$ be the set of APs choosing power level $v_l$ at timeslot $t$, i.e., $\mathcal{N}^m_{v_l}(\boldsymbol{a}_m^t)=\{k \in \mathcal{K}_m ~|~ a_{k,m}^t=v_l\}$. The variable $\eta(\boldsymbol{a}_m^t)$ is the collision indicator of the strategy profile of all APs, $\boldsymbol{a}_m^t$, i.e., $\eta(\boldsymbol{a}_m^t)= 1$ if $|\mathcal{N}^m_{v_l}(\boldsymbol{a}_m^t)|\leq 1, \forall ~ v_l \in \mathcal{VL}$, and 0 otherwise.} \revised{Note that no feedback regarding the value of the collision indicator $\eta(\boldsymbol{a}_m^t)$ is necessary. In fact, in the case of collisions, the MBS does not have to return any feedback to the colliding APs, who will assume a zero reward is achieved. When no collision takes place, the MBS returns only the value of the mean reward to the AP, since the collision indicator is equal to one in the case of no collision.} \vspace{0.05cm} APs choose power levels in a distributed manner without any coordination, with each AP only observing the reward received on the chosen power level. The proposed power allocation scheme aims at maximizing the sum reward of the system.
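The trade-off encoded by the mean reward $\mu_P$ defined above can be illustrated with a small sketch. For self-containedness, the inverse-power term here is normalized over a single AP's own candidate levels rather than over all $(k,m,l)$ triples as in the system model; the SINR targets, transmit powers and weights are hypothetical values chosen for the example.

```python
def mean_power_reward(gammas, powers, w1=0.5, w2=0.5):
    """Weighted trade-off between normalized SINR and normalized
    inverse transmit power, in the spirit of the mean reward mu_P.
    powers[l] is the transmit power needed to reach SINR target
    gammas[l]; w1 + w2 must equal 1 so rewards stay in [0, 1]."""
    g_max = max(gammas)                       # Gamma_max = Gamma_1
    inv_p_max = max(1.0 / p for p in powers)  # normalizer for 1/p
    return [w1 * g / g_max + w2 * (1.0 / p) / inv_p_max
            for g, p in zip(gammas, powers)]

# Hypothetical values: higher SINR targets require more transmit power.
gammas = [4.0, 2.0, 1.0]
powers = [0.4, 0.1, 0.02]  # transmit powers p_{k,m,l} in watts
mu = mean_power_reward(gammas, powers)
# mu[0] is driven by the SINR term, mu[-1] by the power-saving term.
```

With equal weights, the lowest SINR level can end up with the highest reward when its power saving dominates, which is exactly the tension the weights $w_k^1, w_k^2$ let each AP tune.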
Let $\boldsymbol{a}_{m}^{*P}$ be the action profile yielding the highest sum reward over channel $m$: \revised{\begin{equation} \boldsymbol{a}_{m}^{*P}=\argmax\limits_{\boldsymbol{a}_m\in \mathcal{P}^a_m}\sum\limits_{k\in \mathcal{K}_m}\mu_P(k,m, a_{k,m})\,\eta(\boldsymbol{a}_m), \end{equation}} \revised{where $\mathcal{P}^a_m$ is the action space of all APs scheduled on channel $m$, i.e., $\mathcal{P}^a_m=\prod_{k\in \mathcal{K}_m}\mathcal{P}^a_{k,m}$.} The expected regret incurred during the time horizon $T_P$ over all $M$ channels is given by: \begin{equation} \revised{\bar{R}_p=\sum\limits_{m\in\mathcal{M}}\left \{T_P\sum\limits_{k\in \mathcal{K}_m}\mu_P(k,m, a^{*P}_{k,m})\right.-\left.\mathbb{E}\left(\sum\limits_{t=1}^{T_P}\sum\limits_{k\in \mathcal{K}_m}\mu_P(k,m, a_{k,m}^t)\,\eta(\boldsymbol{a}_m^t)\right)\right\}.} \end{equation} \section{Proposed Solution}\label{sec:solution} \subsection{Proposed Algorithm for the Channel Allocation Problem} Since the time horizon $T_C$ is not necessarily known in advance, the proposed solution, presented in Algorithm \ref{alg1}, proceeds in epochs, each epoch consisting of three phases, namely, \textit{exploration, matching and exploitation}. The exploration phase aims at estimating the previously unknown means of each channel, as well as the number of APs competing for system resources. During this phase, each AP uniformly accesses one channel at a time to estimate its mean reward. AP $k$ accessing channel $m$ gets as feedback the achieved reward on $m$ as well as the total number of APs simultaneously accessing channel $m$. This phase runs for a constant number of timeslots given by $T_C^0$. Upon termination, all APs have an estimate $\boldsymbol{\hat{\mu}}_M$ of the means of the channels and of the channel gain experienced over each channel. Each AP also calculates an estimate of the number of APs $\hat{K}$, as was done in \cite{meghana}.
These estimated means and number of APs are used in the second phase of the algorithm where APs play a non-cooperative game with the aim of maximizing the achieved sum rewards. The estimated reward means are taken to be the actual utilities achieved in the matching phase. In other words, after choosing a channel $m$, if the received reward is non-zero, AP $k$ assumes that this reward is equal to: \begin{equation} \label{eq:avReward} u_k(m)= \hat{\mu}_M(k,m,k_m). \end{equation} The dynamics of this matching phase, adopted from \cite{pareto}, are described in section \ref{subsec:matching}. The matching phase runs for $c_1 l^{1+\delta}$ frames, \revised{where $c_1$ and $\delta$ are constants and $l$ is the epoch number}. The third and final phase is an exploitation phase in which APs settle on the channels that resulted in the best performance in the previous matching phase. The exploitation phase runs for $c_2 2^l$ timeslots, \revised{$c_2$ being a constant}. \begin{algorithm}[!ht] \caption{}\small \begin{algorithmic}[1] \STATEx \textbf{Initialization:} Set $\hat{\mu}_M(k,m, k_m)=0, ~ \forall k\in \mathcal{K}, m \in \mathcal{M}, k_m \in [\beta]$. Set $b^t_k=0, ~ \forall k \in \mathcal{K}$. Let $\epsilon >0$ and $c\geq KN$. \FOR {$l=1,\ldots,L_C$} \STATEx \textbf{1- Exploration Phase:} \FOR {$t=1:T_C^0$} \STATE Choose one channel $m \in \mathcal{M}$ uniformly. \STATE \begin{varwidth}[t]{\linewidth}Receive the achieved reward $x_{k}^t(m)$, and the total number of APs, $k_m$, accessing channel $m$ simultaneously.\end{varwidth} \STATE \begin{varwidth}[t]{\linewidth}$W^t_k(m,k_m)=W^{t-1}_k(m,k_m)+x_{k}^t(m)$,\\ $co_k^t(k_m)=co_k^{t-1}(k_m)+1$.\end{varwidth} \IF {$k_m>1$} \STATE $b^t_k=b^{t-1}_k+1$\label{stepAlg:14} \ENDIF \ENDFOR \STATE \begin{varwidth}[t]{\linewidth} Estimate means: $\hat{\mu}_M(k,m,k_m)=\frac{W^t_k(m,k_m)}{co_k^t(k_m)}, \forall ~ k_m \in [\beta]$.
\end{varwidth} \STATE\begin{varwidth}[t]{\linewidth} Estimate the number of APs according to: $\hat{K}=\min\left\{\textrm{round}\left(\frac{\log\left(\frac{T_C^0-b^t_k}{T_C^0}\right)}{\log\left(1-\frac{1}{M}\right)}+1\right),\beta M\right\}$. \end{varwidth} \STATEx\begin{varwidth}[t]{\linewidth} \textbf{2- Matching Phase:} for the next $c_1l^{1+\delta}$ frames, play according to the dynamics described in section \ref{subsec:matching}. \end{varwidth} \STATE\begin{varwidth}[t]{\linewidth} If $S_k=C$, choose the action to play according to Eq.\ (\ref{contentChoose}). If $S_k=D$, choose the action according to Eq.\ (\ref{discontentChoose}). \end{varwidth} \STATE \begin{varwidth}[t]{\linewidth} If the achieved reward for some chosen channel ${u_{k}(m)}$, found from Eq.\ (\ref{eq:avReward}), is 0, the AP becomes discontent as per Eq.\ (\ref{zeroTransition}). \end{varwidth} \STATE \begin{varwidth}[t]{\linewidth} If $\boldsymbol{a}_k\neq \boldsymbol{\bar{a}}_k$ or $\boldsymbol{u}_k\neq \boldsymbol{\bar{u}}_k$ or player $k$ is discontent, the state transitions happen according to Eq.\ (\ref{cdTransition}). \end{varwidth} \STATE\begin{varwidth}[t]{\linewidth} Each AP keeps a counter of the number of times each action $\boldsymbol{a_k'}$ was played and resulted in it being content: \begin{equation} F_{k}^l(\boldsymbol{a_k'})=\sum_{t=1}^{c_1l^{1+\delta}}\mathbb{I}\left(\boldsymbol{a}_k^t=\boldsymbol{a_k'}, S_k^t=C\right), \end{equation} with $\mathbb{I}$ being the indicator function.\end{varwidth} \STATEx \textbf{3- Exploitation phase:} for $c_2 2^l$ timeslots: \STATE Play the action $\boldsymbol{a_k^{l*}}=\argmax\limits_{\boldsymbol{a}_k\in \mathcal{A}_k} F_{k}^l(\boldsymbol{a}_k)$. \ENDFOR \end{algorithmic} \label{alg1} \end{algorithm} \normalsize \subsection{Matching Dynamics}\label{subsec:matching} Each AP $k$ is associated with a state $[\boldsymbol{\bar{a}}_k, \boldsymbol{\bar{u}}_k, S]$.
The baseline action of AP $k$ is $\boldsymbol{\bar{a}}_k \in \{0,1\}^{M\times 1}$, such that $\sum_{m=1}^M\bar{a}_{k}(m)=N$. The baseline utility of AP $k$ is $\boldsymbol{\bar{u}}_k, \textrm{such that} ~ |\boldsymbol{\bar{u}}_k|=N$. Variable $S \in \{C,D\}$ is the mood of AP $k$ and reflects whether $k$ is content or discontent with the current action and utility. At each frame of the matching phase, each AP chooses an action according to the game dynamics and receives a reward that depends on the collective choices of all the APs. Define $u_{k,\textrm{max}}=\max\limits_{\boldsymbol{a}_k\in \mathcal{A}_k}\sum_{m=1}^Ma_{k}(m)\mu_M(k,m,k_m)$, where $u_{k,\textrm{max}}$ is the highest reward achievable by AP $k$, given the estimated number of APs $\hat{K}$. At each frame $t$ during the matching phase, AP $k$ adheres to the following dynamics to decide on the action to choose: \begin{itemize} \item A content AP plays its baseline action with high probability: \begin{equation}\label{contentChoose} p_k^{\boldsymbol{a}_k}= \begin{cases} \frac{\epsilon^c}{|\mathcal{A}_k|-1}, \quad &\textrm{if} ~ \boldsymbol{a}_k\neq \boldsymbol{\bar{a}}_k, \\ 1-\epsilon^c, \quad &\textrm{if} ~ \boldsymbol{a}_k= \boldsymbol{\bar{a}}_k, \end{cases} \end{equation} where $\epsilon>0$ is a small perturbation and $c$ is a constant satisfying $c\geq KN$.% \item A discontent AP chooses its action uniformly at random: \begin{equation}\label{discontentChoose} p_k^{\boldsymbol{a}_k}=\frac{1}{|\mathcal{A}_k|}, \quad \forall ~ \boldsymbol{a}_k \in \mathcal{A}_k.
\end{equation} \end{itemize} \revised{In Eq.\ (\ref{contentChoose}) and (\ref{discontentChoose}), $p_k^{\boldsymbol{a}_k}$ is the probability with which AP $k$ chooses action $\boldsymbol{a}_k$.} After deciding on the action and observing the reward $u_k(m)$ for chosen channels, the state transition of each AP $k$ occurs according to: \begin{itemize} \item If $\boldsymbol{a}_k= \boldsymbol{\bar{a}}_k$ and $\boldsymbol{u}_k= \boldsymbol{\bar{u}}_k$, a content AP remains content: \begin{equation}\label{contentTransition} [\boldsymbol{\bar{a}}_k, \boldsymbol{\bar{u}}_k, C] \rightarrow [\boldsymbol{\bar{a}}_k, \boldsymbol{\bar{u}}_k, C]. \end{equation} \item If $u_k(m)=0$ on any of its $N$ chosen channels, AP $k$ becomes discontent with probability one: \begin{equation}\label{zeroTransition} [\boldsymbol{\bar{a}}_k, \boldsymbol{\bar{u}}_k, C/D] \rightarrow [\boldsymbol{a}_k, \boldsymbol{u}_k, D]. \end{equation} \item If $\boldsymbol{a}_k\neq \boldsymbol{\bar{a}}_k$ or $\boldsymbol{u}_k\neq \boldsymbol{\bar{u}}_k$ or player $k$ is discontent, the state transitions occur according to: \begin{equation}\label{cdTransition} [\boldsymbol{\bar{a}}_k, \boldsymbol{\bar{u}}_k, C/D] \rightarrow \begin{cases} [\boldsymbol{a}_k, \boldsymbol{u}_k, C] \quad \textrm{w.p.}~ \epsilon^{u_{k,\textrm{max}}-\sum\limits_{n=1}^N u_{k,n}}, \\ [\boldsymbol{a}_k, \boldsymbol{u}_k, D] \quad \textrm{w.p.}~ 1-\epsilon^{u_{k,\textrm{max}}-\sum\limits_{n=1}^N u_{k,n}}. \end{cases} \end{equation} \end{itemize} \subsection{Proposed Solution for the Distributed Power Allocation} A simplified version of Algorithm \ref{alg1} can be used to solve the power allocation problem over each channel $m$. The solution is divided into three phases: \begin{enumerate} \item Exploration phase: This phase runs for $T^0_P$ timeslots and aims at estimating the reward of each power value. During this phase, each AP chooses each of its possible power levels, i.e., power levels in $\mathcal{P}^a_{k,m}$, uniformly at random.
Upon termination, APs have estimates of the reward associated with each power value, denoted by $\boldsymbol{\hat\mu}_P$. \item Matching phase: In this phase, APs play a non-cooperative game according to the dynamics presented in Section \ref{subsec:matching}, after replacing $\mathcal{A}_k$ in Eq.\ (\ref{contentChoose}) and (\ref{discontentChoose}) by $\mathcal{P}^a_{k,m}$. Each AP keeps a counter of the number of times each action was played and resulted in content behavior. \item Exploitation phase: During this phase, each AP $k$ exploits the action, i.e., the power level, that resulted in the most content behavior during the matching phase. \end{enumerate} \section{Regret Analysis}\label{sec:regret} The time horizon of the channel allocation phase can be lower bounded by \cite{got}: \begin{equation} T_C\geq\sum_{l=1}^{L_C-1}(T_C^0+c_1l^{1+\delta}+c_2 2^{l})\geq c_2(2^{L_C}-2), \end{equation} where $L_C$ is the total number of epochs occurring within $T_C$ and upper bounded by: \begin{equation} L_C\leq \log\left({T_C}/{c_2}+2\right). \end{equation} Similarly, the number of epochs, $L_P$, occurring within the time horizon $T_P$ dedicated to the power allocation stage is upper bounded by $L_P\leq \log(T_P/c_2+2).$ \subsection{Regret in the Exploration Phase} In the exploration phase of the channel allocation, each AP samples channels uniformly to get estimates of their means. Even though the purpose of this work is to assign to each AP $N$ channels at each timeslot, the number of channels sampled by each AP at a timeslot is set to one in the exploration phase. The expected regret incurred by all APs in the exploration phase of the channel allocation, $R_C^1$, can be upper bounded by: \begin{equation} R_C^1\leq \sum_{l=1}^{L_C}KNT_C^0\leq KNT_C^0 \log\left({T_C}/{c_2}+2\right).
\end{equation} Similarly, the expected regret incurred by all APs in the exploration phase of the power allocation, $R_P^1$, can be upper bounded by: \begin{equation} R_P^1\leq\sum\limits_{m=1}^M\sum\limits_{l=1}^{L_P}K_m T_P^0\leq K T_P^0\log(T_P/c_2+2). \end{equation} \subsection{Regret in the Matching Phase} The expected regret in the matching phase of the channel allocation, $R_C^2$, can be upper bounded by: \begin{equation} R_C^2\leq \sum_{l=1}^{L_C}KNc_1l^{1+\delta}\leq KNc_1\log^{2+\delta}\left({T_C}/{c_2}+2\right). \end{equation} Similarly, the expected regret in the matching phase of the power allocation, $R_P^2$, can be upper bounded by: \begin{equation} R_P^2\leq \sum_{m=1}^{M}\sum_{l=1}^{L_P}K_mc_1l^{1+\delta}\leq Kc_1\log^{2+\delta}\left({T_P}/{c_2}+2\right). \end{equation} \subsection{Regret in the Exploitation Phase} In the exploitation phase of epoch $l$ of the channel allocation, each AP $k$ plays the action that it played most often while content in the matching phase of epoch $l$. The exploitation phase fails in two cases: \begin{enumerate} \item If the exploration phase of epoch $l$ fails: This happens with a probability $\leq 4(M\beta)^2 e^{-l}$ as shown in Lemma \ref{lemma2}. \item If the most played action of the matching epoch differs from the optimal action: This happens with a probability $\leq A_1 e^{-l^{1+\delta}}$ as shown in Lemma \ref{lemma6}. \end{enumerate} The expected regret incurred by all APs in the exploitation phase can be upper bounded by: \begin{equation} \begin{aligned} R_C^3&\leq \sum_{l=1}^{L_C}KNc_22^l\left(4(M\beta)^2 e^{-l}+A_1 e^{-l^{1+\delta} }\right)\leq A_3, \end{aligned} \end{equation} \revised{where $A_1, A_3$ are constants.} Similarly, the regret incurred by the APs in the exploitation phase of the power allocation is $R_P^3\leq A_3$.
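The bounds above are easy to evaluate numerically; the snippet below sketches this for illustrative parameter values (the chosen $T_C$, $T_C^0$, $c_1$, $c_2$ and $\delta$ are assumptions, not results from the paper):

```python
import math

def regret_bounds(K=4, N=2, T_C=10**7, T_C0=20000, c1=3000, c2=5000, delta=0.1):
    """Evaluate the exploration- and matching-phase regret upper bounds:
    R_C^1 <= K*N*T_C^0 * log(T_C/c2 + 2) and
    R_C^2 <= K*N*c1 * log^(2+delta)(T_C/c2 + 2)."""
    log_epochs = math.log(T_C / c2 + 2)   # upper bound on the epoch count L_C
    R1 = K * N * T_C0 * log_epochs        # exploration-phase regret bound
    R2 = K * N * c1 * log_epochs ** (2 + delta)  # matching-phase regret bound
    return R1, R2
```

Both bounds grow poly-logarithmically in the horizon $T_C$, which is the source of the overall $\mathcal{O}(\log^{2+\delta} T)$ regret.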
\subsection{Regret of the Proposed Technique} \begin{theorem} The expected regret of the proposed allocation solution can be upper bounded as: \begin{equation} R\leq R_C^1+ R_C^2+R_C^3+R_P^1+ R_P^2+R_P^3=\mathcal{O}\left(\log^{2+\delta}(T)\right). \end{equation} \end{theorem} \section{Exploration Phase}\label{sec:expPhase} The exploration phase is performed so that APs learn estimates of the channel mean reward in the channel allocation phase, and of the power level mean reward in the power allocation phase. Moreover, by keeping track of the number of times each channel was accessed with one or more other APs in the channel allocation phase, the APs can estimate the total number of APs in the system. In this section, we find the minimum length of the exploration phase ensuring an accurate estimation of both the reward means and the number of APs. \subsection{Estimation of the Reward Means} Since the estimation may not always be perfect, the result of the assignment with the estimated means ($\boldsymbol{\hat{\mu}}_M$ and $\boldsymbol{\hat{\mu}}_P$) might differ from the result of the assignment calculated with the true means ($\boldsymbol{{\mu}}_M$ and $\boldsymbol{{\mu}}_P$). However, if the estimation inaccuracy is kept small as in \cite{magesh} and \cite{got}, the result of the assignment would not be affected. \begin{lemma}\label{lemma1} Let $J_M^1$ and $J_M^2$ be the sum rewards achieved by the best and the second-best channel assignments, and let $\Delta_M=\frac{J_M^1-J_M^2}{2KN}$. \vspace{0.04cm} Moreover, let $J_P^1$ and $J_P^2$ be the sum rewards achieved by the best and the second-best power allocations on each channel $m$, and let $\Delta_P=\frac{J_P^1-J_P^2}{2K_m}$.
If the difference between the estimated and the correct reward means satisfies: \begin{equation} \begin{aligned}\label{deltaC} |\mu_M(k,m, k_m)-\hat{\mu}_M(k,m, k_m)|< \Delta_M, \forall k\in\mathcal{K}, m\in\mathcal{M}, k_m\in[\beta], \end{aligned} \end{equation} \begin{equation}\label{deltaP} \begin{aligned} |\mu_P(k,m,v_l)-\hat{\mu}_P(k,m,v_l)|< \Delta_P, \forall k\in\mathcal{K}_m, m\in\mathcal{M}, v_l\in\mathcal{VL}, \end{aligned} \end{equation} then the best assignment result does not change due to the estimation inaccuracy. \end{lemma} \begin{proof} See Appendix\ \ref{appendix1}. \end{proof} Next, we upper bound the probability of error, i.e., the probability of having channel reward estimates (resp. power level reward estimates) that do not satisfy the condition in\ (\ref{deltaC}) (resp. (\ref{deltaP})) in the exploration epoch $l$. We also provide a lower bound of the length of the exploration epoch $T_{\boldsymbol{\hat{\mu}_M}}$ in the channel allocation phase, and $T_P^0$ in the power allocation phase. \begin{lemma}\label{lemma2} If $T_{\boldsymbol{\hat{\mu}_M}}= \left\lceil{\frac{2Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}}\right\rceil$, \vspace{0.05cm} all players have an estimate of the channel means satisfying the condition in\ (\ref{deltaC}), with probability $\geq 1-\gamma_{e,l}^M$, \revised{ where $\gamma^M_{e,l}$ is the probability of error in the $l^{\textrm{th}}$ exploration phase of the uncoordinated channel access.} Moreover, $ \gamma^M_{e,l}\leq 4(M\beta)^2 e^{-l}$. For the power allocation exploration phase, if $T_P^0= \left\lceil{\frac{2Le^{\left(\frac{\beta-1}{L-1}\right)}}{\Delta_P^2}}\right\rceil$, all players have an estimate of the power level means satisfying the condition in\ (\ref{deltaP}), with probability $\geq 1-\gamma^P_{e,l}$, \revised{where $\gamma^P_{e,l}$ is the probability of error in the $l^{\textrm{th}}$ exploration phase of the power allocation, } upper bounded by $ 4\beta L e^{-l}$.
\end{lemma} \begin{proof} See Appendix\ \ref{appendix2}. \end{proof} We now turn our attention to finding the minimum length of the exploration phase in the channel allocation stage ensuring an accurate estimate of the number of APs $\hat{K}$. \subsection{Estimating the number of APs} For AP $k$, $b_k^t$ found in step\ \ref{stepAlg:14} of Algorithm\ \ref{alg1} denotes the number of timeslots player $k$ was not the sole occupier of some channel $m$ up to timeslot $t$. \begin{lemma}\label{lemma3} If the length of the exploration epoch in the channel allocation step satisfies: \begin{equation}\label{eq:t_kChannel} T_{\hat{K}}\geq \left\lceil2.08\log{\left(\frac{2}{\eta}\right)}M^2e^{2\left(\frac{M\beta-1}{M-1}\right)} \right\rceil, \end{equation} then all APs have an estimate of the number of APs $\hat{K}$ satisfying $\hat{K}=K$ with probability higher than $1-\eta$, \revised{where $\eta$ is the probability of error in the estimation of the number of APs.} \end{lemma} \begin{proof} See Appendix\ \ref{appendix3}. \end{proof} \subsection{Length of the Channel Allocation Exploration Phase} To ensure an accurate estimate of the channel reward means and of the number of APs, the minimum length of the exploration phase in the channel allocation solution, $T_C^0$, must satisfy the conditions in Lemma \ref{lemma2} and Lemma \ref{lemma3}. Hence, the following must hold: \begin{equation} \begin{aligned} T_C^0=\max&\left\{\left\lceil\frac{2Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}\right\rceil, \right. \left. \left\lceil 2.08\log{\left(\frac{2}{\eta}\right)}M^2e^{2\left(\frac{M\beta-1}{M-1}\right)} \right\rceil\right\}. \end{aligned} \end{equation} \section{Matching Phase}\label{sec:matPhase} The matching phase of the channel allocation solution aims at reaching a final assignment in which every AP accesses $N$ channels, such that the achieved sum reward is maximized.
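As a concrete illustration of the trial-and-error dynamics of Section \ref{subsec:matching}, the per-AP state update can be sketched as follows; this is a simplified single-action-per-AP version, and the dictionary-based state, the utility function and all parameter values are assumptions made for illustration only:

```python
import random

def matching_step(ap, A, eps, c, u_max, utility, rng):
    """One frame of the content/discontent dynamics for a single AP.
    `ap`: dict with baseline action 'a', baseline utility 'u', mood 'C'/'D'.
    `utility(a)`: realized reward of action a (depends on the other APs)."""
    if ap['mood'] == 'C':
        # content: baseline w.p. 1 - eps^c, experiment otherwise (contentChoose)
        if rng.random() < 1 - eps ** c:
            a = ap['a']
        else:
            a = rng.choice([x for x in A if x != ap['a']])
    else:
        # discontent: uniform over the action set (discontentChoose)
        a = rng.choice(A)
    u = utility(a)
    if u == 0:
        # zero reward -> discontent with probability one (zeroTransition)
        ap.update(a=a, u=u, mood='D')
    elif a == ap['a'] and u == ap['u'] and ap['mood'] == 'C':
        pass  # aligned content AP keeps its baseline (contentTransition)
    else:
        # accept the new (action, utility) pair w.p. eps^(u_max - u) (cdTransition)
        mood = 'C' if rng.random() < eps ** (u_max - u) else 'D'
        ap.update(a=a, u=u, mood=mood)
    return ap
```

Run repeatedly, a single AP facing a fixed environment settles on the utility-maximizing action and stays content there most of the time, mirroring the stability argument developed next.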
The dynamics presented in section \ref{subsec:matching} and adopted in the matching phase induce a Markov chain over the state space $\mathcal{Z}=\prod_{k=1}^K\{\mathcal{A}_k\times [0,1]^{N\times 1} \times \{C,D\}\}$. Let $P^\epsilon $ denote the transition matrix of the regular perturbed Markov chain $\mathcal{Z}$. The work in \cite{pareto} guarantees that, when playing according to these dynamics, the optimal state, i.e., the one maximizing the sum rewards, is played most often. The proof relies on the theory of resistance trees for regular perturbed Markov chains \cite{evolution}. The dynamics used in this paper differ from those in \cite{pareto} in two aspects: \begin{enumerate} \item If AP $k$ receives a reward equal to $0$ on some channel $m$, AP $k$ is discontent with probability one. In \cite{pareto}, the game is assumed to be interdependent, which means that it is not possible to partition APs into two groups that do not interact with each other. However, this property does not hold in the considered setting as shown in \cite{got}. Therefore, as in \cite{got}, to characterize the stable states of the unperturbed chain when $\epsilon=0$, a player with 0 reward on some channels is discontent with probability one. \item For the transition probabilities between content and discontent in (\ref{cdTransition}), instead of using $\epsilon^{N-\sum\limits_{n=1}^Nu_{k,n}}$, we use $\epsilon^{u_{k,\textrm{max}}-\sum\limits_{n=1}^Nu_{k,n}}$, since the maximum utility achievable by each AP $k$ is $u_{k,\textrm{max}}$. \end{enumerate} Next, the recurrent states of $\mathcal{Z}$ are characterized. \begin{lemma}\label{lemma5} Let $D^0$ denote the set of states where all APs are discontent. Moreover, let $C^0$ denote all singleton states where all APs are content and their baseline actions and utilities are aligned. As proved in \cite{pareto}, the only recurrent states of $\mathcal{Z}$ are $D^0$ and all singletons in $C^0$.
\end{lemma} The resistance of moving from one recurrent state to the other being similar to \cite{pareto}, the stochastic potential of any state $z\in C^0$ is of the form: \begin{equation} \zeta(z)=c[|C^0|-1]+\sum\limits_{k=1}^K\left( u_{k,\text{max}}-\sum\limits_{m=1}^Ma_k(m)\hat{\mu}_M(k,m,k_m)\right). \end{equation} From Theorem 1 of \cite{pareto}, the stable state is the one minimizing the stochastic potential, hence the one maximizing the achieved sum reward. This stable state is guaranteed to be played the majority of times for a small enough perturbation $\epsilon$ \cite{got}, \cite{pareto}. In the exploitation phase, each AP plays the action that was played most often while content; hence, the stable state is expected to be played with high probability. Next, the probability of error in the matching epoch $l$ is found. Let $\pi$ denote the stationary distribution of the Markov chain $\mathcal{Z}$ and let $\boldsymbol{z^*}=[\boldsymbol{\bar{a}^*}, \boldsymbol{\bar{u}^*}, C^K]$ denote the optimal state. According to \cite{got}, $\pi(\boldsymbol{z^*})>1/2$ for a small enough perturbation $\epsilon$. The following lemma finds the probability of error in the matching phase of the $l^\text{th}$ epoch, $\delta_{m,l}$. \begin{lemma}\label{lemma6}Let $\boldsymbol{a}^{(l)}$ denote the action that was most played in some epoch $l$. As proved in \cite{magesh}, the probability of error in the matching phase in epoch $l$, $\delta_{m,l}$, is upper bounded by: \begin{equation} \delta_{m,l}=\text{Pr}(\boldsymbol{a^*}\neq \boldsymbol{a}^{(l)})\leq A_0\norm{\phi}_{\pi}\exp\left(\frac{-\theta^2\pi(\boldsymbol{z^*})c_1l^{1+\delta} }{72T_m(1/8)}\right), \end{equation} where $A_0$ is a constant, $\phi$ is the probability distribution of the initial state played in epoch $l$, $\norm{\cdot}_{\pi}$ is the $\pi$-weighted norm, and $T_m(1/8)$ is the mixing time of the Markov chain $\mathcal{Z}$ with an accuracy of 1/8 \cite{chernoff}.
\end{lemma} The analysis of the matching phase of the power allocation solution is similar to the one given above and is omitted due to space constraints. \section{Simulation Results}\label{sec:results} Extensive simulations of the proposed algorithm were conducted to validate its performance. The following simulation parameters were chosen: $K=4, M=4, N=\beta=L=2, B_c= 2.5 ~\textrm{MHz}, c_1=3000, c_2=5000, \epsilon = 5 \times 10^{-5}, \gamma=0$. The available SINR values are $\boldsymbol\Gamma=\{24, 4.77\}\, \text{(dB)}$ leading to achieved rates of 20 and 5 Mbps, respectively. For the channel allocation stage, the parameter $c$ used in the matching phase (cf. Section \ref{subsec:matching}) is set as: $ c = KN$, whereas for the power allocation stage $c= K_m$ for each channel $m\in \mathcal{M}$. {Two of the APs are assumed to have a power budget of 1 W per channel, while the remaining two have a power budget of 2 W per channel.} Additional simulation parameters are given in Table\ \ref{tab1} \cite{3gpp}.
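As a sanity check, the quoted rates of 20 and 5 Mbps follow from the Shannon capacity over one $B_c=2.5$ MHz channel; assuming this is how the rates were obtained (the mapping is not stated explicitly), a short computation confirms the numbers:

```python
import math

def rate_mbps(sinr_db, bw_mhz=2.5):
    """Shannon rate (Mbps) on a bw_mhz channel at the given SINR (dB)."""
    return bw_mhz * math.log2(1 + 10 ** (sinr_db / 10))

# Gamma = {24, 4.77} dB on a 2.5 MHz channel -> approximately 20 and 5 Mbps
rates = [rate_mbps(g) for g in (24.0, 4.77)]
```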
{\renewcommand{\arraystretch}{1.1} \begin{table}[!htb] \small \caption{Simulation parameters.} \centering \vspace{-0.2cm} \begin{tabular}{|c|c|} \hline Cell Radius $R_d$ & 150 m\\ \hline Overall Transmission Bandwidth & 10 MHz\\ \hline Number of channels & 4\\ \hline Number of APs & 4\\ \hline Power Budget per AP & \multirow{2}{*}{$\{1,1,2,2\}$ (W)}\\ per channel $P_{(.)}^m$ &\\ \hline Available SINR Requirements & $\boldsymbol{\Gamma}=\{24, 4.77\} \text{(dB)}$ \\ \hline \multirow{2}{*}{Distance Dependent Path Loss} & $128.1 + 37.6 \log_{10}(d) \text{(dB)},$ \\ &$d \:\text{in km}$ \\ \hline {Receiver Noise Density} & {$4\times 10^{-18}$ mW/Hz}\\ \hline \end{tabular}\label{tab1}\vspace{-0.2cm} \end{table}} \normalsize \vspace{-0.4cm} \subsection{Estimation Accuracy of the Exploration Phase}\label{subsec:estimation} \vspace{-0.6cm} \begin{figure}[!h] \begin{center} \begin{subfigure}{.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/meansError-0.4-eps-converted-to.pdf} \caption{\label{fig:fig0a} } \end{subfigure}% \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/K_error-0.4-eps-converted-to.pdf} \caption{\label{fig:fig0b} } \end{subfigure} \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/expError0.4-eps-converted-to.pdf} \caption{\label{fig:fig0c} } \end{subfigure} \vspace{-0.2cm} \caption{\label{fig:fig0} {Estimation error as time progresses in the channel allocation stage for (a) the estimation of the rewards, (b) the estimation of the number of APs. (c) Comparison of the estimation error as a function of the epoch index in the channel allocation stage for the estimation of the rewards.}} \end{center} \end{figure} \vspace{-0.8cm} First, we evaluate the estimation accuracy of the exploration phase in the channel allocation stage.
As shown in Fig.\ \ref{fig:fig0a} and Fig.\ \ref{fig:fig0b}, the estimates of both the reward means and the total number of APs converge rather quickly to the correct values. Since the estimates converge quickly, a version of the proposed algorithm in which the exploration phase length is divided by the epoch index was also tested. The estimation error of this decreasing-length version was compared against that of the constant-length version. Fig.\ \ref{fig:fig0c} plots the channel rewards estimation error for both versions. Although the constant-length version outperforms the decreasing-length one, the estimation error achieved by both methods is lower than $1.1\times 10^{-2} \%$, hence negligible. As for the estimation of the number of APs, both versions estimate $\hat{K}$ without error once convergence is reached. For the power allocation stage, the power level rewards estimation also converges quickly to a negligible error value.
\vspace{-0.4cm} \subsection{Performance Analysis} \vspace{-0.6cm} \begin{figure}[!h] \begin{center} \begin{subfigure}{.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/regretChannelConstant-eps-converted-to.pdf} \caption{\label{fig:fig1a} } \end{subfigure}% \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/regretChannel-eps-converted-to.pdf} \caption{\label{fig:fig1b} } \end{subfigure} \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/regretPower-eps-converted-to.pdf} \caption{\label{fig:fig1c} } \end{subfigure} \vspace{-0.2cm} \caption{\label{fig:fig1} {Accumulated regret as time progresses (a) for the channel allocation phase with a constant exploration phase length, (b) for the channel allocation phase with a decreasing exploration phase length, (c) for the power allocation stage.}} \end{center} \end{figure} \revised{Fig. \ref{fig:fig1} shows the average accumulated regret as a function of time in the channel allocation stage for both the constant and the decreasing length exploration phase versions. The results show that the average accumulated regret for both versions increases with time as $\mathcal{O}(\log(t)^2)$. More specifically, the regret incurred for the constant length exploration phase version is bounded between $7000 \log(t)^2$ and $22000\log(t)^2$, as shown in Fig. \ref{fig:fig1a}. The regret incurred for the decreasing length exploration phase version is bounded between $4000 \log(t)^2$ and $7000\log(t)^2$. In fact, most of the regret is accumulated during the exploration phase where APs choose a channel uniformly at random. 
Hence, decreasing the length of the exploration phase lowers the value of the accumulated regret as shown in Fig.\ \ref{fig:fig1b}, without jeopardizing the estimation accuracy as was shown in Section \ref{subsec:estimation}.} \revised{The regret incurred on all channels during the power allocation stage is bounded between $100 \log(t)^2$ and $400\log(t)^2$, as shown in Fig. \ref{fig:fig1c}. The lower regret observed during the power allocation stage, when compared to the channel allocation stage, results from the smaller number of APs competing for a smaller number of arms. In fact, on each channel $m \in \mathcal{M}$ during the power allocation stage, the number of competing APs is $K_m\leq \beta=2$, while the number of arms or power levels is $L=2$. In contrast, during the channel allocation stage, the number of players is $K=4$ with $\binom{M}{N}=6$ available arms. } \revised{\begin{remark} To provide insight into the accumulated regret as a function of time in seconds, and the time duration needed to reach convergence, assume that a subcarrier spacing of 240 kHz \cite{etsi} is considered, resulting in a timeslot duration equal to 62.5 $\mu$s. For the uncoordinated channel access part of the solution, convergence to the optimal allocation is first reached at the fourth epoch, which takes place from $0.45\times 10^6$ to $0.6\times 10^6$ timeslots approximately. In terms of time duration in seconds, convergence is reached in $0.45\times 10^6\times 62.5 \times 10^{-6}=28.125$ seconds. For the uncoordinated power control part, convergence is reached from the first epoch, i.e., at around $0.1 \times 10^5$ timeslots, or 0.625 seconds with a timeslot duration of 62.5 $\mu$s. \end{remark}} In Fig.\ \ref{fig:fig2}, we compare the performance of the proposed method with a technique based on the UCB algorithm proposed in \cite{8902878} and similar to the one proposed in \cite{8676344}, denoted by Two-Dimensional UCB.
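For background, the index rule underlying such UCB baselines can be sketched as follows; treating each (channel set, power vector) combination as one arm is what the Two-Dimensional UCB benchmark does, while the function itself is a generic UCB1 sketch, not the exact implementation of \cite{8902878}:

```python
import math

def ucb1_select(counts, sums, t):
    """UCB1 arm selection: pick the arm maximizing
    empirical mean + sqrt(2 ln t / n); unplayed arms are tried first.
    In the Two-Dimensional UCB baseline an 'arm' would be one
    (channel set, power vector) combination."""
    best, best_idx = -float('inf'), 0
    for i, (n, s) in enumerate(zip(counts, sums)):
        idx = float('inf') if n == 0 else s / n + math.sqrt(2 * math.log(t) / n)
        if idx > best:
            best, best_idx = idx, i
    return best_idx
```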
In the Two-Dimensional UCB method, channel and power allocation are conducted at the same time, using the UCB algorithm, by considering all possible combinations of the channels and the power levels. For the considered setting, the number of arms in the Two-Dimensional UCB method is hence $\binom{M}{N}\times L^N=24$ arms. \begin{figure}[!h] \begin{center} \begin{subfigure}{.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/rate-eps-converted-to.pdf} \caption{\label{fig:fig2a} } \end{subfigure}% \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/power-eps-converted-to.pdf} \caption{\label{fig:fig2b} } \end{subfigure} \begin{subfigure}{0.33\columnwidth} \centering \includegraphics[height=4.5cm,keepaspectratio]{imgN/ee-eps-converted-to.pdf} \caption{\label{fig:fig2c} } \end{subfigure} \vspace{-0.2cm} \caption{\label{fig:fig2} { Performance comparison as a function of time of (a) the achieved rate, (b) the total transmit power, (c) energy efficiency.}} \end{center} \end{figure} \vspace{-0cm} \revised{In Fig.\ \ref{fig:fig2a}, the achieved rate is plotted as a function of time. Both methods converge relatively quickly to the highest achievable rate, with small variations for the Two-Dimensional UCB technique. The sharp falls in the achieved rate of the proposed method are due to the exploration phase during each epoch of the power allocation stage where APs choose the power levels uniformly at random, {causing collisions and leading to zero rates}.} \revised{The total transmit power used by the APs as a function of time is shown in Fig.\ \ref{fig:fig2b}. While both methods converge to the same highest achievable rate, the power used by our proposed method is significantly lower than the one needed by the Two-Dimensional UCB method. \revised{This means that the UCB-based method does not lead APs to learn the optimal allocation and converges to a sub-optimal resource partitioning among the APs. 
In other words, our proposed method achieves a better allocation for the channel and power when compared to the UCB-based method.} Moreover, our proposed method has performance guarantees in terms of regret and optimality, while the Two-Dimensional UCB method \cite{8902878} does not.} \revised{To check the combined effect of rate and power on the performance of the compared methods, the achieved energy efficiency (EE), which is the ratio of the achieved rate to the used power, is plotted in Fig.\ \ref{fig:fig2c}. Once again, the sharp falls in the performance of our proposed method are due to the exploration phase in each epoch of the power allocation stage. \revised{Fig.\ \ref{fig:fig2c} shows that our proposed method greatly outperforms the UCB-based method, by achieving more than a twofold increase in the EE. This is due to our method converging to the optimal allocation when the UCB-based technique converges to a sub-optimal allocation requiring more transmit power as shown by Fig.\ \ref{fig:fig2b}.}} \section{Conclusion}\label{sec:conc} In this paper, the uncoordinated channel and power allocation problems in a SON were studied. The considered framework allows each AP to choose $N$ channels at each timeslot, and allows each channel to simultaneously accommodate multiple APs in a NOMA manner. The considered problem was modeled using the multi-player MAB framework, with varying user rewards, multiple plays, and non-zero reward on collision. A game-theoretic approach was used to develop an algorithm with a sub-linear regret of $\mathcal{O}(\log^2T)$. Simulation results validated the sub-linear regret of the proposed method and showed its superior performance, when compared with one of the most widely used algorithms in the MAB literature. \section*{Acknowledgment} The authors would like to thank Akshayaa Magesh for useful discussions regarding multi-player multi-armed bandits.
\begin{appendices} \section{Proof of Lemma \ref{lemma1}}\label{appendix1} In the channel allocation phase, denote by $\boldsymbol{a}^{(1)}$ the optimal assignment, and by $J_M^1$ the sum rewards achieved when $\boldsymbol{a}^{(1)}$ is played, which is then given by: \begin{equation} J_M^1=\sum_{k=1}^K\sum_{m=1}^M a_{k}^{(1)}(m)\mu_M(k,m,k^*_m). \end{equation} Furthermore, denote the second best assignment and the sum reward achieved under it by $\boldsymbol{a}^{(2)}$ and $J_M^2$, respectively. Let the estimated mean of AP $k$ over channel $m$ with $k_m$ APs on channel $m$ be written as: \begin{equation} \hat{\mu}_M(k,m,k_m)=\mu_M(k,m,k_m)+z(k,m,k_m), \end{equation} where $z(k,m,k_m)$ is the estimation inaccuracy during the channel allocation phase satisfying $|z(k,m,k_m)|< \Delta_M$. The sum reward achieved when $\boldsymbol{a}^{(1)}$ is played with the estimated channel means satisfies: \begin{equation} \begin{aligned} \sum_{k=1}^K\sum_{m=1}^M a_{k}^{(1)}(m)\hat{\mu}_M(k,m,k_m)=& \sum_{k=1}^K\sum_{m=1}^M a_{k}^{(1)}(m)(\mu_M(k,m,k_m)+z(k,m,k_m))> \\ &\sum_{k=1}^K\sum_{m=1}^M a_{k}^{(1)}(m)\mu_M(k,m,k_m)- K N \Delta_M. \end{aligned} \end{equation} Any other assignment $\boldsymbol{a}\notin\{\boldsymbol{a}^{(1)}, \boldsymbol{a}^{(2)}\}$ must perform at most as well as $\boldsymbol{a}^{(2)}$: \begin{equation} \begin{aligned} \sum_{k=1}^K\sum_{m=1}^M a_{k}(m)\hat{\mu}_M(k,m,k_m)=&\sum_{k=1}^K\sum_{m=1}^M a_{k}(m)(\mu_M(k,m,k_m)+z(k,m,k_m))< \\ &\sum_{k=1}^K\sum_{m=1}^M a_{k}^{(2)}(m)\mu_M(k,m,k_m)+ K N \Delta_M. \end{aligned} \end{equation} To avoid changing the optimal assignment because of the estimation inaccuracy, the following must hold $ \forall \boldsymbol{a}\neq \boldsymbol{a}^{(1)}$: \begin{equation}\label{eq:conditiona} \begin{aligned} \sum_{k=1}^K\sum_{m=1}^M a_{k}^{(1)}(m)\hat{\mu}_M(k,m,k_m)>\sum_{k=1}^K\sum_{m=1}^M a_{k}(m)\hat{\mu}_M(k,m,k_m).
\end{aligned} \end{equation} To ensure Eq.\ (\ref{eq:conditiona}), we need to have: $J_M^1 - KN\Delta_M>J_M^2+ KN\Delta_M$, which holds if: \begin{equation} \Delta_M<\frac{J_M^1-J_M^2}{2KN}. \end{equation} In the power allocation phase, following a similar approach over each channel $m$, we get: \begin{equation} \Delta_P<\frac{J_P^1-J_P^2}{2K_m}. \end{equation} \section{Proof of Lemma\ \ref{lemma2}}\label{appendix2} \subsection{Lower Bound of the Length of the Exploration Phase in the Channel Allocation Step}\label{appendix2:channel} To find a lower bound of the length of the exploration phase in the channel allocation step, we first find the required number of observations of each channel by each AP to guarantee condition\ (\ref{deltaC}) \cite{meghana,rosenski}. To do so, the probability of each AP not having a correct estimation of the channel means should be bounded. Let $\gamma=\gamma_{e,l}^M/2$. Define the following events: \begin{itemize} \item $A$: all players have an estimate satisfying condition\ (\ref{deltaC}), \item $B$: all players have $\geq Q$ observations of each channel $m$ for every $s$ in $[\beta]$, \item $A_k$: player $k$ has an estimate satisfying condition (\ref{deltaC}), \item $B_k$: player $k$ has $\geq Q$ observations of each channel $m$ for every $s$ in $[\beta]$. \end{itemize} The following must hold: \begin{equation} \textrm{Pr} (\bar{A}_k~|~B_k )\leq \frac{\gamma}{K}. 
\end{equation} In fact, \begin{equation} \begin{aligned} &\textrm{Pr} (\bar{A}_k|B_k)\leq \textrm{Pr} ~ (\exists ~m,s, \textrm{s.t.} ~|\mu_M(k,m,s)-\hat{\mu}_M(k,m,s)|> \Delta_M~ |~ B_k) \overset{\text{(a)}}\leq\\ &\sum_{m=1}^M\sum_{s=1}^{\beta}\textrm{Pr}~ (|\mu_M(k,m,s)-\hat{\mu}_M(k,m,s)|> \Delta_M~ |~ B_k)=\\ &\sum_{m=1}^M\sum_{s=1}^{\beta}\sum_{q=Q}^\infty\textrm{Pr} ~(|\mu_M(k,m,s)-\hat{\mu}_M(k,m,s)|> \Delta_M~ | k ~\textrm{has} ~q \textrm{ observations of $(m,s)$}) \times p_2 \overset{\text{(b)}}\leq\\ &\sum_{m=1}^M\sum_{s=1}^{\beta}\sum_{q=Q}^\infty 2p_2e^{(-2q\Delta_M^2)} \leq\sum_{m=1}^M\sum_{s=1}^{\beta}2e^{(-2Q\Delta_M^2)}=2M\beta e^{(-2Q\Delta_M^2)}, \end{aligned} \end{equation} where $(m,s)$ refers to channel $m$ with $s$ players on it, (a) results from applying the union bound and (b) from using Hoeffding's inequality \cite{hoeffding}, and $p_2=\textrm{Pr}~(q ~\textrm{observations of $(m,s)$} ~|~ q\geq Q)$. To ensure $\textrm{Pr} (\bar{A}_k|B_k)$ is lower than $\frac{\gamma}{K}$, $Q$ must satisfy: \begin{equation}\label{eq:numberofObs} Q\geq \frac{1}{2\Delta_M^2}\log(\frac{2KM\beta}{\gamma}). \end{equation} Then, \begin{equation} \textrm{Pr} ~(A|B)= 1-\textrm{Pr}(\bar{A}|B)\geq 1-\sum_{k=1}^K\textrm{Pr}~(\bar{A}_k|B_k)\geq 1-\gamma, \end{equation} leading to all APs having an estimate of every channel satisfying condition (\ref{deltaC}) with probability higher than $1-\gamma$. Next, we need to find a time horizon $T_h$ for the exploration phase of the channel allocation step large enough such that all players have $\geq Q$ observations of each arm with probability higher than $1-\gamma$. Note that the length of each exploration phase $T_{\boldsymbol{\hat{\mu}}}$ does not necessarily satisfy $T_{\boldsymbol{\hat{\mu}}}\geq T_h$. In other words, all players can get $\geq Q$ observations of each arm with probability higher than $1-\gamma$ after multiple exploration phases. 
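The threshold in Eq.\ (\ref{eq:numberofObs}) can be sanity-checked numerically. The short Python sketch below uses illustrative values of $K$, $M$, $\beta$, $\Delta_M$ and $\gamma$ (assumptions chosen only for the check, not the paper's simulation settings) and verifies that $Q$ observations drive the Hoeffding union bound $2M\beta e^{-2Q\Delta_M^2}$ below $\gamma/K$:

```python
import math

# Illustrative parameters (assumptions for this check, not from the paper)
K, M, beta, Delta_M, gamma = 6, 4, 3, 0.05, 0.01

# Minimum number of observations per (channel, load) pair, Eq. (numberofObs)
Q = math.ceil(math.log(2 * K * M * beta / gamma) / (2 * Delta_M ** 2))

# Hoeffding + union bound on the probability that a single player
# mis-estimates some mean by more than Delta_M, given >= Q observations
fail_prob = 2 * M * beta * math.exp(-2 * Q * Delta_M ** 2)

assert fail_prob <= gamma / K  # per-player error probability at most gamma/K
print(f"Q = {Q}, failure probability bound = {fail_prob:.2e}")
```

Because $Q$ is taken as the ceiling of the analytical threshold, the resulting bound is always at most $\gamma/K$, which is what the union bound over the $K$ players requires.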
Let $A_{k,m,s}(t)=1$ if player $k$ observed channel $m$ with $s$ APs on it at timeslot $t$, and 0 otherwise. For $0<\tau<1$, we have: \begin{equation}\label{eq:obsWithTime} \begin{aligned} &\textrm{Pr $\left(\right.$ $k$ has $\leq (1-\tau)T_h \mathbb{E}[A_{k,m,s}]$ observations$\left.\right)$}=\textrm{Pr} \left( \sum_{t=1}^{T_h}A_{k,m,s}(t)\leq (1-\tau)T_h \mathbb{E}[A_{k,m,s}] \right)=\\ &\textrm{Pr} \left(e^{\left(-d \sum_{t=1}^{T_h}A_{k,m,s}(t)\right)}\geq e^{\left(-d(1-\tau)T_h \mathbb{E}[A_{k,m,s}]\right)}\right) \overset{\text{(a)}}\leq\frac{\mathbb{E}\left[e^{\left(-d \sum_{t=1}^{T_h}A_{k,m,s}(t)\right)}\right]}{e^{\left(-d(1-\tau)T_h \mathbb{E}[A_{k,m,s}]\right)}}, \end{aligned} \end{equation} where $d>0$ and (a) results from applying the Chernoff bound. By noting that all players are randomly and uniformly sampling every channel during the exploration phase, for any $k\in \mathcal{K},m \in \mathcal{S},s \in [\beta]$, $A_{k,m,s}$ are i.i.d. across time. Hence: \begin{equation}\label{eq:mgfInd} \mathbb{E}\left[e^{\left(-d \sum_{t=1}^{T_h}A_{k,m,s}(t)\right)}\right]=\prod\limits_{t=1}^{T_h}\mathbb{E}\left[e^{\left(-d A_{k,m,s}(t)\right)}\right]. \end{equation} Moreover, $A_{k,m,s}(t)$ is a Bernoulli random variable that takes the value 1 with probability $p_A$. Therefore, we have: \begin{equation} \mathbb{E}\left[e^{\left(-d A_{k,m,s}(t)\right)}\right]=1+p_A(e^{-d}-1)\overset{\text{(a)}}\leq e^{\left(p_A\left(e^{-d}-1\right)\right)}, \end{equation} where $(a)$ follows since $1+y\leq e^y$. Eq.\ (\ref{eq:mgfInd}) can hence be expressed as: \begin{equation}\label{eq:mgfT} \begin{aligned} \mathbb{E}\left[e^{\left(-d \sum_{t=1}^{T_h}A_{k,m,s}(t)\right)}\right]\leq& e^{\sum\limits_{t=1}^{T_h}\left(p_A\left(e^{-d}-1\right)\right)}=e^{\left(T_h \mathbb{E}[A_{k,m,s}]\left(e^{-d}-1\right)\right)}. 
\end{aligned} \end{equation} By inserting Eq.\ (\ref{eq:mgfT}) into Eq.\ (\ref{eq:obsWithTime}), we get: \begin{equation}\label{eq:boundInt} \begin{aligned} &\textrm{Pr $\left(\right.$player $k$ has $\leq (1-\tau)T_h \mathbb{E}[A_{k,m,s}]$ observations$\left.\right)$}\leq e^{\left(T_h \mathbb{E}[A_{k,m,s}]\left(e^{-d}-1\right)\right)+\left(d(1-\tau)T_h \mathbb{E}[A_{k,m,s}]\right)}. \end{aligned} \end{equation} To make the bound as tight as possible, $d$ is chosen such that the right hand side of Eq.\ ({\ref{eq:boundInt}}) is minimized, leading to $d=-\log(1-\tau)$. By substituting $d$ by its value in Eq.\ (\ref{eq:boundInt}), we get: \begin{equation}\label{eq:obsWithTimeFinal} \begin{aligned} &\textrm{Pr $\left(\right.$player $k$ has $\leq (1-\tau)T_h \mathbb{E}[A_{k,m,s}]$ observations$\left.\right)$}\leq e^{\left(-T_h \mathbb{E}[A_{k,m,s}]\left(\tau+(1-\tau)\log(1-\tau)\right)\right)}=\\ &\left(\frac{e^{-\tau}}{(1-\tau)^{(1-\tau)}}\right)^{\left(T_h \mathbb{E}[A_{k,m,s}]\right)}\overset{\text{(a)}}\leq e^{-\frac{\tau^2}{2}T_h \mathbb{E}[A_{k,m,s}]}, \end{aligned} \end{equation} where (a) results from having $(1-\tau)\log(1-\tau)>-\tau+\frac{\tau^2}{2}$, obtained by using a Taylor expansion. Taking $\tau=1/2$ and using a union bound on (\ref{eq:obsWithTimeFinal}), we get: \begin{equation}\label{eq:obsWithTimeAll} \begin{aligned} &\textrm{Pr ($\exists ~ k,m,s$ s.t. $k$ has $\leq \frac{T_h}{2} \mathbb{E}[A_{k,m,s}]$ observations)}\leq KM\beta e^{\left(-\frac{T_h\mathbb{E}[A_{k,m,s}]}{8}\right)}, \end{aligned} \end{equation} which is upper bounded by $\gamma$ if $T_h$ satisfies: \begin{equation}\label{eq:Th} T_h\geq\frac{8}{\mathbb{E}[A_{k,m,s}]}\log\left(\frac{KM\beta}{\gamma}\right). \end{equation} Moreover, the number of observations of each arm during $T_h$ timeslots, $\sum_{t=1}^{T_h}A_{k,m,s}(t)$, must be at least equal to $Q$. 
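The elementary inequality behind step (a), $(1-\tau)\log(1-\tau)>-\tau+\tau^{2}/2$ for $0<\tau<1$, together with the bound $e^{-\tau}/(1-\tau)^{1-\tau}\leq e^{-\tau^{2}/2}$ that it implies, can be checked numerically on a grid of $\tau$ values; the following Python sketch does so:

```python
import math

# Verify, for tau on a grid in (0, 1), the Taylor-expansion inequality of
# step (a):  (1 - tau) * log(1 - tau) > -tau + tau**2 / 2,
# and the bound it implies:  e^{-tau} / (1 - tau)^{1 - tau} <= e^{-tau^2 / 2}.
for i in range(1, 100):
    tau = i / 100
    assert (1 - tau) * math.log(1 - tau) > -tau + tau ** 2 / 2
    ratio = math.exp(-tau) / (1 - tau) ** (1 - tau)
    assert ratio <= math.exp(-(tau ** 2) / 2)
print("step (a) inequality holds on the grid")
```

The margin between the two sides grows like $\tau^{3}/6$, so the strict inequality is comfortably satisfied in floating point over the whole grid.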
Hence we need: \begin{equation} \begin{aligned} \sum_{t=1}^{T_h}A_{k,m,s}(t)>&\frac{T_h}{2}\mathbb{E}[A_{k,m,s}]\geq Q>\frac{1}{2\Delta_M^2}\log\left(\frac{2KM\beta}{\gamma}\right), \end{aligned} \end{equation} which holds if: \begin{equation}\label{eq:ThV2} \begin{aligned} T_h\geq\left\lceil\text{max}\left\{\frac{8}{\mathbb{E}[A_{k,m,s}]}\log\left(\frac{KM\beta}{\gamma}\right),\right.\right.\left.\left.\frac{1}{\Delta_M^2\mathbb{E}[A_{k,m,s}]}\log\left(\frac{2KM\beta}{\gamma}\right)\right\}\right\rceil. \end{aligned} \end{equation} Note that: \begin{equation}\label{eq:expectationBound} \begin{aligned} &\mathbb{E}[A_{k,m,s}]=\binom{K-1}{s-1}\left(\frac{1}{M}\right)^s\left(1-\frac{1}{M}\right)^{K-s}\overset{\text{(a)}}\geq\left(\frac{1}{M}\right)^s\left(1-\frac{1}{M}\right)^{K-s}\geq\\ &\left(\frac{1}{M}\right)\left(\frac{1}{M}\right)^{s-1}\left(1-\frac{1}{M}\right)^{K-1}\left(1-\frac{1}{M}\right)^{1-s}\overset{\text{(b)}}\geq\frac{1}{M e^{\left(\frac{K-1}{M-1}\right)}}\left(M-1\right)^{1-s}\overset{\text{(c)}}\geq\frac{\left(M-1\right)^{1-\beta}}{Me^{\left(\frac{K-1}{M-1}\right)}}, \end{aligned} \end{equation} where (a) follows from having $\binom{K-1}{s-1}\geq 1$, (b) from $(1-\frac{1}{x})^{x-1}\geq\frac{1}{e}$, and (c) from $s\leq \beta$. Hence, $T_h$ can be re-written as: \begin{equation}\label{eq:ThV3} \begin{aligned} T_h\geq\left\lceil\text{max}\left\{\frac{8 M e^{\left(\frac{K-1}{M-1}\right)}}{\left(M-1\right)^{1-\beta}}\log\left(\frac{KM\beta}{\gamma}\right),\right.\right. \left.\left.\frac{Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}\log\left(\frac{2KM\beta}{\gamma}\right)\right\}\right\rceil. 
\end{aligned} \end{equation} Having $T_h$, the probability of all APs having an estimate of the channel means satisfying Eq.\ (\ref{deltaC}) is lower bounded by: \begin{equation} \begin{aligned} &\textrm{Pr} (A)= 1-\textrm{Pr} (\bar{A})=1-\left(\textrm{Pr} (\bar{A}|B)\,\textrm{Pr} (B)+\textrm{Pr} (\bar{A}|\bar{B})\,\textrm{Pr} (\bar{B})\right)\\ &\geq 1-\left(\textrm{Pr}(\bar{A}|B)+\textrm{Pr}(\bar{B})\right)\geq 1-(\gamma+\gamma)=1-\gamma_{e,l}^M. \end{aligned} \end{equation} Since $\Delta_M=\frac{J_M^1-J_M^2}{2KN}\leq \frac{KN}{2KN}= \frac{1}{2}$, Eq.\ (\ref{eq:ThV3}) is satisfied if: \begin{equation} T_h=\frac{2Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}\log\left(\frac{4KM\beta}{\gamma_{e,l}^M}\right). \end{equation} Having found the minimum needed length of the exploration epoch in the channel allocation phase, next, we upper bound the error probability in the $l^{\textrm{th}}$ exploration epoch. To do so, we first note that: \begin{equation} T_{\boldsymbol{\hat{\mu}_M}}\times l = T_h=\frac{2Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}\log\left(\frac{4KM\beta}{\gamma_{e,l}^M}\right). \end{equation} To have $\gamma_{e,l}^M\leq 4KM\beta e^{-l}\leq 4 (M\beta)^2e^{-l}$, the length of each exploration epoch must satisfy: \begin{equation}\label{eq:t_mu} T_{\boldsymbol{\hat{\mu}_M}}\geq \frac{2Me^{\left(\frac{K-1}{M-1}\right)}}{\Delta_M^2\left(M-1\right)^{1-\beta}}. \end{equation} \subsection{Lower Bound of the Length of the Exploration Phase in the Power Allocation Step}\label{appendix2:power} By following an analysis similar to the one in Appendix \ \ref{appendix2:channel}, the minimum length of the exploration phase on each channel $m$ in the power allocation step can be given by: \begin{equation}\label{eq:Tp} T_P^0= \left\lceil{\frac{2Le^{\left(\frac{\beta-1}{L-1}\right)}}{\Delta_p^2}}\right\rceil. 
\end{equation} If the length of the exploration phase in the power allocation step on each channel $m$ satisfies Eq.\ (\ref{eq:Tp}), then all players have an estimate of the power level means satisfying the condition in\ (\ref{deltaP}), with probability $\geq 1-\gamma^P_{e,l}$, where $\gamma^P_{e,l}$ is upper bounded by $ 4\beta L e^{-l}$. \section{Proof of Lemma\ \ref{lemma3}}\label{appendix3} Let $p$ be the true probability of player $k$ not being the sole occupier of some channel $m$ when $k$ accesses the $M$ channels uniformly at random: \begin{equation}\label{eq:probNoCollision} p=1-\sum\limits_{m=1}^M \frac{1}{M}\left(1-\frac{1}{M}\right)^{K-1}=1-\left(1-\frac{1}{M}\right)^{K-1}. \end{equation} From Eq. (\ref{eq:probNoCollision}), the number of APs $K$ is given by: \begin{equation} K=\textrm{round}\left(\frac{\log(1-p)}{\log(1-\frac{1}{M})}+1\right). \end{equation} The estimated probability of player $k$ not accessing channel $m$ alone at time $t$ is: $ \hat{p}_t={b_k^t}/{t}.$ For a correct estimation of the number of APs, we need to find a time $t$ sufficiently large to guarantee with high probability that: \begin{equation}\label{eq:k=k_hat} \begin{aligned} \hat{K}=&\textrm{round}\left(\frac{\log(1-\hat{p}_t)}{\log(1-\frac{1}{M})}+1\right)=\textrm{round}\left(\frac{\log(1-p)}{\log(1-\frac{1}{M})}+1\right)=K. \end{aligned} \end{equation} To ensure Eq.\ (\ref{eq:k=k_hat}), if $\kappa < 1/2$, the following must hold: \begin{equation}\label{eq:p_pt} \left|\frac{\log(\frac{t-b_k^t}{t})}{\log(1-\frac{1}{M})}-\frac{\log(1-p)}{\log(1-\frac{1}{M})}\right|=\left|\frac{\log\left(\frac{1-\hat{p}_t}{1-p}\right)}{\log\left(1-\frac{1}{M}\right)}\right|\leq \kappa. \end{equation} Let $\hat{p}_t-p=\xi$. After some calculations, Eq.\ (\ref{eq:p_pt}) can be expressed as: \begin{equation} \begin{aligned} (1-p)\left(1-\left(1-\frac{1}{M}\right)^{-\kappa}\right)\leq\xi&\leq (1-p)\left(1-\left(1-\frac{1}{M}\right)^\kappa\right)&. 
\end{aligned} \end{equation} With high probability, $K=\hat{K}$ when $\kappa<\frac{1}{2}$, if $|\hat{p}_t-p|\leq \xi_1$, where: \begin{equation}\label{eq:xi1} \begin{aligned} \xi_1= \min\left\{\left|(1-p)\left(1-\left(1-\frac{1}{M}\right)^{-\kappa}\right)\right|,\right. \left.\left|(1-p)\left(1-\left(1-\frac{1}{M}\right)^\kappa\right)\right|\right\}. \end{aligned} \end{equation} Let $T_{\hat{K}}$ be a large enough time horizon for which the estimated probability $\hat{p}_{T_{\hat{K}}}$ is an average of i.i.d. random variables with expectation $p$. Using Hoeffding's inequality \cite{hoeffding}, we get: \begin{equation} \textrm{Pr}\left(|\hat{p}_{T_{\hat{K}}}-p|\geq \xi_1\right)\leq 2 e^{-2T_{\hat{K}}\xi_1^2}. \end{equation} To bound the probability of an incorrect estimation of $\hat{K}$ by some small value $\eta$, $T_{\hat{K}}$ must be lower bounded by: \begin{equation} T_{\hat{K}}\geq\frac{\log(2/\eta)}{2\xi_1^2}. \end{equation} To get a simpler expression of $\xi_1$ and hence of $ T_{\hat{K}}$, suppose that $\kappa=0.49$. With the expression of $p$ given by Eq.\ (\ref{eq:probNoCollision}), the first term in Eq. (\ref{eq:xi1}) can be lower bounded as: \begin{equation}\label{eq:lbTerm1} \begin{aligned} &\left|\left(1-\frac{1}{M}\right)^{K-1}\left(1-\left(1-\frac{1}{M}\right)^{-0.49}\right)\right|=-\left(1-\frac{1}{M}\right)^{K-1}\left(1-\left(1-\frac{1}{M}\right)^{-0.49}\right)\overset{\text{(a)}}\geq\\ &\left(1-\frac{1}{M}\right)^{M\beta-1}\left(\left(1-\frac{1}{M}\right)^{-0.49}-1\right)\overset{\text{(b)}}\geq\frac{1}{e^{\left(\frac{M\beta-1}{M-1}\right)}}\left(\left(1-\frac{1}{M}\right)^{-0.49}-1\right)\overset{\text{(c)}}\geq\frac{0.49}{Me^{\left(\frac{M\beta-1}{M-1}\right)}}, \end{aligned} \end{equation} where (a) results from having $M \beta \geq K$, (b) from $(1-\frac{1}{x})^{x-1}\geq \frac{1}{e}$, and (c) from using a Taylor expansion. Similarly, the second term in Eq. 
(\ref{eq:xi1}) can be lower bounded as: \begin{equation}\label{eq:lbTerm2} \begin{aligned} \left|\left(1-\frac{1}{M}\right)^{K-1}\left(1-\left(1-\frac{1}{M}\right)^{0.49}\right)\right|\geq \frac{0.49}{Me^{\left(\frac{M\beta-1}{M-1}\right)}}. \end{aligned} \end{equation} Variable $\xi_1$ is therefore lower bounded by: $ \xi_1\geq\frac{0.49}{Me^{\left(\frac{M\beta-1}{M-1}\right)}}.$ Hence, $\hat{K}=K$ with probability higher than $1-\eta$ if: \begin{equation}\label{eq:t_k} T_{\hat{K}} \geq \left\lceil2.08\log{\left(\frac{2}{\eta}\right)}M^2e^{2\left(\frac{M\beta-1}{M-1}\right)} \right\rceil. \end{equation} \end{appendices} \vspace{-1cm} \bibliographystyle{IEEEtran}
\section*{INTRODUCTION} \def\theequation{0. \arabic{equation}} \setcounter{equation} {0} The study of Hom-algebras can be traced back to Hartwig, Larsson and Silvestrov's work in \cite{Hartwig}, where the notion of Hom-Lie algebra in the context of q-deformation theory of Witt and Virasoro algebras \cite{Hu} was introduced, which plays an important role in physics, mainly in conformal field theory. Hom-algebras and Hom-coalgebras were introduced by Makhlouf and Silvestrov \cite{Makhlouf2008} as generalizations of ordinary algebras and coalgebras in the following sense: the associativity of the multiplication is replaced by the Hom-associativity, and similarly for the Hom-coassociativity. They also defined the structures of Hom-bialgebras and Hom-Hopf algebras, and described some of their properties extending properties of ordinary bialgebras and Hopf algebras in \cite{Makhlouf2009, Makhlouf2010}. In \cite{Caenepeel2011}, Caenepeel and Goyvaerts studied Hom-bialgebras and Hom-Hopf algebras from a categorical viewpoint, and called them monoidal Hom-bialgebras and monoidal Hom-Hopf algebras respectively, which are different from the normal Hom-bialgebras and Hom-Hopf algebras in \cite{Makhlouf2009}. Many more properties and structures of Hom-Hopf algebras have been developed, see \cite{CX14, Gohr2010, ma2014, ZD} and references cited therein. Later, Yau \cite{Yau2009, Yau3} proposed the definition of quasitriangular Hom-Hopf algebras and showed that each quasitriangular Hom-Hopf algebra yields a solution of the Hom-Yang-Baxter equation. The Hom-Yang-Baxter equation reduces to the usual Yang-Baxter equation when the twist map is trivial. Several classes of solutions of the Hom-Yang-Baxter equation were constructed from different perspectives, including those associated to Hom-Lie algebras \cite{Fang, wang&GUO2020, Yau2009, Yau2011}, Drinfeld (co)doubles \cite{CWZ2014, ZGW, ZWZ}, and Hom-Yetter-Drinfeld modules \cite{chen2014, LIU2014, ma2017, ma2019, Makhlouf2014, WCZ2012, YW}. 
Classical nonlinear equations in Hopf algebra theory include the quantum Yang-Baxter equation, the Hopf equation, the pentagon equation, and the Long equation. In \cite{Militaru}, Militaru proved that each Long dimodule gives rise to a solution for the Long equation. Long dimodules are the building stones of the Brauer-Long group. In the case where $H$ is commutative, cocommutative and faithfully projective, the Yetter-Drinfeld category $^{H}_{H}\Bbb{YD}$ is precisely the Long dimodule category $^{H}_{H}\Bbb{L}$. Of course, for an arbitrary $H$, the categories $^{H}_{H}\Bbb{YD}$ and $^{H}_{H}\Bbb{L}$ are basically different. In \cite{CWZ2013}, Chen et al. introduced the concept of Long dimodules over a monoidal Hom-bialgebra and discussed its relation with Hom-Long equations. Later, we \cite{wang&ding2017} extended Chen's work to generalized Hom-Long dimodules over monoidal Hom-Hopf algebras and obtained a kind of solution for the quantum Yang-Baxter equation. For more details about Long dimodules, see \cite{Long, Lu, WangShuanhong, Zhang} and references cited therein. The main purpose of this paper is to construct a new braided monoidal category and present solutions for two kinds of nonlinear equations. Different from our previous work in \cite{wang&ding2017}, in the present paper we do all the work over Hom-Hopf algebras, which is more involved than the monoidal version. Since Hom-Hopf algebras and monoidal Hom-Hopf algebras are different concepts, it turns out that our definitions, formulas and results are also different from the ones in \cite{wang&ding2017}. Most importantly, we associate quantum Yang-Baxter equations and Hom-Long equations to the Hom-Long dimodule categories. This paper is organized as follows. In Section 1, we recall some basic definitions about Hom-(co)modules and (co)quasitriangular Hom-Hopf algebras. 
In Section 2, we first introduce the notion of $(H,B)$-Hom-Long dimodules over Hom-bialgebras $(H,\alpha)$ and $(B, \b)$, then we show that the Hom-Long dimodule category $^{B}_{H}\Bbb{L}$ forms an autonomous category (see Theorem 2.6) and prove that the category is equivalent to the category of left $B^{\ast op} \o H$-Hom-modules (see Theorem 2.7). In Section 3, for a quasitriangular Hom-Hopf algebra $(H,R, \alpha)$ and a coquasitriangular Hom-Hopf algebra $(B,\langle|\rangle,\beta)$, we prove that the Hom-Long dimodule category $^{B}_{H}\Bbb{L}$ is a subcategory of the Hom-Yetter-Drinfeld category $^{H\o B}_{H\o B}\Bbb{HYD}$ (see Theorem 3.5), and show that the braiding yields a solution for the quantum Yang-Baxter equation (see Corollary 3.2). In Section 4, we prove that the category $_{H}\Bbb{M}$ over a triangular Hom-Hopf algebra (resp., $^{H}\Bbb{M}$ over a cotriangular Hom-Hopf algebra) is a Hom-Long dimodule subcategory of $^{B}_{H}\Bbb{L}$ (see Propositions 4.1 and 4.2). We also show that the Hom-Long dimodule category $^{B}_{H}\Bbb{L}$ is symmetric in case $(H,R, \alpha)$ is triangular and $(B,\langle|\rangle,\beta)$ is cotriangular (see Theorem 4.3). In Section 5, we introduce the notion of $(H,\a)$-Hom-Long dimodules and obtain a solution for the Hom-Long equation (see Theorem 5.10). \section{PRELIMINARIES} \def\theequation{\arabic{section}.\arabic{equation}} \setcounter{equation} {0} Throughout this paper, $k$ is a fixed field. Unless otherwise stated, all vector spaces, algebras, modules, maps and unadorned tensor products are over $k$. For a coalgebra $C$, the coproduct will be denoted by $\Delta$. We adopt Sweedler's notation $\triangle(c)=c_{1}\otimes c_{2}$, for any $c\in C$, where the summation is understood. We refer to \cite{R, Sweedler} for the Hopf algebra theory and terminology. We now recall some useful definitions in \cite{Li2014, Makhlouf2008, Makhlouf2009, Makhlouf2010, Yau1, Yau3}. 
\smallskip {\bf Definition 1.1.} A Hom-algebra is a quadruple $(A,\mu,1_A,\a)$ (abbr. $(A,\a)$), where $A$ is a $k$-linear space, $\mu: A\o A \lr A$ is a $k$-linear map, $1_A \in A$ and $\a$ is an automorphism of $A$, such that \begin{eqnarray*} &(HA1)& \a(aa')=\a(a)\a(a');~~\a(1_A)=1_A,\\ &(HA2)& \a(a)(a'a'')=(aa')\a(a'');~~a1_A=1_Aa=\a(a) \end{eqnarray*} are satisfied for $a, a', a''\in A$. Here we use the notation $\mu(a\o a')=aa'$. \smallskip {\bf Definition 1.2.} Let $(A,\alpha)$ be a Hom-algebra. A left $(A,\alpha)$-Hom-module is a triple $(M,\rhd,\nu)$, where $M$ is a linear space, $\rhd: A\o M \lr M$ is a linear map, and $\nu$ is an automorphism of $M$, such that \begin{eqnarray*} &(HM1)& \nu(a\rhd m)=\alpha(a)\rhd \nu(m),\\ &(HM2)& \alpha(a)\rhd (a'\rhd m)=(aa')\rhd \nu(m);~~ 1_A\rhd m=\nu(m) \end{eqnarray*} are satisfied for $a, a' \in A$ and $m\in M$. Let $(M,\rhd_M,\nu_M)$ and $(N,\rhd_N,\nu_N)$ be two left $(A,\alpha)$-Hom-modules. Then a linear morphism $f: M\lr N$ is called a morphism of left $(A,\alpha)$-Hom-modules if $f(h\rhd_M m)=h\rhd_N f(m)$ and $\nu_N\circ f=f\circ \nu_M$. \smallskip {\bf Definition 1.3.} A Hom-coalgebra is a quadruple $(C,\D,\v,\b)$ (abbr. $(C,\b)$), where $C$ is a $k$-linear space, $\D: C \lr C\o C$, $\v: C\lr k$ are $k$-linear maps, and $\b$ is an automorphism of $C$, such that \begin{eqnarray*} &(HC1)& \b(c)_1\o \b(c)_2=\b(c_1)\o \b(c_2);~~\v\circ \b=\v;\\ &(HC2)& \b(c_{1})\o c_{21}\o c_{22}=c_{11}\o c_{12}\o \b(c_{2});~~\v(c_1)c_2=c_1\v(c_2)=\b(c) \end{eqnarray*} are satisfied for $c \in C$. \smallskip {\bf Definition 1.4.} Let $(C,\b)$ be a Hom-coalgebra. 
A left $(C,\b)$-Hom-comodule is a triple $(M,\rho,\mu)$, where $M$ is a linear space, $\rho: M\lr C\o M$ (write $\rho(m)=m_{(-1)}\o m_{(0)},~\forall m\in M$) is a linear map, and $\mu$ is an automorphism of $M$, such that \begin{eqnarray*} &(HCM1)&\mu(m)_{(-1)}\o \mu(m)_{(0)}=\b(m_{(-1)})\o \mu(m_{(0)}),~\v(m_{(-1)})m_{(0)}=\mu(m);\\ &(HCM2)&\b(m_{(-1)})\o m_{(0)(-1)}\o m_{(0)(0)}= m_{(-1)1}\o m_{(-1)2}\o \mu(m_{(0)}) \end{eqnarray*} are satisfied for all $m\in M$. Let $(M,\rho^M,\mu_M)$ and $(N,\rho^N,\mu_N)$ be two left $(C,\b)$-Hom-comodules. Then a linear map $f: M\lr N$ is called a map of left $(C,\b)$-Hom-comodules if $f(m)_{(-1)}\o f(m)_{(0)}=m_{(-1)}\o f(m_{(0)})$ and $\mu_N\circ f=f\circ \mu_M$. \smallskip {\bf Definition 1.5.} A Hom-bialgebra is a sextuple $(H,\mu,1_H,\D,\v,\g)$ (abbr. $(H,\g)$), where $(H,\mu,1_H,\g)$ is a Hom-algebra and $(H,\D,\v,\g)$ is a Hom-coalgebra, such that $\D$ and $\v$ are morphisms of Hom-algebras, i.e. $$ \D(hh')=\D(h)\D(h');~\D(1_H)=1_H\o 1_H;~ \v(hh')=\v(h)\v(h');~\v(1_H)=1. $$ Furthermore, if there exists a linear map $S: H\lr H$ such that $$ S(h_1)h_2=h_1S(h_2)=\v(h)1_H~\hbox{and}~S(\g(h))=\g(S(h)), $$ then we call $(H,\mu,1_H,\D,\v,\g,S)$ (abbr. $(H,\g,S)$) a Hom-Hopf algebra. \smallskip {\bf Definition 1.6.} (\cite{Li2014}) Let $(H, \b)$ be a Hom-bialgebra, $(M,\rhd, \mu)$ a left $(H,\b)$-Hom-module with action $\rhd: H\o M\lr M, h\o m\mapsto h\rhd m$ and $(M,\rho, \mu)$ a left $(H,\b)$-Hom-comodule with coaction $\rho: M\lr H\o M, m\mapsto m_{(-1)}\o m_{(0)}$. Then we call $(M,\rhd, \rho, \mu)$ a (left-left) Hom-Yetter-Drinfeld module over $(H,\b)$ if the following condition holds: $$ (HYD)~~~h_1\b(m_{(-1)})\o (\b^3(h_2)\rhd m_{(0)})=(\b^2(h_1)\rhd m)_{(-1)}h_2\o (\b^2(h_1)\rhd m)_{(0)}, $$ where $h\in H$ and $m\in M$. When $H$ is a Hom-Hopf algebra, then the condition $(HYD)$ is equivalent to $$ (HYD)' ~~\rho(\b^4(h)\rhd m)=\b^{-2}(h_{11}\b(m_{(-1)}))S(h_2)\o (\b^3(h_{12})\rhd m_{(0)}). 
$$ {\bf Definition 1.7.} (\cite{Li2014}) Let $(H, \b)$ be a Hom-bialgebra. A Hom-Yetter-Drinfeld category $^{H}_{H}\Bbb Y\Bbb D$ is a pre-braided monoidal category whose objects are left-left Hom-Yetter-Drinfeld modules, morphisms are both left $(H, \beta)$-linear and $(H, \beta)$-colinear maps, and its pre-braiding $C_{-, -}$ is given by \begin{eqnarray} C_{M, N} (m \o n) = \beta^{2}(m_{(-1)})\rhd\nu^{-1}(n)\o \mu^{-1}(m_{(0)}), \end{eqnarray} for all $m \in (M,\mu)\in{}^{H}_{H}\Bbb Y\Bbb D$ and $n \in (N,\nu) \in{}^{H}_{H}\Bbb Y\Bbb D$. \smallskip {\bf Definition 1.8.} A quasitriangular Hom-Hopf algebra is an octuple $(H, \mu, 1_{H},\Delta, \epsilon, S, \beta, R)$ (abbr. $(H, \beta, R)$) in which $(H, \mu, 1_{H}, \Delta, \epsilon, S, \beta)$ is a Hom-Hopf algebra and $R=R^{(1)}\o R^{(2)}\in H\o H$, satisfying the following axioms (for all $h\in H$ and $R=r$): \begin{eqnarray*} &&(QHA1)~\epsilon(R^{(1)})R^{(2)}=R^{(1)}\epsilon(R^{(2)})=1_{H},\\ &&(QHA2)~\Delta(R^{(1)})\otimes\beta(R^{(2)})=\beta(R^{(1)})\otimes\beta(r^{(1)})\otimes R^{(2)}r^{(2)},\\ &&(QHA3)~\beta(R^{(1)})\otimes\Delta(R^{(2)})=R^{(1)}r^{(1)}\otimes \beta(r^{(2)})\otimes \beta(R^{(2)}),\\ &&(QHA4)~\Delta^{cop}(h)R=R\Delta(h),\\ &&(QHA5)~\beta(R^{(1)})\o\beta(R^{(2)})=R^{(1)}\o R^{(2)}, \end{eqnarray*} where $\Delta^{cop}(h)=h_{2}\otimes h_{1}$ for all $h\in H$. A quasitriangular Hom-Hopf algebra $(H,R,\beta)$ is called triangular if $R^{-1}=R^{(2)}\otimes R^{(1)}.$ \medskip {\bf Definition 1.9.} A coquasitriangular Hom-Hopf algebra is a Hom-Hopf algebra $(H, \beta)$ together with a bilinear form $\langle|\rangle$ on $(H, \beta)$ (i.e. 
$\langle|\rangle\in$ Hom($H\otimes H,k$)) such that the following axioms hold: \begin{eqnarray*} &&(CHA1)~\langle hg|\beta(l)\rangle=\langle\beta(h)|l_{2}\rangle\langle \beta(g)|l_{1}\rangle,\\ &&(CHA2)~\langle \beta(h)|gl\rangle=\langle h_{1}|\beta(g)\rangle\langle h_{2}|\beta(l)\rangle,\\ &&(CHA3)~\langle h_{1}|g_{1}\rangle g_{2}h_{2}=h_{1}g_{1}\langle h_{2}|g_{2}\rangle,\\ &&(CHA4)~\langle 1|h\rangle=\langle h|1\rangle=\epsilon(h),\\ &&(CHA5)~\langle \beta(h)|\beta(g)\rangle=\langle h|g\rangle \end{eqnarray*} for all $h,g,l\in H$. A coquasitriangular Hom-Hopf algebra $(H,\langle|\rangle, \beta)$ is called cotriangular if $\langle|\rangle$ is convolution invertible in the sense of $ \langle h_{1}|g_{1}\rangle\langle g_{2}|h_{2}\rangle=\epsilon(h)\epsilon(g), $ for all $h,g\in H$. \section{ Hom-Long dimodules over Hom-bialgebras} \def\theequation{\arabic{section}.\arabic{equation}} \setcounter{equation} {0} In this section, we will introduce the notion of Hom-Long dimodules and prove that the Hom-Long dimodule category is an autonomous category. \medskip \noindent{\bf Definition 2.1.} Let $(H,\alpha)$ and $(B,\beta)$ be two Hom-bialgebras. A left-left $(H,B)$-Hom-Long dimodule is a quadruple $(M,\c, \rho,\mu)$, where $(M, \c, \mu)$ is a left $(H,\alpha)$-Hom-module and $(M, \rho, \mu)$ is a left $(B,\beta)$-Hom-comodule such that \begin{eqnarray} \rho(h\c m)=\beta(m_{(-1)})\o \alpha(h)\c m_{(0)}, \end{eqnarray} for all $h\in H$ and $m\in M$. We denote by $^{B}_{H} \Bbb L$ the category of left-left $(H,B)$-Hom-Long dimodules, morphisms being $H$-linear and $B$-colinear maps. \medskip \noindent{\bf Example 2.2.} Let $(H,\alpha)$ and $(B,\beta)$ be two Hom-bialgebras. Then $(H\o B,\alpha\o\beta)$ is an $(H,B)$-Hom-Long dimodule with left $(H,\alpha)$-action $h\c(g\o x)=hg\o \beta(x)$ and left $(B,\beta)$-coaction $\rho(g\o x)=x_{1}\o(\alpha(g)\o x_{2})$, where $h,g\in H, x\in B$. 
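\medskip \noindent{\bf Remark.} The compatibility condition (2.1) in Example 2.2 can be verified directly: for all $h,g\in H$ and $x\in B$, \begin{eqnarray*} \rho(h\c(g\o x))&=&\rho(hg\o \beta(x))=\beta(x)_{1}\o(\alpha(hg)\o \beta(x)_{2})\\ &=&\beta(x_{1})\o(\alpha(h)\alpha(g)\o \beta(x_{2}))\\ &=&\beta((g\o x)_{(-1)})\o \alpha(h)\c((g\o x)_{(0)}), \end{eqnarray*} where the second equality uses $(HA1)$ and $(HC1)$, and the last one uses $\rho(g\o x)=x_{1}\o(\alpha(g)\o x_{2})$ together with $\alpha(h)\c(\alpha(g)\o x_{2})=\alpha(h)\alpha(g)\o \beta(x_{2})$.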
\medskip \noindent{\bf Proposition 2.3.} Let $(M, \mu), (N, \nu)$ be two $(H,B)$-Hom-Long dimodules. Then $(M\o N, \mu\o\nu)$ is an $(H,B)$-Hom-Long dimodule with structures: \begin{eqnarray*} &&h\c (m\o n)=h_{1}\c m\o h_{2}\c n,\\ &&\rho(m\o n)=\b^{-2}(m_{(-1)}n_{(-1)})\o m_{(0)}\o n_{(0)}, \end{eqnarray*} for all $m\in M, n\in N$ and $h\in H$. \medskip \noindent{\bf Proof.} From Theorem 4.8 in \cite{ma2017}, $(M\o N, \mu\o\nu)$ is both a left $(H,\alpha)$-Hom-module and a left $(B,\beta)$-Hom-comodule. It remains to check that the compatibility condition (2.1) holds. For any $m\in M, n\in N$ and $h\in H$, we have \begin{eqnarray*} \rho(h\c (m\o n)) &=&\b^{-2}((h_{1}\c m)_{(-1)}(h_{2}\c n)_{(-1)})\o(h_{1}\c m)_{(0)}\o(h_{2}\c n)_{(0)}\\ &=&\beta^{-1}(m_{(-1)}n_{(-1)})\o\alpha(h_{1})\c m_{(0)}\o\alpha(h_{2})\c n_{(0)}\\ &=&\beta((m\o n)_{(-1)})\o\alpha(h)\c ((m\o n)_{(0)}), \end{eqnarray*} as desired. This completes the proof. \hfill $\square$ \medskip \noindent{\bf Proposition 2.4.} The Hom-Long dimodule category $^{B}_{H} \Bbb L$ is a monoidal category, where the tensor product is given in Proposition 2.3, the unit $I=(k,id)$, the associator and the unit constraints are given as follows: \begin{eqnarray*} &&a_{U,V,W}: (U\o V)\o W\rightarrow U\o (V\o W), (u\o v)\o w\rightarrow \mu^{-1}(u)\o(v\o \omega(w)),\\ &&l_{V}: k\o V\rightarrow V, k\o v\rightarrow k\nu(v), r_{V}: V\o k\rightarrow V, v\o k \rightarrow k\nu(v), \end{eqnarray*} for $u\in (U,\mu)\in {}^{B}_{H} \Bbb L, v\in (V,\nu)\in{}^{B}_{H} \Bbb L, w\in (W,\omega)\in{} ^{B}_{H} \Bbb L.$ \medskip \noindent{\bf Proof.} Straightforward. \hfill $\square$ \medskip \noindent{\bf Proposition 2.5.} Let $H$ and $B$ be two Hom-Hopf algebras with bijective antipodes.
For any Hom-Long dimodule $(M,\mu)$ in $^{B}_{H} \Bbb L$, set $M^\ast = Hom_k(M,k)$, with the $(H,\alpha)$-Hom-module and the $(B,\b)$-Hom-comodule structures: \begin{eqnarray*} &&\theta_{M^\ast}: H\o M^\ast\longrightarrow M^\ast,~~ (h\cdot f )(m)=f(S_H\a^{-1}(h)\cdot\mu^{-2}(m)),\\ &&\rho_{M^\ast}: M^\ast \longrightarrow B \o M^\ast, ~~ f_{(-1)} \o f_{(0)}(m)=S^{-1}_B\b^{-1}(m_{(-1)}) \o f(\mu^{-2}(m_{(0)})), \end{eqnarray*} and the Hom-structure map $\mu^{\ast}$ of $M^\ast$ is $\mu^{\ast}(f)(m)=f(\mu^{-1}(m))$. Then $M^\ast$ is an object in $^{B}_{H} \Bbb L$. Moreover, $^{B}_{H} \Bbb L$ is a left autonomous category. \medskip \noindent{\bf Proof.} It is not hard to check that $(M^\ast,\theta_{M^\ast},\mu^{\ast})$ is an $(H,\alpha)$-Hom-module and $(M^\ast,\rho_{M^\ast},\mu^\ast)$ is a $(B,\beta)$-Hom-comodule. Further, for any $f \in M^\ast$, $m \in M$, $h\in H$, we have \begin{eqnarray*} (h\c f)_{(-1)} \o (h\c f)_{(0)}(m) &=&S^{-1}_B\b^{-1}(m_{(-1)}) \o (h\c f)(\mu^{-2}(m_{(0)}))\\ &=&S^{-1}_B\b^{-1}(m_{(-1)}) \o f(S_H\a^{-1}(h) \c \mu^{-4}(m_{(0)})),\\ \b(f_{(-1)}) \o (\alpha(h)\c f_{(0)})(m) &=&\b(f_{(-1)}) \o f_{(0)}(S_H(h) \c \mu^{-2}(m) )\\ &=&\b(S^{-1}_B\b^{-2}(m_{(-1)} )) \o f( \mu^{-2}( S_H \alpha (h) \c \mu^{-2}(m_{(0)}) )) \\ &=&S^{-1}_B\b^{-1}(m_{(-1)}) \o f(S_H\a^{-1}(h) \c \mu^{-4}(m_{(0)})). \end{eqnarray*} Thus $M^\ast \in {}^{B}_{H} \Bbb L$. Moreover, for any $f \in M^\ast$ and $m\in M$, one can define the left evaluation map and the left coevaluation map by \begin{eqnarray*} ev_M: f \otimes m \longmapsto f(m), ~coev_M: 1_k \longmapsto \sum e_i \o e^i, \end{eqnarray*} where $e_i$ and $e^i$ are dual bases in $M$ and $M^\ast$ respectively. Next, we will show $(M^\ast,ev_M,coev_M)$ is the left dual of $M$. It is easy to see that $ev_M$ and $coev_M$ are morphisms in $^{B}_{H} \Bbb L$. 
For this, we need the following computation \begin{eqnarray*} &&~~~(r_M\circ(id_M \o ev_M)\circ a_{M,M^\ast,M} \circ (coev_M \o id_M)\circ l_M^{-1})(m)\\ &&=(r_M\circ(id_M \o ev_M)\circ a_{M,M^\ast,M})(\sum_{i} (e_i \o e^i) \o \mu^{-1}(m))\\ &&=(r_M\circ(id_M \o ev_M))(\sum_{i} \mu^{-1}(e_i) \o ( e^i \o m))\\ &&=r_M(\sum_{i} \mu^{-1}(e_i) \o e^i(m ) )\\ &&= r_M(\mu^{-1}(m) \o 1_k )=m. \end{eqnarray*} Similarly, we get \begin{eqnarray*} &&~~~(l_{M^\ast}\circ (ev_M \o id_{M^\ast})\circ a^{-1}_{M^\ast,M,M^\ast}\circ (id_{M^\ast} \o coev_M)\circ r_{M^\ast}^{-1})(f)\\ &&=(l_{M^\ast}\circ (ev_M \o id_{M^\ast})\circ a^{-1}_{M^\ast,M,M^\ast})(\sum_{i} {\mu}^{\ast-1}(f) \o (e_i \o e^i)) \\ &&=(l_{M^\ast}\circ (ev_M \o id_{M^\ast}))(\sum_{i} (f\o e_i) \o \mu^{\ast-1}(e^i))\\ &&=l_{M^\ast}(\sum_{i} f(e_i) \o \mu^{\ast-1}(e^i) )\\ &&=l_{M^\ast}(1_k \o {\mu}^{\ast-1}(f) )=f. \end{eqnarray*} So $^{B}_{H} \Bbb L$ admits left duality. The proof is finished.\hfill $\square$ \medskip \noindent{\bf Theorem 2.6.} The Hom-Long dimodule category $^{B}_{H} \Bbb L$ is an autonomous category. \medskip \noindent{\bf Proof.} By Proposition 2.5, it is sufficient to show that $^{B}_{H} \Bbb L$ is also a right autonomous category. In fact, for any $(M,\mu) \in {}^{B}_{H} \Bbb L$, its right dual $({}^\ast M, \widetilde{coev}_M, \widetilde{ev}_M)$ is defined as follows: $\bullet$ ${}^\ast M = Hom_k(M,k)$ as $k$-modules, with the Hom-module and Hom-comodule structures: \begin{eqnarray*} &( h\cdot f)(m)=f(S^{-1}_H\a^{-1}(h)\cdot\mu^{-2}(m)),\\ &f_{(-1)} \o f_{(0)}(m)=S_B\b^{-1}(m_{(-1)}) \o f(\mu^{-2} (m_{(0)})), \end{eqnarray*} where $f \in {}^\ast M$, $m \in M$, and the Hom-structure map $\mu^{\ast}$ of $^\ast M$ is $\mu^{\ast}(f)(m)=f(\mu^{-1}(m))$; $\bullet$ The right evaluation map and the right coevaluation map are given by \begin{eqnarray*} \widetilde{ev}_M: m\otimes f\longmapsto f(m), ~\widetilde{coev}_M: 1_k \longmapsto \sum a^i \o a_i.
\end{eqnarray*} where $a_i$ and $a^i$ are dual bases of $M$ and ${}^\ast M$ respectively. By a verification similar to that in Proposition 2.5, one may check that $^{B}_{H} \Bbb L$ is a right autonomous category, as required. This completes the proof.\hfill $\square$ \medskip Recall from \cite{ZGW} that for any finite dimensional Hom-Hopf algebra $B$, $B^\ast$ is also a Hom-Hopf algebra with the following structures: \begin{eqnarray*} &(f\ast g)(y):=f(\b^{-2}(y_1))g(\b^{-2}(y_2)),~~\Delta_{B^\ast}(f)(x\o y):=f(\b^{-2}(xy)),\\ &1_{B^\ast}:=\v,~~\v_{B^\ast}(f):=f(1_B),~~S_{B^\ast}:=S^\ast,~~\a_{B^\ast}(f):=f\circ \b^{-1}, \end{eqnarray*} where $x,y \in B$, $f,g \in B^{\ast}$. \medskip \noindent{\bf Theorem 2.7.} If $B$ is a finite dimensional Hom-Hopf algebra, then the Hom-Long dimodule category $^{B}_{H} \Bbb L$ can be identified with the category of left $B^{\ast op} \o H$-Hom-modules, where $B^{\ast op} \o H$ denotes the usual tensor product Hom-Hopf algebra. \medskip \noindent{\bf Proof.} Define the functor $\Psi$ from ${}_{B^{\ast op} \o H} \Bbb{M}$ to $^{B}_{H} \Bbb L$ by \begin{eqnarray*} \Psi(M):=M\mbox{~as~}k\mbox{-module~},~~~~\Psi(f):=f, \end{eqnarray*} where $(M, \mu, \rightharpoondown)$ is a $B^{\ast op} \o H$-Hom-module, $f:M\rightarrow N$ is a morphism of $B^{\ast op} \o H$-Hom-modules. Further, the $H$-action on $M$ is defined by \begin{eqnarray*} h \c m:= (\v_B \o h)\rightharpoondown m, ~~~~\mbox{for~all~}m\in M,~~h\in H, \end{eqnarray*} and the $B$-coaction on $M$ is given by \begin{eqnarray*} m_{(-1)} \o m_{(0)}:= \sum e_i \o (e^i \o 1_H)\rightharpoondown m, \end{eqnarray*} where $e_i$ and $e^i$ are dual bases of $B$ and $B^\ast$ respectively. First, we will show $(M,\mu,\c)$ is a left $(H,\a)$-Hom-module.
Actually, for any $m \in M$, $h,g \in H$, we have $ 1_H \c m = (\v_B \o 1_H)\rightharpoondown m = \mu(m), $ and \begin{eqnarray*} \a(h)\c(g \c m) &=& (\v_B \o \a(h)) \rightharpoondown ( (\v_B \o g) \rightharpoondown m ) \\ &=& (\v_B \o hg)\rightharpoondown \mu(m)=(hg) \c \mu(m), \end{eqnarray*} which implies $(M,\mu,\c) \in {}_H \Bbb{M}$. Second, one can show that $(M,\mu) \in {}^B \Bbb{M}$ in a similar way. At last, for any $m \in M$, $h \in H$, we have \begin{eqnarray*} (h \c m)_{(-1)} \o (h \c m)_{(0)} &=& \sum e_i \o (e^i \o 1_H) \rightharpoondown (h \c m) \\ &=& \sum e_i \o ( e^i \o \a(h) ) \rightharpoondown \mu(m) \\ &=& \sum \b(e_i) \o \big( (\v_B \o 1_H) (e^i \o h ) \big) \rightharpoondown \mu(m) \\ &=& \sum \b(e_i) \o \big( (\v_B \o h ) (e^i \o 1_H ) \big) \rightharpoondown \mu(m) \\ &=& \sum \b(e_i) \o \a(h) \c ((e^i \o 1_H ) \rightharpoondown \mu(m))\\ &=& \b(m_{(-1)}) \o \a(h) \c m_{(0)}, \end{eqnarray*} which implies $(M,\mu) \in {}^B_H\Bbb{L}$. Conversely, for any object $(M,\mu)$, $(N,\nu)$, and morphism $f:M\rightarrow N$ in ${}^B_H\Bbb{L}$, one can define a functor $\Phi$ from ${}^B_H\Bbb{L}$ to ${}_{B^{\ast op} \o H} \Bbb{M}$ by \begin{eqnarray*} \Phi (M):=M\mbox{~as~}k\mbox{-modules~},~~~~\Phi(f):=f, \end{eqnarray*} where the $(B^{\ast op}\o H,\b^*\o \a)$-Hom-module structure on $M$ is given by \begin{eqnarray*} (p \o h)\rightharpoondown m = p(m_{(-1)}) h \c \mu^{-1}(m_{(0)}), \end{eqnarray*} for all $p \in B^{\ast},~h \in H,~m \in M.$ It is straightforward to check that $(M,\mu, \rightharpoondown)$ is an object in ${}_{B^{\ast op} \o H} \Bbb{M}$, and hence $\Phi$ is well defined. Note that $\Phi$ and $\Psi$ are inverse to each other. Hence the conclusion holds. \hfill $\square$
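\medskip \noindent{\bf Remark 2.8.} Observe that the functor $\Phi$ recovers the original structure maps. Indeed, for any $(M,\mu)\in{}^{B}_{H} \Bbb L$, $h\in H$ and $m\in M$, we have \begin{eqnarray*} (\v_B \o h)\rightharpoondown m=\v_B(m_{(-1)})h\c \mu^{-1}(m_{(0)})=h\c m \end{eqnarray*} by the counit property of the coaction, and \begin{eqnarray*} \sum e_i \o (e^i \o 1_H)\rightharpoondown m=\sum e^i(m_{(-1)})e_i \o 1_H\c \mu^{-1}(m_{(0)})=m_{(-1)}\o m_{(0)}, \end{eqnarray*} so $\Psi\circ\Phi$ is indeed the identity on structures.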
\section{New braided monoidal categories over Hom-Long dimodules} \def\theequation{\arabic{section}.\arabic{equation}} \setcounter{equation} {0} In this section, we will prove that the Hom-Long dimodule category $^{B}_{H} \Bbb L$ over a quasitriangular Hom-Hopf algebra $(H,R,\alpha)$ and a coquasitriangular Hom-Hopf algebra $(B,\langle|\rangle,\beta)$ is a braided monoidal subcategory of the Hom-Yetter-Drinfeld category $^{H\o B}_{H\o B} \Bbb{HYD}$. \medskip \noindent{\bf Theorem 3.1.} Let $(H,R,\alpha)$ be a quasitriangular Hom-Hopf algebra and $(B,\langle|\rangle,\beta)$ a coquasitriangular Hom-Hopf algebra. Then the category $^{B}_{H} \Bbb L$ is a braided monoidal category with braiding \begin{eqnarray} C_{M,N}: M\o N\rightarrow N\o M, m\o n\rightarrow \langle m_{(-1)}|n_{(-1)}\rangle R^{(2)}\c \nu^{-2}(n_{(0)})\o R^{(1)}\c \mu^{-2}(m_{(0)}), \end{eqnarray} for all $m\in (M,\mu)\in{}^{B}_{H} \Bbb L$ and $ n\in (N,\nu)\in{}^{B}_{H} \Bbb L.$ \medskip \noindent{\bf Proof.} We will first show that the braiding $C_{M,N}$ is a morphism in $^{B}_{H} \Bbb L$.
In fact, for any $m\in M, n\in N$ and $h\in H$, we have \begin{eqnarray*} &&C_{M,N}(h_{1}\c m\o h_{2}\c n)\\ &=&\langle(h_{1}\c m)_{(-1)}|(h_{2}\c n)_{(-1)}\rangle R^{(2)}\c \nu^{-2}(h_{2}\c n)_{(0)}\o R^{(1)}\c \mu^{-2}(h_{1}\c m)_{(0)}\\ &\stackrel{(2.1)}{=}&\langle\beta(m_{(-1)})|\beta(n_{(-1)})\rangle R^{(2)}\c \nu^{-2}(\alpha(h_{2})\c n_{(0)})\o R^{(1)}\c \mu^{-2}(\alpha(h_{1})\c m_{(0)})\\ &\stackrel{(HM2)}{=}&\langle m_{(-1)}|n_{(-1)}\rangle \alpha^{-1}(R^{(2)}h_{2})\c \nu^{-1}(n_{(0)})\o\alpha^{-1}(R^{(1)}h_{1})\c \mu^{-1}(m_{(0)}),\\ &&h\c C_{M,N}(m\o n)\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle h\c(R^{(2)}\c \nu^{-2}(n_{(0)})\o R^{(1)}\c\mu^{-2}( m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle h_{1}\c(\a^{-1}(R^{(2)})\c \nu^{-2}(n_{(0)}))\o h_{2}\c(\a^{-1}(R^{(1)})\c \mu^{-2}(m_{(0)}))\\ &\stackrel{(HM2)}{=}&\langle m_{(-1)}|n_{(-1)}\rangle \alpha^{-1}(h_{1}R^{(2)})\c \nu^{-1}(n_{(0)})\o \alpha^{-1}(h_{2}R^{(1)})\c \mu^{-1}(m_{(0)})\\ &\stackrel{(QHA4)}{=}&\langle m_{(-1)}|n_{(-1)}\rangle \alpha^{-1}(R^{(2)}h_{2})\c \nu^{-1}(n_{(0)})\o\alpha^{-1}(R^{(1)}h_{1})\c \mu^{-1}(m_{(0)}). \end{eqnarray*} The third equality holds since $\langle |\rangle$ is $\beta$-invariant and the fifth equality holds since $R$ is $\alpha$-invariant. So $C_{M,N}$ is left $(H,\alpha)$-linear. Similarly, one may check that $C_{M,N}$ is left $(B,\beta)$-colinear. Now we prove that the braiding $C_{M,N}$ is natural. For any $(M,\mu), (M',\mu'),$ $(N,\nu), (N',\nu')$ $\in{}^{B}_{H} \Bbb L$, let $f: M\rightarrow M'$ and $g: N\rightarrow N'$ be two morphisms in $^{B}_{H} \Bbb L$; it is sufficient to verify the identity $(g\o f)\circ C_{M,N}=C_{M',N'}\circ(f\o g)$.
For this purpose, we take $m\in M, n\in N$ and do the following calculation: \begin{eqnarray*} (g\o f)\circ C_{M,N}(m\o n) &=&\langle m_{(-1)}|n_{(-1)}\rangle(g\o f)(R^{(2)}\c \nu^{-2}(n_{(0)})\o R^{(1)}\c \mu^{-2}(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle g(R^{(2)}\c \nu^{-2}(n_{(0)}))\o f(R^{(1)}\c \mu^{-2}(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle R^{(2)}\c g(\nu^{-2}(n_{(0)}))\o R^{(1)}\c f(\mu^{-2}(m_{(0)})),\\ C_{M',N'}\circ(f\o g)(m\o n) &=&C_{M',N'}(f(m)\o g(n))\\ &=&\langle f(m)_{(-1)}|g(n)_{(-1)}\rangle R^{(2)}\c \nu^{-2}(g(n)_{(0)})\o R^{(1)}\c \mu^{-2}(f(m)_{(0)})\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle R^{(2)}\c \nu^{-2}(g(n_{(0)}))\o R^{(1)}\c \mu^{-2}(f(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle R^{(2)}\c g(\nu^{-2}(n_{(0)}))\o R^{(1)}\c f(\mu^{-2}(m_{(0)})). \end{eqnarray*} The sixth equality holds since $f,g$ are left $(B,\beta)$-colinear. So the braiding $C_{M,N}$ is natural, as needed. Next, we will show that the braiding $C_{M,N}$ is an isomorphism with inverse map \begin{eqnarray*} C^{-1}_{M,N}: N\o M\rightarrow M\o N , n\o m\rightarrow \langle S^{-1}(m_{(-1)})|n_{(-1)}\rangle S(R^{(1)})\c \mu^{-2}(m_{(0)})\o R^{(2)}\c \nu^{-2}(n_{(0)}).
\end{eqnarray*} For any $m\in M, n\in N$, we have \begin{eqnarray*} &&C^{-1}_{M,N}\circ C_{M,N}(m\o n)\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle C^{-1}_{M,N}(R^{(2)}\c \nu^{-2}(n_{(0)})\o R^{(1)}\c \mu^{-2}(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle\langle S^{-1}(\beta^{-1}(m_{(0)(-1)}))| \beta^{-1}(n_{(0)(-1)})\rangle\\ &&~~~~~~S(r^{(1)})\c \mu^{-2}(\alpha(R^{(1)})\c \mu^{-2}(m_{(0)(0)}))\o r^{(2)}\c \nu^{-2}(\alpha(R^{(2)})\c \nu^{-2}(n_{(0)(0)}))\\ &\stackrel{(HCM2)}{=}&\langle \beta^{-1}(m_{(-1)1})|\beta^{-1}(n_{(-1)1})\rangle\langle S^{-1}(\beta^{-1}(m_{(-1)2}))| \beta^{-1}(n_{(-1)2})\rangle\\ &&~~~~~~S(r^{(1)})\c (\alpha^{-1}(R^{(1)})\c \mu^{-3}(m_{(0)}))\o r^{(2)}\c (\alpha^{-1}(R^{(2)})\c \nu^{-3}(n_{(0)}))\\ &\stackrel{(HM2)}{=}&\langle m_{(-1)1}|n_{(-1)1}\rangle\langle S^{-1}(m_{(-1)2})| n_{(-1)2}\rangle\\ &&~~~~~~\alpha^{-1}(S(r^{(1)})R^{(1)})\c \mu^{-2}(m_{(0)})\o \alpha^{-1}(r^{(2)}R^{(2)})\c \nu^{-2}(n_{(0)})\\ &\stackrel{(CHA1)}{=}&\langle S^{-1}(\beta^{-1}(m_{(-1)2}))\beta^{-1}(m_{(-1)1})|\b(n_{(-1)})\rangle 1_{H}\c \mu^{-2}(m_{(0)})\o1_{H}\c \nu^{-2}(n_{(0)})\\ &=&\langle\beta^{-2}(S^{-1}(m_{(-1)2})m_{(-1)1})|n_{(-1)}\rangle 1_{H}\c \mu^{-2}(m_{(0)})\o 1_{H}\c \nu^{-2}(n_{(0)})\\ &=&\langle \epsilon(m_{(-1)})1_{B}|n_{(-1)}\rangle \mu^{-1}(m_{(0)})\o \nu^{-1}(n_{(0)})\\ &=&\epsilon(m_{(-1)})\epsilon(n_{(-1)}) \mu^{-1}(m_{(0)})\o \nu^{-1}(n_{(0)})\\ &=&m\o n. \end{eqnarray*} The second equality holds since $\rho(R^{(2)}\c \nu^{-2}(n_{(0)}))=\beta^{-1}(n_{(0)(-1)})\o \alpha(R^{(2)})\c \nu^{-2}(n_{(0)(0)})$ and the fifth equality holds since $R^{-1}=S(r^{(1)})\o r^{(2)}$. Now let us verify the hexagon axioms ($H_{1}, H_{2}$) from Section XIII.1.1 of \cite{Kassel}.
We need to show that the following diagram ($H_{1}$) commutes for any $(U,\mu), (V,\nu), (W,\omega)\in{}^{B}_{H} \Bbb L$: $$\aligned \xymatrix{ (U\otimes V)\otimes W \ar[d]_{C_{U,V} \o id_W} \ar[rr]^{a_{U,V,W}} && U\otimes (V\otimes W) \ar[rr]^{C_{U,V \o W}} & & (V \otimes W)\otimes U \ar[d]^{a_{V,W,U}} \\ (V \o U) \o W \ar[rr]^{a_{V,U,W}} & & V \o (U \o W) \ar[rr]^{id_V \o C_{U,W}} && V \o (W \o U),} \endaligned$$ For this purpose, take $u\in U, v\in V, w\in W$; then we have \begin{eqnarray*} &&a_{V,W,U}\circ C_{U,V\o W}\circ a_{U,V,W}((u\o v)\o w)\\ &=&a_{V,W,U}\circ C_{U,V\o W}(\mu^{-1}(u)\o(v\o \omega(w)))\\ &=&\langle \beta^{-1}(u_{(-1)})|\b^{-2}(v_{(-1)})\beta^{-1}(w_{(-1)})\rangle a_{V,W,U}\\ &&~~~~~~~~~~(R^{(2)}\c (\nu^{-2}\o\omega^{-2})(v_{(0)}\o \omega(w_{(0)}))\o R^{(1)}\c \mu^{-3}(u_{(0)}))\\ &=&\langle \beta(u_{(-1)})|v_{(-1)}\beta(w_{(-1)})\rangle a_{V,W,U}\\ &&~~~~~~~~~~(R^{(2)}\c (\nu^{-2}(v_{(0)})\o \omega^{-1}(w_{(0)}))\o R^{(1)}\c \mu^{-3}(u_{(0)}))\\ &=&\langle \beta(u_{(-1)})|v_{(-1)}\beta(w_{(-1)})\rangle\\ &&~~~~~~~~~~\a^{-1}(R^{(2)}_{1})\c \nu^{-3}(v_{(0)})\o(R^{(2)}_{2}\c \omega^{-1}(w_{(0)})\o\a(R^{(1)})\c \mu^{-2}(u_{(0)}))\\ &\stackrel{(QHA3)}{=}&\langle \beta(u_{(-1)})|v_{(-1)}\beta(w_{(-1)})\rangle\\ &&~~~~~~~~~~r^{(2)}\c \nu^{-3}(v_{(0)})\o(\a(R^{(2)})\c \omega^{-1}(w_{(0)})\o(R^{(1)}r^{(1)})\c \mu^{-2}(u_{(0)})) \end{eqnarray*} and \begin{eqnarray*} &&(id_{V}\o C_{U,W})\circ a_{V,U,W}\circ(C_{U,V}\o id_{W})((u\o v)\o w)\\ &=&\langle u_{(-1)}|v_{(-1)}\rangle(id_{V}\o C_{U,W})\circ a_{V,U,W}((R^{(2)}\c \nu^{-2}(v_{(0)})\o R^{(1)}\c \mu^{-2}(u_{(0)}))\o w)\\ &=&\langle u_{(-1)}|v_{(-1)}\rangle(id_{V}\o C_{U,W})( \a^{-1}(R^{(2)})\c \nu^{-3}(v_{(0)})\o(R^{(1)}\c \mu^{-2}(u_{(0)})\o \omega(w)))\\ &=&\langle u_{(-1)}|v_{(-1)}\rangle\langle \beta^{-1}(u_{(0)(-1)})|\beta(w_{(-1)})\rangle\\ &&~~~~~~\a^{-1}(R^{(2)})\c \nu^{-3}(v_{(0)})\o(r^{(2)}\c\omega^{-1}(w_{(0)})\o r^{(1)}\c\mu^{-2}(\a(R^{(1)})\c \mu^{-2}(u_{(0)(0)})))\\
&\stackrel{(HCM2)}{=}&\langle \beta^{-1}(u_{(-1)1})|v_{(-1)}\rangle\langle \beta^{-1}(u_{(-1)2})|\beta(w_{(-1)})\rangle\\ &&~~~~~~\a^{-1}(R^{(2)})\c \nu^{-3}(v_{(0)})\o(r^{(2)}\c\omega^{-1}(w_{(0)})\o\a^{-1}(r^{(1)}R^{(1)})\c \mu^{-2}(u_{(0)}))\\ &\stackrel{(CHA2)}{=}&\langle u_{(-1)}|\b^{-1}(v_{(-1)})w_{(-1)}\rangle\\ &&~~~~~~\a^{-1}(R^{(2)})\c \nu^{-3}(v_{(0)})\o(r^{(2)}\c\omega^{-1}(w_{(0)})\o\a^{-1}(r^{(1)}R^{(1)})\c \mu^{-2}(u_{(0)}))\\ &=&\langle \beta(u_{(-1)})|v_{(-1)}\beta(w_{(-1)})\rangle\\ &&~~~~~~ R^{(2)}\c \nu^{-3}(v_{(0)})\o(\a(r^{(2)})\c \omega^{-1}(w_{(0)})\o(r^{(1)}R^{(1)})\c \mu^{-2}(u_{(0)})). \end{eqnarray*} Since $r=R$, it follows that $a_{V,W,U}\circ C_{U,V\o W}\circ a_{U,V,W}=(id_{V}\o C_{U,W})\circ a_{V,U,W}\circ(C_{U,V}\o id_{W})$, that is, the diagram ($H_{1}$) commutes. Now we check that the diagram ($H_{2}$) commutes for any $(U,\mu), (V,\nu), (W,\omega)\in{} ^{B}_{H}\Bbb{L}$: $$\aligned \xymatrix{ U\otimes (V\otimes W) \ar[d]_{id_U \o C_{V,W}} \ar[rr]^{a^{-1}_{U,V,W}} && (U\otimes V)\otimes W \ar[rr]^{C_{U\o V, W}} & & W \otimes (U\otimes V) \ar[d]^{a^{-1}_{W,U,V}} \\ U \o (W\o V) \ar[rr]^{a^{-1}_{U,W, V}} & & (U \o W)\o V \ar[rr]^{C_{U,W} \o id_V } && (W \o U) \o V .} \endaligned$$ In fact, for any $u\in U, v\in V, w\in W$, we obtain \begin{eqnarray*} &&a^{-1}_{W,U,V}\circ C_{U\o V, W}\circ a^{-1}_{U,V,W}(u\o (v\o w))\\ &=&a^{-1}_{W,U,V}\circ C_{U\o V, W}((\mu(u)\o v)\o \omega^{-1}(w))\\ &=&\langle \beta^{-1}(u_{(-1)})\beta^{-2}(v_{(-1)})|\beta^{-1}(w_{(-1)})\rangle a^{-1}_{W,U,V}\\ &&~~~~~~~~~~(R^{(2)}\c \omega^{-3}(w_{(0)})\o R^{(1)}\c (\mu^{-1}(u_{(0)})\o \nu^{-2}(v_{(0)})))\\ &=&\langle \beta(u_{(-1)})v_{(-1)}|\beta(w_{(-1)})\rangle a^{-1}_{W,U,V}\\ &&~~~~~~~~~~(R^{(2)}\c \omega^{-3}(w_{(0)})\o (R^{(1)}_{1}\c \mu^{-1}(u_{(0)})\o R^{(1)}_{2}\c \nu^{-2}(v_{(0)})))\\ &=&\langle \beta(u_{(-1)})v_{(-1)}|\beta(w_{(-1)})\rangle\\ &&~~~~~~~~~~(\omega(R^{(2)}\c \omega^{-3}(w_{(0)}))\o R^{(1)}_{1}\c \mu^{-1}(u_{(0)}))\o\a^{-1}(R^{(1)}_{2})\c
\nu^{-3}(v_{(0)})\\ &=&\langle \beta(u_{(-1)})v_{(-1)}|\beta(w_{(-1)})\rangle\\ &&~~~~~~~~~~ (\alpha(R^{(2)})\c\omega^{-2}(w_{(0)})\o R^{(1)}_{1}\c \mu^{-1}(u_{(0)}))\o \alpha^{-1}(R^{(1)}_{2})\c \nu^{-3}(v_{(0)})\\ &\stackrel{(QHA2)}{=}&\langle \beta(u_{(-1)})v_{(-1)}|\beta(w_{(-1)})\rangle\\ &&~~~~~~~~~~(\alpha^{-1}(R^{(2)}r^{(2)})\c\omega^{-2}(w_{(0)})\o R^{(1)}\c \mu^{-1}(u_{(0)}))\o \alpha^{-1}(r^{(1)})\c \nu^{-3}(v_{(0)}). \end{eqnarray*} Similarly, we can get \begin{eqnarray*} &&(C_{U,W}\o id_{V})\circ a^{-1}_{U,W,V}\circ(id_{U}\o C_{V,W})(u\o (v\o w))\\ &=&\langle v_{(-1)}|w_{(-1)}\rangle(C_{U,W}\o id_{V})\circ a^{-1}_{U,W,V} (u\o (R^{(2)}\c \omega^{-2}(w_{(0)})\o R^{(1)}\c \nu^{-2}(v_{(0)})))\\ &=&\langle v_{(-1)}|w_{(-1)}\rangle(C_{U,W}\o id_{V}) ((\mu(u)\o R^{(2)}\c\omega^{-2}(w_{(0)}))\o\a^{-1}(R^{(1)})\c \nu^{-3}(v_{(0)}))\\ &=&\langle v_{(-1)}|w_{(-1)}\rangle\langle \beta(u_{(-1)})|\beta^{-1}(w_{(0)(-1)})\rangle\\ &&~~~~~(r^{(2)}\c\omega^{-2}(\alpha(R^{(2)})\c \omega^{-2}(w_{(0)(0)}))\o r^{(1)}\c \mu^{-1}(u_{(0)}))\o\a^{-1}(R^{(1)})\c \nu^{-3}(v_{(0)})\\ &\stackrel{(HCM2)}{=}&\langle v_{(-1)}|\beta^{-1}(w_{(-1)1})\rangle\langle \beta(u_{(-1)})|\beta^{-1}(w_{(-1)2})\rangle\\ &&~~~~~(r^{(2)}\c(\a^{-1}(R^{(2)})\c \omega^{-3}(w_{(0)}))\o r^{(1)}\c \mu^{-1}(u_{(0)}))\o\alpha^{-1}(R^{(1)})\c \nu^{-3}(v_{(0)})\\ &\stackrel{(CHA1)}{=}&\langle u_{(-1)}\beta^{-1}(v_{(-1)})|w_{(-1)}\rangle\\ &&~~~~~(\alpha^{-1}(r^{(2)}R^{(2)})\c \omega^{-2}(w_{(0)})\o r^{(1)}\c \mu^{-1}(u_{(0)}))\o\alpha^{-1}(R^{(1)})\c \nu^{-3}(v_{(0)}). \end{eqnarray*} So the diagram ($H_{2}$) commutes since $r=R$. This ends the proof. \hfill $\square$
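\medskip \noindent{\bf Example.} As an illustration of the braiding (3.1), consider the Hom-Long dimodule $(H\o B, \alpha\o\beta)$ of Example 2.2. For any $g,g'\in H$ and $x,x'\in B$, a direct computation gives \begin{eqnarray*} C_{H\o B,H\o B}((g\o x)\o(g'\o x'))=\langle x_{1}|x'_{1}\rangle (R^{(2)}\alpha^{-1}(g')\o \beta^{-1}(x'_{2}))\o(R^{(1)}\alpha^{-1}(g)\o \beta^{-1}(x_{2})). \end{eqnarray*}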
\medskip \noindent{\bf Corollary 3.2.} Under the hypotheses of Theorem 3.1, the braiding $C$ is a solution of the quantum Yang-Baxter equation \begin{eqnarray*} &&(id_{W}\o C_{U,V})\circ a_{W,U,V}\circ( C_{U,W}\o id_{V})\circ a^{-1}_{W,V,U}\circ( id_{U}\o C_{V,W})\circ a_{U,V,W}\nonumber\\ &=&a_{W,V,U}\circ( C_{W,V}\o id_{U})\circ a^{-1}_{W,V,U}\circ( id_{V}\o C_{U,W})\circ a_{V,U,W}\circ( C_{U,V}\o id_{W}). \end{eqnarray*} \noindent{\bf Proof.} Straightforward. \hfill $\square$ \medskip \noindent{\bf Lemma 3.3.} Let $(H,R,\alpha)$ be a quasitriangular Hom-Hopf algebra and $(B,\langle|\rangle,\beta)$ a coquasitriangular Hom-Hopf algebra. Define a linear map \begin{eqnarray*} (H\o B)\o M\rightarrow M,(h\o x)\rightharpoonup m=\langle x|m_{(-1)}\rangle \a^{-3}(h)\c \mu^{-1}(m_{(0)}), \end{eqnarray*} for any $h\in H, x\in B$ and $m\in(M,\mu)\in{}^{B}_{H} \Bbb L$. Then $(M,\mu)$ becomes a left $(H\o B)$-Hom-module. \medskip \noindent{\bf Proof.} It is sufficient to show that the Hom-module action defined above satisfies Definition 1.2. For any $h,g\in H, x,y\in B$ and $m\in M$, we have \begin{eqnarray*} (1_{H}\o 1_{B})\rightharpoonup m =\langle 1_{B}|m_{(-1)}\rangle 1_{H}\c \mu^{-1}(m_{(0)}) =\epsilon(m_{(-1)})m_{(0)}=\mu(m). \end{eqnarray*} That is, $(1_{H}\o 1_{B})\rightharpoonup m=\mu(m)$. For the equality $\mu((h\o x)\rightharpoonup m)=(\alpha(h)\o \beta(x))\rightharpoonup \mu(m)$, we have \begin{eqnarray*} (\alpha(h)\o \beta(x))\rightharpoonup \mu(m) &=&\langle \beta(x)|\beta(m_{(-1)})\rangle \alpha^{-2}(h)\c m_{(0)}\\ &=&\langle x|m_{(-1)}\rangle \alpha^{-2}(h)\c m_{(0)} =\mu((h\o x)\rightharpoonup m), \end{eqnarray*} as required. Finally, we check the expression $((h\o x)(g\o y))\rightharpoonup \mu(m)=(\alpha(h)\o\beta(x))\rightharpoonup((g\o y)\rightharpoonup m)$.
For this, we calculate \begin{eqnarray*} &&(\alpha(h)\o\beta(x))\rightharpoonup ((g\o y)\rightharpoonup m)\\ &=&\langle y|m_{(-1)}\rangle(\alpha(h)\o\beta(x))\rightharpoonup (\alpha^{-3}(g)\c \mu^{-1}(m_{(0)}))\\ &=&\langle y|m_{(-1)}\rangle\langle \beta(x)|m_{(0)(-1)}\rangle\alpha^{-2}(h)\c(\alpha^{-3}(g)\c\mu^{-2}(m_{(0)(0)}))\\ &\stackrel{(HCM2)}{=}&\langle y|\beta^{-1}(m_{(-1)1})\rangle\langle x|\beta^{-1}(m_{(-1)2})\rangle \alpha^{-3}(hg)\c m_{(0)}\\ &\stackrel{(CHA1)}{=}&\langle xy|\beta(m_{(-1)})\rangle\alpha^{-3}(hg)\c m_{(0)}\\ &=&((h\o x)(g\o y))\rightharpoonup \mu(m). \end{eqnarray*} So $(M,\mu)$ is a left $(H\o B)$-Hom-module. The proof is completed. \hfill $\square$ \medskip \noindent{\bf Lemma 3.4.} Let $(H,R,\alpha)$ be a quasitriangular Hom-Hopf algebra and $(B,\langle|\rangle,\beta)$ a coquasitriangular Hom-Hopf algebra. Define a linear map \begin{eqnarray*} \overline{\rho}: M\rightarrow (H\o B)\o M,~\overline{\rho}(m)=m_{[-1]}\o m_{[0]}= R^{(2)} \o \beta^{-3}(m_{(-1)})\o R^{(1)}\c \mu^{-1}(m_{(0)}), \end{eqnarray*} for any $m\in (M,\mu)$. Then $(M,\mu)$ becomes a left $(H\o B)$-Hom-comodule. \medskip \noindent{\bf Proof.} We first show that $\overline{\rho}$ satisfies Eq. (HCM2). On the one side, we have \begin{eqnarray*} &&\Delta(m_{[-1]})\o \mu(m_{[0]})\\ &=&( R^{(2)}_{1} \o\beta^{-3}(m_{(-1)1}))\o( R^{(2)}_{2} \o\beta^{-3}(m_{(-1)2}))\o\alpha(R^{(1)})\c m_{(0)}\\ &=&(\a(r^{(2)})\o\beta^{-2}(m_{(-1)}))\o(\a(R^{(2)})\o\beta^{-3}(m_{(0)(-1)}))\o\alpha(R^{(1)})\c(r^{(1)}\c \mu^{-2}(m_{(0)(0)})). \end{eqnarray*} On the other side, we have \begin{eqnarray*} &&~~(\alpha\o\beta)(m_{[-1]})\o \overline{\rho}(m_{[0]})\\ &&=(\a(r^{(2)})\o \b^{-2}(m_{(-1)}))\o(R^{(2)} \o \beta^{-3}( ( r^{(1)} \c \mu^{-1}(m_{(0)}) )_{(-1)} )) \o R^{(1)} \\ &&~~~~~~~~~~\c \mu^{-1}(( r^{(1)} \c \mu^{-1}(m_{(0)}) )_{(0)} ) \\ &&= (\a(r^{(2)})\o\beta^{-2}(m_{(-1)}))\o(R^{(2)}\o\beta^{-3}(m_{(0)(-1)}))\o R^{(1)} \c (r^{(1)}\c \mu^{-2}(m_{(0)(0)})).
\end{eqnarray*} Since $R$ is $\alpha$-invariant, we have $\Delta(m_{[-1]})\o \mu(m_{[0]})=(\alpha\o\beta)(m_{[-1]})\o \overline{\rho}(m_{[0]})$, as needed. For Eq. (HCM1), we have \begin{eqnarray*} (\epsilon_{H}\o\epsilon_{B})(m_{[-1]})m_{[0]} &=&\epsilon_{H}(R^{(2)})\epsilon_{B}(m_{(-1)})R^{(1)}\c \mu^{-1}(m_{(0)})\\ &=&1_{H}\c m =\mu(m),\\ (\alpha\o\beta)(m_{[-1]})\o \mu(m_{[0]}) &=&(\alpha(R^{(2)})\o \beta^{-2}(m_{(-1)}))\o \mu(R^{(1)}\c \mu^{-1}(m_{(0)}))\\ &=& R^{(2)} \o \beta^{-3}(\beta(m_{(-1)}))\o R^{(1)}\c \mu^{-1}(\mu(m_{(0)}))\\ &=& \overline{\rho}(\mu(m)), \end{eqnarray*} as desired. And this finishes the proof. \hfill $\square$ \medskip \noindent{\bf Theorem 3.5.} Let $(H,R,\alpha)$ be a quasitriangular Hom-Hopf algebra and $(B,\langle|\rangle,\beta)$ a coquasitriangular Hom-Hopf algebra. Then the Hom-Long dimodule category $^{B}_{H} \Bbb L$ is a monoidal subcategory of the Hom-Yetter-Drinfeld category $^{H\o B}_{H\o B}\Bbb {HYD}$. \medskip \noindent{\bf Proof.} Let $m\in(M,\mu)\in {}^{B}_{H}\Bbb L$ and $h\in H$. Here we first note that $\rho(h\c \mu^{-1}(m_{(0)}))=m_{(0)(-1)}\o\alpha(h)\c \mu^{-1}(m_{(0)(0)})$. It is sufficient to show that the left $(H\o B)$-Hom-module action in Lemma 3.3 and the left $(H\o B)$-Hom-comodule structure in Lemma 3.4 satisfy the compatibility condition Eq. (HYD).
Indeed, for any $h \in H$, $x \in B$, $m \in M$, we have \begin{eqnarray*} &&(h_1 \o x_1)(\a \o \b )(m_{[-1]}) \o (\a^3(h_2) \o \b^3(x_2))\rightharpoonup m_{[0]} \\ &=& h_1 \a(R^{(2)}) \o x_1 \b^{-2}(m_{(-1)}) \o \langle \b^3(x_2) | (R^{(1)} \c \mu^{-1}(m_{(0)}))_{(-1)} \rangle h_2 \c \mu^{-1}( (R^{(1)} \c \mu^{-1}(m_{(0)}))_{(0)} )\\ &=& h_1 \a(R^{(2)}) \o x_1 \b^{-3}(m_{(-1)1}) \o \langle \b^3(x_2) | m_{(-1)2} \rangle h_2 \c ( R^{(1)} \c \mu^{-1}(m_{(0)}) )\\ &=& h_1 \a(R^{(2)}) \o x_1 \b^{-3}(m_{(-1)1}) \o \langle x_2 | \b^{-3}(m_{(-1)2}) \rangle \a^{-1}(h_2 \a(R^{(1)}) ) \c m_{(0)} \\ &=& R^{(2)} h_2 \o \b^{-3}(m_{(-1)2}) x_2 \langle x_1 | \b^{-3}(m_{(-1)1}) \rangle \o (\a^{-1}(R^{(1)}) \a^{-1}(h_1)) \c m_{(0)} \\ &=& \langle \b^2(x_1) | m_{(-1)} \rangle R^{(2)} h_2 \o \b^{-3}(m_{(0)(-1)}) x_2 \o (\a^{-1}(R^{(1)}) \a^{-1}(h_1)) \c \mu^{-1}(m_{(0)(0)}) \\ &=& \langle \b^2(x_1) | m_{(-1)} \rangle (R^{(2)} \o \b^{-3}( (\a^{-1}(h_1) \c \mu^{-1}(m_{(0)}) )_{(-1)} ) )(h_2 \o x_2) \\ &&~~~~~~~~~~~~\o R^{(1)} \c \mu^{-1}((\a^{-1}(h_1) \c \mu^{-1}(m_{(0)}) )_{(0)}) \\ &=& \big((\a^2(h_1) \o \b^2(x_1))\rightharpoonup m\big)_{[-1]} (h_2 \o x_2) \o \big((\a^2(h_1) \o \b^2(x_1))\rightharpoonup m\big)_{[0]}. \end{eqnarray*} So $(M,\mu)\in{}^{H\o B}_{H\o B}\Bbb{HYD}$. The proof is completed. \hfill $\square$ \medskip \noindent{\bf Proposition 3.6.} Under the hypotheses of Theorem 3.5, $^{B}_{H}\Bbb{L}$ is a braided monoidal subcategory of $^{H\o B}_{H\o B}\Bbb{HYD}$. \medskip \noindent{\bf Proof.} It is sufficient to show that the braiding in the category $^{B}_{H}\Bbb{L}$ is compatible with the braiding in $^{H\o B}_{H\o B}\Bbb{HYD}$.
In fact, for any $m\in (M,\mu)$ and $n\in (N,\nu)$, we have \begin{eqnarray*} C_{M,N}(m\o n) &=& (\a^2(R^{(2)}) \o \b^{-1}(m_{(-1)}))\rightharpoonup \nu^{-1}(n) \o \a^{-1}(R^{(1)}) \c \mu^{-2}(m_{(0)}) \\ &=& \langle \b^{-1}(m_{(-1)}) | \b^{-1}(n_{(-1)}) \rangle \a^{-1}(R^{(2)}) \c \nu^{-2}(n_{(0)}) \o \a^{-1}(R^{(1)}) \c \mu^{-2}(m_{(0)}) \\ &=& \langle m_{(-1)} | n_{(-1)} \rangle R^{(2)} \c \nu^{-2}(n_{(0)}) \o R^{(1)} \c \mu^{-2}(m_{(0)}), \end{eqnarray*} as desired. This finishes the proof. \hfill $\square$ \section{Symmetries in Hom-Long dimodule categories} \def\theequation{\arabic{section}.\arabic{equation}} \setcounter{equation} {0} In this section, we obtain a sufficient condition for the Hom-Long dimodule category $^{B}_{H} \Bbb L$ to be symmetric. \medskip Let $\mathcal{C}$ be a monoidal category and $C$ a braiding on $\mathcal{C}$. The braiding $C$ is called a symmetry \cite{Joyal1993, Kassel} if $C_{Y,X}\circ C_{X,Y}=id_{X\otimes Y}$ for all $X,Y\in \mathcal{C}$, and in this case the category $\mathcal{C}$ is called symmetric. \medskip \noindent{\bf Proposition 4.1.} Let $(H,R,\alpha)$ be a triangular Hom-Hopf algebra and $(B,\beta)$ a Hom-Hopf algebra. Then the category $_{H} \Bbb M$ of left $(H,\alpha)$-Hom-modules is a symmetric subcategory of $^{B}_{H} \Bbb L$ under the left $(B,\beta)$-comodule structure $\rho(m)=1_{B}\otimes \mu(m)$, where $m\in (M,\mu)\in{}_{H} \Bbb M$, and the braiding is defined as \begin{eqnarray*} C_{M,N}: M\o N\rightarrow N\o M, m\o n\rightarrow R^{(2)}\c \nu^{-1}(n)\o R^{(1)}\c \mu^{-1}(m), \end{eqnarray*} for all $m\in (M,\mu)\in{}_{H} \Bbb M, n\in (N,\nu)\in{}_{H} \Bbb M.$ \medskip \noindent{\bf Proof.} It is clear that $(M,\rho,\mu)$ is a left $(B,\beta)$-Hom-comodule under the left $(B,\beta)$-comodule structure given above. Now we check that the left $(B,\beta)$-comodule structure satisfies the compatibility condition Eq. (2.1).
For this purpose, we take $h\in H, m\in(M,\mu)\in{}_{H} \Bbb M$, and calculate \begin{eqnarray*} \rho(h\c m) =1_{B}\otimes \mu(h\c m) =1_{B}\otimes \alpha(h)\c\mu(m) =\beta(m_{(-1)})\otimes \alpha(h)\c m_{(0)}. \end{eqnarray*} So, Eq. (2.1) holds. That is, $(M,\rho,\mu)$ is an $(H,B)$-Hom-Long dimodule. Next we verify that any morphism in $_{H} \Bbb M$ is left $(B,\beta)$-colinear, too. Indeed, assume that $f: (M,\mu)\rightarrow (N,\nu)$ is a morphism in $_{H} \Bbb M$, where $(M,\mu), (N,\nu)\in{}_{H} \Bbb M$; then \begin{eqnarray*} (id_{B}\otimes f)\rho(m) =1_{B}\otimes f(\mu(m)) =1_{B}\otimes\nu(f(m)) =\rho(f(m)). \end{eqnarray*} So $f$ is left $(B,\beta)$-colinear, as desired. Therefore, $_{H} \Bbb M$ is a subcategory of $^{B}_{H} \Bbb L$. Finally, we prove that $_{H} \Bbb M$ is a symmetric subcategory of $^{B}_{H} \Bbb L$. Since $ C_{M,N}(m\o n) = R^{(2)}\c \nu^{-1}(n)\o R^{(1)}\c \mu^{-1}(m), $ for all $m\in (M,\mu)\in{}_{H} \Bbb M$ and $n\in (N,\nu)\in{}_{H} \Bbb M$, we have \begin{eqnarray*} C_{N,M}\circ C_{M,N}(m\o n) &=&C_{N,M}(R^{(2)}\c \nu^{-1}(n)\o R^{(1)}\c \mu^{-1}(m))\\ &=&r^{(2)}\c\mu^{-1}(R^{(1)}\c \mu^{-1}(m))\o r^{(1)}\c\nu^{-1}(R^{(2)}\c \nu^{-1}(n))\\ &=&r^{(2)}\c(\a^{-1}(R^{(1)})\c \mu^{-2}(m))\o r^{(1)}\c(\a^{-1}(R^{(2)})\c \nu^{-2}(n))\\ &=&\alpha^{-1}(r^{(2)}R^{(1)})\c \mu^{-1}(m)\o \alpha^{-1}(r^{(1)}R^{(2)})\c \nu^{-1}(n)\\ &=&1_{H}\c \mu^{-1}(m)\o 1_{H}\c \nu^{-1}(n) =m\o n. \end{eqnarray*} It follows that the braiding $C_{M,N}$ is symmetric. The proof is completed. \hfill $\square$ \medskip \noindent{\bf Proposition 4.2.} Let $(B,\langle|\rangle,\beta)$ be a cotriangular Hom-Hopf algebra and $(H,\alpha)$ a Hom-Hopf algebra.
Then the category $^{B} \Bbb M$ of left $(B,\beta)$-Hom-comodules is a symmetric subcategory of $^{B}_{H} \Bbb L$ under the left $(H,\alpha)$-module action $h\c m=\epsilon(h)\mu(m)$, where $h\in H, m\in (M,\mu)\in{}^{B} \Bbb M$, and the braiding is given by \begin{eqnarray*} C_{M,N}: M\o N\rightarrow N\o M, m\o n\rightarrow\langle m_{(-1)}|n_{(-1)}\rangle \nu^{-1}(n_{(0)})\o \mu^{-1}(m_{(0)}), \end{eqnarray*} for all $m\in (M,\mu)\in{}^{B} \Bbb M, n\in (N,\nu)\in{}^{B} \Bbb M.$ \medskip \noindent{\bf Proof.} We first show that the left $(H,\alpha)$-module action defined above makes $(M,\mu)$ a left $(H,\alpha)$-Hom-module, which is easy to check. For the compatibility condition Eq. (2.1), we take $h\in H, m\in(M,\mu)\in{}^{B} \Bbb M$ and calculate as follows: \begin{eqnarray*} \rho(h\c m)=\epsilon(h)\rho(\mu(m))=\epsilon(h)\beta(m_{(-1)})\o\mu(m_{(0)})=\beta(m_{(-1)})\otimes \alpha(h)\c m_{(0)}. \end{eqnarray*} So, Eq. (2.1) holds, as required. Therefore, $(M,\rho,\mu)$ is an $(H,B)$-Hom-Long dimodule. Now we verify that any morphism in $^{B} \Bbb M$ is left $(H,\alpha)$-linear, too. Indeed, assume that $f: (M,\mu)\rightarrow (N,\nu)$ is a morphism in $^{B} \Bbb M$, where $(M,\mu), (N,\nu)\in{}^{B} \Bbb M$; then \begin{eqnarray*} f(h\c m) =f(\epsilon(h)\mu(m)) =\epsilon(h)\nu(f(m)) =h\c f(m). \end{eqnarray*} So $f$ is left $(H,\alpha)$-linear, as desired. Therefore, $^{B} \Bbb M$ is a subcategory of $^{B}_{H} \Bbb L$. Finally, we show that $^{B} \Bbb M$ is a symmetric subcategory of $^{B}_{H} \Bbb L$.
Since $ C_{M,N}(m\o n) =\langle m_{(-1)}|n_{(-1)}\rangle \nu^{-1}(n_{(0)})\o \mu^{-1}(m_{(0)}), $ for all $m\in (M,\mu)\in{}^{B} \Bbb M$ and $n\in (N,\nu)\in{}^{B} \Bbb M$, we have \begin{eqnarray*} &&C_{N,M}\circ C_{M,N}(m\o n)\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle C_{N,M}(\nu^{-1}(n_{(0)})\o \mu^{-1}(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle\langle \b^{-1}(n_{(0)(-1)})|\b^{-1}(m_{(0)(-1)})\rangle\mu^{-2}(m_{(0)(0)})\o \nu^{-2}(n_{(0)(0)})\\ &=&\langle \beta^{-1}(m_{(-1)1})|\beta^{-1}(n_{(-1)1})\rangle\langle \b^{-1}(n_{(-1)2})|\b^{-1}(m_{(-1)2})\rangle\mu^{-1}(m_{(0)})\o \nu^{-1}(n_{(0)})\\ &=&\epsilon(m_{(-1)})\epsilon(n_{(-1)})\mu^{-1}(m_{(0)})\o \nu^{-1}(n_{(0)}) =m\o n, \end{eqnarray*} where the fourth equality holds since $\langle | \rangle$ is $\b$-invariant. It follows that the braiding $C_{M,N}$ is symmetric. The proof is completed. \hfill $\square$ \medskip \noindent{\bf Theorem 4.3.} Let $(H,R,\alpha)$ be a triangular Hom-Hopf algebra and $(B,\langle|\rangle,\beta)$ a cotriangular Hom-Hopf algebra. Then the category $^{B}_{H} \Bbb L$ is symmetric.
\medskip \noindent{\bf Proof.} For any $m\in (M,\mu)\in{}^{B}_{H} \Bbb L$ and $n\in (N,\nu)\in{}^{B}_{H} \Bbb L$, we have \begin{eqnarray*} &&C_{N,M}\circ C_{M,N}(m\o n)\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle C_{N,M}(R^{(2)}\c \nu^{-2}(n_{(0)})\o R^{(1)}\c \mu^{-2}(m_{(0)}))\\ &=&\langle m_{(-1)}|n_{(-1)}\rangle\langle \beta^{-1}(n_{(0)(-1)})|\beta^{-1}(m_{(0)(-1)})\rangle \\ &&~~~~~~~~r^{(2)}\c\mu^{-2}(\alpha(R^{(1)})\c\mu^{-2}( m_{(0)(0)}))\o r^{(1)}\c\nu^{-2}(\alpha(R^{(2)})\c \nu^{-2}(n_{(0)(0)}))\\ &=&\langle \beta^{-1}(m_{(-1)1})|\beta^{-1}(n_{(-1)1})\rangle\langle \beta^{-1}(n_{(-1)2})|\beta^{-1}(m_{(-1)2})\rangle\\ &&~~~~~~~~\alpha^{-1}(r^{(2)}R^{(1)})\c \mu^{-2}(m_{(0)})\o\alpha^{-1}(r^{(1)}R^{(2)})\c \nu^{-2}(n_{(0)})\\ &=&\epsilon(m_{(-1)})\epsilon(n_{(-1)}) 1_{H}\c \mu^{-2}(m_{(0)})\o 1_{H}\c \nu^{-2}(n_{(0)})\\ &=&\epsilon(m_{(-1)})\epsilon(n_{(-1)}) \mu^{-1}(m_{(0)})\o\nu^{-1}(n_{(0)})\\ &=&m\o n, \end{eqnarray*} as desired. This finishes the proof. \section{New solutions of the Hom-Long Equation} \def\theequation{\arabic{section}.\arabic{equation}} \setcounter{equation} {0} In this section, we will present a new family of solutions of the Hom-Long equation. \medskip \noindent{\bf Definition 5.1.} Let $(H,\alpha)$ be a Hom-bialgebra and $(M,\mu)$ a Hom-module over $(H,\a)$. Then $R\in End(M\o M)$ is called a solution of the Hom-Long equation if it satisfies the nonlinear equation: \begin{eqnarray} R^{12}\circ R^{23}=R^{23}\circ R^{12}, \end{eqnarray} where $R^{12}=R\o\mu,R^{23}=\mu\o R$. \medskip \noindent{\bf Example 5.2.} If $R\in End(M\o M)$ is invertible, then it is easy to see that $R$ is a solution of the Hom-Long equation if and only if $R^{-1}$ is too. \medskip \noindent{\bf Example 5.3.} Let $(M,\mu)$ be an $(H,\a)$-Hom-module with a basis $\{m_1,m_2,\cdots,m_n\}$. Assume that $\mu$ is given by $\mu(m_i)=a_im_i$, where $a_i\in k,~i=1,2,\cdots,n$.
Define a map \begin{eqnarray*} R:~M\o M\rightarrow M\o M,~~R(m_i\o m_j)=b_{ij}m_i\o m_j, \end{eqnarray*} where $b_{ij}\in k,~i,j=1,2,\cdots,n.$ Then $R$ is a solution of the Hom-Long equation (5.1). Furthermore, if $a_i=1$, for all $i=1,2,\cdots,n$, then $R$ is a solution of the classical Long equation. \medskip \noindent{\bf Proposition 5.4.} Let $(M,\mu)$ be an $(H,\a)$-Hom-module with a basis $\{m_1,m_2,\cdots,m_n\}$. Assume that $R,S\in End(M\o M,\mu\o\mu^{-1})$ are given by the matrix formulas \begin{eqnarray*} R(m_k\o m_l)= x_{kl}^{ij}m_i\o \mu^{-1}(m_j),~~S(m_k\o m_l)= y_{kl}^{ij}m_i\o \mu^{-1}(m_j), \end{eqnarray*} and $\mu(m_l)=z_{l}^{i}m_i$, where $x_{kl}^{ij},y_{kl}^{ij},z_{l}^{i}\in k$. Then $S^{12}\circ R^{23}=R^{23}\circ S^{12}$ if and only if \begin{eqnarray*} z_{u}^{i}x_{vw}^{jk}y_{ij}^{pq}=z_{i}^{p}x_{jw}^{qk}y_{uv}^{ij}, \end{eqnarray*} for all $k,p,q,u,v,w=1,2,\cdots,n$. In particular, $R$ is a solution of the Hom-Long equation if and only if \begin{eqnarray*} z_{u}^{i}x_{vw}^{jk}x_{ij}^{pq}=z_{i}^{p}x_{jw}^{qk}x_{uv}^{ij}. \end{eqnarray*} \noindent{\bf Proof.} According to the definitions of $R,S,\mu$, we have \begin{eqnarray*} S^{12}\circ R^{23}(m_{u}\o m_{v}\o m_{w}) &=&S^{12}(z_{u}^{i}m_{i}\o x_{vw}^{jk}m_{j}\o \mu^{-1}(m_{k}))\\ &=&z_{u}^{i}x_{vw}^{jk}y_{ij}^{pq}(m_{p}\o \mu^{-1}(m_{q})\o m_{k}),\\ R^{23}\circ S^{12}(m_{u}\o m_{v}\o m_{w}) &=&R^{23}(y_{uv}^{ij}m_{i}\o\mu^{-1}(m_{j})\o m_{w})\\ &=&y_{uv}^{ij}z_{i}^{p}x_{jw}^{qk}(m_{p}\o \mu^{-1}(m_{q})\o m_{k}). \end{eqnarray*} It follows that $S^{12}\circ R^{23}=R^{23}\circ S^{12}$ if and only if $ z_{u}^{i}x_{vw}^{jk}y_{ij}^{pq}=z_{i}^{p}x_{jw}^{qk}y_{uv}^{ij}. $ Furthermore, $R^{12}\circ R^{23}=R^{23}\circ R^{12}$ if and only if $ z_{u}^{i}x_{vw}^{jk}x_{ij}^{pq}=z_{i}^{p}x_{jw}^{qk}x_{uv}^{ij}. $ The proof is completed.
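As a quick numerical sanity check of Example 5.3 and Proposition 5.4 (this verification is ours, not part of the paper; the dimension $n=3$ and the random coefficients are arbitrary choices), one can realise $\mu$ and $R$ as diagonal matrices and check Eq. (5.1) directly:

```python
import numpy as np

# Example 5.3 realised numerically: mu(m_i) = a_i m_i and
# R(m_i (x) m_j) = b_ij m_i (x) m_j solve R^{12} R^{23} = R^{23} R^{12}.
rng = np.random.default_rng(0)
n = 3
a = rng.uniform(0.5, 2.0, size=n)         # eigenvalues a_i of mu
b = rng.uniform(0.5, 2.0, size=(n, n))    # coefficients b_ij

mu = np.diag(a)                           # mu on M
R = np.diag(b.ravel())                    # R on M (x) M, diagonal on m_i (x) m_j

R12 = np.kron(R, mu)                      # R^{12} = R (x) mu on M (x) M (x) M
R23 = np.kron(mu, R)                      # R^{23} = mu (x) R

assert np.allclose(R12 @ R23, R23 @ R12)  # the Hom-Long equation (5.1)
```

Both operators are diagonal in the tensor basis, so $R^{12}$ and $R^{23}$ commute trivially; setting all $a_i=1$ recovers the classical Long equation.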
\medskip In the following proposition, we use the notation: for any $F\in End(M\o M)$, we denote $F^{12}=F\o \mu, F^{23}=\mu\o F, F^{13}=(id\o\tau )\circ (F \o \mu)\circ (id\o \tau)$, and $\tau^{(123)}(x\o y\o z)=(z,x,y).$ \smallskip \noindent{\bf Proposition 5.5.} Let $(M,\mu)$ be an $(H,\a)$-Hom-module and $R\in End(M\o M)$. The following statements are equivalent: (1) $R$ is a solution of the Hom-Long equation. (2) $U=\tau\circ R$ is a solution of the equation: $$U^{13}\circ U^{23}=\tau^{(123)}\circ U^{13}\circ U^{12}.$$ (3) $T=R\circ\tau$ is a solution of the equation: $$T^{12}\circ T^{13}=T^{23}\circ T^{13}\circ\tau^{(123)}.$$ (4) $W=\tau\circ R \circ \tau$ is a solution of the equation: $$\tau^{(123)}\circ W^{23}\circ W^{13}=W^{12}\circ W^{13}\circ\tau^{(123)}.$$ \noindent{\bf Proof.} We only prove $(1)\Leftrightarrow (2)$; the proofs of $(1)\Leftrightarrow (3)$ and $(1)\Leftrightarrow (4)$ are similar. Since $R=\tau\circ U$, $R$ is a solution of the Hom-Long equation if and only if $R^{12}\circ R^{23}=R^{23}\circ R^{12}$, that is, \begin{eqnarray} \tau^{12}\circ U^{12}\circ \tau^{23}\circ U^{23}=\tau^{23}\circ U^{23}\circ\tau^{12}\circ U^{12}. \end{eqnarray} Since $\tau^{12}\circ U^{12}\circ \tau^{23}=\tau^{23}\circ\tau^{13}\circ U^{13}$ and $\tau^{23}\circ U^{23}\circ\tau^{12}=\tau^{23}\circ\tau^{12}\circ U^{13}$, (5.2) is equivalent to $$\tau^{23}\circ \tau^{13}\circ U^{13}\circ U^{23}=\tau^{23}\circ\tau^{12}\circ U^{13}\circ U^{12},$$ which is equivalent to $U^{13}\circ U^{23}=\tau^{(123)}\circ U^{13}\circ U^{12}$ by the fact that $\tau^{23}\circ\tau^{12}=\tau^{(123)}$. \medskip Next we will present a new solution of the Hom-Long equation via Hom-Long dimodule structures. For this, we give the notion of $(H,\a)$-Hom-Long dimodules. \medskip \noindent{\bf Definition 5.6.} Let $(H,\alpha)$ be a Hom-bialgebra.
A left-left $(H,\a)$-Hom-Long dimodule is a quadruple $(M,\c, \rho,\mu)$, where $(M, \c, \mu)$ is a left $(H,\alpha)$-Hom-module and $(M, \rho, \mu)$ is a left $(H,\alpha)$-Hom-comodule such that \begin{eqnarray} \rho(h\c m)=\a(m_{(-1)})\o \alpha(h)\c m_{(0)}, \end{eqnarray} for all $h\in H$ and $m\in M$. \medskip \noindent{\bf Remark 5.7.} Clearly, left-left $(H,\a)$-Hom-Long dimodules are a special case of the $(H,B)$-Hom-Long dimodules in Definition 2.1, obtained by setting $(B,\b)=(H,\a)$. \medskip \noindent{\bf Example 5.8.} Let $(H,\a)$ be a Hom-bialgebra and $(M,\c,\mu)$ be a left $(H,\a)$-Hom-module. Define a left $(H,\a)$-Hom-module structure and a left $(H,\a)$-Hom-comodule structure on $(H\o M,\a\o \mu)$ as follows: \begin{eqnarray*} h\c(g\o m)=\a(g)\o h\c\mu(m), ~~\rho(g\o m)=g_1\o g_2\o\mu(m), \end{eqnarray*} for all $h,g\in H$ and $m\in M$. Then $(H\o M,\a\o \mu)$ is an $(H,\a)$-Hom-Long dimodule. \medskip \noindent{\bf Example 5.9.} Let $(H,\a)$ be a Hom-bialgebra and $(M,\rho,\mu)$ be a left $(H,\a)$-Hom-comodule. Define a left $(H,\a)$-Hom-module structure and a left $(H,\a)$-Hom-comodule structure on $(H\o M,\a\o \mu)$ as follows: \begin{eqnarray*} h\c(g\o m)=hg\o\mu(m), ~~\rho(g\o m)=m_{(-1)}\o \a(g)\o m_{(0)}, \end{eqnarray*} for all $h,g\in H$ and $m\in M$. Then $(H\o M,\a\o \mu)$ is an $(H,\a)$-Hom-Long dimodule. \medskip \noindent{\bf Theorem 5.10.} Let $(H,\a)$ be a Hom-bialgebra and $(M,\c,\rho,\mu)$ be an $(H,\a)$-Hom-Long dimodule.
Then the map \begin{eqnarray} R_{M}: M\o M\rightarrow M\o M,~~~~m\o n\mapsto n_{(-1)}\c m \o n_{(0)}, \end{eqnarray} is a solution of the Hom-Long equation, for any $m,n\in M.$ \medskip \noindent{\bf Proof.} For any $l,m,n\in M$, we calculate \begin{eqnarray*} R_{M}^{12}\circ R_{M}^{23}(l\o m\o n) &=&R_{M}^{12}(\mu(l)\o n_{(-1)}\c m\o n_{(0)})\\ &=&(n_{(-1)}\c m)_{(-1)}\c\mu(l)\o(n_{(-1)}\c m)_{(0)}\o \mu(n_{(0)})\\ &=&\a(m_{(-1)})\c\mu(l)\o\a(n_{(-1)})\c m_{(0)}\o\mu(n_{(0)}),\\ R_{M}^{23}\circ R_{M}^{12}(l\o m\o n) &=&R_{M}^{23}(m_{(-1)}\c l\o m_{(0)}\o \mu(n))\\ &=&\mu(m_{(-1)}\c l)\o\a(n_{(-1)})\c m_{(0)}\o\mu(n_{(0)})\\ &=&\a(m_{(-1)})\c\mu(l)\o\a(n_{(-1)})\c m_{(0)}\o\mu(n_{(0)}). \end{eqnarray*} So we have $R_{M}^{12}\circ R_{M}^{23}=R_{M}^{23}\circ R_{M}^{12}$, as desired. This finishes the proof. \begin{center} {\bf ACKNOWLEDGEMENT} \end{center} The work of S. Wang is supported by the Anhui Provincial Natural Science Foundation (No. 1908085MA03). The work of X. Zhang is supported by the NSF of China (No. 11801304) and the Young Talents Invitation Program of Shandong Province. The work of S. Guo is supported by the NSF of China (No. 11761017) and Guizhou Provincial Science and Technology Foundation (No. [2020]1Y005). \renewcommand{\refname}{REFERENCES}
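As an appendix-style sanity check (ours, not the authors'): take $H=k[\Bbb Z_2]$ with $\alpha=\operatorname{id}$ (an ordinary bialgebra viewed as a Hom-bialgebra with trivial twisting) and the Hom-Long dimodule $M=H\o H$ of Example 5.9 built from the regular comodule $(H,\Delta)$. On this $M$ the map $R_M$ of Theorem 5.10 permutes basis vectors, and the Hom-Long equation can be verified numerically:

```python
import numpy as np
from itertools import product

# M = H (x) H for H = k[Z_2] (Example 5.9 with the regular comodule).
# A basis vector of M is a pair (g, m), g, m in Z_2 = {0, 1}; on such pairs
# Theorem 5.10's map acts by
#   R_M((g, m) (x) (g', m')) = ((g + m') mod 2, m) (x) (g', m'),
# because Delta(b_m') = b_m' (x) b_m' for the group-like basis of k[Z_2].
idx = lambda g, m: 2 * g + m          # flatten a basis pair (g, m) to 0..3
dimM = 4

R = np.zeros((dimM * dimM, dimM * dimM))
for g, m, gp, mp in product(range(2), repeat=4):
    src = idx(g, m) * dimM + idx(gp, mp)
    dst = idx((g + mp) % 2, m) * dimM + idx(gp, mp)
    R[dst, src] = 1.0                 # permutation matrix of R_M

mu = np.eye(dimM)                     # mu = id since alpha = id
R12 = np.kron(R, mu)                  # R^{12} = R (x) mu
R23 = np.kron(mu, R)                  # R^{23} = mu (x) R
assert np.allclose(R12 @ R23, R23 @ R12)   # Hom-Long equation, as predicted
```

The check succeeds because $R^{23}$ only rewrites the middle factor using data from the right factor, while $R^{12}$ only rewrites the left factor using data that $R^{23}$ leaves untouched.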
St John's Catholic Church, New Ferry

St John the Evangelist, New Ferry, in the Diocese of Shrewsbury, Reg. Charity 234025. Fr. Frank Rice; Revd. Philip White; Revd. Michael Daly. Phone: 0151 645 3314; email: stjohntheevangelist@gmail.com; websites: http://www.stjohnevang.co.uk http://www.lpa24.org

3rd Sunday of Easter, Yr. B, 22nd April 2012

FAITH IN FOCUS: SENSING FAITH

Christianity is not a religion of the head. If it were, then only the clever people could become holy while the rest of us would remain on the edges as we struggled to use our few wits to try and work out what our religion was all about. It would be a philosophy, a body of knowledge, accessible only to the intelligent. No, Christianity is about the five senses. It's a body-religion, a sensing faith. It uses sight, sound, smell, touch and taste. In doing this it simply continues what Jesus himself did when he ate and drank, was bathed in the Jordan, laid hands on people, used paste to cure the blind man etc. There is a slightly comic aspect to today's gospel when Jesus scares the wits out of the disciples and then simply asks, "Have you anything here to eat?" And this is just one of several times after the resurrection that Jesus appears to them and starts eating. Maybe it's no coincidence that St Luke is trying to teach us that the primary way we meet Christ today after his resurrection is when we eat and drink his body and blood. The Eucharist is the action of the Church in which Christ becomes present in many ways but most clearly in the act of communion. And, of course, we recall his presence by using all five of our senses. That's why the Church uses water to baptise, why we make the Sign of the Cross with it, why we sprinkle it during some services. It's why we lay hands on the sick, why we make music to God, why we anoint people with oil when they are baptised, confirmed, ordained or weakened by sickness.
It explains why some people use incense to beautify their worship and others put out a mass of blazing candles and wear colourful vestments. It is all about worshipping God with our whole person, body as well as mind, about recognising Christ with all the faculties that God has given us. It goes without saying that our five senses do not exist apart from us, on some shelf where we take them down. They are part and parcel of us. It is people who incarnate sight, sound, smell, touch and taste. That's why the best way of ensuring that Christ is present in our world is to carry him around in ourselves. This must be what Jesus meant today when he said, "You are witnesses to this."

WORD OF GOD

I prayed and understanding was given me; I entreated and the spirit of Wisdom came to me. I esteemed her more than sceptres and thrones; compared with her, I held riches as nothing. (Wisdom 7:7-8)

WORD FOR TODAY

They say home is where the heart is. Similarly, if you want to know where your real values lie then ask yourself what it is that takes up most of your thoughts and energy. Is it getting and having things? Or is it about the quality of your life before God? If it's the latter, then you've acquired true wisdom.

THIRD SUNDAY OF EASTER

Although I am sure religion can be a great support and a stabilising influence in life, I am incapable of believing. (Mo Mowlam)

Sometimes people tell me that they believe in a Supreme Being. I know immediately that they have no faith. You can't live or die for such a nebulous reality. What they do have is a valuable conviction, but their life is not remotely affected by it. (Maria Lopez)

The body you receive in the sacrament accomplished its purpose by nailing to a tree. You are to become this body, you are to be nailed…the nails that hold you are God's commandments. (Austin Farrer)

Tim Aldred is to give a talk on issues surrounding the world food supply from a Christian perspective. The talk is on Monday, 23 April, at 7.30 p.m.
at St Agnes' church hall, West Kirby, and is free. Refreshments from 7 p.m. All are welcome.

Maris Stella High School Reunion

There will be a Reunion of former pupils of Maris Stella High School, New Brighton, taking place on Saturday 19th May with Mass at 12 noon in St Albans Church, Liscard, followed by refreshments and a cash bar in the Parish Hall at 1pm. All former pupils and teachers most welcome! Tickets are available at £8 per head. For further information and to reserve a buffet ticket please contact Veronica Cuthbert 0151 336 7150 or Claire.hetherington@talktalk.net
New and old members as well as any new helpers are most welcome to join us at the 10am. Mass. I have been inundated with requests for duplicate baptism certificates- even from parents whose children have been baptised quite recently. These are clearly required by various schools at this time of year and also when people are getting married etc. and it is sad that they don't seem to be important enough to be held carefully by all. _*Be warned*_that from now on a charge will be applied when a duplicate document is required. The SVP are to hold a coffee morning in St Werburgh's parish centre on Monday 23^rd April from 9am. Why not have a break from Beatties and pop in there for a reviving brew instead? On 26^th May, Pantasaph will be the focus for An Our Lady of Lourdes Day. The day starts at 10am and you are requested to bring a packed lunch. There is a poster displayed with full details of the whole event. It is full steam ahead for the collection of the red APF boxes. Please bring yours in as soon as possible or risk the wrath of Tom D! The long awaited Family Irish themed night will take place on May 18^th from 7.30pm. in the parish centre. This event promises to be a really fun evening so make sure you get the date in your diaries and dust off your dancing techniques. Tickets a mere £5 or £15 per family (List at back of church or ring Maria on 201 5837 bring your favourite Irish tipple (British beer at own risk)-musician – Michael Coyne. Monday 23^rd is the feast of St George, Patron of England (and several other places too!). He was martyred at Lydda (Israel) around 303 during the persecution of Diocletian. His cult, which predates the legend of his slaying the dragon, spread quickly through east and west. During the crusades, George was seen to personify the ideals of Christian chivalry which is why he was adopted as several city states and countries. Don't forget the plant sale at SJP next Saturday 5^th . 
It would be blooming marvellous if all the plants were sold to help the school's PTA. A further reminder that our embargo on chatting during the 5 minutes prior to the start of Mass will begin this weekend. I hope I am not a killjoy but we have to think of our fellow parishioners for whom just a little silence is very important before we celebrate the Eucharist. One of the altar servers will ring a bell to remind you when the silence is to begin. It is just a good habit to develop. *On Saturday 12*^*th* *May There will be a public rosary in St. Werburgh's Square Birkenhead. The focus for the prayer will be world peace. Please join us.* As you may have noticed (opposite)*Ivan Gregory's*ashes will be buried at 11am on Friday 27^th in the graveyard of Christchurch Port Sunlight, near his home. Edite invites anyone who wishes to join her and the family for the short service there. Posted by George Filed in Parish Newsletter Comments are off but you can trackback from your own site.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,226
\section{Acknowledgments} We thank the MIT NLP group for their helpful discussion and comments. This work is supported by DSO grant DSOCL18002. \section{Introduction} \begin{figure}[h!] \includegraphics[width=1\linewidth]{./p1.pdf} \caption{Examples (lower-cased) where multi-sentence context is required to ask the correct questions. Sentences containing answers are in green, while answers are underlined. The red phrases indicate additional background used by a human to generate the question. 1-stage and 2-stage attention QG are results generated by our model with different numbers of attention stages.} \label{fig:example} \vspace{-3mm} \end{figure} The tremendous popularity of reading comprehension through datasets like SQuAD \cite{rajpurkar2016}, MS MARCO \cite{nguyen2016} and NewsQA \cite{trischler2016} has led to a surge in machine reading and reasoning techniques. These datasets are typically constructed using crowd sourcing, which provides high quality questions, but at a high cost of manual labor. There is an urgent need for automated methods to generate quality question-answer pairs from textual corpora. Our goal is to generate a suitable question for a given target answer -- a span of text in a provided document. To this end, we must be able to identify the relevant context for the question-answer pair from the document. Modeling long documents, however, is formidable, and our task involves understanding the relation between the answer and encompassing paragraphs, before asking the relevant question. Typically most existing methods have simplified the task by looking at just the answer containing sentence. However, this does not represent the human process of generating questions from a document. For instance, crowd workers for the SQuAD dataset, as illustrated in Figure \ref{fig:example}, used multiple sentences to ask a relevant question. 
In fact, as pointed out by \cite{Du2017}, around 30\% of the human-generated questions in SQuAD rely on information beyond a single sentence. To accommodate such phenomenon, we propose a novel approach for document-level question generation by explicitly modeling the context based on a multi-stage attention mechanism. As the first step, our method captures the immediate context, by attending the entire document with the answer to highlight phrases, e.g. \textit{``the unit was dissolved in''} from example 1 in Figure \ref{fig:example}, having a direct relationship with the answer, i.e. \textit{``1985''}. In an iterative step thereafter, we attend the original document representation with the attended document computed in the previous step, to expand the context to include more phrases, e.g. \textit{``abc motion pictures''}, that have an indirect relationship with the answer. We can repeat this process multiple times to increase the linkage-level of the answer-related background. The final document representation, contains relevant answer context cues by means of attention weights. Through a copy-generate decoding mechanism, where at each step a word is either copied from the input or generated from the vocabulary, the attention weights guide the generation of the context words to produce high quality questions. The entire framework, from context collection to copy-generate style generation is trained end-to-end. Our framework for document context representation, strengthened by more attention stages leads to a better question generation quality. Specifically, on SQuAD we get an absolute 5.79 jump in the Rouge points by using a second stage answer-attended representation of the document, compared to directly using the representation right after the first stage. We evaluate our hypothesis of using a controllable context to generate questions on three different QA datasets --- SQuAD, MS MARCO, and NewsQA. 
Our method strongly outperforms existing state-of-the-art models by an average absolute increase of 1.56 Rouge, 0.97 Meteor and 0.81 Bleu scores over the previous best reported results on all three datasets. \section{Related Work} Question generation has been extensively studied in the past with broadly two main approaches, rule-based and learning-based. \textbf{Rule-based techniques} These approaches usually rely on rules and templates of sentences' linguistic structures, and apply heuristics to generate questions \cite{chali2015,Heilman2011,Lindberg2013,labutov2015}. This requires human effort and expert knowledge, making scaling the approach very difficult. Neural methods tend to outperform and generalize better than these techniques. \textbf{Neural-based models} Since \citet{serban2016,Du2017}, there have been many neural sequence-to-sequence models proposed for question generation tasks. These models are trained in an end-to-end manner and exploit the corpora of the question answering datasets to outperform rule based methods in many benchmarks. However, in these initial approaches, there is no indication about parts of the document that the decoder should focus on in order to generate the question. To generate a question for a given answer, \cite{subramanian2017,kim2018,zhou2017,sun2018} applied various techniques to encode answer location information into an annotation vector corresponding to the word positions, thus allowing for better quality answer-focused questions. \cite{yuan2017} combined both supervised and reinforcement learning in the training to maximize rewards that measure question quality. \cite{liu2019} presented a syntactic features based method to represent words in the document in order to decide what words to focus on while generating the question. The above studies, only consider sentence-level question generation, i.e. looking at one document sentence at a time. 
Recently, \cite{du2018} proposed a method that incorporated coreference knowledge into the neural networks to better encode this linguistically driven connection across entities for document-level question generation. Unfortunately, this work does not capture other relationships like semantic similarity. As in example 2 of Figure \ref{fig:example}, two semantic-related phrases ``lower wages" and ``lower incomes" are needed to be linked together to generate the desired question. \cite{zhao2018} proposed another document-level question generation where they apply a gated self-attention mechanism to encode contextual information. However, their self-attention over the entire document is very noisy, redundant and contains many encoded dependencies that are irrelevant. \section{Problem Definition} In this section, we define the task of question generation. Given the document D and the answer A, we are interested in generating the question $\overline{Q}$ that satisfies: \[\overline{Q} = \argmax_{Q}~Prob(Q|D,A)\] \noindent where the document $D$ is a sequence of $l_D$ words: $D = {\{x_i\}}^{l_D}_{i=1}$ , the answer $A$ of length $l_A$ must be a sub-span of $D$: $A = {\{x_j\}}^{n}_{j=m}$, where $1 \leq m < n \leq l_D $, and the question $\overline{Q}$ is a well-formed sequence of $l_Q$ words: $\overline{Q} = \{y_k\}^{l_Q}_{k=1}$ that can be answered from $D$ using $A$. The generated words $y_k$ can be derived from the document words ${\{x_i\}}^{l_D}_{i=1}$ or from a vocabulary $V$. \section{Model Architecture} In this section, we describe our proposed model for question generation. The key idea of our model is to use a multi-stage attention mechanism to attend to the important parts of the document that are related to the answer, and use them to generate the question. Figure \ref{fig:architecture} shows the high level architecture of the proposed model. 
\subsection{Input and Context Encoding} The input representation for the document and its interaction with the answer are described as follows. \label{sec:enc} \paragraph{Input Encoding} Our model accepts two inputs, an answer $A$ and the document $D$ that the answer belongs to. Each of which is a sequence of words. The two sequences are indexed into a word embedding layer $W_{emb}$ and then passed into a shared Bidirectional LSTM layer \cite{sak2014long}: \begin{align} H^A = \text{BiLSTM}(\mathbf{W_{emb}}(A))\\ H^D = \text{BiLSTM}(\mathbf{W_{emb}}(D)) \end{align} where $H^A$ $\in$ $\mathbb{R}^{\ell_A \times d}$ and $H^D \in \mathbb{R}^{\ell_D \times d}$ are the hidden representations of $A$ and $D$ respectively, and $d$ is the hidden size of the Bidirectional LSTM. \paragraph{Context Encoding} The answer's context in the document is identified using our multi-stage attention mechanism, as described below. \begin{figure}[t!] \includegraphics[width=1\linewidth]{./architecture.pdf} \caption{The architecture of our model (with two-stage attention). For simplicity we assume that the document has 4 words and the answer has 3 words.} \label{fig:architecture} \end{figure} \noindent \textbf{Initial Stage} (context with direct relation to answer): We pass $H^D,H^A$ into an alignment layer. Firstly, we compute a soft attention affinity matrix between $H^D$ and $H^A$ as follows: \begin{equation} M_{ij}^{(1)} = \textbf{F}(h_{i}^{D})\:\textbf{F}(h_{j}^{A})^{\top} \label{align1} \end{equation} where $h_{i}^{D}$ is the $i^{th}$ word in the document and $h_{j}^{A}$ is the $j^{th}$ word in the answer. $\textbf{F}(\cdot)$ is a standard nonlinear transformation function (i.e., $\textbf{F}(x) = \sigma(\textbf{W}x + \textbf{b})$, where $\sigma$ indicates Sigmoid function), and is shared between the document and answer in this stage. $M^{(1)} \in \mathbb{R}^{ \ell_D \times \ell_A }$ is the soft matching matrix. Next, we apply a column-wise max pooling of $M^{(1)}$. 
The key idea is to generate an attention vector: \begin{align} a^{(1)} = \text{softmax}(\max_{col}(M^{(1)})) \end{align} \noindent where $a^{(1)} \in \mathbb{R}^{~l_D}$. Intuitively, each element $a_i^{(1)} \in a^{(1)}$ captures the degree of relatedness of the $i^{th}$ word in document $D$ to answer $A$ based on its maximum relevance on each word of the answer. To learn the context sensitive weight importance of document, we then apply the attention vector on $H^D$: \begin{align} C^{(1)} = H^D \odot a^{(1)} \end{align} \noindent where $\odot$ denotes element-wise multiplication. $C^{(1)} \in R^{l_D \times d}$ can be considered as the first attended contextual representation of document where the words directly related to the answer are amplified with the high attention scores whilst the unrelated words are filtered out with low attention scores.\\ \noindent \textbf{Iterative Stage} (enhance the context with indirect relations): In this stage, we expand the context by collecting more words from the document that are related to \textit{direct-context} computed in the first stage. We achieve this by attending the contextual attention representation of document obtained in stage 1 with original document representation as follows: \begin{align} &M_{ij}^{(2)} = \textbf{F}(h_{i}^{D})\:\textbf{F}(C_{j})^{\top} \\ &a^{(2)} = \text{softmax}(\max_{\text{col}}(M^{(2)}))\\ &C^{(2)} = H^D \odot a^{(2)} \end{align} We can repeat the steps in this stage to enhance the context to the answer-related linkage level $k$. We denote the answer-focused context representation after $k$ stages as $C^{(k)}$. In our experiments, we train our models with a predefined value $k$, which is fine-tuned on the validation set. 
\paragraph{Answer Masking} Due to the enriched information in the context representation, it is essential for the model to know the position of the answer so that: (1) it can generate question that is coherent with the answer, and (2) does not include the exact answer in the question. We achieve this by masking the word representation at the position of the answer in the context representation $C^{(k)}$ with a special masking vector: \begin{align} C^{\text{final}} = Mask(C^{(k)}) \end{align} \noindent $C^{\text{final}} \in R^{l_D \times d}$ can be considered as final contextual attention representation of document and will be used as the input to the decoder. \subsection{Decoding with Pointer Generator Network} Using our context rich input representation $C^{\text{final}}$ computed previously, we move forward to the question generation. Our decoding framework is inspired by the pointer-generator network \cite{pointer-generator}. The decoder is a BiLSTM, which at time-step $t$, takes as its input, the word-embedding of the previous time-step's output $W_e(y^{t-1})$ and the latest decoder state attended input representation $r^{t}$ (described later in Equation \eqref{eq:individual_context}) to get the decoder state $h^t$: \begin{align} h^t = BiLSTM([r^t, \mathbf{W_e}(y^{t-1})], h^{t-1}) \label{eq:decoder} \end{align} Using the decoded state to generate the next word, where words can either be copied from the input; or generated by selecting from a fixed vocabulary: \begin{align} P_\text{vocab} = \text{softmax}(\mathbf{V}^{\top}[h^t,r^t]) \label{eq:fixed_vocabulary} \end{align} The \textit{generation probability} $p_\text{gen} \in [0,1]$ at time-step $t$ depends on the context vector $r^t$, the decoder state $h^t$ and the decoder input $x^t = [r^t, \mathbf{W_e}(y^{t-1})]$: \begin{align} p_\text{gen} = \sigma(\mathbf{w_{r}}r^{t} + \mathbf{w_{x}}x^{t} + \mathbf{w_{h}}h^{t}) \label{eq:generate_copy} \end{align} \noindent where $\sigma$ is the sigmoid function. 
This gating probability $p_\text{gen}$ is used to evaluate the probability of eliciting a word $w$ as follows: \begin{align} P(w) &= p_\text{gen}P_\text{vocab}(w) + (1-p_\text{gen})\sum_{i:w_{i}=w} a_{i}^{t} \label{eq:vocabulary} \end{align} \noindent where $\sum_{i:w_{i}=w} a_{i}^{t}$ denotes the probability of word $w$ from the input being generated by the decoder: \begin{align} e_i^{t} &= \mathbf{u}^{\top}\tanh(C^{\text{final}}_i + h^{t-1}) \\ a^t &= \text{softmax}(e^t) \end{align} Unlike traditional sequence to sequence models, our input $C^{\text{final}}$ is already weighted via the answer level self-attention. This weighting is reflected directly in the final generation via the copy mechanism through $a^t$, and is also used to evaluate the input context representation $r^t$: \begin{equation} r^t = \sum_{i}a_i^{t}{C^{\text{final}}_i} \label{eq:individual_context} \end{equation} Finally, the word output at time step $t$, $y^t$ is identified as: \begin{align} y^{t} &= \argmax_w P(w) \end{align} \noindent The model is trained in an end to end framework, to maximize the probability of generating the target sequence $y^1,...,y^{l_Q}$. At each time step $t$, the probability of predicting $y^t$ is optimized using cross-entropy from the probability of words over the entire vocabulary (fixed and document words). Once the model is trained, we use beam search for inference during decoding. The beam search is parameterised by the possible number of paths $k$. \section{Experimental Setup} In this section we describe the experimental setting to study the proficiency of our proposed model. \subsection{Datasets} We evaluate our model on 3 question answering datasets: SQuAD \cite{rajpurkar2016}, MS Marco \cite{nguyen2016} and NewsQA \cite{trischler2016}. These form a comprehensive set of datasets to evaluate question generation. 
\vspace{-3mm} \bigskip \noindent \textbf{SQuAD.} SQuAD is a large-scale reading comprehension dataset containing close to 100k questions posed by crowd-workers on a set of Wikipedia articles, where each answer is a span in the article. The dataset for our question generation task is constructed from the training and development sets of the publicly accessible portion of SQuAD. To compare directly with previously reported results, we consider the two following splits: \begin{itemize} \item Split1: following \cite{zhao2018}, we keep the SQuAD train set and randomly split the SQuAD dev set into our dev and test sets with the ratio 1:1. The split is done at sentence level. \item Split2: following \cite{Du2017}, we randomly split the original SQuAD train set into train and dev sets with the ratio 9:1, and keep the SQuAD dev set as our test set. The split is done at article level. \end{itemize} \noindent \textbf{MS MARCO.} MS MARCO is a human-annotated question answering dataset derived from a million Bing search queries. Each query is associated with paragraphs from multiple documents returned by Bing, and the dataset provides the list of ground-truth answers from these paragraphs. Following \cite{zhao2018}, we extract the subset of MS Marco where the answers are sub-spans within the paragraphs, and then randomly split the original train set into train (51k) and dev (6k) sets. We use the 7k questions from the original dev set as our test set. \vspace{-2mm} \bigskip \noindent \textbf{NewsQA.} NewsQA is a human-generated dataset based on CNN news articles. Crowd-workers pose questions based on the articles' headlines, and other workers locate the answers in the article contents. In our experiments, we select the questions in NewsQA whose answers are sub-spans within the articles. As a result, we obtain a dataset with 76k questions for the train set, and 4k questions for each of the dev and test sets.
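The sub-span filtering and random splitting used to build these sets can be sketched as below. The record fields and example data are illustrative, not the datasets' actual schema, and split2 is additionally done at article rather than example level:

```python
import random

def answer_is_subspan(example):
    """Keep only QA pairs whose answer occurs verbatim in the document."""
    return example["answer"] in example["document"]

def train_dev_split(examples, dev_fraction=0.1, seed=13):
    """Shuffle and split examples, e.g. a 9:1 train/dev split as in SQuAD split2."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_dev = int(len(examples) * dev_fraction)
    return examples[n_dev:], examples[:n_dev]

data = [
    {"document": "the eiffel tower was completed in 1889 .", "answer": "1889"},
    {"document": "paris is the capital of france .", "answer": "london"},  # not a sub-span
]
kept = [ex for ex in data if answer_is_subspan(ex)]     # drops the second example
train, dev = train_dev_split(kept * 10, dev_fraction=0.1)
```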
\vspace{-2mm} \bigskip \noindent Table \ref{tab:datasets} gives the details of the three datasets used in our experiments. \vspace{-2mm} \begin{table}[h] \centering \begin{small} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & Train & Dev & Test & $l_D$ & $l_Q$ & $l_A$ \\ \hline SQuAD-1 & 87,488 & 5,267 & 5,272 & 126 & 11 & 3\\ SQuAD-2 & 77,739 & 9,749 & 10,540 & 127 & 11 & 3 \\ MS Marco & 51,000 & 6,000 & 7,000 & 60 & 6 & 15\\ NewsQA & 76,560 & 4,341 & 4,292 & 583 & 8 & 5\\ \hline \end{tabular} \vspace{-1mm} \end{small} \caption{Description of the evaluation datasets. $l_D$ , $l_Q$ and $l_A$ stand for average length of document, question and answer respectively.} \vspace{-2mm} \label{tab:datasets} \end{table} \begin{table*}[ht!] \centering \begin{tabular}{|l||c|c|c|c||c|c|} \hline Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline PCFG-Trans & 28.77 & 17.81 & 12.64 & 9.47 & 18.97 & 31.68 \\ SeqCopyNet & - & - & - & 13.02 & - & 44.00 \\ seq2seq+z+c+GAN & 44.42 & 26.03 & 17.60 & 13.36 & 17.70 & 40.42 \\ NQG++ & 42.36 & 26.33 & 18.46 & 13.51 & 18.18 & 41.60 \\ MPQG & - & - & - & 13.91 & - & - \\ APM & 43.02 & 28.14 & 20.51 & 15.64 & - & - \\ ASs2s & - & - & - & 16.17 & - & - \\ S2sa-at-mp-gsa & 45.69 & 30.25 & 22.16 & 16.85 & 20.62 & 44.99 \\ CGC-QG & 46.58 & 30.90 & 22.82 & 17.55 & 21.24 & 44.53 \\ \hline Our model & \textbf{46.60} & \textbf{31.94} & \textbf{23.44} & \textbf{17.76} & \textbf{21.56} & \textbf{45.89} \\ \hline \end{tabular} \vspace{-2mm} \caption{Results in question generation on SQuAD split1} \label{tab:split1} \end{table*} \begin{table*}[ht!] 
\centering \begin{tabular}{|l||c|c|c|c||c|c|} \hline Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline LTA & 43.09 & 25.96 & 17.50 & 12.28 & 16.62 & 39.75 \\ MPQG & - & - & - & 13.98 & 18.77 & 42.72 \\ CorefNQG & - & - & 20.90 & 15.16 & 19.12 & - \\ ASs2s & - & - & - & 16.20 & 19.92 & 43.96 \\ S2sa-at-mp-gsa & 45.07 & 29.58 & 21.60 & 16.38 & 20.25 & 44.48 \\ \hline Our model & \textbf{45.13} & \textbf{30.44} & \textbf{23.40} & \textbf{17.09} & \textbf{21.25} & \textbf{45.81} \\ \hline \end{tabular} \vspace{-2mm} \caption{Results in question generation on SQuAD split2} \label{tab:split2} \vspace{-4mm} \end{table*} \subsection{Implementation Details} We use a one-layer bidirectional LSTM with a hidden dimension of 512 for both the encoder and the decoder. Our entire model is trained end-to-end, with batch size 64, a maximum of 200k steps, and the Adam optimizer with a learning rate of 0.001 and L2 regularization set to $10^{-6}$. We initialize our word embeddings with frozen pre-trained GloVe vectors \cite{Pennington2014}. Text is lowercased and tokenized with NLTK. We tune the number of biattention steps used in the encoder over \{1, 2, 3\} on the development set. During decoding, we use beam search with a beam size of 10, and stop decoding when every beam in the stack has generated the $\textless EOS \textgreater$ token. \subsection{Evaluation} Most prior studies evaluate model performance against target questions using automatic metrics. To enable direct empirical comparison, we likewise use Bleu-1, Bleu-2, Bleu-3, Bleu-4 \cite{Papineni2002}, METEOR \cite{Denkowski2014} and ROUGE-L \cite{Lin2004} to evaluate the question generation methods. Bleu measures the average n-gram precision on a set of reference sentences. METEOR is a recall-oriented metric used to calculate the similarity between generations and references. ROUGE-L evaluates the longest-common-subsequence recall of the generated sentences against the references.
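As a concrete illustration of the last of these metrics, a minimal recall-only ROUGE-L can be computed from the longest common subsequence of the token sequences. The official metric additionally folds in an LCS-based precision via an F-measure; this sketch shows only the recall component:

```python
def lcs_length(a, b):
    """Dynamic-programming longest common subsequence over two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_recall(candidate, reference):
    """LCS length divided by reference length (the recall half of ROUGE-L)."""
    ref = reference.split()
    return lcs_length(candidate.split(), ref) / len(ref)

score = rouge_l_recall("what year was the tower completed",
                       "in what year was the eiffel tower completed")
```

Here the LCS is the six shared tokens in order, out of eight reference tokens, giving a recall of 0.75.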
A question structurally and syntactically similar to the human question would score high on these metrics, indicating relevance to the document and answer. In order to have a more complete evaluation, we also report human evaluation results, where annotators evaluate the quality of the generated questions on two important parameters: naturalness (grammar) and difficulty (in answering the question) (Section 6.2). \begin{table*}[ht!] \centering \begin{tabular}{|l||c|c|c|c||c|c|} \hline Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline LTA & - & - & - & 10.46 & - & - \\ QG+QA & - & - & - & 11.46 & - & - \\ S2sa-at-mp-gsa & - & - & - & 17.24 & - & - \\ \hline Our model & \textbf{41.43} & \textbf{29.97} & \textbf{23.01} & \textbf{18.25} & \textbf{19.43} & \textbf{42.77} \\ \hline \end{tabular} \caption{Results in question generation on MS MARCO} \label{tab:ms_macro} \end{table*} \begin{table*}[ht!] \centering \begin{tabular}{|l||c|c|c|c||c|c|} \hline Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline PCFG-Trans & 16.90 & 7.94 & 4.72 & 3.08 & 13.74 & 23.78 \\ MPQG & 35.70 & 17.16 & 9.64 & 5.65 & 14.13 & 39.85 \\ NQG++ & 40.33 & 22.47 & 14.83 & 9.94 & 16.72 & 42.25 \\ CGC-QG & 40.45 & 23.52 & 15.68 & 11.06 & 17.43 & 43.16 \\ \hline Our model & \textbf{42.54} & \textbf{26.14} & \textbf{17.30} & \textbf{12.36} & \textbf{19.04} & \textbf{44.05} \\ \hline \end{tabular} \caption{Results in question generation on NewsQA} \label{tab:news_qa} \end{table*} \subsection{Baselines} As baselines, we compare our proposed model against several prior works on question generation. These include:\vspace{-3mm} \begin{itemize} \itemsep-0.2em \item \textbf{PCFG-Trans} \cite{Heilman2011}: a rule-based system that generates a question based on a given answer word span. \item \textbf{LTA} \cite{Du2017}: the seminal Seq2seq model for question generation.
\item \textbf{ASs2s} \cite{kim2018}: a Seq2Seq model that learns to identify which interrogative word should be used by replacing the answer in the original passage with a special token. \item \textbf{MPQG} \cite{Song2018}: a Seq2Seq model that matches the answer with the passage before generating the question. \item \textbf{QG+QA} \cite{duan2017}: a model that combines supervised and reinforcement learning for question generation. \item \textbf{NQG++} \cite{zhou2017}: a Seq2Seq model with a feature-rich encoder to encode answer position, POS and NER tag information. \item \textbf{APM} \cite{sun2018}: a model that incorporates the relative distance between the context words and the answer when generating the question. \item \textbf{S2sa-at-mp-gsa} \cite{zhao2018}: a Seq2Seq model that uses gated self-attention and a maxout-pointer mechanism to encode the context of the question. \item \textbf{SeqCopyNet} \cite{zhou2018_seq}: a Seq2Seq model that uses the copying mechanism to copy not only a single word but also a sequence from the input sentence. \item \textbf{Seq2seq+z+c+GAN} \cite{yao2018}: a GAN-based model that captures diversity and learns representations using the observed variables. \item \textbf{CorefNQG} \cite{du2018}: a Seq2Seq model that utilizes coreference information to link the contexts. \item \textbf{CGC-QG} \cite{liu2019}: a Seq2Seq model that learns to make decisions on which words to generate and to copy using rich syntactic features. \end{itemize} \begin{comment} \bigskip \noindent \textbf{Du}: This is the first large scale data driven model. Here the paragraph and question are passed separately to the generate the question. There is no explicit mechanism to focus on this large input while generating a question. \bigskip \noindent \textbf{Yao}:Here they first compute answer encoded document representation by encoding the answer in the paragraph.
Then, using self attention, a self-matched representation is presented to the encoder along with the original answer encoded paragraph representation. This is fed to the decoder for final output generation. \bigskip \noindent \textbf{Liu}: Construct a clue word predictor, by computing a dependency tree over the document and running it through a graph convolution. This clue-word predictor, along with other syntactic features is fed in to the model to generate a question by deciding to either generate or copy a word from the passage, guided by these features. \bigskip \noindent \textbf{Masking}: We study the impact of including the masking the answer in the paragraph representation. \bigskip \noindent \textbf{Depth}: We study the impact of varying the depth of the biattention recursion on the generation output. \end{comment} \section{Results and Analysis} In this section, we discuss the experimental results and some ablation studies of our proposed model. \subsection{Comparison with Baseline Models} We present the question generation performance of the baseline models and our model on the three QA datasets in Tables \ref{tab:split1}, \ref{tab:split2}, \ref{tab:ms_macro} and \ref{tab:news_qa}\footnote{For most baselines, we do not have access to their implementations. Hence, we present results only for the datasets they report on in their papers.}. We find that our model consistently outperforms all other baselines and sets a new state-of-the-art on all datasets and across different splits. For SQuAD split-1, we achieve an average absolute improvement of 0.2 in Bleu-4, 0.3 in Meteor and 1.3 points in Rouge-L score compared to the best previously reported result.\footnote{We take 5 random splits and report the average across the splits. The lowest performance of the 5 runs also exceeds the state-of-the-art in this setting. Previous methods take an equal random split of the development set into dev/test sets.
This can lead to inconsistencies in comparisons.} For SQuAD split-2, we achieve an even higher average absolute improvement of 0.7, 1.0 and 1.4 points in Bleu-4, Meteor and Rouge-L scores respectively, compared to \textit{S2sa-at-mp-gsa}, the best previous model on this dataset and also a document-level question generation model. This shows that our model can identify better answer-related context for question generation than other document-level methods. On the MS MARCO dataset, where the ground-truth questions are more natural, we achieve an absolute improvement of 1.0 in Bleu-4 score over the best previously reported result. On the NewsQA dataset, the hardest setting since the input documents are very long, our overall performance is still promising. Our model outperforms the CGC-QG model by average absolute margins of 1.3 Bleu-4, 1.6 Meteor, and 0.9 Rouge-L points, again demonstrating that exploiting the broader context helps the question generation system better match humans at the task. \subsection{Human Evaluation} To measure the quality of the questions generated by our system, we conduct a human evaluation. Most previous works, except the LTA system \cite{Du2017}, do not conduct any human evaluation, and for most of the competing methods we do not have the code to reproduce the outputs. Hence, we conduct human evaluation using the exact same settings and metrics as in \cite{Du2017} for a fair comparison. Specifically, we consider two criteria in human evaluation: (1) Naturalness, which indicates grammaticality and fluency; and (2) Difficulty, which measures the syntactic divergence and the reasoning needed to answer the question. We randomly sample 100 sentence-question pairs from our SQuAD experimental outputs. We then ask four professional English speakers to rate the pairs in terms of the above criteria on a 1$-$5 scale (5 being the best). The experimental results are given in Table \ref{tab:human_evaluation}.
\begin{table}[h] \centering \begin{small} \begin{tabular}{|p{3.5cm}|p{1.5cm}|p{1.5cm}|} \hline & Naturalness & Difficulty \\ \hline LTA & 3.36 & 3.03\\ Our model & 3.68 & \textbf{3.27} \\ \hline Human generated questions & \textbf{4.06} & 2.65\\ \hline \end{tabular} \end{small} \caption{Human evaluation results for question generation. Naturalness and difficulty are rated on a 1$-$5 scale (5 for the best).} \label{tab:human_evaluation} \end{table} The inter-rater agreement (Krippendorff's alpha) between the human evaluators is 0.21. The results imply that our model can generate questions of better quality than the LTA system. Our system tends to generate difficult questions because it gathers context from the whole document rather than from just one or two sentences. \subsection{Ablation Study} In this section, we study the impact of (1) the proposed attention mechanism in the encoder; (2) the number of attention stages used in that mechanism; and (3) the masking technique used in the encoder. \begin{table*}[h!] \centering \begin{tabular}{|p{5.0cm}||p{1.1cm}|p{1.1cm}|p{1.5cm}|} \hline Model & Bleu-4 & Meteor & Rouge-L \\ \hline \noindent Original (2-stage attention) & \textbf{17.76} & \textbf{21.56} & \textbf{45.89} \\ ~~~ - without attention & 3.06 & 10.83 & 28.75\\ ~~~ - without masking & 5.19 & 13.08 & 31.14\\ ~~~ - with 1-stage attention & 14.52 & 18.28 & 40.10 \\ ~~~ - with 3-stage attention & 12.87 & 16.05 & 38.33 \\ \hline \end{tabular} \caption{Ablation study on SQuAD split 1.} \label{tab:ablation} \end{table*} \bigskip \noindent \textbf{Impact of using encoder attention~} In this ablation, we remove the attention mechanism in the encoder and just pass the vanilla document representation to the decoder. As shown in Table \ref{tab:ablation}, without the attention mechanism the performance drops significantly (by more than 14 Bleu points).
We hypothesize that without attention, the model lacks the capability to identify the important parts of the document and hence generates questions unrelated to the target answer. \bigskip \noindent \textbf{Impact of number of attention stages~} As shown in Table \ref{tab:ablation}, increasing the number of attention stages from 1 to 2 improves the performance of the model significantly, by more than 3 Bleu-4 points. To gain a deeper understanding of the impact of the number of attention stages, we calculate, for the words in the document that occur in the ground-truth question, their total attention score at the end of the input attention layer, as in Figure \ref{fig:score}. For 1-stage and 2-stage attention, the total attention scores of the question words to be copied from the document are 0.43 and 0.52 respectively, demonstrating that on the SQuAD dataset the 2-stage attention covers more of the question words in a focused manner. An example of this effect can be seen in Figure \ref{fig:density}. The extra stage clearly helps in gathering more relevant context to generate a question closer to the ground truth. However, on further increasing the number of attention stages to 3, we observe that the quality of the generated questions deteriorates. This can be attributed to the fact that for most of the questions in SQuAD, 3-stage attention leads to a very cloudy context, where several words get covered, but with diluted attention. The coverage of 3-stage attention in Figure \ref{fig:score} shows this clearly: its coverage of ground-truth question words is lower than even that of 1-stage attention, explaining its poor question generation quality. \begin{figure}[t!]
\centering \includegraphics[width=0.8\linewidth]{./attention_scores.png} \caption{Average total attention score of words in the document that occurred in the ground truth question when using different attention stages (SQuAD split 1).} \label{fig:score} \vspace{2mm} \end{figure} \begin{figure}[t!] \includegraphics[width=1.0\linewidth]{./attention3.png} \caption{Qualitative analysis of the attention vectors. The intensity of the color (red) denotes the strength of the attention weights.} \label{fig:density} \end{figure} \bigskip \noindent \textbf{Impact of masking~} While attending to the answer's context and the related sentences is crucial, we find that it is imperative to mask out the answer before computing the input representation. This is demonstrated by the experimental results in Table \ref{tab:ablation}, where the Bleu-4 score increases by more than 12 points when this masking is applied. \subsection{Case Study} In Figure \ref{fig:example}, we present some examples where the document-level information obtained from our proposed multi-stage attention mechanism is needed to generate the correct questions. In example 1, the two-stage attention model is able to identify that the phrase \textit{``this unit''} refers to \textit{``abc motion pictures''}, which lies outside the sentence containing the answer. In example 2, two semantically related phrases \textit{``lower incomes''} and \textit{``lower wages''} in two different sentences are successfully linked by our two-stage attention model to generate the correct question. In example 3, the two-stage attention model is able to link two different sentences containing the same word (\textit{``french''}) and semantically related words (\textit{``bible''} and \textit{``scriptures''}), forming the relevant context for generating the expected question.
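The answer-masking step whose ablation is reported above can be sketched as replacing the rows of the context representation at the answer span with a single masking vector. The shapes here are toy values, and a zero vector stands in for the model's special (learned) masking vector:

```python
import numpy as np

def mask_answer(context, answer_start, answer_end, mask_vec):
    """Overwrite word representations inside the answer span [start, end)."""
    masked = context.copy()
    masked[answer_start:answer_end] = mask_vec
    return masked

l_d, d = 10, 4                                              # toy document length and hidden size
context = np.arange(l_d * d, dtype=float).reshape(l_d, d)   # stands in for C^(k)
mask_vec = np.zeros(d)                                      # stands in for the learned mask vector
c_final = mask_answer(context, 3, 5, mask_vec)              # answer occupies positions 3 and 4
```

Only the rows inside the span change; every other word representation, and the original matrix, are left untouched.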
\section{Conclusion} In this paper, we proposed a novel document-level approach to question generation that applies a multi-stage recursive attention mechanism to the document and answer representations to extend the relevant context. We demonstrate that taking additional attention stages helps the model learn a more relevant context, leading to better quality of the generated questions. We evaluate our method on three QA datasets (SQuAD, MS MARCO and NewsQA) and set new state-of-the-art results in question generation on all of them.
Rebel Richard Dean (born 11 April 1966) is a British actor, singer, songwriter, entertainer, and musician. He is known for his acting and singing roles in Joseph, Casualty, Only Fools and Horses, and Hollyoaks. Biography Dean was given a guitar at the age of 10 and started playing at school's plays. By the age of fifteen he played the lead male in his first theatre role, Pharaoh in Joseph and the Amazing Technicolor Dreamcoat. After leaving school, he drove a Ford pick-up truck during the day and played the pubs and clubs at night. He has played many roles in theater, film and TV productions including Hollyoaks, Only Fools and Horses,  Casualty, This Life, The Vet (HTV), Prince and the Pauper, Inspector Wycliffe (HTV). Other films he has played the male lead in include Ebb Tide and California Eden. Dean starred in the stage show 4 Steps to Heaven for the latter part of 1998 and most of 1999 playing the young, middle and mature Elvis Presley. He has co-written the anthem for the Cornish county rugby team Trelawney's Army. Film Commercial/ Documentary Musical/ Voiceover/ Pantomime Discography Dean has released a number of singles and albums so far in his music career. Albums Singles Other notable music contributions include That'll Be The Day portraying legends from Elvis Presley to Freddie Mercury. References 1966 births British male musicians British male actors British male singers Living people Place of birth missing (living people)
package com.kumar.brajesh.couriertracker.network; /** * Created by brajesh.k on 07/07/15. */ public class BaseResponse { }
! Mark Haddon The Curious Incident Of The Dog In The Nighttime Audiobook Listen to "The Curious Incident of the Dog in the Night-Time" by Mark Start a free day trial today and get your first audiobook free. by Mark Haddon. Written by Mark Haddon, Audiobook narrated by Jeff Woodman. Sign-in to download and listen to this audiobook today! First time visiting Audible? Get this book. Listen to Curious Incident of the Dog in the Night Time audiobook by Mark Haddon. Stream and download audiobooks to your computer, tablet or mobile phone. The Curious Incident of the Dog in the Night-Time Audible Audiobook – Unabridged. Mark Haddon . Audiobook. Mark Manson . Starting in , Haddon, a Harvard graduate, wrote and illustrated mostly children's books. During those days. Incident of the Dog in the Night-Time (Dramatised) (Unabridged) by Mark Haddon in iTunes. Read a description of this audiobook, customer reviews and more. Chapter 2," begins Christopher Boone, the remarkable protagonist of Mark Haddon's remarkable novel. Christopher has Asperger's syndrome, a version of. Advanced · Try Libby, our new app for enjoying ebooks and audiobooks! ×. Title details for The Curious Incident of the Dog in the Night Time by Mark Haddon. Listen to The Curious Incident of the Dog in the Night-Time (Unabridged) Audiobook by Mark Haddon, narrated by Ben Tibber. was won by Mike Carrington Ward's outstanding adaption of Mark Haddon's hugely successful novel The Curious Incident of the Dog in Night-time (Random. An explosive, highly charged, and hilarious middle-grade adventure from Haddon, acclaimed author of "The Curious Incident of the Dog in the Night-time.". The Curious Incident of the Dog in the Night-Time audiobook, by Mark Haddon Our hero in this exceptional mystery is a mystery himself. Christopher is 15 and. podarunkową · Załóż konto. Próbka. The Curious Incident of the Dog in the Night-Time - Mark Haddon Audiobook. The Curious Incident of the Dog in the. 
The Curious Incident of the Dog in the Night-Time has ratings and reviews. Brad said: The Prime Reasons Why I Enjoyed Mark Haddon's The. This is an audiobook version of the award-winning The Curious Incident of the Dog in the Night-time by Mark Haddon. A murder mystery like no. Booktopia has RC The Curious Incident of the Dog in the Night-time -CD Audio Book by Haddon, Mark. Buy a discounted audible edition of RC The. 'The dog was lying on the grass in the middle of the lawn in front of Mrs Shears' Christopher is a brilliant creation, and Mark Haddon's depiction of his world is. Buy The Curious Incident of the Dog in the Night-Time audio book on Unabridged Visit Audio Editions for more audio books by Mark Haddon!. The Audiobook (Cassette) of the The Curious Incident of the Dog in the Night- Time by Mark Haddon at Barnes & Noble. FREE Shipping on. Mark Haddon – The Curious Incident Of The Dog In The Night Time. The Curious Incident Of Random House Audiobooks – RC Format: 6 × CD. Country. in iTunes. Read a description of this audiobook, customer reviews and more. Mark Haddon, The Curious Incident of the Dog in the Night-Time (Dramatised. The Curious Incident of the Dog in the Night-Time Unabridged (Audio Download) : : Mark Haddon, Ben Tibber, Random House AudioBooks. The Curious Incident of the Dog in the Night-time (Audiobook CD): Haddon, Mark: The Curious Incident of the Dog in the Night Time. The Curious Incident of the Dog in the Night-Time Audiobook – Unabridged . © Mark Haddon (P) Recorded Books, LLC . was I surprised to learn it is fiction and Mark's knowledge and understanding is just next level impressive!. Ashapoorv: 'It helps a child understand the tough part of relationships and love, while it teaches adults that every child is special and one of a. Mark Haddon is a writer and artist. His bestselling novel, The Curious Incident of the Dog in the Night-Time, was published simultaneously by Jonathan Cape. 
The Curious Incident of the Dog in the Night-Time (Unabridged) (Audio Download): Mark Haddon, Ben Tibber, Random House AudioBooks: Amazon. A short summary of Mark Haddon's The Curious Incident of the Dog in the Night- time. This free synopsis covers all the crucial plot points of The Curious Incident. The curious incident of the dog in the night-time: a novel. by Mark Haddon; Jeff Woodman. Audiobook: Fiction. English. Unabridged. Prince Frederick, MD . The Curious Incident of the Dog in the Night-Time by Mark Haddon Audiobook. Published on Fri, 18 Jan The Curious Incident of the Dog in the. The Curious Incident of the Dog in the Night-Time Unabridged (Audio Download) : : Mark Haddon, Ben Tibber, Random House AudioBooks: Audible. This audio file should be playable on most browsers. It has been tested succesfully on: Microsoft Edge. OVER TEN MILLION COPIES SOLD The Curious Incident of the Dog in the Night- Time is a murder mystery novel like no other. The detective. The Curious Incident of the Dog in the Night-time, Haddon, Mark, Very Good Book. £ Buy it now. Free P&P. Authors: Haddon, Mark. Product Category. Audiobook Club Selection: The Curious Incident of the Dog Selection for Oct - Nov. Author: Mark Haddon. Narrator: Jeff Woodman. Christopher John Francis. The Curious Incident of the Dog in the Night-Time (Hörbuch-Download): Amazon. de: Mark Haddon, Ben Tibber, Random House AudioBooks: Bücher. Amazon has the Audible audiobook edition of The Curious Incident of the Dog in the Night-Time by Mark Haddon on sale for $ This book. The Curious Incident of the Dog in the Night-Time audiobook written by Mark Haddon. Narrated by Jeff Woodman. Get instant access to all your favorite books. Written by Mark Haddon and read aloud as an audiobook by Jeff Woodman, The Curious Incident of the Dog in the Night-Time follows the inner. hilarious middle-grade adventure from Mark Haddon, acclaimed author of The Curious Incident of the Dog in the Night-time. Buy the Audiobook Download. 
The Curious Incident of the Dog in the Night-time by Mark Haddon starting at $ The Curious Incident of the Dog in the Night-time has 23 available editions . The Curious Incident of the Dog in the Night-time The dog was lying on the grass in the middle of the lawn in front of Mrs Mark Haddon . Audiobooks. Shopping The Curious Incident of the Dog in the Night Time Audiobook Free Download now. Listen Listen to thousands of best sellers and new. The Curious Incident of the Dog in the Night-time study guide contains a biography of Mark Haddon, literature essays, quiz questions, major. The Curious Incident of the Dog in the Night-Time is a murder mystery novel like no other. The detective, and narrator, is Christopher Boone. The Curious Incident of the Dog in the Night-Time by Mark Haddon Audiobook. By Wanda Kit with Rating Miranda mazza. subscribers. Subscribe. Products 1 - 54 of 54 The Curious Incident of the Dog in the Night-time (Unabridged edition). By: Mark Haddon Audio Book. 'The dog was lying on the grass in. The Curious Incident of the Dog in the Night-Time Chapter summary. Brief summary of Chapter in The of the Dog in the Night-Time. by Mark Haddon. 39 :: 40 :: 41 :: 42 :: 43 :: 44 :: 45 :: 46 :: 47 :: 48 :: 49 :: 50 :: 51 :: 52 :: 53 :: 54 :: 55 :: 56 :: 57 :: 58 :: 59 :: 60 :: 61 :: 62 :: 63 :: 64 :: 65 :: 66 :: 67 :: 68 :: 69 :: 70 :: 71 :: 72 :: 73 :: 74 :: 75 :: 76 :: 77 :: 78
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,704
DeVry being probed by U.S. The Justice Department has asked DeVry Inc. to hand over documents regarding recruiter evaluation and compensation, the education company said. DeVry said it was cooperating with the request, received May 2 in connection with allegations that the company may have submitted false claims to the Department of Education.
{ "redpajama_set_name": "RedPajamaC4" }
4,534
Who is the last living actor from Gone With the Wind? Posted on December 3, 2022 by author Olivia de Havilland, an iconic actress of Hollywood's Golden Age and last surviving cast member of "Gone With the Wind," has died at age 104, her representatives said Sunday. Is Greg Giese still alive? "I had to get serious about a real job. I thought a little about it, but it was pretty much a pipe dream." Giese now lives in Corona, a retired insurance man who, at 74, is among the handful of cast members still alive and the youngest GWTW survivor. Is there anyone still living from The Wizard of Oz? LOS ANGELES — Jerry Maren, the last surviving munchkin from the classic 1939 film "The Wizard of Oz" and the one who famously welcomed Dorothy to Munchkin Land, has died at age 99. Maren died May 24 at a San Diego nursing home, his niece, Stacy Michelle Barrington, told The Associated Press on Wednesday. Is Melanie from Gone With the Wind still alive? Actress Olivia de Havilland, who played the doomed Southern belle Melanie in "Gone With the Wind," poses for a photograph Wednesday, Sept. 15, 2004, in Los Angeles. And then there were none. Olivia de Havilland, the last surviving star of the 1939 Civil War epic "Gone With the Wind," died Saturday in Paris at age 104. Is Bonnie from Gone With the Wind still alive? She is best known for her portrayal of Bonnie Blue Butler in Gone with the Wind (1939)…. Cammie King Died September 1, 2010 (aged 76) Fort Bragg, California, U.S. Resting place Holy Cross Cemetery, Culver City Occupation Child actress Years active 1938–1942 What movie star is 103 and still alive? List of centenarians (actors, filmmakers and entertainers) Dorothy Dickson 1893–1995 102 Caren Marsh Doll 1919– 103 Kirk Douglas 1916–2020 103 Ellen Albertini Dow 1913–2015 101 Who was Melanie Wilkes baby in Gone With the Wind? Beau Wilkes Melanie Hamilton Wilkes is a fictional character first appearing in the 1936 novel Gone with the Wind by Margaret Mitchell. 
In the 1939 film she was portrayed by Olivia de Havilland…. Melanie Hamilton Children Beau Wilkes (son, with Ashley) Unborn child (second child with Ashley; deceased) What killed Melanie Wilkes? Ashley and Scarlett are caught embracing. Despite the fact that various people tell her they saw the deed, Melanie does not believe them, and supports Scarlett. Melanie becomes pregnant. She dies in childbirth asking Scarlett to take care of Ashley and her son. Why did Rhett leave Scarlett? Rhett falls in love with Scarlett, but, despite their eventual marriage, their relationship never succeeds because of Scarlett's obsession with Ashley and Rhett's reluctance to express his feelings. Because Rhett knows that Scarlett scorns men she can win easily, Rhett refuses to show her she was won him. What happened to Scarlett O Hara's daughter? Cammie King, who as a cherubic little girl played the daughter of Scarlett O'Hara and Rhett Butler in "Gone With the Wind," then enjoyed something of a fan following at film festivals, died on Wednesday at her home in Fort Bragg, Calif. She was 76. The cause was cancer, her son, Matt Conlon, said. Previous: What are formal groups in business? Next: Which breed of dog is used in Vodafone?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
733
{"url":"http:\/\/jnva.biemdas.com\/archives\/1744","text":"## Paolo Cubiotti, Jen-Chih Yao, Some qualitative properties of solutions of higher-order lower semicontinus differential inclusions\n\nFull Text: PDF\nDOI: 10.23952\/jnva.6.2022.5.10\n\nVolume 6, Issue 5, 1 October 2022, Pages 585-599\n\nAbstract. Let $n,k\\in{\\bf N}$, $T>0$, and $F:[0,T]\\times({\\bf R}^n)^k\\to 2^{{\\bf R}^n}$ be a lower semicontinuos and bounded multifunction with nonempty closed values. We prove that there exists a bounded and upper semicontinuous multifunction $G:{\\bf R}\\times({\\bf R}^n)^k\\to2^{{\\bf R}^n}$ with nonempty compact convex values such that every generalized solution $u:[0,T]\\to{\\bf R}^n$ of the differential inclusion $u^{(k)}\\in G(t,u,u^\\prime, \\ldots, u^{(k-1)})$ is a generalized solution to the differential inclusion $u^{(k)}\\in F(t,u,u^\\prime, \\ldots, u^{(k-1)})$. As an application, we prove an existence and qualitative result for the generalized solutions of the Cauchy problem associated to the inclusion $u^{(k)}\\in F(t,u,u^\\prime,\\ldots,u^{(k-1)})$. In particular, we prove that if $F$ is lower semicontinuous and bounded with nonempty closed values, then the solution multifunction admits an upper semicontinuous multivalued selection with nonempty compact connected values. Finally, by applying the latter result, we prove an analogous existence and qualitative result for the generalized solutions of the Cauchy problem associated to the differential equation $g(u^{(k)})= f(t,u,u^\\prime,\\ldots,u^{(k-1)})$, where $f$ is continuous. 
We only assume that $g$ is continuous and locally nonconstant.","date":"2022-11-30 00:58:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 12, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9107034206390381, \"perplexity\": 219.4267245258085}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710712.51\/warc\/CC-MAIN-20221129232448-20221130022448-00177.warc.gz\"}"}
null
null
Grooverville is an unincorporated community in Brooks County, Georgia, United States. It was once known as Key and was located at the crossing of the Thomasville and Madison and Sharpe's Store Road, which was in Thomas County prior to the creation of Brooks County from Lowndes and Thomas counties in 1858. Grooverville was incorporated on December 8, 1859. The charter of Grooverville was terminated by an act of the Georgia Legislature effective July 1, 1995. Since then, Grooverville has been granted status as Grooverville Historic Township by the State of Georgia, Department of Community Affairs. Grooverville Methodist Church and Liberty Baptist Church, the latter listed on the National Register of Historic Places in 2013, are located in the area. Geography Grooverville is located at (30.729141, -83.715055). It is a circular area centered at the crossing of Liberty Church Road and Grooverville Road. It is located west-southwest of Quitman. References Unincorporated communities in Brooks County, Georgia Former municipalities in Georgia (U.S. state) Unincorporated communities in Georgia (U.S. state) Populated places disestablished in 1995 Unincorporated communities in Valdosta metropolitan area
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,133
Affiliate marketing is one of the most exciting and profitable forms of revenue for publishers who have built a following. In fact, it's predicted to become a 6.8 billion dollar industry in 2020. The discipline is fast evolving and publishers need to cater to the latest consumer trends and behaviors in order to maximize profits. In the coming year, more consumers will be shopping on their mobile devices. Digital assistants, such as Siri and Alexa, will make voice search an integral part of many consumers' lives. Meanwhile, more data will be collected on consumers' interactions with online content, providing valuable insights into how marketers can increase sales. The influencer marketing landscape has been changing significantly – consumers have become more skeptical about celebrity endorsements while turning to "micro-influencers" whom they follow and trust for product recommendations. This poses a unique opportunity for publishers who have been sharing content and gained the trust of their audience. More brands will be working with these content producers to reach highly targeted markets. As a publisher, you can get connected with more brands by joining an affiliate network, such as Aragon Premium, to maximize your exposure to advertisers. 40% of adults in the US use voice search once per day so it's becoming increasingly important to optimize your content for voice searches. Focus on long-tail keywords associated with high purchase intent. Create content that provides context to search engines while using clear and succinct verbiage that mimics how your target audience speaks. Since most voice searches are conducted on mobile devices, it's important to ensure that all your content, web pages, and tracking mechanisms are optimized for a fast and seamless mobile user experience to maximize conversion rates. "Banner blindness" has been a challenge for many advertisers. 
Publishers who create valuable content can overcome this issue with native advertising (e.g., social media ads) that focuses on sharing information in a way that is non-disruptive, targeted, contextual, and audience-centric. You can use native advertising to promote content and drive traffic to your website, from which you can link out to advertisers' products using your affiliate links. To optimize your ROI, make sure you have access to a network of advertisers who have products that are a good fit for your audience. Then, create highly targeted native ads to attract specific audiences that are most likely to convert. Today's digital marketing technologies allow publishers to collect visitor data and generate insights so they can improve their content and promotion strategies. To fully take advantage of data-driven marketing, you need to define KPIs (key performance indicators) that align with your business objectives. For example, as a publisher who aims to monetize your website's content, you should track each click all the way to a sale instead of simply measuring click-through rate or impressions. The insights generated from data analytics can help you identify the most effective advertising channels, home in on messaging that works best for your audience, and plug leaks in the purchasing path to increase conversions. Publishers of high-quality content are well positioned to take advantage of many affiliate marketing trends in the coming year. Having access to advertisers that offer high-quality products that are a good match for your audience is a great way to maximize your ROI. For example, publishers in the Aragon Premium network can become an affiliate of Digit, a popular app that helps users save money automatically. Monetize with Aragon Premium! We can help you get more out of your content creation and promotion efforts here.
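As a concrete (entirely hypothetical) illustration of tracking clicks all the way to sales rather than stopping at click-through rate, here is a small sketch computing conversion rate and earnings per click (EPC) from made-up campaign numbers; the channel names and figures are invented for the example:

```python
# Hypothetical campaign data: clicks, tracked sales, and revenue per channel.
campaigns = {
    "native_ads":   {"clicks": 1200, "sales": 48, "revenue": 1920.00},
    "voice_search": {"clicks": 300,  "sales": 18, "revenue": 810.00},
}

def conversion_rate(c):
    """Fraction of clicks that ended in a tracked sale."""
    return c["sales"] / c["clicks"]

def earnings_per_click(c):
    """Revenue attributed per click (EPC)."""
    return c["revenue"] / c["clicks"]

for name, c in campaigns.items():
    print(f"{name}: CR={conversion_rate(c):.1%}, EPC=${earnings_per_click(c):.2f}")
```

Comparing EPC across channels, rather than raw click counts, is what lets you shift budget toward the channels that actually convert.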
{ "redpajama_set_name": "RedPajamaC4" }
5,638
\section*{Introduction} The polylogarithm is a very powerful tool in studying special values of $L$-functions and is subject to many conjectures. Most notably, the Zagier conjecture claims that all values of $L$-functions of number fields can be described by polylogarithms. The interpretation of the polylogarithm functions in terms of periods of variations of Hodge structures has led to a motivic theory of the polylog and to generalizations such as the elliptic polylog of Beilinson and Levin. Building on this work, Wildeshaus has defined polylogarithms in a more general context and in particular for abelian schemes. Not very much is known about the extension classes arising from these ``abelian polylogarithms''. In an earlier paper \cite{K} we were able to show that the abelian polylogarithm, as defined by Wildeshaus, is indeed of motivic origin, i.e., is in the image of the regulator from $K$-theory. It was Levin \cite{L} who started to investigate certain ``polylogarithmic currents'' on abelian schemes, which are related to the construction by Wildeshaus. Very recently, Blotti\`ere showed in his thesis \cite{B} that these currents actually represent the polylogarithmic extension in the category of Hodge modules. Furthermore, specializing to the case of Hilbert modular varieties, he computed the residue of the associated Eisenstein classes, which are just the pull-back of the polylog along torsion sections of the universal abelian variety. This residue is given in terms of (critical) special values of $L$-functions of the totally real field that defines the Hilbert modular variety. His computation uses the explicit description of the polylog in terms of the currents constructed by Levin \cite{L}.
In this paper we will, following and extending ideas from the case of the elliptic polylog treated in \cite{HK} (which is in turn inspired by \cite{BL}), present a completely different approach to this residue computation, which avoids computations as much as possible and relates the polylog on the Hilbert modular variety to the classes constructed by Nori and Sczech. Instead of computing the degeneration directly on the base (as in the approach by Blotti\`ere), we work with the polylogarithm, which lives on the universal abelian scheme, and use its good functorial properties to compute the degeneration. Contrary to earlier approaches to the degeneration, which work in the algebraic category of schemes, we view the problem as topological in nature and work entirely in the topological category. For this the notion of the ``topological polylogarithm'', which is defined for an arbitrary real torus, is essential. We think that this unusual approach is of independent interest. The idea to study the polylogarithm on the moduli scheme of elliptic curves via its degeneration at the boundary is already prominent in \cite{BL}, where it is shown that the elliptic polylog degenerates into the cyclotomic polylog. These ideas were developed further by Wildeshaus \cite{W2} in the context of toroidal compactifications of moduli schemes of abelian varieties. He could show that, in this general context too, the polylogarithm is stable under degeneration. In \cite{HK} this degeneration principle was exploited for the moduli space of elliptic curves and used to relate the elliptic polylogarithm to the critical and non-critical values of Dirichlet $L$-functions.
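(The following numerical aside is mine, not part of the original text.) For $F=\mathbb{Q}$ the critical values mentioned above are governed by Bernoulli polynomials, via the classical identity $\zeta(1-k,a)=-B_k(a)/k$ for the Hurwitz zeta function. The Bernoulli polynomials themselves are pinned down by the difference identity $B_n(x+1)-B_n(x)=nx^{n-1}$ and the reflection identity $B_n(1-x)=(-1)^nB_n(x)$, the latter mirroring the parity conditions on the sign character appearing later. A sketch checking both identities exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    """B_0..B_N from the recurrence sum_{k=0}^{n} C(n+1,k) B_k = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        s = sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

def bernoulli_poly(n, x, B):
    """B_n(x) = sum_k C(n,k) B_k x^(n-k)."""
    return sum(Fraction(comb(n, k)) * B[k] * x ** (n - k) for k in range(n + 1))

B = bernoulli_numbers(8)
x = Fraction(2, 7)
for n in range(1, 8):
    # difference equation: B_n(x+1) - B_n(x) = n x^(n-1)
    assert bernoulli_poly(n, x + 1, B) - bernoulli_poly(n, x, B) == n * x ** (n - 1)
    # reflection: B_n(1-x) = (-1)^n B_n(x)
    assert bernoulli_poly(n, 1 - x, B) == (-1) ** n * bernoulli_poly(n, x, B)
```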
To describe the theorem more precisely, consider the specialization of the polylog, which gives Eisenstein classes (say in the category of mixed \'etale sheaves to fix ideas) \[ \operatorname{Eis}^k(\alpha)\in \operatorname{Ext}^{2g-1}_S(\mathbb{Q}_\ell, \operatorname{Sym}^k{\cal H}(g)), \] where $S$ is the Hilbert modular variety of dimension $g$ and ${\cal H}$ is the locally constant sheaf of relative Tate-modules of the universal abelian scheme. Let $j:S\to{\overline{S}}$ be the Baily-Borel compactification of $S$ and $i:{\partial S}:= {\overline{S}}\setminus S\to {\overline{S}}$ the inclusion of the cusps. The degeneration or residue map (see \ref{deg2} for the precise definition) is then \[ \operatorname{res}:\operatorname{Ext}^{2g-1}_S(\mathbb{Q}_\ell, \operatorname{Sym}^k{\cal H}(g))\to \operatorname{Hom}_{\partial S}(\mathbb{Q}_\ell,\mathbb{Q}_\ell). \] The target of this map sits inside a sum of copies of $\mathbb{Q}_\ell$, and the main result of this paper, theorem \ref{maintheorem}, describes $\operatorname{res}(\operatorname{Eis}^k(\alpha))$ in terms of special values of (partial) $L$-functions of the totally real field defining $S$. There is a very interesting question raised by the results in this paper. In \cite{HK} we were able to construct extension classes related to non-critical values of Dirichlet $L$-functions whenever the residue map was zero on the specialization of the polylog. Is there an analogous result here? The paper is organized as follows: In the first section we review the definition of the Hilbert modular variety, define the residue or degeneration map and formulate our main theorem. The second section reviews the theory of the polylog and the Eisenstein classes, emphasizing the topological situation, which is not extensively covered in the literature. In the third section we give the proof of the main theorem.
It is a great pleasure to thank David Blotti\`ere for a series of interesting discussions during his stay in Regensburg, which stimulated the research in this paper. Moreover, I would like to thank Sascha Beilinson for making available to me, some time ago, his notes about his and A. Levin's interpretation of Nori's work. Finally, I thank J\"org Wildeshaus for helpful comments. \section{Polylogarithms and degeneration} We review the definition of a Hilbert modular variety to fix notation and pose the problem of computing the degeneration of the specializations of the polylogarithm at the boundary. The main theorem describes this residue in terms of special values of $L$-functions. \subsection{Notation}\label{notation} As in \cite{BL} we deal with three different types of sheaves simultaneously. Let $X/k$ be a variety and $L$ a coefficient ring for our sheaf theory; then we consider \begin{itemize} \item[i)] $k=\mathbb{C}$, the usual topology on $X(\mathbb{C})$ and $L$ any commutative ring \item[ii)] $k=\mathbb{R}$ or $\mathbb{C}$ and $L=\mathbb{Q}$ or $\mathbb{R}$, and we work with the category of mixed Hodge modules \item[iii)] $k=\mathbb{Q}$ and $L=\mathbb{Z}/l^r\mathbb{Z},\mathbb{Z}_l$ or $\mathbb{Q}_l$, and we work with the category of {\'e}tale sheaves \end{itemize} \subsection{Hilbert modular varieties} We recall the definition of Hilbert modular varieties following Rapoport \cite{R}. To avoid all technicalities, we will only consider the moduli scheme over $\mathbb{Q}$. The theory works over more general base schemes without any modification. Let $F$ be a totally real field, $g:=[F:\mathbb{Q}]$, ${\cal O}$ the ring of integers, ${\mathfrak{D}}^{-1}$ the inverse different and $d_F$ its discriminant. Fix an integer $n\geq 3$.
We consider the functor, which associates to a scheme $T$ over $\operatorname{Spec}\mathbb{Q}$ the isomorphism classes of triples $(A,\alpha,\lambda)$, where $A/T$ is an abelian scheme of dimension $g$, with real multiplication by ${\cal O}$, $\alpha:\underline{\operatorname{Hom}}_{et,{\cal O},sym}({\cal A}, {\cal A}^*)\to {\mathfrak{D}}^{-1} $ is a ${\mathfrak{D}}^{-1}$-polarization in the sense of \cite{R} 1.19, i.e., an ${\cal O}$-module isomorphism respecting the positivity of the totally positive elements in ${\mathfrak{D}}^{-1}\subset F$, and $\lambda:A[n]\cong ({\cal O}/n{\cal O})^2$ is a level $n$ structure satisfying the compatibility of \cite{R} 1.21. For $n\geq 3$ this functor is represented by a smooth scheme $S:=S^{{\mathfrak{D}}^{-1}}_n$ of finite type over $\operatorname{Spec} \mathbb{Q}$ . Let \[ {\cal A}\xrightarrow{\pi}S \] be the universal abelian scheme over $S$. In any of the three categories of sheaves i)-iii) from \ref{notation} we let \[ {\cal H}:= \underline{\operatorname{Hom}}_S(R^1\pi_*L,L) \] the first homology of ${\cal A}/S$. In the {\'e}tale case and $L=\mathbb{Z}_\ell$, the fiber of ${\cal H}$ at a point is the Tate module of the abelian variety over that point. \subsection{Transcendental description} For the later computation we need a description in group theoretical terms of the complex points $S(\mathbb{C})$ and of ${\cal H}$. Define a group scheme $G/\operatorname{Spec} \mathbb{Z}$ by the Cartesian diagram \[ \begin{CD} G@>>>\operatorname{Res}_{{\cal O}/\mathbb{Z}}\operatorname{Gl}_2\\ @VVV@VV\det V\\ {\mathbb{G}_m}@>>> \operatorname{Res}_{{\cal O}/\mathbb{Z}}{\mathbb{G}_m} \end{CD} \] and let \[ \mathfrak{H}^g_\pm:= \{\tau\in F\otimes\mathbb{C}|Im\; \tau \mbox{ totally positive or totally negative}\}. 
\] Then $\left(\begin{array}{cc}a&b \\ c&d\end{array}\right)\in G(\mathbb{R})$ acts on $\mathfrak{H}^g_\pm$ by the usual formula \[ \left(\begin{array}{cc}a&b \\ c&d\end{array}\right)\tau= \frac{a\tau+b}{c\tau+d} \] and the stabilizer of $1\otimes i\in \mathfrak{H}^g_\pm$ is \[ K_\infty:= (F\otimes \mathbb{C})^*\cap G(\mathbb{R}), \] so that \[ \mathfrak{H}^g_\pm\cong G(\mathbb{R})/K_\infty. \] With this notation one has \[ S(\mathbb{C})=G(\mathbb{Z})\backslash (\mathfrak{H}^g_\pm\times G(\mathbb{Z}/n\mathbb{Z})). \] The group $G(\mathbb{Z}/n\mathbb{Z})$ acts on $S(\mathbb{C})$ by right multiplication. The determinant $\det:G\to {\mathbb{G}_m}$ induces \[ S(\mathbb{C})\to {\mathbb{G}_m}(\mathbb{Z}/n\mathbb{Z}) \] and the fibers are the connected components. Define a subgroup $D\subset G$ isomorphic to ${\mathbb{G}_m}$ by $D:=\bigl\{\bigl(\begin{smallmatrix} a&0\\ 0&1 \end{smallmatrix}\bigr)\in G:a\in {\mathbb{G}_m} \bigr\}$. This gives a section of $\det$. Then the action of $D(\mathbb{Z}/n\mathbb{Z})$ by right multiplication is transitive on the set of connected components. The embedding $G(\mathbb{Z})\subset \operatorname{Gl}_2({\cal O})$ defines an action of $G(\mathbb{Z})$ on ${\cal O}^{\oplus 2}$, and in the topological realization the local system ${\cal H}$ is given by the quotient \[ G(\mathbb{Z})\backslash (\mathfrak{H}^g_\pm\times{\cal O}^{\oplus 2} \times G(\mathbb{Z}/n\mathbb{Z})). \] In particular, as a family of real ($2g$-dimensional) tori, the complex points ${\cal A}(\mathbb{C})$ of the universal abelian scheme can be written as \[ G(\mathbb{Z})\backslash (\mathfrak{H}^g_\pm\times (F\otimes\mathbb{R}/{\cal O})^{\oplus 2}\times G(\mathbb{Z}/n\mathbb{Z})) \] and the level $n$ structure is given by the subgroup \[ (\frac{1}{n}{\cal O}/{\cal O})^{\oplus 2}\subset (F\otimes\mathbb{R}/{\cal O})^{\oplus 2}. \] The ${\cal O}$-multiplication on ${\cal A}(\mathbb{C})$ is in this description given by the natural ${\cal O}$-module structure on $F\otimes\mathbb{R}$.
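As a numerical illustration of this componentwise action (my own sketch, not part of the text; the field $F=\mathbb{Q}(\sqrt{2})$ and the matrix are chosen arbitrarily): the identity $\operatorname{Im}(\gamma\tau)=\det(\gamma)\operatorname{Im}(\tau)/|c\tau+d|^2$, applied in each real embedding, shows that matrices with totally positive determinant preserve the set of $\tau$ with totally positive imaginary part.

```python
import math

SQRT2 = math.sqrt(2.0)

def embed(a, b):
    """Image of a + b*sqrt(2) under the two real embeddings of Q(sqrt(2))."""
    return (a + b * SQRT2, a - b * SQRT2)

def moebius(gamma, tau):
    """Componentwise Moebius action of gamma = ((a,b),(c,d)) on tau in F (x) C."""
    (a, b), (c, d) = gamma
    return tuple((a[i] * tau[i] + b[i]) / (c[i] * tau[i] + d[i])
                 for i in range(len(tau)))

# gamma = [[1, sqrt(2)], [1, 1 + sqrt(2)]] has determinant 1 in every embedding
gamma = ((embed(1, 0), embed(0, 1)),
         (embed(1, 0), embed(1, 1)))

tau = (0.3 + 1.0j, -0.7 + 2.0j)   # totally positive imaginary part
gtau = moebius(gamma, tau)

for i in range(2):
    a, b = gamma[0][0][i], gamma[0][1][i]
    c, d = gamma[1][0][i], gamma[1][1][i]
    det = a * d - b * c
    # Im(gamma.tau) = det * Im(tau) / |c*tau + d|^2 in each embedding
    predicted = det * tau[i].imag / abs(c * tau[i] + d) ** 2
    assert abs(gtau[i].imag - predicted) < 1e-12
    assert gtau[i].imag > 0   # the upper component is preserved
```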
\subsection{Transcendental description of the cusps} The following description of the boundary cohomology is inspired by \cite{Ha}. Let $B\subset G$ be the subgroup of upper triangular matrices, $T\subset B$ its maximal torus and $N\subset B$ its unipotent radical. We have an exact sequence \[ 1\to N\to B\xrightarrow{q} T\to 1. \] We denote by $G^1$, $B^1$ and $T^1$ the subgroups of determinant $1$. Let $K^B_\infty:=B(\mathbb{R})\cap K_\infty$; then the Cartan decomposition shows that $\mathfrak{H}^g_\pm=B(\mathbb{R})/K^B_\infty$. A pointed neighborhood of the set of all cusps is given by \begin{equation} \widetilde{S}_B:=B(\mathbb{Z})\backslash\bigl( B(\mathbb{R})/K^B_\infty \times G(\mathbb{Z}/n\mathbb{Z})\bigr). \end{equation} In particular, the set of cusps is \begin{equation}\label{cusps} {\partial S}(\mathbb{C})=B^1(\mathbb{Z})\backslash G(\mathbb{Z}/n\mathbb{Z}). \end{equation} The fibres of the map ${\partial S}(\mathbb{C})\to {\mathbb{G}_m}(\mathbb{Z}/n\mathbb{Z})$ induced by the determinant are \begin{equation}\label{cuspsasp1} B^1(\mathbb{Z})\backslash G^1(\mathbb{Z}/n\mathbb{Z})\cong \Gamma_G\backslash \mathbb{P}^1({\cal O}), \end{equation} where $\Gamma_G:=\ker(G^1(\mathbb{Z})\to G^1(\mathbb{Z}/n\mathbb{Z}))$. In particular, we can think of a cusp represented by $h\in G^1(\mathbb{Z}/n\mathbb{Z})$ as a rank $1$ ${\cal O}$-module $\mathfrak{b}_h$, which is a quotient \begin{equation}\label{phdefn} {\cal O}^2\xrightarrow{p_h}\mathfrak{b}_h, \end{equation} together with a level structure, i.e., a basis $h\in G^1(\mathbb{Z}/n\mathbb{Z})$. Explicitly, the fractional ideal $\mathfrak{b}_h$ is generated by any representatives $u,v\in{\cal O}$ of the second row of $h$. The group $G(\mathbb{Z}/n\mathbb{Z})$ acts on $\widetilde{S}_B$ by multiplication from the right. This action is transitive on the connected components of $\widetilde{S}_B$.
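To make this concrete in the simplest case $F=\mathbb{Q}$ (so ${\cal O}=\mathbb{Z}$ and $g=1$), one can enumerate the projective line $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$, i.e., primitive vectors modulo units, which indexes the level-$n$ cusp data; the classical count is $|\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})|=n\prod_{p\mid n}(1+1/p)$. A small sketch (not part of the paper) verifying this count:

```python
from math import gcd

def proj_line(N):
    """Points of P^1(Z/NZ): primitive pairs (a, b) mod N modulo units of Z/NZ."""
    units = [u for u in range(N) if gcd(u, N) == 1]
    seen, reps = set(), []
    for a in range(N):
        for b in range(N):
            if gcd(gcd(a, b), N) != 1:
                continue           # not primitive
            orbit = frozenset(((u * a) % N, (u * b) % N) for u in units)
            if orbit not in seen:
                seen.add(orbit)
                reps.append((a, b))
    return reps

def predicted_size(N):
    """|P^1(Z/NZ)| = N * prod_{p | N} (1 + 1/p)."""
    size, M, p = N, N, 2
    while p * p <= M:
        if M % p == 0:
            size = size * (p + 1) // p
            while M % p == 0:
                M //= p
        p += 1
    if M > 1:
        size = size * (M + 1) // M
    return size

for N in (2, 3, 4, 5, 6, 12):
    assert len(proj_line(N)) == predicted_size(N)
```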
Define \begin{equation}\label{cuspneighbor} {S}_B:=B(\mathbb{Z})\backslash\bigl( B(\mathbb{R})/K^B_\infty \times B(\mathbb{Z}/n\mathbb{Z})\bigr), \end{equation} then $S_B\subset \widetilde{S}_B$ is a union of connected components of $\widetilde{S}_B$. Let $K^T_\infty$ (respectively $T(\mathbb{Z})$) be the image of $K^B_\infty$ (respectively $B(\mathbb{Z})$) under $q:B(\mathbb{R})\to T(\mathbb{R})$. Define \begin{equation} S_T:= T(\mathbb{Z})\backslash \bigl(T(\mathbb{R})/K^T_\infty\times T(\mathbb{Z}/n\mathbb{Z})\bigr), \end{equation} then the map $q:B\to T$ induces a fibration \begin{equation} q:S_B\to S_T, \end{equation} whose fibers are $N(\mathbb{Z})\backslash \bigl(N(\mathbb{R})\times N(\mathbb{Z}/n\mathbb{Z})\bigr)$ with $N(\mathbb{Z}):=B(\mathbb{Z})\cap N(\mathbb{R})$. Denote by \begin{equation}\label{udefn} u:S_T\to pt \end{equation} the structure map to a point. For the study of the degeneration, one considers the diagram \begin{equation}\label{comdiagr} \begin{CD} S_B@>q>> S_T@>u >>pt\\ @VVV\\ S \end{CD} \end{equation} In fact we are interested in the cohomology of certain local systems on these topological spaces. For the computations it is convenient to replace $S_B$ and $S_T$ by homotopy equivalent spaces as follows. Define $K^{T^1}_\infty:=K^T_\infty\cap T^1(\mathbb{R})$. Then the inclusion induces an isomorphism \[ S^1_T:=T^1(\mathbb{Z})\backslash \bigl( T^1(\mathbb{R})/K^{T^1}_\infty\times T(\mathbb{Z}/n\mathbb{Z})\bigr)\cong S_T. \] The map $a\mapsto \bigl(\begin{smallmatrix} a&0\\ 0& a^{-1}\end{smallmatrix}\bigr)$ defines isomorphisms $ (F\otimes \mathbb{R})^*\cong T^1(\mathbb{R})$ and ${\cal O}^*\cong T^1(\mathbb{Z})$. Note that $K^{T^1}_\infty\subset (F\otimes \mathbb{R})^*$ is identified with the two torsion subgroup in $(F\otimes \mathbb{R})^*$ and that $K^{T^1}_\infty\cong (\mathbb{Z}/2\mathbb{Z})^g$ permutes the set of connected components of $T^1(\mathbb{R})$. 
\begin{lemma}\label{s1defn} Let $(F\otimes \mathbb{R})^1$ be the subgroup of $(F\otimes \mathbb{R})^*$ of elements of norm $1$ and ${\cal O}^{*,1}={\cal O}^*\cap (F\otimes \mathbb{R})^1$. Then \[ S^1_T:={\cal O}^{*,1}\backslash \bigl((F\otimes \mathbb{R})^1/K^{T^1}_\infty\cap(F\otimes \mathbb{R})^1 \times T(\mathbb{Z}/n\mathbb{Z})\bigr) \] is homotopy equivalent to $S_T$. Moreover, the inclusion of the totally positive elements $(F\otimes \mathbb{R})^1_+$ into $(F\otimes \mathbb{R})^1$ provides an identification \[ (F\otimes \mathbb{R})^1_+\cong(F\otimes \mathbb{R})^1/K^{T^1}_\infty\cap(F\otimes \mathbb{R})^1. \] \end{lemma} \begin{proof} The exact sequence \[ 0\to (F\otimes \mathbb{R})^1\to (F\otimes \mathbb{R})^*\to \mathbb{R}^*\to 0 \] together with the fact that $K^{T^1}_\infty$ is the two-torsion in $(F\otimes \mathbb{R})^*$ allows us to identify \[ T^1(\mathbb{R})/K^{T^1}_\infty\cong \bigl((F\otimes \mathbb{R})^1/K^{T^1}_\infty\cap(F\otimes \mathbb{R})^1\bigr)\times \mathbb{R}_{>0}. \] The last identity is clear. \end{proof} We define $S^1_B$ to be the inverse image of $S^1_T$ under $q$, so that we have a Cartesian diagram \begin{equation}\label{eq:basedegener} \begin{CD} S^1_B@>q>> S^1_T\\ @VVV@VVV\\ S_B@>q>>S_T. \end{CD} \end{equation} Over $S_B$ the representation ${\cal O}^2$ has a filtration \begin{align} \label{eq:localsystemdegener} 0\to {\cal O}\to {\cal O}^2\xrightarrow{p} {\cal O}\to 0, \end{align} where the first map sends $a\in {\cal O}$ to the vector ${a\choose 0}$ and the second map is ${a\choose b}\mapsto b$. This induces a filtration on the local system ${\cal H}$ \begin{equation}\label{locexseq} 0\to {\cal N}\to {\cal H}\to {\cal M}\to 0, \end{equation} where ${\cal N}$ and ${\cal M}$ are the associated local systems.
In particular, over $S^1_B$ one has a filtration of topological tori \begin{equation} \label{eq:toridegener} 0\to{\cal T}_{\cal N}\to {\cal A}(\mathbb{C})\xrightarrow{p} {\cal T}_{\cal M}\to 0, \end{equation} where ${\cal T}_{\cal N}:= {\cal N}\otimes\mathbb{R}/\mathbb{Z}$ and ${\cal T}_{\cal M}:={\cal M}\otimes\mathbb{R}/\mathbb{Z}$. By definition of $N$ the fibration in (\ref{eq:toridegener}) and (\ref{eq:basedegener}) are compatible, i.e., one has a commutative diagram \begin{equation}\label{degenerationdiagr} \begin{CD} {\cal A}(\mathbb{C})@>p>>{\cal T}_{\cal M}\\ @V\pi VV@VV\pi_{\cal M} V\\ S^1_B @>q>> S^1_T. \end{CD} \end{equation} \subsection{The degeneration map} In this section we explain the degeneration problem we want to consider. The polylogarithm on $\pi:{\cal A}\to S$ defines for certain linear combinations $\alpha$ of torsion sections of ${\cal A}$ an extension class \begin{equation}\label{pol} \operatorname{Eis}^k(\alpha)\in \operatorname{Ext}^{2g-1}_{?,S}(L, \operatorname{Sym}^k{\cal H}(g)), \end{equation} where $?$ can be $MHM,et,top$. The construction of this class will be given in section \ref{polsection} definition \ref{eisdefn}. Let ${\overline{S}}$ be the Baily-Borel compactification of $S$. Denote by ${\partial S}:={\overline{S}}\setminus S$ the set of cusps. We get \[ {\partial S}\xrightarrow{i}{\overline{S}}\xleftarrow{j}S. \] The adjunction map together with the edge morphism in the Leray spectral sequence for $Rj_*$ gives \begin{equation}\label{deg1} \begin{CD} \operatorname{Ext}^{2g-1}_S(L, \operatorname{Sym}^k{\cal H}(g))@>>> \operatorname{Ext}^{2g-1}_{{\partial S}}(L, i^*Rj_*\operatorname{Sym}^k{\cal H}(g))\\ &\searrow&@VVV\\ && \operatorname{Hom}_{{\partial S}}(L, i^*R^{2g-1}j_*\operatorname{Sym}^k{\cal H}(g)). \end{CD} \end{equation} There are several possibilities to compute $i^*R^{2g-1}j_*\operatorname{Sym}^k{\cal H}(g)$. \begin{thm}\label{degenercompu} Assume that $\mathbb{Q}\subset L$. 
Then, in any of the categories $MHM,et,top$, there is a canonical isomorphism \[ i^*R^{2g-1}j_*\operatorname{Sym}^k{\cal H}(g)\cong L, \] where $L$ has the trivial Hodge structure (resp. the trivial Galois action). \end{thm} \noindent{\bf Remark:\ } J\"org Wildeshaus has pointed out that the determination of the weight on the right hand side is not necessary for our main result, but follows from it. In fact, our main result gives non-zero classes in \[ \operatorname{Hom}_{{\partial S}}(L, i^*R^{2g-1}j_*\operatorname{Sym}^k{\cal H}(g)), \] so that the rank one sheaf $i^*R^{2g-1}j_*\operatorname{Sym}^k{\cal H}(g)$ has to be of weight zero.\\ Using this identification we define the residue or degeneration map: \begin{defn}\label{deg2} The map from (\ref{deg1}) together with the identification of \ref{degenercompu} defines the {\em residue map} \[ \operatorname{res}:\operatorname{Ext}^{2g-1}_S(L, \operatorname{Sym}^k{\cal H}(g))\to\operatorname{Hom}_{{\partial S}}(L,L). \] The residue map is equivariant for the $G(\mathbb{Z}/n\mathbb{Z})$ action on both sides. \end{defn} \begin{proof} (of theorem \ref{degenercompu}). In the case of Hodge modules we use theorem 2.9 in Burgos-Wildeshaus \cite{BW}, and in the \'etale case we use theorem 5.3.1 in \cite{Pi}. Roughly speaking, both results assert that the higher direct image can be calculated using group cohomology and the ``canonical construction'', which associates to a representation of the group defining the Shimura variety a Hodge module resp. an \'etale sheaf. More precisely, from a topological point of view, the monodromy at the cusps is exactly the cohomology of $\widetilde{S}_B$. One has \[ H^{2g-1}(\widetilde{S}_B, \operatorname{Sym}^k{\cal H}(g))\cong\operatorname{Ind}_{B(\mathbb{Z}/n\mathbb{Z})}^{G(\mathbb{Z}/n\mathbb{Z})} H^{2g-1}(S_B,\operatorname{Sym}^k{\cal H}(g)) \] and \[ H^{2g-1}(S_B,\operatorname{Sym}^k{\cal H}(g))\cong \bigoplus_{r+s=2g-1} H^r(S_T,R^sq_*\operatorname{Sym}^k{\cal H}(g)).
\] As the cohomological dimension of $\Gamma_T$ is $g-1$ and that of $\Gamma_N$ is $g$, one has in fact \[ H^{2g-1}(S_B,\operatorname{Sym}^k{\cal H}(g))\cong H^{g-1}(S_T,R^gq_*\operatorname{Sym}^k{\cal H}(g)). \] The exact sequence \[ 0\to {\cal O}\to {\cal O}^2\xrightarrow{p} {\cal O}\to 0 \] from (\ref{eq:localsystemdegener}) shows that $R^gq_*\operatorname{Sym}^k{\cal H}(g)$ can be identified via $p$ with $\operatorname{Sym}^k{\cal O}\otimes L$ with the induced $T(\mathbb{Z})$ action, which maps $\bigl(\begin{smallmatrix} a&0\\ 0& d\end{smallmatrix}\bigr)$ to $d^{k}$. To compute the coinvariants, extend the coefficients to $\mathbb{R}$, so that \[ {\cal O}\otimes \mathbb{R}\cong \bigoplus_{\tau:F\to\mathbb{R}} \mathbb{R} \] and $\bigl(\begin{smallmatrix} a&0\\ 0& d\end{smallmatrix}\bigr)\in T(\mathbb{Z})$ acts via $\tau(d)$ on the component indexed by $\tau$. Thus $\operatorname{Sym}^k{\cal O}\otimes L$ can only have a trivial quotient if $k\equiv 0\mbox{ mod }g$, and on this one-dimensional quotient the action is by the norm map $T(\mathbb{Z})\to \pm 1$. One gets: \[ H^{g-1}(S_T, \operatorname{Sym}^k{\cal O}\otimes L)\cong \left\{\begin{array}{cc} L&\mbox{ if } k \equiv 0 \mbox{ mod }g\\ 0&\mbox{ else} \end{array}\right. \] The above-mentioned theorems imply that this topological computation also gives the result in the categories $MHM, et, top$. The Hodge structure on $H^{g-1}(S_T, \operatorname{Sym}^k{\cal O}\otimes L)$ is the trivial one, as one sees from the explicit description of the action of $T$ and the fact that the action of the Deligne torus $\mathbb{S}$, which defines the weight, is induced from the embedding $x\mapsto \bigl(\begin{smallmatrix} x&0\\ 0& 1\end{smallmatrix}\bigr)$, hence is trivial. The same remark and proposition 5.5.4 in \cite{Pi} show that the weight is also zero in the \'etale case.
\end{proof} \subsection{Partial zeta functions of totally real fields} Let $\mathfrak{b}, \mathfrak{f}$ be relatively prime integral ideals of ${\cal O}$ and $\epsilon: (\mathbb{R}\otimes F)^*\to \{\pm 1\}$ a sign character. This is a product of characters $\epsilon_\tau:\mathbb{R}^*\to \{\pm 1\}$ for all embeddings $\tau:F\to \mathbb{R}$. Denote by $|\epsilon |$ the number of non-trivial $\epsilon_\tau$ which occur in this product decomposition of $\epsilon$. Moreover, let $x\in {\cal O}$ be such that $x\not\equiv 0 \mbox{ mod }\mathfrak{b}^{-1}\mathfrak{f}$ and ${\cal O}_{\mathfrak{f}}^*:=\{ a\in {\cal O}|a \mbox{ totally positive and }a\equiv 1 \mbox{ mod }\mathfrak{f}\}$. Define \begin{equation} F(\mathfrak{b}, \mathfrak{f},\epsilon,x, s):= \sum_{\nu\in (x+\mathfrak{f}\mathfrak{b}^{-1})/{\cal O}_{\mathfrak{f}}^*} \frac{\epsilon(\nu)}{|N(\nu)|^s} \end{equation} for $Re\; s>1$. Here $N$ is the norm. On the other hand, let $\operatorname{Tr}:F\to \mathbb{Q}$ be the trace map and define \begin{equation} L(\mathfrak{b}, \mathfrak{f},\epsilon,x, s):= \sum_{\lambda\in \mathfrak{b}(\mathfrak{f}{\mathfrak{D}})^{-1}/{\cal O}_{\mathfrak{f}}^*} \frac{\epsilon(\lambda)e^{2\pi i\operatorname{Tr} (x\lambda)}}{|N(\lambda)|^s}. \end{equation} These two $L$-functions are related by a functional equation. To formulate it we introduce the $\Gamma$-factor \[ \Gamma_\epsilon(s):=\pi^{-\frac{1}{2}(sg+|\epsilon|)} \Gamma\left(\frac{s+1}{2}\right)^{|\epsilon|} \Gamma\left(\frac{s}{2}\right)^{g-|\epsilon|}. \] The functional equation follows directly from Hecke's method for Gr\"ossencharacters and was first mentioned by Siegel: \begin{prop}[cf.\ \cite{Si}, formula (10)] The functional equation reads: \[ \Gamma_\epsilon(1-s)F(\mathfrak{b}, \mathfrak{f},\epsilon,x,1- s)= i^{-|\epsilon|}|d_F|^{-\frac{1}{2}}N(\mathfrak{f}^{-1}\mathfrak{b})\Gamma_\epsilon(s) L(\mathfrak{b}, \mathfrak{f},\epsilon,x, s), \] where $d_F$ is the discriminant of $F/\mathbb{Q}$.
\end{prop} The functional equation shows that $F(\mathfrak{b}, \mathfrak{f},\epsilon,x,1- k)$ can be non-zero for $k=1,2,\ldots$ only if $|\epsilon|$ is either $g$ or $0$. Let us introduce \[ \zeta(\mathfrak{b},\mathfrak{f},x,s):= \sum_{\nu\in (x+\mathfrak{f}\mathfrak{b}^{-1})/{\cal O}_{\mathfrak{f}}^*} \frac{1}{N(\nu)^s}. \] We get: \begin{cor}\label{fcteq} The functional equation shows that $F(\mathfrak{b}, \mathfrak{f},\epsilon,x,1- k)$ for $k=1,2,\ldots$ is non-zero for $|\epsilon|=0$ and $k$ even or for $|\epsilon|=g$ and $k$ odd. In these cases one has \[ \zeta(\mathfrak{b}, \mathfrak{f},x,1- k)=|d_F|^{-\frac{1}{2}}N(\mathfrak{f}^{-1}\mathfrak{b}) \frac{((k-1)!)^g}{(2\pi i)^{kg}}L(\mathfrak{b}, \mathfrak{f},\epsilon,x, k). \] \end{cor} \subsection{The main theorem} Here we formulate our main theorem. It computes the residue map from (\ref{deg2}) in terms of the partial $L$-functions. The transcendental description of the cusps gives \[ H^0({\partial S}(\mathbb{C}),L)=\operatorname{Ind}_{B^1(\mathbb{Z})}^{G(\mathbb{Z}/n\mathbb{Z})}L \] and $H^0({\partial S},L)$ is the subgroup of elements invariant under $D(\mathbb{Z}/n\mathbb{Z})$. Similarly, the $n$-torsion sections of ${\cal A}[n]$ over $S(\mathbb{C})$ can be identified with functions from $G(\mathbb{Z}/n\mathbb{Z})$ to $(\frac{1}{n}{\cal O}/{\cal O})^2$, which are equivariant with respect to the canonical $G^1(\mathbb{Z}):=\ker(G(\mathbb{Z})\to \mathbb{Z}^*)$ action. The action of $G(\mathbb{Z}/n\mathbb{Z})$ on $S$ induces via pull-back an action on ${\cal A}[n](S(\mathbb{C}))$ and we have: \[ {\cal A}[n](S(\mathbb{C}))=\operatorname{Ind}_{ G^1(\mathbb{Z})}^{G(\mathbb{Z}/n\mathbb{Z})}(\frac{1}{n}{\cal O}/{\cal O})^2. \] The group ${\cal A}[n](S)$ consists again of the elements invariant under $D(\mathbb{Z}/n\mathbb{Z})$. Let $D:={\cal A}[n](S)$ and consider the formal linear combinations \[ L[D]^0:=\bigl\{ \sum_{\sigma\in D}l_\sigma(\sigma):l_\sigma\in L\text{ and } \sum_{\sigma\in D} l_\sigma=0\bigr\}.
\]
The ${G(\mathbb{Z}/n\mathbb{Z})}$ action on $D$ carries over to an action on $L[D]^0$. For $\alpha\in L[D]^0$ and $k>0$ (or $\alpha\in L[D\setminus e(S)]^0$ and $k\geq 0$) we construct in \ref{eisdefn} a class
\[
\operatorname{Eis}^k(\alpha)\in \operatorname{Ext}^{2g-1}_S(L,\operatorname{Sym}^k{\cal H}(g)),
\]
which depends on $\alpha$ in a functorial way. Thus, the resulting map
\begin{equation}\label{resequiv}
L[D]^0\xrightarrow{\operatorname{Eis}^k}\operatorname{Ext}^{2g-1}_S(L,\operatorname{Sym}^k{\cal H}(g))\xrightarrow{\operatorname{res}} \operatorname{Ind}_{B^1(\mathbb{Z})}^{G(\mathbb{Z}/n\mathbb{Z})}L
\end{equation}
is equivariant for the $G(\mathbb{Z}/n\mathbb{Z})$ action.
\begin{thm}\label{maintheorem} Let $L\supset \mathbb{Q}$ and $\alpha=\sum_{\sigma\in D}l_\sigma(\sigma)$. Then $\operatorname{res}(\operatorname{Eis}^{m}(\alpha))$ is non-zero only for $m\equiv 0(g)$, and for every $h\in G(\mathbb{Z}/n\mathbb{Z})$ and $k>0$
\[
\operatorname{res}(\operatorname{Eis}^{gk}(\alpha))(h)=(-1)^{g-1} \sum_{\sigma\in D}l_\sigma \zeta({\cal O},{\cal O},p(h\sigma),-k).
\]
\end{thm}
Using the basis given by the coinvariants in $\operatorname{Sym}^{gk}{\cal O}\otimes L$, as we did in the proof of theorem \ref{degenercompu}, is not natural here. A better description is as follows: for each $h\in G(\mathbb{Z}/n\mathbb{Z})$ choose an element $d_h\in D(\mathbb{Z}/n\mathbb{Z})$ such that $\widetilde{h}:=hd_h^{-1}\in G^1(\mathbb{Z}/n\mathbb{Z})$. Then, as in (\ref{phdefn}), we have an ideal $\mathfrak{b}_{\widetilde{h}}$ and a projection
\[
{\cal O}^2\xrightarrow{p_{\widetilde{h}}}\mathfrak{b}_{\widetilde{h}}.
\]
Now use the identification $H^{g-1}(S_T, \operatorname{Sym}^{gk}\mathfrak{b}_{\widetilde{h}}\otimes L)\cong L$ at the cusp $h$.
With this basis the above result reads:
\begin{cor} In this basis
\[
\operatorname{res}(\operatorname{Eis}^{gk}(\alpha))(h)=(-1)^{g-1}N\mathfrak{b}_{\widetilde{h}}^{-k-1} \sum_{\sigma\in D}l_\sigma \zeta(\mathfrak{b}_{\widetilde{h}},{\cal O},p_{\widetilde{h}}(\sigma),-k).
\]
\end{cor}
The theorem and the corollary will be proved in section \ref{proof}.
\section{Polylogarithms}\label{polsection}
In this section we review the theory of the polylogarithm on abelian schemes. Special emphasis is given to the topological case, which will be important in the proof of the main theorem. The elliptic polylogarithm was introduced by Beilinson and Levin \cite{BL}, and the generalization to higher dimensional families of abelian varieties is due to Wildeshaus \cite{W}. The idea to interpret the construction of Nori in terms of the topological polylogarithm is due to Beilinson and Nori (unpublished). The polylogarithm can be defined in any of the categories $MHM, et, top$ for any abelian scheme $\pi:{\cal A}\to S$ of constant relative dimension $g$ with unit section $e:S\to {\cal A}$. If we work in $top$, it even suffices to assume that $\pi:{\cal A}\to S$ is a family of topological tori (i.e., fiberwise isomorphic to $(\mathbb{R}/\mathbb{Z})^g$). For more details in the case of abelian schemes, see \cite{W} chapter III part I, or \cite{L}. In the case of elliptic curves one can also consult \cite{BL} or \cite{HK}.
\subsection{Construction of the polylog}
For simplicity we assume $L\supset \mathbb{Q}$ in this section and discuss the necessary modifications for integral coefficients later.
Define a lisse sheaf $\operatorname{Log}^{(1)}$ on ${\cal A}$, which is an extension \[ 0\to {\cal H}\to\operatorname{Log}^{(1)}\to L\to 0 \] together with a splitting $s:e^*L\to e^*\operatorname{Log}^{(1)}$ in any of the three categories $MHM, et, top$ as follows: Consider the exact sequence \[ 0\to \operatorname{Ext}^1_{S}(L,{\cal H})\xrightarrow{\pi^*} \operatorname{Ext}^1_{{\cal A}}(L,\pi^*{\cal H})\to \operatorname{Hom}_{S}(L,R^1\pi_*\pi^*{\cal H})\to 0, \] which is split by $e^*$. Note that by the projection formula $ R^1\pi_*\pi^*{\cal H}\cong R^1\pi_*L\otimes {\cal H}$ so that \[ \operatorname{Hom}_S(L,R^1\pi_*\pi^*{\cal H})\cong \operatorname{Hom}_S({\cal H}, {\cal H}). \] Then $\operatorname{Log}^{(1)}$ is a sheaf representing the unique extension class in $\operatorname{Ext}^1_{{\cal A}}(L,\pi^*{\cal H})$, which splits when pulled back to $S$ via $e^*$ and which maps to $\operatorname{id} \in \operatorname{Hom}_S({\cal H}, {\cal H})$. Define \[ \operatorname{Log}^{(k)}:=\operatorname{Sym}^k\operatorname{Log}^{(1)}. \] \begin{defn} The {\em logarithm sheaf} is the pro-sheaf \[ \operatorname{Log}:=\operatorname{Log}_{\cal A}:=\varprojlim \operatorname{Log}^{(k)}, \] where the transition maps are induced by the map $\operatorname{Log}^{(1)}\to L$. In particular, one has exact sequences \[ 0\to \operatorname{Sym}^k{\cal H}\to \operatorname{Log}^{(k)}\to \operatorname{Log}^{(k-1)}\to 0 \] and a splitting induced by $s:e^*L\to e^*\operatorname{Log}^{(1)}$ \[ e^*\operatorname{Log}\cong \prod_{k\geq 0}\operatorname{Sym}^k{\cal H}. \] \end{defn} Any isogeny $\phi:{\cal A}\to {\cal A}$ of degree invertible in $L$ induces an isomorphism $\operatorname{Log}\cong \phi^*\operatorname{Log}$, which is on the associated graded induced by $\operatorname{Sym}^k\phi:\operatorname{Sym}^k{\cal H}\to \operatorname{Sym}^k{\cal H}$. 
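For later reference we spell out the splitting explicitly; this is a routine consequence of the definitions, recorded here because it is used in (\ref{logtriv}) below. Since $s$ splits the pulled-back extension, one has
\[
e^*\operatorname{Log}^{(1)}\cong L\oplus {\cal H},
\qquad\text{hence}\qquad
e^*\operatorname{Log}^{(k)}=\operatorname{Sym}^k\bigl(L\oplus{\cal H}\bigr)\cong \bigoplus_{j=0}^{k}\operatorname{Sym}^j{\cal H}.
\]
Under these identifications the transition map $e^*\operatorname{Log}^{(k)}\to e^*\operatorname{Log}^{(k-1)}$ is the projection with kernel $\operatorname{Sym}^k{\cal H}$, so that in the limit one recovers the product decomposition $e^*\operatorname{Log}\cong \prod_{k\geq 0}\operatorname{Sym}^k{\cal H}$ stated above.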
For every torsion point $x\in {\cal A}(S)_{\operatorname{tors}}$ one gets an isomorphism \begin{equation}\label{logtriv} x^*\operatorname{Log}\cong e^*\operatorname{Log}\cong \prod_{k\geq 0}\operatorname{Sym}^k{\cal H}. \end{equation} The most important property of the sheaf $\operatorname{Log}$ is the vanishing of its higher direct images except in the highest degree. \begin{thm}[Wildeshaus, \cite{W}, cor. 4.4., p. 70]\label{logcohvanish} One has \[ R^i\pi_*\operatorname{Log}=0\mbox{ for }i\neq 2g \] and the augmentation $\operatorname{Log}\to L$ induces canonical isomorphisms \[ R^{2g}\pi_*\operatorname{Log}\cong R^{2g}\pi_*L\cong L(-g). \] \end{thm} For the construction of the polylogarithm one considers a non-empty disjoint union of torsion sections $i:D\subset {\cal A}$, whose orders are invertible in $L$ (more generally, one can also consider $D$ {\'e}tale over $S$). Let \[ L[D]:=\bigoplus_{\sigma\in D}L \] and $L[D]^0\subset L[D] $ the kernel of the augmentation map $L[D]\to L$. Elements $\alpha\in L[D]$ are written as formal linear combinations $\alpha=\sum_{\sigma\in D}l_{\sigma}(\sigma)$. Similarly, define \[ \operatorname{Log}[D]:=\bigoplus_{\sigma\in D}\sigma^*\operatorname{Log} \] and \[ \operatorname{Log}[D]^0:=\ker\left( \operatorname{Log}[D]\to L\right) \] to be the kernel of the composition of the sum of the augmentation maps $\operatorname{Log}[D]\to L[D]$ and the augmentation $L[D]\to L$. \begin{cor} \label{eq:polDlocseq} The localization sequence for $U:={\cal A}\setminus D$ induces an isomorphism \[ \operatorname{Ext}^{2g-1}_{U}(L[D]^0, \operatorname{Log}(g))\cong \operatorname{Hom}_S(L[D]^0,\operatorname{Log}[D]^0). \] \end{cor} \begin{proof} The vanishing result \ref{logcohvanish} implies that the localization sequence is of the form \[ 0\to \operatorname{Ext}^{2g-1}_{U}(L[D]^0, \operatorname{Log}(g))\to \operatorname{Hom}_S(L[D]^0,i^*\operatorname{Log})\to \operatorname{Hom}_S(L[D]^0,L)\to 0. 
\]
Inserting the definition of $\operatorname{Log}[D]^0$ gives the desired result.
\end{proof}
\begin{defn}\label{logdefn} The {\em polylogarithm} $\operatorname{pol}^D$ is the extension class
\[
\operatorname{pol}^D\in \operatorname{Ext}^{2g-1}_{U}(L[D]^0, \operatorname{Log}(g)),
\]
which maps to the canonical inclusion $L[D]^0\to \operatorname{Log}[D]^0$ under the isomorphism in \ref{eq:polDlocseq}. In particular, for every $\alpha\in L[D]^0$ we get by pull-back an extension class
\[
\operatorname{pol}^D_\alpha \in \operatorname{Ext}^{2g-1}_{U}(L, \operatorname{Log}(g)).
\]
\end{defn}
\subsection{Integral version of the polylogarithm, the topological case}
In the topological and the \'etale situation it is possible to define the polylogarithm with integral coefficients. We treat the topological case in this section and the \'etale case in the next. The construction presented here is a reinterpretation by Beilinson and Levin (unpublished) of results of Nori and Sczech. We start by defining the logarithm sheaf for any (commutative) coefficient ring $L$, in particular for $L=\mathbb{Z}$. In the topological situation it is even possible to define the polylogarithm more generally for any smooth family of real tori of constant fiber dimension $g$ which has a unit section. Let
\[
\pi:{\cal T}\to S
\]
be such a family, $e:S\to {\cal T}$ the unit section and let ${\cal H}_L:= \underline{\operatorname{Hom}}_S(R^1\pi_*L,L)$ be the local system of the homologies of the fibers with coefficients in $L$. Let $\tilde{{\cal H}}_\mathbb{R}$ be the associated vector bundle of ${\cal H}_\mathbb{R}$. Then ${\cal T}\cong {\cal H}_\mathbb{Z}\backslash\tilde{{\cal H}}_\mathbb{R}$ and we denote by
\[
\tilde{\pi}:\tilde{{\cal H}}_\mathbb{R}\to {\cal T}
\]
the associated map. Let
\[
L[{\cal H}_\mathbb{Z}]:=e^*\tilde{\pi}_!L
\]
be the local system of group rings on $S$, which is stalk-wise the group ring of the stalk of the local system ${\cal H}_\mathbb{Z}$ with coefficients in $L$.
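To fix ideas, consider the simplest case: $S$ a point, $g=1$, ${\cal T}=\mathbb{R}/\mathbb{Z}$, so that ${\cal H}_\mathbb{Z}=\mathbb{Z}$. Writing $t$ for the image of $1\in \mathbb{Z}$ in the group ring, one has
\[
L[{\cal H}_\mathbb{Z}]=L[\mathbb{Z}]\cong L[t,t^{-1}],
\qquad
\ker\bigl(L[t,t^{-1}]\to L\bigr)=(t-1),
\qquad
\varprojlim_r L[t,t^{-1}]/(t-1)^r\cong L[[u]]
\]
with $u:=t-1$ (note that $t=1+u$ is already invertible in $L[u]/(u^r)$). For $L\supset\mathbb{Q}$ the substitution $t=e^h$, with $h$ a generator of ${\cal H}_L$, identifies $L[[u]]$ with $L[[h]]=\prod_{k\geq 0}\operatorname{Sym}^k{\cal H}_L$; this is the special case of the ring isomorphism (\ref{logisom}) below.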
The augmentation ideal of $ L[{\cal H}_\mathbb{Z}]\to L$ is denoted by ${\cal I}$ and we define
\[
L[[{\cal H}_\mathbb{Z}]]:=\varprojlim_r L[{\cal H}_\mathbb{Z}]/{\cal I}^r
\]
to be the completion along the augmentation ideal. Note that ${\cal I}^n/{\cal I}^{n+1}\cong \operatorname{Sym}^n{\cal H}_L$. If $L\supset \mathbb{Q}$, one even has a ring isomorphism
\begin{equation}\label{logisom}
L [[{\cal H}_\mathbb{Z}]]\xrightarrow{\cong} \prod_{k\geq 0}\operatorname{Sym}^k{\cal H}_L,
\end{equation}
induced by $h\mapsto \sum_{k\geq 0}h^{\otimes k}/k!$ for $h\in{\cal H}_\mathbb{Z}$.
\begin{defn} The {\em logarithm sheaf} $\operatorname{Log}$ is the local system on ${\cal T}$ defined by
\[
\operatorname{Log}:=\tilde{\pi}_! L \otimes_{L[{\cal H}_\mathbb{Z}]}L[[{\cal H}_\mathbb{Z}]].
\]
As a local system of $L[[{\cal H}_\mathbb{Z}]]$-modules, $\operatorname{Log}$ is of rank $1$.
\end{defn}
Any isogeny $\phi:{\cal T}\to{\cal T}$ of degree invertible in $L$ induces an isomorphism $\operatorname{Log}\cong\phi^*\operatorname{Log}$, which is induced by $\phi:{\cal H}_\mathbb{Z}\to {\cal H}_\mathbb{Z}$. In particular, if the order of a torsion section $x:S\to {\cal T}$ is invertible in $L$, one has an isomorphism
\[
x^*\operatorname{Log}\cong e^*\operatorname{Log}=L[[{\cal H}_\mathbb{Z}]].
\]
To complete the definition of the polylogarithm, one has to compute the cohomology of $\operatorname{Log}$. As $L[[{\cal H}_\mathbb{Z}]]$ is a flat $L[{\cal H}_\mathbb{Z}]$-module one gets
\[
R^i\pi_*\operatorname{Log}\cong R^i\pi_*\tilde{\pi}_!L\otimes_{L[{\cal H}_\mathbb{Z}]}L[[{\cal H}_\mathbb{Z}]]
\]
and because $\pi_*=\pi_!$ one has to consider $ R^i(\pi\circ \tilde{\pi})_!L$. But the fibers of
\[
\pi\circ\tilde{\pi}: \tilde{{\cal H}}_\mathbb{R}\to S
\]
are just $g$-dimensional vector spaces and the cohomology with compact support lives only in degree $g$, where it is the dual of $\Lambda^{max}{\cal H}_L$.
Hence, we have proved:
\begin{lemma} Denote by $\mu_{\cal T}^\lor$ the $L$-dual of $\mu_{\cal T}:=\Lambda^{max}{\cal H}_L$. Then the higher direct images of $\operatorname{Log}$ are given by
\[
R^i\pi_*\operatorname{Log}\cong \left\{\begin{array}{cc}\mu_{\cal T}^\lor&\mbox{ if }i=g\\ 0&\mbox{ else. }\end{array}\right.
\]
\end{lemma}
As in \ref{eq:polDlocseq} one obtains
\[
\operatorname{Ext}^{g-1}_{U}(L[D]^0, \operatorname{Log}\otimes\mu_{\cal T})\cong \operatorname{Hom}_S(L[D]^0,\operatorname{Log}[D]^0)
\]
and one defines the polylogarithm
\[
\operatorname{pol}^D\in \operatorname{Ext}^{g-1}_{U}(L[D]^0, \operatorname{Log}\otimes\mu_{\cal T})
\]
in the same way. For $\alpha\in L[D]^0$ one has again
\[
\operatorname{pol}^D_\alpha\in \operatorname{Ext}^{g-1}_{U}(L, \operatorname{Log}\otimes\mu_{\cal T})= H^{g-1}(U,\operatorname{Log}\otimes\mu_{\cal T}).
\]
\subsection{Integral version of the polylogarithm, the \'etale case}
This section will not be used in the rest of the paper and can be omitted by any reader not interested in the integral \'etale case. To define an integral \'etale polylogarithm, one has to modify the definition of the logarithm sheaf as in the topological case. The situation we consider here is again an abelian scheme
\[
\pi:{\cal A}\to S
\]
of constant fiber dimension $g$ and unit section $e:S\to {\cal A}$. Let $\ell$ be a prime number, $L=\mathbb{Z}/\ell^k\mathbb{Z}$ and assume that $\ell$ is invertible in ${\cal O}_S$. Then the $\ell^r$-multiplication $[\ell^r]:{\cal A}\to {\cal A}$ is \'etale and the sheaves $[\ell^r]_!L$ form a projective system via the trace maps
\[
[\ell^r]_!L\to [\ell^{r-1}]_!L.
\]
\begin{defn} The {\em logarithm sheaf} is the inverse limit
\[
\operatorname{Log}_L:=\varprojlim_r[\ell^r]_!L
\]
with respect to the above trace maps. The logarithm sheaf with $\mathbb{Z}_\ell$-coefficients is defined by
\[
\operatorname{Log}_{\mathbb{Z}_\ell}:=\varprojlim_k\operatorname{Log}_{\mathbb{Z}/\ell^k\mathbb{Z}}.
\] \end{defn} Let ${\cal H}_\ell:=\varprojlim_r{\cal A}[\ell^r]$ be the Tate-module of ${\cal A}/S$. As $\ell$ is nilpotent in $L$, we get that $e^*\operatorname{Log}=L[[{\cal H}_\ell]]$ is the Iwasawa algebra of ${\cal H}_\ell$ with coefficients in $L$. Any isogeny $\phi: {\cal A}\to {\cal A}$ of degree prime to $\ell$ induces an isomorphism $[\ell^r]_!L\to \phi^*[\ell^r]_!L$, which induces \[ \operatorname{Log}\cong \phi^*\operatorname{Log}. \] \begin{prop} Let $L=\mathbb{Z}/\ell^k\mathbb{Z}$ or $L=\mathbb{Z}_\ell$. The higher direct images of $\operatorname{Log}$ are given by \[ R^i\pi_*\operatorname{Log}\cong \left\{\begin{array}{cc}L(-g)&\mbox{ if }i=2g\\ 0&\mbox{ else. }\end{array}\right. \] \end{prop} \begin{proof} It suffices to consider the case $L=\mathbb{Z}/\ell^k\mathbb{Z}$. We will show that the transition maps $R^i\pi_*[\ell^r]_!L\to R^i\pi_*[\ell^{s}]_!L$ are zero for $i<2g$ and every $s$, if $r$ is sufficiently big. By Poincar\'e duality we may consider the maps \begin{equation}\label{dualtransition} R^{2g-i}\pi_![\ell^s]_*L(g)\to R^{2g-i}\pi_![\ell^{r}]_*L(g). \end{equation} By base change we may assume that $S$ is the spectrum of an algebraically closed field. Denote by ${\cal A}_s$ the variety ${\cal A}$ considered as covering of ${\cal A}$ via $[\ell^s]$. Then \[ R^1\pi_![\ell^s]_*L(g)=H^1({\cal A},[\ell^s]_*L(g))=\operatorname{Hom}(\pi_1({\cal A}_s),L(g)). \] With this description we see that for every $f\in \operatorname{Hom}(\pi_1({\cal A}_s),L(g))$ there is an $r$, such that the restriction to $\pi_1({\cal A}_r)$ is trivial. This shows that the map in (\ref{dualtransition}) is zero, if $r$ is sufficiently big and $i<2g$ as the cohomology in degree $i$ is the $i$-th exterior power of the first cohomology. That (\ref{dualtransition}) is an isomorphism for $i=2g$ is clear. \end{proof} \subsection{Eisenstein classes} The Eisenstein classes are specializations of the polylogarithm. The situation is as follows. 
First let $\alpha\in L[{\cal A}[n]\setminus e(S)]^0$ and assume that $\mathbb{Q}\subset L$. Then one can pull back the class $\operatorname{pol}^{{\cal A}[n]\setminus e(S)}_\alpha\in \operatorname{Ext}^{2g-1}_{U}(L, \operatorname{Log}(g))$ along $e$ and obtains:
\[
e^*\operatorname{pol}^{{\cal A}[n]\setminus e(S)}_\alpha\in \operatorname{Ext}^{2g-1}_{S}(L, e^*\operatorname{Log}(g))= \prod_{k\geq 0}\operatorname{Ext}^{2g-1}_{S}(L,\operatorname{Sym}^k{\cal H}(g)).
\]
The $k$-th component is the {\em Eisenstein class} $\operatorname{Eis}^k(\alpha)$. For $k>0$, we can extend this definition to $\alpha\in L[{\cal A}[n]]^0$ with the following observation:
\begin{lemma} Let $\lambda\in{\cal O}$ and $[\lambda]:{\cal A}\to{\cal A}$ the associated isogeny. Assume that the degree of $[\lambda]$ is prime to $n$. Then $[\lambda]$ induces via pull-back an isomorphism
\[
\lambda^*:L[{\cal A}[n]]^0\to L[{\cal A}[n]]^0
\]
and for $k>0$
\[
\operatorname{Eis}^k(\lambda^*(\alpha))=\lambda^k\operatorname{Eis}^k(\alpha).
\]
Here $\lambda^k\operatorname{Eis}^k(\alpha)$ uses the ${\cal O}$-module structure on $\operatorname{Ext}^{2g-1}_{S}(L,\operatorname{Sym}^k{\cal H}(g))$.
\end{lemma}
\begin{proof}
It is clear that $\lambda^*$ is an isomorphism. By definition $\operatorname{pol}^{{\cal A}[n]\setminus e(S)}_\alpha$ is functorial with respect to isogenies and one only has to remark that $\operatorname{Log}\cong [\lambda]^*\operatorname{Log}$ is on the associated graded given by $\operatorname{Sym}^\bullet[\lambda]:\operatorname{Sym}^\bullet{\cal H}\to \operatorname{Sym}^\bullet{\cal H}$.
\end{proof}
Let now $\alpha\in L[{\cal A}[n]]^0$. Then for $\lambda\neq 0$ with $\lambda^k\neq 1$ one has $\alpha-\lambda^*\alpha\in L[{\cal A}[n]\setminus e(S)]^0$ and we define for $k>0$
\begin{equation}\label{eisen}
\operatorname{Eis}^k(\alpha):=(1-\lambda^k)^{-1}\operatorname{Eis}^k(\alpha-\lambda^*\alpha).
\end{equation}
It is a straightforward computation that this definition does not depend on the chosen $\lambda$.
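The computation in question is short enough to record here; it uses only the lemma above, the additivity of $\operatorname{Eis}^k$ on $L[{\cal A}[n]\setminus e(S)]^0$ and the fact that $\lambda^*$ and $\mu^*$ commute. For two admissible $\lambda,\mu$ one finds
\[
(1-\mu^k)\operatorname{Eis}^k(\alpha-\lambda^*\alpha)
=\operatorname{Eis}^k\bigl((\alpha-\lambda^*\alpha)-\mu^*(\alpha-\lambda^*\alpha)\bigr)
=\operatorname{Eis}^k\bigl((\alpha-\mu^*\alpha)-\lambda^*(\alpha-\mu^*\alpha)\bigr)
=(1-\lambda^k)\operatorname{Eis}^k(\alpha-\mu^*\alpha),
\]
so that $(1-\lambda^k)^{-1}\operatorname{Eis}^k(\alpha-\lambda^*\alpha)=(1-\mu^k)^{-1}\operatorname{Eis}^k(\alpha-\mu^*\alpha)$, as claimed.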
\begin{defn}\label{eisdefn} For any $\alpha\in L[{\cal A}[n]\setminus e(S)]^0$, define the $k$-th {\em Eisenstein class} associated to $\alpha$, \[ \operatorname{Eis}^k(\alpha)\in \operatorname{Ext}^{2g-1}_{S}(L,\operatorname{Sym}^k{\cal H}(g)), \] to be the $k$-th component of $e^*\operatorname{pol}^{D}_\alpha$. For $k>0$ and $\alpha\in L[{\cal A}[n]]^0$ define $\operatorname{Eis}^k(\alpha)$ by the formula in (\ref{eisen}). \end{defn} Note that by the functoriality of the polylogarithm the map \begin{equation}\label{eisenequiv} L[{\cal A}[n]\setminus e(S)]^0\xrightarrow{\operatorname{Eis}^k}\operatorname{Ext}^{2g-1}_{S}(L,\operatorname{Sym}^k{\cal H}(g)) \end{equation} is equivariant for the $G(\mathbb{Z}/n\mathbb{Z})$ action on both sides. These Eisenstein classes should be considered as analogs of Harder's Eisenstein classes (but observe that we have only classes in cohomological degree $2g-1$). The advantage of the above classes is that they are defined by a universal condition, which makes a lot of their properties easy to verify. \section{Proof of the main theorem}\label{proof} In this section we assume that $\mathbb{Q}\subset L$. The proof of the main theorem will be in several steps. First we reduce to the case of local systems for the usual topology. The second step consists of a trick already used in \cite{HK}: instead of working with the Eisenstein classes directly, we work with the polylogarithm itself. The reason is that the polylog is characterized by a universal property and has a very good functorial behavior. The third step reviews the computations of Nori in \cite{Nori}. In the fourth step we compute the integral over $S^1_T$ and the fifth step gives the final result. \subsection{1. Step: Reduction to the classical topology} We distinguish the $MHM$ and the \'etale case. 
In the $MHM$ case, the target of the residue map from (\ref{deg2})
\begin{equation}
\operatorname{res}:\operatorname{Ext}^{2g-1}_S(L, \operatorname{Sym}^k{\cal H}(g))\to\operatorname{Hom}_{{\partial S}}(L,L)
\end{equation}
is purely topological and does not depend on the Hodge structure. More precisely, the canonical map ``forget the Hodge structure'', denoted by $\operatorname{rat}$, induces an isomorphism
\[
\operatorname{rat}:\operatorname{Hom}_{MHM,{\partial S}}(L,L)\cong \operatorname{Hom}_{top,{\partial S}}(L,L).
\]
By \cite{Sa} thm. 2.1 we have a commutative diagram
\begin{equation}
\begin{CD}
\operatorname{Ext}^{2g-1}_{MHM,S}(L, \operatorname{Sym}^k{\cal H}(g))@>\operatorname{res} >>\operatorname{Hom}_{MHM,{\partial S}}(L,L)\\
@VV\operatorname{rat} V@VV \operatorname{rat} V\\
\operatorname{Ext}^{2g-1}_{top,S}(L, \operatorname{Sym}^k{\cal H}(g))@>\operatorname{res} >>\operatorname{Hom}_{top,{\partial S}}(L,L).
\end{CD}
\end{equation}
This reduces the computation of the residue map for $MHM$ to the case of local systems in the classical topology. In the \'etale case one has an injection
\[
\operatorname{Hom}_{et,{\partial S}}(L,L)\hookrightarrow \operatorname{Hom}_{et,{\partial S}\times\bar{\mathbb{Q}}}(L,L)\cong \operatorname{Hom}_{top,{\partial S}(\mathbb{C})}(L,L)
\]
and a commutative diagram
\begin{equation}
\begin{CD}
\operatorname{Ext}^{2g-1}_{et,S}(L, \operatorname{Sym}^k{\cal H}(g))@>\operatorname{res} >>\operatorname{Hom}_{et,{\partial S}}(L,L)\\
@VVV@VVV\\
\operatorname{Ext}^{2g-1}_{top,S(\mathbb{C})}(L, \operatorname{Sym}^k{\cal H}(g))@>\operatorname{res} >>\operatorname{Hom}_{top,{\partial S}(\mathbb{C})}(L,L).
\end{CD}
\end{equation}
Again, this reduces the residue computation to the classical topology.
\subsection{2. Step: Topological degeneration}
In this section we reduce the computation of $\operatorname{res}\circ\operatorname{Eis}^k$ to a computation of the polylog on ${\cal T}_{\cal M}$.
We are now in the topological situation and use again the notations ${\partial S}$ and $S$ instead of ${\partial S}(\mathbb{C})$ and $S(\mathbb{C})$. Recall from (\ref{resequiv}) that $\operatorname{res}\circ\operatorname{Eis}^k$ is $G(\mathbb{Z}/n\mathbb{Z})$ equivariant. In particular, \[ \operatorname{res}(\operatorname{Eis}^k(\alpha))(h)=\operatorname{res}(\operatorname{Eis}^k(h\alpha))(\operatorname{id}), \] where $h\alpha$ denotes the action of $h$ on $\alpha$. To compute the residue it suffices to consider the residue at $\operatorname{id}$. Recall from (\ref{degenerationdiagr}) that we have a commutative diagram of fibrations \begin{equation}\label{fibrediagr} \begin{CD} {\cal A}(\mathbb{C})@>p>>{\cal T}_{\cal M}\\ @V\pi VV@VV\pi_{\cal M} V\\ S^1_B @>q>> S^1_T. \end{CD} \end{equation} The map $p:{\cal H}\to {\cal M}$ induces $\operatorname{Log}_{\cal A}\to p^*\operatorname{Log}_{{\cal M}}$. Let $D={\cal A}[n]$ and $U:={\cal A}\setminus D$ be the complement. Let $p(D)={\cal T}_{\cal M}[n]$ be the image of $D$ in ${\cal T}_{\cal M}$ and $V:={\cal T}_{\cal M}\setminus p(D)$ be its complement in ${\cal T}_{\cal M}$. Then $p$ induces a map \[ p:U\setminus p^{-1}(p(D))\to V. \] We define a trace map \begin{equation} \label{eq:poltrace} p_*:\operatorname{Ext}^{2g-1}_{U}(L, \operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\to \operatorname{Ext}^{g-1}_{V}(L, \operatorname{Log}_{{\cal M}}\otimes\mu_{{\cal T}_{\cal M}}) \end{equation} as the composition of the restriction to $U\setminus p^{-1}(p(D))$ \[ \operatorname{Ext}^{2g-1}_{U}(L, \operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\to \operatorname{Ext}^{2g-1}_{U\setminus p^{-1}(p(D))}(L, \operatorname{Log}_{\cal A}\otimes\mu_{\cal A}) \] with the adjunction map \[ \operatorname{Ext}^{2g-1}_{U\setminus p^{-1}(p(D))}(L, \operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\to \operatorname{Ext}^{g-1}_{V}(L,R^gp_*p^*\operatorname{Log}_{\cal M}\otimes\mu_{\cal A}). 
\]
As $\mu_{\cal A}\cong\mu_{{\cal T}_{\cal N}}\otimes\mu_{{\cal T}_{\cal M}}$ and $R^gp_*L\cong\mu_{{\cal T}_{\cal N}}^\lor$, the projection formula gives
\[
R^gp_*p^*\operatorname{Log}_{\cal M}\otimes\mu_{\cal A}\cong \operatorname{Log}_{\cal M}\otimes\mu_{{\cal T}_{\cal M}}.
\]
The composition of these maps gives the desired $p_*$ in (\ref{eq:poltrace}). The crucial fact is that the polylogarithm behaves well under this trace map.
\begin{prop}\label{poldeg} With the notations above, let $\alpha\in L[D]^0$ and $\operatorname{pol}^D_{{\cal A},\alpha}\in \operatorname{Ext}^{2g-1}_{U}(L, \operatorname{Log}_{\cal A}\otimes\mu_{\cal A})$ be the associated polylogarithm. Denote by $p(\alpha)$ the image of $\alpha$ under the map
\begin{align*}
p:L[D]^0&\to L[p(D)]^0
\end{align*}
induced by $p:{\cal A}(\mathbb{C})\to {\cal T}_{\cal M}$. Then
\[
p_*\operatorname{pol}^D_{{\cal A},\alpha}=\operatorname{pol}^{p(D)}_{{\cal T}_{\cal M}, p(\alpha)}.
\]
\end{prop}
\begin{proof}
This is a quite formal consequence of the definition and the fact that the residue map commutes with the trace map. We use cohomological notation; then one has a commutative diagram
\[\begin{CD}
H^{2g-1}(U,\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})@>>> H_D^{2g}({\cal A},\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\\
@VVV@VVV\\
H^{2g-1}(U\setminus p^{-1}(p(D)),\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})@>>> H_{p^{-1}(p(D))}^{2g}({\cal A},\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\\
@VVp_*V@VVp_*V\\
H^{g-1}(V,\operatorname{Log}_{\cal M}\otimes\mu_{{\cal T}_{\cal M}})@>>> H_{p(D)}^{g}({\cal T}_{\cal M},\operatorname{Log}_{\cal M}\otimes\mu_{{\cal T}_{\cal M}}).
\end{CD}
\]
We can identify
\[
H_D^{2g}({\cal A},\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})\cong \bigoplus_{\sigma\in D} \sigma^*\operatorname{Log}_{\cal A}
\]
and
\[
H_{p(D)}^{g}({\cal T}_{\cal M},\operatorname{Log}_{\cal M}\otimes\mu_{{\cal T}_{\cal M}})\cong \bigoplus_{\sigma\in p(D)}\sigma^*\operatorname{Log}_{{\cal T}_{\cal M}}.
\] With this identification the composition of the vertical arrows on the right is induced by $\operatorname{Log}_{\cal A}\to p^*\operatorname{Log}_{{\cal T}_{\cal M}}$. The polylog $\operatorname{pol}^D_{{\cal A},\alpha}$ belongs to the section $\alpha\in L[D]^0\subset\bigoplus_{\sigma\in D} \sigma^*\operatorname{Log}_{\cal A}$. This maps to $p(\alpha)\in L[p(D)]^0\subset \bigoplus_{\sigma\in p(D)} \sigma^*\operatorname{Log}_{{\cal T}_{\cal M}}$. Thus $\operatorname{pol}^D_{{\cal A},\alpha}$ is mapped under $p_*$ to $\operatorname{pol}^{p(D)}_{{\cal T}_{\cal M}, p(\alpha)}$. \end{proof} We want to prove the same sort of result for the Eisenstein classes themselves. To formulate it properly, we need: \begin{lemma}\label{GammaNcoinv} Let $q:S^1_B\to S^1_T$ be the fibration from (\ref{fibrediagr}). Then \[ R^gq_*\operatorname{Sym}^k{\cal H}\cong \operatorname{Sym}^k{\cal M}\otimes\mu_{{\cal T}_{\cal N}}^{\lor}. \] \end{lemma} \begin{proof} Recall the exact sequence \[ 0\to {\cal N}\to{\cal H}\to{\cal M}\to 0 \] from (\ref{locexseq}). By definition of $N(\mathbb{Z})$, the coinvariants of $\operatorname{Sym}^k{\cal H}$ for $N(\mathbb{Z})$ are exactly $\operatorname{Sym}^k{\cal M}$. The lemma follows, as $R^gq_*$ corresponds by definition of the fibering exactly to the coinvariants under $N(\mathbb{Z})$. \end{proof} Define a trace map \[ q_*:\operatorname{Ext}^{2g-1}_{S_B}(L,\operatorname{Sym}^k{\cal H}\otimes\mu_{\cal A})\to \operatorname{Ext}^{g-1}_{S_T}(L,\operatorname{Sym}^k{\cal M}\otimes\mu_{{\cal T}_{\cal M}}) \] by adjunction for $q$, the isomorphism $R^gq_*\operatorname{Sym}^k{\cal H}\cong\operatorname{Sym}^k{\cal M}\otimes\mu_{{\cal T}_{\cal N}}^{\lor}$ from lemma \ref{GammaNcoinv} and the isomorphism $\mu_{\cal A}\cong\mu_{{\cal T}_{\cal N}}\otimes\mu_{{\cal T}_{\cal M}}$. The behaviour of $\operatorname{Eis}^k(\alpha)$ under $q_*$ is given by: \begin{thm}\label{eisendeg} Let $k>0$ and $\alpha\in L[D]^0$. 
Then \[ q_*(\operatorname{Eis}^k_{\cal A}(\alpha))=\operatorname{Eis}^k_{{\cal T}_{\cal M}}(p(\alpha)), \] where $p:L[D]^0\to L[p(D)]^0$ is the map from \ref{poldeg}. \end{thm} \begin{proof} Consider the following diagram in the derived category: \begin{equation}\label{commdiagr} \begin{CD}Rp_*\operatorname{Log}_{\cal A} \otimes\mu_{\cal A} @>>> Rp_*e_*e^*\operatorname{Log}_{\cal A} \otimes\mu_{\cal A}\\ @VVV@|\\ Rp_*p^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A} @. e'_*Rq_*e^*\operatorname{Log}_{\cal A} \otimes\mu_{\cal A}\\ @VVV@VVV\\ \operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{{\cal T}_{\cal M}}[-g]@>>> e'_*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{{\cal T}_{\cal M}}[-g] \end{CD} \end{equation} We will show that this diagram is commutative and thereby explain all the maps. First consider the commutative diagram \[ \begin{CD}Rp_*\operatorname{Log}_{\cal A} \otimes\mu_{\cal A} @>>> Rp_*e_*e^*\operatorname{Log}_{\cal A} \otimes\mu_{\cal A}\\ @VVV@VVV\\ Rp_*p^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A} @>>> Rp_*e_*e^*p^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}, \end{CD} \] where the horizontal arrows are induced from adjunction $\operatorname{id}\to e_*e^*$ and the vertical arrows from $ \operatorname{Log}_{\cal A} \to p^*\operatorname{Log}_{{\cal T}_{\cal M}}$. One has $p\circ e= e'\circ q$ and hence \[ Rp_*e_*e^*p^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\cong e'_*Rq_*q^*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}. \] The projection formula gives \[ e'_*Rq_*q^*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\cong e'_*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\otimes Rq_*L. 
\] Projection to the highest cohomology gives a commutative diagram \[ \begin{CD} Rp_*p^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A} @>>> e'_*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\otimes Rq_*L\\ @VVV@VVV\\ \operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\otimes \mu_{{\cal T}_{\cal N}}^\lor @>>> e'_*e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{\cal A}\otimes \mu_{{\cal T}_{\cal N}}^\lor, \end{CD} \] where the horizontal maps are adjunction maps $\operatorname{id}\to e'_*e'^*$. Finally we use $\mu_{\cal A}\otimes \mu_{{\cal T}_{\cal N}}^\lor\cong \mu_{{\cal T}_{\cal M}}$ to obtain the commutative diagram (\ref{commdiagr}). Applying $\operatorname{Ext}^{2g-1}_{V}(L,-)$ to this diagram, where $V:={\cal T}_{\cal M}\setminus p(D)$ we get \[ \begin{CD} \operatorname{Ext}^{2g-1}_{U}(L,\operatorname{Log}_{\cal A}\otimes\mu_{\cal A})@>>> \operatorname{Ext}^{2g-1}_{S}(L,e^*\operatorname{Log}\otimes\mu_{\cal A})\\ @Vp_*VV@VVq_*V\\ \operatorname{Ext}^{g-1}_{V}(L,\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{{\cal T}_{\cal M}})@>>> \operatorname{Ext}^{g-1}_{S}(L,e'^*\operatorname{Log}_{{\cal T}_{\cal M}}\otimes\mu_{{\cal T}_{\cal M}}). \end{CD} \] Now, as $k>0$, we may assume that $\alpha\in L[D\setminus e(S)]^0$ and $p(\alpha)\in L[p(D)\setminus e'(S)]^0$. The result follows then from proposition \ref{poldeg}. \end{proof} In a similar (but simpler) way one shows: \begin{thm}\label{isogenyinvariance} Let $\phi:{\cal T}_{\cal M}\to {\cal T}_{{\cal M}'}$ be an isogeny of tori, then $\phi$ induces a morphism $\phi_*:e^*\operatorname{Log}_{\cal M}\to e^*\operatorname{Log}_{{\cal M}'}$ and \[ \phi_*\operatorname{Eis}^k_{{\cal T}_{\cal M}}(\alpha)=\operatorname{Eis}^k_{{\cal T}_{{\cal M}'}}(\phi(\alpha)). \] \end{thm} \subsection{3. 
Step: Explicit description of the polylog}
In this section we follow Nori \cite{Nori} to describe the polylog $\operatorname{pol}^{{\cal T}_{\cal M}[n]}_\beta$ for any $\beta\in L[{\cal T}_{\cal M}[n]\setminus 0]^0$ explicitly. The presentation is also influenced by unpublished notes of Beilinson and Levin. In fact, for the connection with $L$-functions it is useful to consider a more general situation and to allow an arbitrary fractional ideal $\mathfrak{a}$ instead of just ${\cal O}$. We assume $L=\mathbb{C}$. The geometric situation is this: recall that $T^1(\mathbb{Z})={\cal O}^*$ and let $\mathfrak{a}\subset F$ be a fractional ideal with the usual $T^1(\mathbb{Z})$-action. We can form as usual the semidirect product
\[
\mathfrak{a}\rtimes T^1(\mathbb{Z}),
\]
where the multiplication is given by the formula $(v,t)(v',t')=(v+tv',tt')$. Similarly, we can form $\mathfrak{a}\otimes\mathbb{R}\rtimes T^1(\mathbb{R})$ and we define
\[
{\cal T}_\mathfrak{a}:=\mathfrak{a}\rtimes T^1(\mathbb{Z})\backslash \left(\mathfrak{a}\otimes\mathbb{R}\rtimes T^1(\mathbb{R})\right)/K^T_\infty.
\]
We have
\[
\pi_\mathfrak{a}:{\cal T}_\mathfrak{a}\to S^1_T
\]
and we consider the polylog for this real torus bundle of relative dimension $g$. The case ${\cal T}_{\cal M}$ is the one where $\bigl(\begin{smallmatrix}a&0\\0&d\end{smallmatrix}\bigr)\in T^1(\mathbb{Z})$ acts via $d\in {\cal O}^*$ on ${\cal O}$. Let us describe the logarithm sheaf $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ in this setting. As the coefficients are $L=\mathbb{C}$, we can use the isomorphism from (\ref{logisom})
\begin{align}
\mathbb{C} [[\mathfrak{a}]]&\xrightarrow{\cong}\prod_{k\geq 0}\operatorname{Sym}^k\mathfrak{a}_\mathbb{C}=:{\hat{\cal U}}(\mathfrak{a}) \\ \nonumber
v&\mapsto \exp(v):=\sum_{k=0}^\infty \frac{v^{\otimes k}}{k!}.
\end{align}
The action of $(0,t)\in\mathfrak{a}\rtimes T^1(\mathbb{Z})$ on ${\hat{\cal U}}(\mathfrak{a})$ is induced by the action of $T^1(\mathbb{Z})$ on $\mathfrak{a}$.
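Note that $v\mapsto \exp(v)$ turns the additive group $\mathfrak{a}$ into a subgroup of the units of ${\hat{\cal U}}(\mathfrak{a})$; the following elementary identity, which we record since it is used repeatedly below, is just the binomial formula in the commutative ring ${\hat{\cal U}}(\mathfrak{a})$:
\[
\exp(v)\exp(v')
=\sum_{m\geq 0}\frac{1}{m!}\sum_{k+l=m}\binom{m}{k}v^{\otimes k}v'^{\otimes l}
=\sum_{m\geq 0}\frac{(v+v')^{\otimes m}}{m!}
=\exp(v+v').
\]
In particular $\exp(v)$ is invertible with inverse $\exp(-v)$, and the isomorphism from (\ref{logisom}) is indeed one of rings.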
The action of \[ (v,\operatorname{id})\in\mathfrak{a}\rtimes T^1(\mathbb{Z}) \] on ${\hat{\cal U}}(\mathfrak{a})$ is given by multiplication with $\exp(v)$. The logarithm sheaf $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ is just the local system defined by the quotient \[ \mathfrak{a}\rtimes T^1(\mathbb{Z})\backslash \left(\mathfrak{a}\otimes\mathbb{R}\rtimes T^1(\mathbb{R})\times{\hat{\cal U}}(\mathfrak{a}) \right)/K^T_\infty. \] A ${\cal C}^\infty$-section $f$ of $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ is a function $f:\mathfrak{a}\otimes\mathbb{R}\rtimes T^1(\mathbb{R})\to{\hat{\cal U}}(\mathfrak{a})$, which has the equivariance property \[ f((v,t)(v',t'))=(v,t)^{-1}f(v',t'). \] In a similar way, we can describe $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$-valued currents. The global ${\cal C}^\infty$-section \[ \exp(-v):(v,t)\mapsto \sum_{k=0}^\infty\frac{(-v)^{\otimes k}}{k!}, \] with $(v,t)\in \mathfrak{a}\otimes \mathbb{R}\rtimes T^1(\mathbb{R})$ defines a trivialization of $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ as ${\cal C}^\infty$-bundle. Every current $\mu(v,t)$ with values in $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ can then be written in the form \[ \mu(v,t)=\nu(v,t)\exp(-v), \] where $\nu(v,t)$ is now a current with values in the constant bundle ${\hat{\cal U}}(\mathfrak{a})$. In particular, $ \nu(v,t)$ is invariant under the action of $\mathfrak{a}\subset \mathfrak{a}\rtimes T^1(\mathbb{Z})$. \begin{lemma} Let ${\bf v}:\mathfrak{a}\otimes\mathbb{R}\to {\hat{\cal U}}(\mathfrak{a})$ be the canonical inclusion given by $\mathfrak{a}\otimes\mathbb{R}\subset\operatorname{Sym}^1 \mathfrak{a}\otimes\mathbb{C}$, then the canonical connection $\nabla$ on $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$ acts on $\nu$ by \[ \nabla \nu=(d-d{\bf v})\nu. \] \end{lemma} \begin{proof} Straightforward computation. 
\end{proof} Following Nori \cite{Nori} we describe the polylog as a $\operatorname{Log}_{{\cal T}_\mathfrak{a}}$-valued current $\mu(v,t)$ on ${\cal T}_\mathfrak{a}$, such that \begin{equation}\label{polproperty} \nabla\mu(v,t)=\delta_\beta, \end{equation} where \[ \delta_\beta:=\sum_{\sigma\in D}l_\sigma\delta_\sigma \] and $\delta_\sigma$ are the currents defined by integration over the cycles on ${\cal T}_\mathfrak{a}$ given by the section $\sigma$. If we write as above \[ \mu(v,t)=\nu(v,t)\exp(-v) \] we get the equivalent condition \begin{equation}\label{trivpolproperty} (d-d{\bf v})\nu(v,t)=\delta_\beta. \end{equation} As $\nu(v,t)$ is invariant under the $\mathfrak{a}$-action, we can develop $\nu(v,t)$ into a Fourier series \begin{equation}\label{fourierseries} \nu(v,t)=\sum_{\rho\in \mathfrak{a}^\lor}\nu_{\rho}(t)e^{2\pi i \rho(v)}. \end{equation} The property (\ref{trivpolproperty}) reads for the Fourier coefficients $\nu_{\rho}(t)$: \begin{equation} (d+2\pi id\rho-d{\bf v})\nu_\rho(t)= (e^{-2\pi i \rho(\beta)})\operatorname{vol}, \end{equation} where $\operatorname{vol}$ is the unique constant coefficient $g$-form on $\mathfrak{a}\otimes\mathbb{R}$, such that the integral $\int_{\mathfrak{a}\otimes\mathbb{R}/\mathfrak{a}}\operatorname{vol}=1$ and \[ e^{-2\pi i \rho(\beta)}:= \sum_\sigma l_\sigma e^{-2\pi i \rho(\sigma)}. \] We do not explain in detail the method of Nori to solve this equation; we just give the result. This suffices, because the cohomology class of the polylogarithm is uniquely determined by the equation (\ref{polproperty}) and we just need to give a solution for it. Fix a positive definite quadratic form $q$ on $\mathfrak{a}\otimes\mathbb{R}$, viewed as an isomorphism \[ q:(\mathfrak{a}\otimes\mathbb{R})^\lor\cong \mathfrak{a}\otimes\mathbb{R}. \] Define a left action of $t\in T^1(\mathbb{R})$ by $q_t(v,w):=q(t^{-1}v,t^{-1}w)$. Consider $\rho$ as an element of $(\mathfrak{a}\otimes\mathbb{R})^\lor$.
Then $q_t(\rho)$ can be considered as a vector field and we denote by $\iota_\rho$ the contraction with this vector field $q_t(\rho)$. We may also consider $q_t(\rho)$ as an element of ${\hat{\cal U}}(\mathfrak{a})$ and denote this by ${\bf q}_t(\rho)$. \begin{thm}[Nori] With the notations above, one has for $0\neq \rho$ \[ \nu_\rho(t)=\sum_{m=0}^{g-1} \frac{(-1)^m(e^{-2\pi i\rho(\beta)})} {(2\pi i\rho(q_t(\rho))-{\bf q}_t(\rho))^{m+1}} \iota_\rho(d\circ\iota_\rho)^m \operatorname{vol} \] and \[ \nu_0(t)=0. \] \end{thm} \begin{proof} Write $\Phi_\rho$ for the operator multiplication by $2\pi id\rho-d{\bf v}$ and $\Psi_\rho:=d+\Phi_\rho$. One checks that $\Psi_\rho\circ \Psi_\rho=0=\iota_\rho\circ\iota_\rho$ and that $\Psi_\rho\circ\iota_\rho+\iota_\rho\circ\Psi_\rho$ is an isomorphism. Indeed $\Phi_\rho\circ\iota_\rho +\iota_\rho\circ\Phi_\rho$ is multiplication by $2\pi i\rho(q_t(\rho))-{\bf q}_t(\rho)$ and ${\cal L}_\rho:=d\circ\iota_\rho+\iota_\rho\circ d$ is the Lie derivative with respect to the vector field $q_t(\rho)$. The formula in the theorem is just \[ \iota_\rho\circ(\Psi_\rho\circ\iota_\rho+\iota_\rho\circ\Psi_\rho)^{-1}(e^{-2\pi i\rho(\beta)})\operatorname{vol} \] and to check that \[ \Psi_\rho\circ\iota_\rho\circ(\Psi_\rho\circ\iota_\rho+\iota_\rho\circ\Psi_\rho)^{-1}=\operatorname{id} \] note that $\iota_\rho\circ\Psi_\rho$ commutes with $(\Psi_\rho\circ\iota_\rho+\iota_\rho\circ\Psi_\rho)^{-1}$ and $\iota_\rho\circ\Psi_\rho(e^{-2\pi i\rho(\beta)})\operatorname{vol}=0$. \end{proof} \begin{cor}\label{polcurrent} The polylogarithm $\operatorname{pol}^{{\cal T}_\mathfrak{a}[n]}_\beta$ is given in the topological realization by the current \[ \mu(v,t)=\nu(v,t)\exp(-v) \] where $\nu(v,t)$ is the current given by \[ \sum_{m=0}^{g-1}\sum_{k=0}^\infty{k+m\choose k} \sum_{\rho\in \mathfrak{a}^\lor\setminus 0} \frac{(-1)^me^{2\pi i\rho(v-\beta)}} {(2\pi i\rho(q_t(\rho)))^{k+m+1}}{\bf q}_t(\rho)^{\otimes k} \iota_\rho(d\circ\iota_\rho)^m\operatorname{vol}.
\] \end{cor} \begin{proof} This follows from the formula $\frac{1}{(A-B)^{m+1}}=\sum_{k=0}^\infty\frac{B^{k}}{A^{k+m+1}} {k+m\choose k}$. \end{proof} The Eisenstein classes are obtained by pull-back of this current along the zero section $e$. As for $k>0$ the series over the $\rho$ converges absolutely, this is defined and only terms with $m=g-1$ survive. We get the following formula for the Eisenstein classes. \begin{cor} Let $\beta\in \mathbb{C}[{\cal T}_\mathfrak{a}[n]\setminus 0]^0$ and $k>0$, then the topological Eisenstein class is given by \[ \operatorname{Eis}^k(\beta)=\frac{(k+g-1)!}{k!}\sum_{\rho\in \mathfrak{a}^\lor\setminus 0} \frac{(-1)^{g-1}e^{-2\pi i\rho(\beta)}} {(2\pi i\rho(q_t(\rho)))^{k+g}}{\bf q}_t(\rho)^{\otimes k} q_t(\rho)^*\iota_{{\cal E}}\operatorname{vol}. \] Here, we have written ${\cal E}$ for the Euler vector field and $q_t(\rho)$ is considered as a function $q_t(\rho):S_T\to \mathfrak{a}\otimes \mathbb{R}$, which maps $t$ to the vector $q_t(\rho)$. \end{cor} \begin{proof} From \ref{polcurrent} we have to compute \[ e^*\iota_\rho(d\circ\iota_\rho)^m\operatorname{vol}. \] For this remark that the Lie derivative ${\cal L}_\rho= d\circ\iota_\rho + \iota_\rho\circ d$ with respect to the vector field $q_t(\rho)$ acts in the same way on $\operatorname{vol}$ as $d\circ\iota_\rho$. One sees immediately that $e^*\iota_\rho(d\circ\iota_\rho)^m\operatorname{vol}=0$, if $m<g-1$ and a direct computation in coordinates gives that $\iota_\rho({\cal L}_\rho)^{g-1}\operatorname{vol}=(g-1)!q_t(\rho)^*\iota_{{\cal E}}\operatorname{vol}$. \end{proof} \subsection{4. Step: Computation of the integral} To finish the proof of theorem \ref{maintheorem} we have to compute $u_*\operatorname{Eis}^k(\beta)$, where $u:S^1_T\to pt$ is the structure map. 
As we need only to compute the corresponding integral for the component of $S^1_T$ corresponding to $\operatorname{id}$, we let $\Gamma_T\subset T^1(\mathbb{Z})$ be the stabilizer of $\operatorname{id} \in T(\mathbb{Z}/n\mathbb{Z})$ and consider \[ u_{\operatorname{id}}:\Gamma_T\backslash \bigl(T^1(\mathbb{R})/K^{T^1}_\infty\bigr) \to pt. \] To compute the integral, we introduce coordinates on $T^1(\mathbb{R})\cong (F\otimes \mathbb{R})^*$ and on the torus ${\cal T}_\mathfrak{a}$. We identify $F\otimes \mathbb{R}\cong \prod_{\tau:F\to \mathbb{R}}\mathbb{R}$ and denote by $e_1,\ldots,e_g$ the standard basis on the right hand side and by $x_1,\ldots,x_g$ the dual basis. For any element $u=\sum u_ie_i$ or $u=\sum u_ix_i$ we write $Nu:=u_1\cdots u_g$. Let $q$ be the quadratic form given by $\sum x_i^2$. We identify the orbit of $q$ under $T^1(\mathbb{R})$ with $(F\otimes \mathbb{R})^*_+$ by mapping \begin{align} (F\otimes \mathbb{R})^*&\to T^1(\mathbb{R})q\\ \nonumber t&\mapsto q_{t}. \end{align} This map factors over $(F\otimes \mathbb{R})^*_+$ and the map is compatible with the $T^1(\mathbb{Z})$ action on both sides. We let $t_1,\ldots, t_g$ be coordinates on $(F\otimes \mathbb{R})^1$ so that $t_1^2,\ldots,t_g^2$ are coordinates on $(F\otimes \mathbb{R})^1_+$. If we write $\rho=\sum\rho_ix_i$ and $t_i:=x_i(t)$, then \[ \rho(q_{t}(\rho))= \sum t_i^2\rho_i^2 \] and ${\bf q}_{t}(\rho)$ has coordinates $t_i^2\rho_i$. More precisely, if we let ${\bf e}_1,\ldots,{\bf e}_g$ be the basis $e_1,\ldots,e_g$ considered as elements of ${\hat{\cal U}}(\mathfrak{a})$, which identifies ${\hat{\cal U}}(\mathfrak{a})$ with the power series ring $\mathbb{C}[[{\bf e}_1,\ldots,{\bf e}_g]]$, then ${\bf q}_{t}(\rho)=\sum t_i^2\rho_i{\bf e}_i$. The volume form is given by \[ \operatorname{vol}=|d_F|^{-1/2}N\mathfrak{a}^{-1}dx_1\land\ldots\land dx_g \] and we can write the Euler vector field as ${\cal E}=\sum x_i\partial_{x_i}$. 
One gets (observe that $Nt=1$) \[ q_{t}(\rho)^* \iota_{\cal E}\operatorname{vol}=|d_F|^{-1/2}2^{g-1}N(\rho)N\mathfrak{a}^{-1} \sum_{k=1}^g(-1)^{k-1}t_kdt_1\land\ldots \widehat{dt_k}\ldots\land dt_g. \] Explicitly, the Eisenstein class is given as a current on $T^1(\mathbb{R})$ by \begin{multline} \operatorname{Eis}^k(\beta)(t)=\\ \frac{(k+g-1)!}{k!} \sum_{\rho\in\mathfrak{a}^\lor\setminus 0} \frac{(-1)^{g-1}e^{-2\pi i\rho(\beta)}(\sum t_i^2\rho_i{\bf e}_i)^{\otimes k}} {(2\pi i\sum\rho_i^2t_i^2)^{k+g}} q_{t}(\rho)^* \iota_{\cal E}\operatorname{vol} \end{multline} Define an isomorphism $(\mathbb{R}\otimes F)^1\times \mathbb{R}^*\cong(\mathbb{R}\otimes F)^*$ by mapping $(t,r)\mapsto y:=rt$. Then we get: \begin{equation}\label{volrelation} \frac{dy_1}{y_1}\land\ldots\land\frac{dy_g}{y_g}= \frac{dr}{r}\land \sum_{k=1}^g(-1)^{k-1}t_kdt_1\land\ldots\widehat{dt_k} \ldots\land dt_g. \end{equation} We use this decomposition to write $\operatorname{Eis}^k(\beta)(t)$ as a Mellin transform: \begin{multline} \operatorname{Eis}^k(\beta)(t)=\\ \sum_{\rho\in \mathfrak{a}^\lor\setminus 0} {(-1)^{g-1}e^{-2\pi i\rho(\beta)}} \int_{\mathbb{R}_{>0}}\displaylimits {e^{-u(2\pi i\sum\rho_i^2t_i^2)}} \frac{(\sum t_i^2\rho_i{\bf e}_i)^{\otimes k}}{k!} u^{k+g}\frac{du}{u} \land q_{t}(\rho)^* \iota_{\cal E}\operatorname{vol}. \end{multline} Substitute $u=r^2=N(y)^{2/g}$ and use (\ref{volrelation}) to get \begin{multline} \operatorname{Eis}^k(\beta)(t)=\\ \sum_{\rho\in \mathfrak{a}^\lor\setminus 0} \frac{(-1)^{g-1}2^{g}e^{-2\pi i\rho(\beta)}N(\rho)}{|d_F|^{1/2}N\mathfrak{a}} \int_{\mathbb{R}_{>0}}\displaylimits e^{-2\pi i\sum\rho_i^2y_i^2} \frac{(\sum y_i^2\rho_i{\bf e}_i)^{\otimes k}}{k!} N(y){dy_1}\land\ldots\land{dy_g}. 
\end{multline} The application of $u_{\operatorname{id},*}$ amounts to integration over \[ \Gamma_T\backslash \bigl(T^1(\mathbb{R})/K^{T^1}_\infty\bigr)\cong {\cal O}^*_{(n)}\backslash (F\otimes \mathbb{R})^1_+, \] where ${\cal O}^*_{(n)}$ are the totally positive units, which are congruent to $1$ modulo the ideal generated by $(n)$. With the usual trick this gives \begin{multline} u_{\operatorname{id},*}\operatorname{Eis}^k(\beta)=\\ \sum_{\rho\in {\cal O}^*_{(n)}\backslash\bigl(\mathfrak{a}^\lor\setminus 0\bigr)} \frac{(-1)^{g-1}2^ge^{-2\pi i\rho(\beta)}N(\rho)}{|d_F|^{1/2}N\mathfrak{a}} \int_{(F\otimes\mathbb{R})^*_+}\displaylimits e^{-2\pi i\sum\rho_i^2y_i^2} \frac{(\sum y_i^2\rho_i{\bf e}_i)^{\otimes k}}{k!} N(y){dy_1}\land\ldots\land{dy_g}. \end{multline} The integral is a product of integrals for $j=1,\ldots, g$: \[ \int_{\mathbb{R}_{>0}}\displaylimits e^{-2\pi i\rho_j^2y_j^2}\rho_j^k\frac{{\bf e}_j^{\otimes k}}{k!}y_j^{2k+2} \frac{dy_j}{y_j}= \frac{{\bf e}_j^{\otimes k}} {2\rho_j(2\pi i\rho_j)^{k+1}}. \] We now consider $\operatorname{Eis}^{gk}(\beta)$ instead of $\operatorname{Eis}^k(\beta)$. If we consider $e^*\operatorname{pol}^{D}_{\beta}$ as a power series in the ${\bf e}_i$ we are interested in the coefficient of $\frac{N{\bf e}^{\otimes k}}{k!^g}$. In fact, the integrality properties of $\operatorname{Eis}^{gk}(\beta)$ are better reflected if we write it in terms of a basis $a_1,\ldots, a_g$ of $\mathfrak{a}$. Then $N{\bf e}^{\otimes k}=N\mathfrak{a}^{-k} N{\bf a}^{\otimes k}$, where ${\bf a}_1,\ldots, {\bf a}_g$ denote again the images of $a_1,\ldots, a_g$ in ${\hat{\cal U}}(\mathfrak{a})$.
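For completeness, the one-variable integral above is a standard Gamma-function integral: substituting $s=2\pi i\rho_j^2y_j^2$ gives
\[
\int_{\mathbb{R}_{>0}} e^{-ay_j^2}\,y_j^{2k+2}\,\frac{dy_j}{y_j}
=\frac{1}{2a^{k+1}}\int_0^\infty e^{-s}s^{k}\,ds
=\frac{k!}{2a^{k+1}},\qquad a=2\pi i\rho_j^2,
\]
which, after multiplication by $\rho_j^k{\bf e}_j^{\otimes k}/k!$, recovers the stated value ${\bf e}_j^{\otimes k}/\bigl(2\rho_j(2\pi i\rho_j)^{k+1}\bigr)$.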
We get: \begin{cor}\label{Eisintegration} With the above basis ${\bf a}_1,\ldots, {\bf a}_g$, the integral over the Eisenstein class is given by \[ u_{\operatorname{id},*}\operatorname{Eis}^{gk}(\beta)= \frac{(-1)^{g-1}(k!)^g}{(2\pi i)^{g(k+1)}|d_F|^{1/2}N\mathfrak{a}^{k+1}} \sum_{\rho\in {\cal O}^*_{(n)}\backslash\bigl(\mathfrak{a}^\lor\setminus 0\bigr)} \frac{e^{-2\pi i\rho(\beta)}} {N(\rho)^{k+1}}\frac{N{\bf a}^{\otimes k}}{k!^g}. \] \end{cor} \subsection{5. Step: End of the proof} To finish the proof of theorem \ref{maintheorem}, let $\alpha\in L[{\cal A}[n]\setminus e(S)]^0$ and suppose we want to compute $\operatorname{res}(\operatorname{Eis}^k(\alpha))(h)$. Using the equivariance of $\operatorname{res}\circ \operatorname{Eis}^k$ from (\ref{resequiv}), this amounts to computing $\operatorname{res}(\operatorname{Eis}^k(h\alpha))(\operatorname{id})$. Theorem \ref{degenercompu} shows that \[ \operatorname{res}(\operatorname{Eis}^k(h\alpha))(\operatorname{id})=u_{\operatorname{id},*}q_*\operatorname{Eis}^k(h\alpha), \] where $q:S^1_B\to S^1_T$ and $u_{\operatorname{id}}:\Gamma_T\backslash \bigl(T^1(\mathbb{R})/K^{T^1}_\infty\bigr) \to pt$ is the structure map of the component corresponding to $\operatorname{id}\in T^1(\mathbb{Z}/n\mathbb{Z})$. From theorem \ref{eisendeg} we get \[ q_*\operatorname{Eis}^k(h\alpha)=\operatorname{Eis}^k(p(h\alpha)). \] Using corollary \ref{Eisintegration} for $\mathfrak{a}={\cal O}$ and the formula \ref{fcteq} for $\mathfrak{b}=\mathfrak{f}={\cal O}$ we get \begin{equation}\label{finalformula} \frac{(-1)^{g-1}(k!)^g}{(2\pi i)^{gk+g}|d_F|^{1/2}} \sum_{\rho\in {\cal O}^*_{(n)}\backslash\bigl({\cal O}^\lor\setminus 0\bigr)} \frac{e^{-2\pi i\rho(p(h\alpha))}} {N(\rho)^{k+1}}=(-1)^{g-1}\sum_{\sigma\in D}l_\sigma \zeta({\cal O},{\cal O},p(h\sigma),-k), \end{equation} which is the formula in the main theorem \ref{maintheorem}.
To prove the corollary, we use that the map of real tori \[ {\cal A}(\mathbb{C})\xrightarrow{p}{\cal T}_{\cal M} \] factors through $\phi:{\cal T}_{\mathfrak{b}_{\widetilde{h}}}\to{\cal T}_{\cal M} $, where $\phi$ is induced by the inclusion $\mathfrak{b}_{\widetilde{h}}\subset {\cal O}$. Using corollary \ref{Eisintegration} for $\mathfrak{a}=\mathfrak{b}_{\widetilde{h}}$, we get the desired formula \[ \operatorname{res}(\operatorname{Eis}^{gk}(\alpha))(h)=(-1)^{g-1}N\mathfrak{b}_{\widetilde{h}}^{-k-1} \sum_{\sigma\in D}l_\sigma \zeta(\mathfrak{b}_{\widetilde{h}},{\cal O},p_{\widetilde{h}}(\sigma),-k), \] which ends the proof.
# View certificate details

POST https://daas.digicert.com/apicontroller/v1/certificate/getCertificateDetails

Get details for a specific certificate. Details include distinguished name information, CA information, validity period, encryption type, and more.

## Request parameters

| Name | Req/Opt | Type | Description |
|------|---------|------|-------------|
| accountId | required | string | Account ID. |
| divisionIds | optional | array | Division IDs. |
| certificateId | required | string | Unique DigiCert-generated ID for the certificate found on the endpoint. Get the certificate ID from the List certificates request. |

## Response parameters

| Name | Type | Description |
|------|------|-------------|
| data | object | Container. |
| .. certId | string | Unique DigiCert-generated ID for the certificate found on the endpoint. |
| .. serialNum | string | Serial number assigned to the certificate on issuance. |
| .. validFrom | integer | Validity start date. Format: epoch in milliseconds. Epoch corresponds to 0 hours, 0 minutes, and 0 seconds (00:00:00) Coordinated Universal Time (UTC) on a specific date, which varies from system to system. Example: 1855828800000 |
| .. expiryDate | integer | Validity end date. Format: epoch in milliseconds. Example: 1855828800000 |
| .. subject | string | Full certificate distinguished name. |
| .. issuedBy | string | Root certificate that the certificate was issued from. |
| .. cn | string | Common name on the certificate. |
| .. ca | string | Certificate Authority that issued the certificate. |
| .. lastDiscoveredDate | integer | Date the certificate was last found by a CertCentral Discovery scan. |
| .. firstDiscoveredDate | integer | Date the certificate was first found by a CertCentral Discovery scan. Format: epoch in milliseconds. Example: 1855828800000 |
| .. keyLength | string | Encryption key size for the certificate. |
| .. algoType | string | Encryption algorithm that the certificate uses. |
| .. accountId | string | Account ID. |
| .. certStatusString | string | Status of the certificate. |
| .. owner | string | Owner as defined in CertCentral Discovery. |
| .. org | string | Organization name on the certificate. |
| .. orgunit | string | Organization unit on the certificate. |
| .. city | string | City on the certificate. |
| .. state | string | State on the certificate. |
| .. country | string | Country on the certificate. |
| .. sanCount | string | Number of subject alternative names on the certificate. |
| .. publicKeyAlgo | string | Encryption algorithm for the certificate's public key. |
| .. san | string | Subject alternative names on the certificate. |
| .. certRating | string | Certificate security rating, based on industry standards and the certificate's settings. |
| .. tags | string | Custom tags added by the certificate owner, subscriber, or other admin. |
| .. certStatusError | string | Errors retrieving certificate status. |
| .. certIssues | string | Chart data for certificate issues. |
| .. renewalEmailPreference | boolean | Whether the renewal email preference is enabled. Default: true |
| ..  | string | Email address for the contact associated with the certificate. |
| .. actions | object | Action performed on the certificate. |
| .. filePath | string | File path of the certificate. Values are comma-separated. |
| .. source | string | The scan used to identify the certificate. Possible values: sensor, agent. Note: Possible values are Manual Upload, Cloud scan for server certificates. |
| .. serverHost | string | The server host associated with the certificate. Values are comma-separated. |
| .. selfSignedCaOptIn | boolean | Whether the email preference is enabled for self-signed certificates. |
| .. systemCert | boolean | Whether any system certificates are available. |
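A minimal sketch of calling this endpoint from Python using only the standard library. The `X-DC-DEVKEY` API-key header and all IDs below are assumptions/placeholders; substitute values from your own account:

```python
import json
import urllib.request

ENDPOINT = "https://daas.digicert.com/apicontroller/v1/certificate/getCertificateDetails"

def build_request_body(account_id, certificate_id, division_ids=None):
    """Assemble the POST body; divisionIds is optional per the request table."""
    body = {"accountId": account_id, "certificateId": certificate_id}
    if division_ids:
        body["divisionIds"] = division_ids
    return body

def get_certificate_details(api_key, account_id, certificate_id):
    """Send the request; X-DC-DEVKEY is assumed to carry the API key."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_request_body(account_id, certificate_id)).encode(),
        headers={"X-DC-DEVKEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example body with placeholder IDs (no network call made here):
print(json.dumps(build_request_body("1234567", "cert-id-placeholder", ["100", "101"])))
```

Fields such as `data.validFrom` and `data.expiryDate` in the returned JSON are epoch values in milliseconds, so divide by 1000 before passing them to most date libraries.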
The reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power that is reflected at the boundary. Reflectance is a component of the response of the electronic structure of the material to the electromagnetic field of light, and is in general a function of the frequency, or wavelength, of the light, its polarization, and the angle of incidence. The dependence of reflectance on the wavelength is called a reflectance spectrum or spectral reflectance curve.

Mathematical definitions

Hemispherical reflectance

The hemispherical reflectance of a surface, denoted R, is defined as

    R = Φ_e^r / Φ_e^i,

where Φ_e^r is the radiant flux reflected by that surface and Φ_e^i is the radiant flux received by that surface.

Spectral hemispherical reflectance

The spectral hemispherical reflectance in frequency and the spectral hemispherical reflectance in wavelength of a surface, denoted R_ν and R_λ respectively, are defined as

    R_ν = Φ_{e,ν}^r / Φ_{e,ν}^i,
    R_λ = Φ_{e,λ}^r / Φ_{e,λ}^i,

where Φ_{e,ν}^r is the spectral radiant flux in frequency reflected by that surface; Φ_{e,ν}^i is the spectral radiant flux in frequency received by that surface; Φ_{e,λ}^r is the spectral radiant flux in wavelength reflected by that surface; Φ_{e,λ}^i is the spectral radiant flux in wavelength received by that surface.

Directional reflectance

The directional reflectance of a surface, denoted R_Ω, is defined as

    R_Ω = L_{e,Ω}^r / L_{e,Ω}^i,

where L_{e,Ω}^r is the radiance reflected by that surface and L_{e,Ω}^i is the radiance received by that surface. This depends on both the reflected direction and the incoming direction; in other words, it has a value for every combination of incoming and outgoing directions. It is related to the bidirectional reflectance distribution function and its upper limit is 1. Another measure of reflectance, depending only on the outgoing direction, is I/F, where I is the radiance reflected in a given direction and F is the incoming radiance averaged over all directions, in other words, the total flux of radiation hitting the surface per unit area, divided by π.
This can be greater than 1 for a glossy surface illuminated by a source such as the sun, with the reflectance measured in the direction of maximum radiance (see also Seeliger effect).

Spectral directional reflectance

The spectral directional reflectance in frequency and the spectral directional reflectance in wavelength of a surface, denoted R_{Ω,ν} and R_{Ω,λ} respectively, are defined as

    R_{Ω,ν} = L_{e,Ω,ν}^r / L_{e,Ω,ν}^i,
    R_{Ω,λ} = L_{e,Ω,λ}^r / L_{e,Ω,λ}^i,

where L_{e,Ω,ν}^r is the spectral radiance in frequency reflected by that surface; L_{e,Ω,ν}^i is the spectral radiance in frequency received by that surface; L_{e,Ω,λ}^r is the spectral radiance in wavelength reflected by that surface; L_{e,Ω,λ}^i is the spectral radiance in wavelength received by that surface. Again, one can also define a value of I/F (see above) for a given wavelength.

Reflectivity

For homogeneous and semi-infinite (see halfspace) materials, reflectivity is the same as reflectance. Reflectivity is the square of the magnitude of the Fresnel reflection coefficient, which is the ratio of the reflected to incident electric field; as such, the reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance is always a positive real number.

For layered and finite media, according to the CIE, reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects. When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, hence irrespective of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space.
Surface type

Given that reflectance is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection. For specular surfaces, such as glass or polished metal, reflectance is nearly zero at all angles except at the appropriate reflected angle; that is, the same angle with respect to the surface normal in the plane of incidence, but on the opposing side. When the radiation is incident normal to the surface, it is reflected back into the same direction. For diffuse surfaces, such as matte white paint, reflectance is uniform; radiation is reflected at all angles equally or near-equally. Such surfaces are said to be Lambertian. Most practical objects exhibit a combination of diffuse and specular reflective properties.

Water reflectance

Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations (Ottaviani, M., Stamnes, K., Koskulics, J., Eide, H., Long, S. R., Su, W., and Wiscombe, W., 2008: "Light Reflection from Water Waves: Suitable Setup for a Polarimetric Investigation under Controlled Laboratory Conditions", Journal of Atmospheric and Oceanic Technology, 25 (5), 715-728). Fresnel reflection is directional and therefore does not contribute significantly to albedo, which is primarily diffuse reflection. A real water surface may be wavy. Reflectance, which assumes a flat surface as given by the Fresnel equations, can be adjusted to account for waviness.

Grating efficiency

The generalization of reflectance to a diffraction grating, which disperses light by wavelength, is called diffraction efficiency.
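As an illustration of the Fresnel relations mentioned above, the following sketch computes the specular reflectance of a smooth air-water boundary for unpolarized light (the refractive indices n1 = 1.0 for air and n2 = 1.33 for water are assumed round values):

```python
import math

def fresnel_reflectance(theta_i_deg, n1=1.0, n2=1.33):
    """Unpolarized specular reflectance at a smooth dielectric boundary."""
    ti = math.radians(theta_i_deg)
    # Snell's law gives the transmission angle
    sin_tt = n1 * math.sin(ti) / n2
    if sin_tt >= 1.0:
        return 1.0  # total internal reflection (only possible for n1 > n2)
    tt = math.asin(sin_tt)
    # Fresnel amplitude coefficients for s- and p-polarization
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    # Unpolarized light: average of the two polarized reflectances
    return 0.5 * (rs ** 2 + rp ** 2)

print(fresnel_reflectance(0.0))   # normal incidence: ((n2-n1)/(n2+n1))^2 ≈ 0.020
print(fresnel_reflectance(80.0))  # near-grazing incidence: ≈ 0.35
```

The steep rise toward grazing incidence is why a calm lake mirrors the horizon but looks transparent when viewed from directly above.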
See also

Bidirectional reflectance distribution function
Colorimetry
Emissivity
Lambert's cosine law
Transmittance
Sun path
Light Reflectance Value
Albedo

External links

Reflectivity of metals
Reflectance data
Last weekend (July 25/26) the RMTC hosted ARTA's Australian Women's Amateur Singles and Doubles. The field was small but the leading competitors were on show. In the singles event Rosie Snell showed that she is still up there with the best when she easily knocked off the young(ish) upstart Xanthe Ranger 6/3 6/2 to set herself up for a tilt at the title against reigning champ Laura Fowler. The final was as good as you could wish for, with great rallies, volleys, winning galleries, nervous mistakes and cunning serving all on display. The first set was won by Laura 6/3 in what at that stage was shaping as a somewhat predictable match; however, from then on nothing went to any sort of script. Rosie's game plan of playing to Laura's backhand allowed her to take a strong lead in the second set before Laura fought back to level it at 5/5. Rosie started the last game at the service end but when Laura blazed away to lead 40/15 as well as laying a chase better than 2, the match looked over. At the change of ends Rosie ripped a crosscourt forehand to win the chase and then closed out the set with aplomb. Rosie's form continued in the 3rd set until she led 5/2, only for the match to turn again. As Laura lifted her game, Rosie seemed to forget about hitting to the backhand corner and before we knew it the score was 5/5 in the 3rd. Laura's comeback was also aided somewhat by Rosie's belief that her Pound serve might catch Laura out, unlikely given Laura's speed of foot. At 5/5 and then deuce, Rosie held her first and only match point, which Laura duly saved with a forehand volley. Congratulations to both players and to Laura for retaining the Australian Women's Amateur Singles Title. The doubles was split into two divisions. In the main draw Laura and Xanthe showcased their skills in another good match against Prue McCahey and Rosie, providing spectators with another three-set tussle.
In Division 1 Anabelle Guest and Julia Page were too steady for the fast-improving Royal South Yarra pair Kim Dudson and Brigitte Claney. During the second set the lawn tennis strokes of the RSY pair started to show, but a few well-placed forehands from Anabelle coupled with Julia's gritty ground strokes held them off. Guest/Page def. Claney/Dudson 6/2 6/5. The Christmas in July Mixed Doubles. Eight pairs celebrated Christmas in July at the club on Saturday evening. The ladies drew their partners, none seemed put out by their choice, and we all reconvened bright and early the following morning to play tennis. James Guest and Kate Leeming emerged triumphant; they defeated Andrew Schnaider and Judith Sear in the final by 6 games to 4. In her final tournament before her jaunt across Africa, Ktpie demonstrated that she is not just a renowned ultra cyclist, but also a highly competent tennis player. She also succeeded where many have failed in rendering her talkative partner almost silent in admiration of her many obvious qualities. Pliant though he was, JVCG showed off a wide range of serves, some of which worked quite well. Judith connected with a number of cracking shots, but frustratingly for her many of them were returned. Andrew was clearly below par, yet coped manfully with the demands of 5 matches in a single morning: his uniquely extreme 'frying pan' grip on the backhand volley provided amusement to the spectators plus the occasional winner. It was a noticeably friendly group of players, almost all of whom lingered long after the organiser had served a picnic lunch, enjoying the conviviality of each other's company. John Hewson was so overcome by the fun time he had that he telephoned twice, on consecutive days, to thank me for running the event.

Ed Hughes (past President of the USCTA) with Pat Dunne, Paul Rosedale, Daniel Williams (partly obscured) & Hilton Booth (Australian #1 & captain). The apparition on the right of shot is Simon Carr.
The Australian team of Hilton Booth (Hobart), Pat Dunne (Hobart), Daniel Williams (Ballarat & Melbourne), Simon Carr (Melbourne) and reserve Paul Rosedale (Melbourne) lost the inaugural Limb Trophy match 8-0 to a very strong UK team at Lord's recently. They then travelled to Newport, where they regained the Clothier Cup from the US team 5-3. B-Grade Pennant 2009 concluded on Thursday. The winners were the progressive Welch twins, who defeated James Wheeler and Tim Robinson by 2 matches to 1 in the final. Two days earlier D-Grade Pennant had terminated in disarray, the 'winning' team of 3 being made up entirely of fill-ins.
\section{Introduction} Vacuum in NLED theories is far from being trivial. For example, in Quantum Electrodynamics (QED), vacuum polarization effects lead to effective interaction terms that are non-linear in the electric and magnetic fields, generating, among other phenomena, light-by-light scattering testable with modern accelerators; see a recent compilation of ATLAS results in \cite{atlas}. Switching on the interaction leads to excitations creating virtual electron-positron pairs, so that vacuum fluctuations give rise to very interesting properties that allow one to describe the vacuum as a magnetized medium \cite{Adler}. One of the noted consequences is the possible existence of {\it birefringence}, by which electromagnetic (EM) waves propagating parallel or perpendicular to a constant electric or magnetic background field in a fixed direction display different propagation speeds. Great experimental effort has been undertaken in the task of detection, like that by Paulus and coworkers using an x-ray free-electron laser at EuXFEL \cite{jena}, but to date this phenomenon has not yet been observed experimentally, as it involves tiny effects \cite{melrose} due to the non-linear coupling $\xi=\frac{8\alpha^2 \hbar^3}{45m_e^4c^5}\sim \frac{8\alpha}{45B_c^2}\sim 6.7\times 10^{-30}$ $\rm m^3/J$, where $B_c=\frac{m_e^2 c^2}{e\hbar}=4.4\times 10^{13}$ G is the critical magnetic field. Alternatively, this non-detection has been interpreted as a possible manifestation of new physics. One of the most relevant experimental setups to capture the effect of birefringence is the Polarization of Vacuum with Laser (PVLAS) experiment \cite{Ejlli:2020yhk}, for which it was proposed \cite{Cadene} that axions produced from photon decay, and thus novel effects, could be responsible for the null experimental signal, contrary to expectation. To date, the possible existence of axions has not been ruled out and has triggered a variety of experimental projects \cite{Cadene,PVLAS:2007wzd}.
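The numerical values quoted above can be checked directly from the defining expressions; a quick sketch, in Python purely for illustration, using standard CODATA SI constants:

```python
import math

# CODATA SI constants
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
alpha = 7.2973525693e-3  # fine-structure constant

# Schwinger critical fields
E_c = m_e**2 * c**3 / (e * hbar)   # V/m
B_c = m_e**2 * c**2 / (e * hbar)   # T; 1 T = 1e4 G

# Euler-Heisenberg non-linear coupling
xi = 8 * alpha**2 * hbar**3 / (45 * m_e**4 * c**5)  # m^3/J

print(f"E_c = {E_c:.3e} V/m")    # ≈ 1.3e18 V/m
print(f"B_c = {B_c * 1e4:.3e} G")  # ≈ 4.4e13 G
print(f"xi  = {xi:.3e} m^3/J")   # ≈ 6.7e-30 m^3/J
```

The identification $\xi\sim 8\alpha/(45B_c^2)$ holds in Gaussian units, where $B_c^2$ carries the dimension of an energy density.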
In addition, the study of vacuum properties could illuminate theories Beyond the Standard Model that explain, through non-conventional mechanisms, the existence of the so-called dark matter, milli-charged particles \cite{kouvaris} and other exotic phenomenology that is, so far, outside the predictions of the standard scenarios. Besides vacuum birefringence, the so-called {\it vacuum instability} refers to the production of electron-positron pairs once their rest-mass energy threshold becomes available, and arises in the theory for values of the magnetic/electric fields higher than the Schwinger critical ones, $B_c/E_c$, with $E_c=\frac{m_e^2c^3}{e\hbar}=1.3\times10^{18}$ V/m, respectively. Finally, {\it pressure anisotropy} associated with the magnetized vacuum in a fixed background magnetic field is another theoretical finding of QED in the one-loop approximation, somewhat explored in \cite{Elizabeth1,PPCF1}, where differences between the directions parallel and transverse to the external field appear as a consequence of the breaking of the SO(3) symmetry. Apart from QED, additional alternative NLEDs incorporating quantum corrections have been proposed in the literature. As mentioned, supplementing the Maxwell Lagrangian with the EH term yields a NLED that takes into account one-loop quantum corrections of QED. In the ModMax NLED \cite{ModMax}, SO(2) electromagnetic duality invariance and conformal invariance are fulfilled; its Lagrangian density is not analytic everywhere, failing at configurations for which the Lorentz invariants are zero. Other NLEDs, such as the Born-Infeld theory \cite{Borninfeld}, instead smooth out divergences and can be explored for strong EM fields, as they impose a restriction on the possible maximum electric field. Born-Infeld theory enjoys duality invariance but displays no birefringence in vacuum \cite{nobirefrin}.
Another difficulty is that the value of the electric field at the center of a point-like charge depends on the direction of approach to it; resolving this problem has led to applications in gravity \cite{banados} and holographic superconductors. The scope of this paper is to revisit the energy-momentum tensor (EMT), the angular momentum tensor (AMT), and the magnetic properties of the vacuum in selected NLEDs, in a scenario where photons propagate in the presence of a background magnetic field of arbitrary strength. We will nevertheless restrict ourselves to energy scales below the pair-production instability, i.e. photon energies $\hbar\omega\leq 2 m_e c^2$. In our study, we focus on three points. First, we use the Hilbert method to calculate the EMT, opening the discussion related to the equivalence between the results of the improved Noether and Hilbert EMTs. Second, all properties are obtained as functions of derivatives of an effective Lagrangian with respect to the two scalar invariants of the theory, $\mathcal{F,G}$, thus becoming a more general study applicable to different NLEDs. Third, all magnitudes obtained for the EH effective theory are valid for arbitrary values of the magnetic field, the treatment being equivalent to the case of a pure external electric field background or of perpendicular electric and magnetic fields. The paper is organized as follows. In Section \ref{section:waves} we introduce how EM wave propagation can help study vacuum properties in the context of the NLEDs we consider. In section \ref{section:1} we present the effective NLED described by the Euler-Heisenberg Lagrangian for arbitrary magnetic field strength and study the propagation of a photon probe in its associated vacuum. We use an expansion of the Lagrangian in the invariants of the theory, with coefficients that are its derivatives up to second order.
This quadratic Lagrangian allows us to obtain generalized Maxwell equations and, in particular, the one governing the photon displacement field, as well as the dispersion equations in Fourier space. In section \ref{section:3} we proceed to obtain the EMT and AMT. Using the Hilbert construction, a symmetric, gauge-invariant, and conserved EMT is obtained. Additional properties, like the lack of EMT tracelessness and the anisotropy, are discussed. The former implies the lack of conformal symmetry of the Lagrangian and produces birefringence, while the latter is a consequence of the rotational symmetry breaking when a fixed magnetic background field is present. Later, other physical quantities of the theory stemming from the EMT and AMT are calculated, i.e. the energy density, the Poynting vector, the pressure components and the angular momentum vector. We remark on differences found when using alternative procedures. In section \ref{sec:mag} we discuss the photon magnetization and define a photon magnetic moment, giving a generic expression as a function of the magnetic field strength in the range of validity of our approach and comparing it to existing limiting values. In section \ref{sec:exp} we follow with some discussion about possible strategies and experimental tests to discriminate among the magnitudes obtained in the exposed scenarios. Finally, in section \ref{conclusions}, our conclusions are presented. A final appendix, section \ref{apendice}, is included, whose subsections detail complementary lengthy calculations appearing throughout this manuscript. \section{Propagation of EM waves in vacuum} \label{section:waves} In order to further study vacuum properties as they arise in NLED theories, special attention is devoted to wave propagation. For example, radiation (photon) emission from pulsars \cite{Mielniczuk,Taverna}, i.e. magnetized neutron stars (NSs), has been reported to show an indirect signal of birefringence.
The latter has also been claimed to be responsible, in $e^+e^-$ production from photon fusion in Breit-Wheeler processes, for a separation in the differential angular distribution relative to the initial photon polarization and magnetic field angle \cite{breit}. The vacuum birefringence effect on the propagation of photons is equivalent to that which an ordinary anisotropic medium produces on light propagation. That means that the vacuum {\it behaves} as a refractive medium. Hence, the study of the propagation of photons in magnetized vacuum using NLEDs at lowest order in the fields allows us to interpret and design experimental tests for vacuum phenomenology using traditional treatments of non-linear optics. Only precise observations and analysis of theoretical models may allow one to isolate these effects. However, one should keep in mind that when these very large fields are generated in dense environments, matter effects are the main ingredient to include, although quantum vacuum aspects remain an important contribution, indicating that a consistent treatment seems unavoidable. The scenario depicted before is valid for field strengths smaller than the critical ones. However, generally speaking, magnetic field strengths in nature can reach extreme values, such as in the interior of white dwarfs or pulsars, where fields attain strengths $B\sim 10^{12-15}$ G \cite{Koe,Hard}, and in heavy-ion collisions. For the latter, in the earliest moments after the collision, the system is subjected to what is expected to be the strongest magnetic field created in the laboratory. In heavy-ion collisions of $\mathrm{Au}-\mathrm{Au}$ at the RHIC energy $\sqrt{s_{N N}}=200\, \mathrm{GeV}$ and impact parameter $b=4\, \mathrm{fm}$, a local field $e B \approx 1.3 \cdot m_{\pi}^{2}$ with $m_{\pi}^{2}=140^{2} \cdot 0.512 \times 10^{14}$ G $\approx 10^{18}$ G is estimated \cite{ionion,Blab}.
So far, there is a fundamental technical limitation: it is not possible to generate steady fields stronger than $B\sim 4.5\times 10^{5}$ G in the laboratory, because the magnetic stresses of such fields exceed the tensile strength of terrestrial materials. Regarding oscillating fields, those generated in laser facilities deserve special mention. For optical lasers, peak powers beyond 1 PW are available in several present and future projects, such as ELI \cite{eli}, CoReLS \cite{corels}, Apollon \cite{apollon}, Vulcan \cite{vulcan} or CLPU \cite{CLPU}, just to cite some of them. This would promote the laser intensity beyond that currently available at $\sim 10^{23}$ W cm$^{-2}$ \cite{corels}. For larger intensities, close to $10^{24}$ W cm$^{-2}$, a 100 PW laser system is needed \cite{shen}. With better focusing, the laser intensity could be even higher. This will effectively drive laser-matter interaction into the strong-field QED regime. Note that for short times $\Delta t \ll P$, where $P$ is the wave field period, and in localized regions, one can assume an equivalent constant magnetic field strength, so the treatment remains valid. {On the other hand, in the context of Dirac materials \cite{diracmaterials}, it is possible to test the strong-field properties of the vacuum of this theory with the advantage that the critical field is $\mathcal{O}(1)$ T.} \subsection{Effective EH Lagrangian for non linear electrodynamics}\label{section:1} The one-loop photon-photon interaction processes are described by the Euler-Heisenberg Lagrangian density, first proposed by Euler and Kockel \cite{HEKockel} and Heisenberg, and independently by Weisskopf \cite{Weiss}.
Following a renormalization procedure, it becomes finite and gauge invariant \cite{ren,Schwinger} under the form \begin{equation} \label{eqEH} \mathcal{L}=-\mu_{0}^{-1} \mathcal{F}-\frac{1}{8 \pi^2} \int_{0}^{i\infty} \frac{d s}{s^{3}} e^{- m^2_e s }\left [ (e s)^{2} a b \coth (e a s) \cot(e b s)-\frac{(e s)^{2}}{3}\left(a^{2}-b^{2}\right)-1 \right ], \end{equation} where $a=\left[\left(\mathcal{F}^{2}+\mathcal{G}^{2}\right)^{1 / 2}+\mathcal{F}\right]^{1 / 2}$ and $b=\left[\left(\mathcal{F}^{2}+\mathcal{G}^{2}\right)^{1 / 2}-\mathcal{F}\right]^{1 / 2}$, and $m_{e}$, $e$ are the mass and charge of the electron, respectively. $\mathcal{F}, \mathcal{G}$ are secular invariants derived from the gauge and Lorentz invariants of the generic electromagnetic fields $({\bf E, B})$, defined as \begin{equation} \mathcal{F}=\frac{1}{4}F^{\mu\nu}F_{\mu\nu}=\frac{1}{2}\left(-\epsilon_0 E^2+\frac{B^2}{\mu_0}\right),\quad \mathcal{G}=\frac{1}{4}F^{\mu\nu}\tilde{F}_{\mu\nu}=-\sqrt{\frac{\epsilon_0}{\mu_0}}\,{\bf E}\cdot{\bf B}. \end{equation} Here $\epsilon_{0}$, $\mu_{0}$ are the electric permittivity and the magnetic permeability of empty space, respectively. The field-strength tensor components are related to the electromagnetic (EM) fields as $E_{i}=c F_{0 i}$, $B_{i}=-\frac{1}{2} \epsilon_{i j k} F^{j k}$ with $i=1,2,3$. In addition, $\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}/2$ is the dual tensor and $\epsilon^{\mu\nu\alpha}$, $\epsilon^{\mu\nu\alpha\beta}$ are the totally antisymmetric Levi-Civita tensors of rank 3 and 4, respectively. We use the Einstein convention of summing over repeated indices. Note that for the pure magnetic field case (${\bf E}=0$) we have $\mathcal{F}=B^{2}/(2\mu_0)$ and $\mathcal{G}=0$. Bianchi identities are fulfilled as $\frac{1}{\sqrt{-g}} \partial_{\nu} \left[ \sqrt{-g} {\tilde F}^{\mu \nu}\right]=0$.
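As a cross-check of Eq. (\ref{eqEH}), the proper-time integral can be evaluated numerically in the pure magnetic case ($b=0$). In the dimensionless variables $x=eBs$ and $h=B/B_c$ (natural units, and after the standard rotation of the integration contour to the real axis), the one-loop correction is proportional to $I(h)=\int_0^\infty \frac{dx}{x^3}\,e^{-x/h}\left[x\coth x-1-\tfrac{x^2}{3}\right]$, which must approach $-h^2/45$ for $h\ll 1$, matching the quartic weak-field term. The sketch below is our own verification; the quadrature parameters are chosen by hand.

```python
# Numerical check of the EH proper-time integral for a pure magnetic
# background (b = 0), in dimensionless form:
#   I(h) = int_0^inf dx/x^3 e^{-x/h} [x coth x - 1 - x^2/3]  ->  -h^2/45
# for h = B/B_c << 1.  This is a sketch, not production-grade quadrature.
import math

def bracket(x):
    """[x coth x - 1 - x^2/3] / x^3, with a small-x series to avoid
    catastrophic cancellation (x coth x = 1 + x^2/3 - x^4/45 + 2x^6/945 - ...)."""
    if x < 0.05:
        return -x / 45.0 + 2.0 * x**3 / 945.0
    return (x / math.tanh(x) - 1.0 - x * x / 3.0) / x**3

def I_of_h(h, n=200000):
    """Composite Simpson quadrature; e^{-x/h} makes x > 40h negligible."""
    xmax = 40.0 * h
    dx = xmax / n
    s = bracket(0.0) + bracket(xmax) * math.exp(-xmax / h)
    for k in range(1, n):
        x = k * dx
        s += (4 if k % 2 else 2) * bracket(x) * math.exp(-x / h)
    return s * dx / 3.0

h = 0.05                              # B = 0.05 B_c: well inside the weak field
ratio = I_of_h(h) / (-h * h / 45.0)
print(ratio)                          # -> close to 1 (weak-field limit recovered)
```

The negative sign of $I(h)$ translates into a positive quartic correction to the Lagrangian, as expected from the weak-field expansion.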
At this point it is worth noting that, for the sake of completeness, we introduce the curved-space notation; however, for most of the calculations we will restrict ourselves to the Minkowski flat space, where we use the convention { $g_{\mu \nu}=\eta_{\mu\nu}=\rm{diag}(+1,-1,-1,-1)$} and denote by $g$ the determinant of the metric. In general, for arbitrary values of the EM fields, or equivalently of $a$ and $b$, it is not possible to get a handy analytical expression of the Lagrangian in Eq. (\ref{eqEH}). Fortunately, at $\mathcal{G}=0$ there is a more general treatment, performing an expansion of the Lagrangian in terms of the Lorentz invariants. In particular, the scalar derivatives $\mathcal{L}_{\mathcal{F}}=\frac{\partial \mathcal{L}}{\partial \mathcal{F}}$, $\mathcal{L}_{\mathcal{FF}}= \frac{\partial ^2\mathcal{L}}{\partial \mathcal{F}^2}$, $\mathcal{L}_{\mathcal{GG}}=\frac{\partial ^2 \mathcal{L}}{\partial\mathcal{G}^2}$ can be calculated analytically, and we provide expressions in the limit $E\rightarrow0$, i.e. $b\rightarrow0$, which is the one of interest in the present work, see appendix \ref{apendice:1}. In order to focus on the system under study, we now take the general expression of the Euler-Heisenberg Lagrangian in Eq. (\ref{eqEH}) and decompose the EM field strength tensor as $\mathfrak{F}^{\mu \nu}(x) \equiv$ $F^{\mu \nu}(x)+f^{\mu \nu}(x)$, with contributions from the background (external) field $F^{\mu \nu}(x)=\partial^{\mu} A^{\nu}(x)-\partial^{\nu} A^{\mu}(x)$ and from the photon field $f^{\mu \nu}(x)=\partial^{\mu} a^{\nu}(x)-\partial^{\nu} a^{\mu}(x)$. From this we can construct the effective Lagrangian under the form \begin{equation} \mathcal{L}=-\frac{1}{4}\mathfrak{F}^{\mu \nu}\mathfrak{F}_{\mu \nu} +\mathcal{L}^{(1)} +\mathcal{L}^{(2)}. \label{lagfF} \end{equation} The first term corresponds to the classical Maxwell Lagrangian and the last two terms correspond to quantum corrections up to second order in the radiation field.
$\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$ describe interaction terms as \cite{Karbstein:2015cpa} \begin{align} \mathcal{L}^{(1)}&=\frac{1}{2}{\mathcal L}_{\mathcal F}f^{\mu\nu}F_{\mu\nu},\\ \mathcal{L}^{(2)}&= \frac{1}{4}{\mathcal{L}}_{\mathcal F} f^{\mu\nu}f_{\mu\nu}+ \frac{1}{8}f^{\alpha\beta}\left (\frac{\partial^2 \mathcal{L}}{\partial {\mathcal F}^2} F_{\alpha\beta}F_{\mu\nu}+ \frac{\partial^2 \mathcal{L}}{\partial {\mathcal G}^2}\tilde{F}_{\alpha\beta}\tilde{F}_{\mu\nu} \right )f^{\mu\nu}. \label{Lagra2} \end{align} The explicit expressions for the non-zero scalar derivatives $\mathcal{L}_{\mathcal{F}}$, $\mathcal{L}_{\mathcal{G}}, \mathcal{L}_{\mathcal{F F}}$ and $\mathcal{L}_{\mathcal{G G}}$ are shown in appendix \ref{apendice:1} for completeness and have been previously compiled in \cite{Marklund:2008gj}. $\mathcal{L}^{(1)}$ arises as a topological term in the expansion of Eq.(\ref{eqEH}) \cite{Karbstein:2015cpa,Bialynicka-Birula:2014bja}, { but gives a null contribution to the equations of motion and to the magnitudes arising from volume-integrated, space-time averaged quantities of the EMT.} Within the scope of this work we neglect higher-order terms $\mathcal{L}^{(n)}$, $n>2$, which are indeed possible and can be generated in an analogous way. They would describe the interaction of background and radiation fields as well as self-interactions up to quartic order in the fields. From the perspective of non-linear optics, quartic terms in the radiation field are related to possible new non-linear interactions, which do not occur at tree level \cite{Battesti:2012hf}, while for the background fields they are connected to the interaction of a magnetic field with electron-positron pairs, capable of inducing anisotropic vacuum pressures \cite{Elizabeth,PPCF1} similar to the ``Casimir effect''. Therefore, the Lagrangian in Eq.
(\ref{lagfF}) retains contributions from the background external field (B) and from the photon interaction with the external field (ph-B), as $\mathcal{L}=\mathcal{L}^{\rm (B)}+\mathcal{L}^{\rm (ph-B)}$. The terms involving photons can be explicitly written as (we take $\mu_0=\epsilon_0=1$) \begin{equation} \mathcal{L}^{\rm (ph-B)}= -\frac{1}{4}(1-{\mathcal{L}}_{\mathcal F}) f^{\mu\nu}f_{\mu\nu}- \frac{1}{2}(1-{\mathcal{L}}_{\mathcal F}) f^{\mu\nu}F_{\mu\nu}+ \frac{\mathcal{L}_{\mathcal{FF}}}{8} (f^{\mu\nu}F_{\mu\nu})^2 + \frac{\mathcal{L}_{\mathcal{GG}}}{8} (f^{\mu\nu}\tilde{F}_{\mu\nu})^2. \label{Lagrafoton} \end{equation} The second order interaction terms take the following explicit form in terms of the photon fields $({\bf E}_w$, ${\bf B}_w)$ and the external magnetic field ${\bf B}_e\equiv {\bf B}$, \begin{equation} \label{2orderLagrangianEB} \mathcal{L}^{\rm (ph-B)} =\frac{(1-\mathcal{L}_{\mathcal{F}})}{2}(E_w^2-B_w^2)-(1-\mathcal{L}_{\mathcal{F}})({\bf B}\cdot {\bf B}_w)+ \frac{\mathcal{L}_{\mathcal{FF}}}{2}({\bf B} \cdot{\bf B}_w)^2+ \frac{\mathcal{L}_{\mathcal{GG}}}{2}({\bf B}\cdot {\bf E}_w)^2. \end{equation} Note that this quadratic approximation for $\mathcal{L}^{\rm (ph-B)}$ has also been the starting point of previous studies \cite{DiPiazza:2002ve,Neves:2021tbt,Shabad:2011hf} in the weak or strong field limits. It is important to recall that, by using the soft-photon approximation in the Lagrangian in Eq.(\ref{2orderLagrangianEB}), we restrict the validity of our approach to a maximum value $B\leq 430 B_{c}$ \cite{DittrichLibro}. Above this strength, we have to include two-loop corrections \cite{Villalba-Chavez:2009ruk} with an explicit quantum treatment \cite{Chaichian,perez1,perez2,perez3}.
However, for the most usual physical motivations, such as laboratory astrophysics with pulsating laser fields and/or neutron star physics, a one-loop approximation suffices, because typical magnetic fields in the external layers and magnetospheres usually stay hundreds of times lower than the critical magnetic field. Although the EH NLED description may conceal some microscopic phenomena related to photon-photon interaction, it is a robust theory, since it considers that the vacuum behaves as a non-linear refractive optical medium. As an example, from Eq. (\ref{2orderLagrangianEB}) and the Lagrangian derivatives in appendix \ref{apendice:1} we recover the weak-field (WF) approximation expressions $\mathcal {L}_{\mathcal {F}}=2\xi\mathcal {F}$, $\mathcal {L}_{\mathcal {FF}}=2\xi$, $\mathcal {L}_{\mathcal {GG}}=\frac{7\xi}{2}$, obtaining the well-known form \begin{equation} \mathcal{L}_{EH}^{WF}=-\mathcal{F}+\frac{\xi}{4}(4\mathcal{F}^2 + 7\mathcal{G}^2), \label{nolinealL} \end{equation} usually taken for studies of non-linear laser optics \cite{Rizzo:2010di,Ferrando:2007pgk}. From the Lagrangian $\mathcal{L}^{(\rm ph-B)}$ in Eq. (\ref{2orderLagrangianEB}) we can readily obtain the Maxwell equations (see appendix \ref{apendice:0}) and the related electric permittivity and magnetic permeability tensors. \begin{figure}[t!] \centering \includegraphics[width=0.8\linewidth]{permitivitiesvsB.pdf} \caption{Electric permittivities and magnetic permeabilities versus magnetic field strength. These quantities remain positive definite, ensuring the unitarity and causality constraints in vacuum (see appendix \ref{apendice:0}).} \label{fig:velocity} \end{figure} In Fig. \ref{fig:velocity} we have plotted the electric permittivities and magnetic permeabilities from Eq. (\ref{epsilon}) and Eq.
(\ref{epsimu2}) in appendix \ref{apendice:0}, to illustrate that these B-dependent quantities remain positive definite, as causality and unitarity require \cite{Shabad:2011hf}: $\mathcal{L}_{\mathcal{FF}} \geq 0$, $\mathcal{L}_{\mathcal{GG}}\geq 0$ and $1-\mathcal{L}_{\mathcal{F}}\geq 0$. \subsection{Energy-momentum and Angular momentum tensors} \label{section:3} In this section we are interested in determining the EMT and AMT, starting off from the Lagrangian in Eq.(\ref{lagfF}), describing a photon probe propagating transverse to an external magnetic field. There are various techniques at hand for this, stemming from the effective theory, and, usually, the approach chosen is based on its intended use. {Among the most common are those of Noether \cite{Landau}, based on the symmetries of the Lagrangian and the Noether theorem \cite{Noether} (currents and charges are preserved), and of Hilbert \cite{Hilbert}, which is based on variational geometry. Several other approaches have been developed to analyze the wave propagation problem, in particular those due to Boillat \cite{boillat}, Bialynicka-Birula and Bialynicki-Birula \cite{bb} and, more recently, Novello et al. \cite{novello}. It is important to note that, in general, they do not yield the same result when computed. Just to mention some important differences, the Noether canonical procedure needs the Belinfante tensor to yield a symmetric tensor. Often the procedures providing a gauge-invariant, symmetric, and conserved EMT \cite{Blaschke:2016ohs,Baker:2020eqs} are best suited for studies with non-flat geometry, such as those in General Relativity, while Noether and its (improved) extensions suit most of the remaining physical applications.} Although we consider as a case study the EH NLED, whose limit at zero external magnetic field is Maxwell electrodynamics for both constructions, we will show in what follows that, as a clear example of the above, the Noether and Hilbert approaches do not yield the same EMT tensor.
In the robust Hilbert method, we compute the EMT, $T_{H,\,\mu\nu}$, from the variation of the effective Lagrangian $\mathcal{L}$ with respect to the metric tensor $g^{\mu\nu}$ in the usual way. It has two main contributions, \begin{equation} T_H^{\gamma\rho} =T_H^{(0) \gamma\rho} + t_H^{\gamma\rho}, \end{equation} where $T_H^{(0) \gamma\rho}$ is that of the external background field while $t_{H}^{\gamma\rho}$ is that of the photon field. Therefore, from the photon Lagrangian $\mathcal{L}^{\rm (ph-B)}$, \begin{equation} t_{H}^{\gamma\rho} =\left.\frac{2}{\sqrt{-g}} \frac{\delta [\mathcal{L}^{\rm (ph-B)}]}{\delta g_{\gamma \rho}}\right|_{g=\eta}, \label{defEMT0} \end{equation} \noindent where we use the following relations for the derivatives with respect to the metric \begin{align} \frac{\partial\sqrt{-g}}{\partial g_{\gamma\rho}}&= \frac{1}{2}g^{\gamma\rho}\sqrt{-g},\\ \frac{\partial g^{\lambda\nu}}{\partial g_{\beta\gamma}}&=-\frac{1}{2} (g^{\beta\lambda}g^{\gamma\nu}+ g^{\gamma\lambda}g^{\beta\nu}).
\end{align} In order to obtain the EMT we rewrite the Lagrangian in a more explicit form as \begin{align} \mathcal{L}^{\rm (ph-B)}&=-\frac{\sqrt{-g}}{4}g^{\alpha\mu}g^{\beta\nu}f_{\alpha\beta}f_{\mu \nu}+ \frac{\sqrt{-g}}{4}\left ( 2\mathcal{L}_{\mathcal{F}}g^{\alpha\mu}g^{\beta\nu}f_{\alpha\beta}F_{\mu\nu}\right .\nonumber\\ &+\left.\frac{\mathcal L_{\mathcal{FF}}}{2} g^{\alpha\lambda}g^{\beta\epsilon}g^{\mu\delta}g^{\nu\tau} f_{\lambda\epsilon}f_{\delta\tau}F_{\alpha\beta}F_{\mu\nu} +\frac{\mathcal{L}_{\mathcal{GG}}}{2} g^{\alpha\lambda} g^{\beta\epsilon}g^{\mu\delta}g^{\nu\tau} f_{\lambda\epsilon}f_{\delta\tau}\tilde{F}_{\alpha\beta} \tilde{F}_{\mu\nu} \right ).\label{Lagg} \end{align} Performing the derivatives and recovering the flat space, \begin{align} t_H^{\gamma\rho}&=(1-\mathcal{L}_{\mathcal{F}}) f^{\gamma}_{\lambda}f^{\lambda \rho}+ \frac{\mathcal{L}_{\mathcal{FF}}} {2} f^{\mu\nu} F_{\mu\nu} (F^{\gamma\alpha}f_{\alpha}^{\rho}+F^{\rho\alpha}f_{\alpha}^{\gamma})+\frac{\mathcal{L}_{\mathcal{GG}}} {2} f^{\mu\nu} \tilde{F}_{\mu\nu} (\tilde{F}^{\gamma\alpha}f_{\alpha}^{\rho} +\tilde{F}^{\rho\alpha}f_{\alpha}^{\gamma})\nonumber\\ &+\frac{\eta^{\gamma\rho}}{4}\left ((1-\mathcal{L}_{\mathcal{F}})f_{\mu\nu}f^{\mu\nu}+\frac{\mathcal{L}_{\mathcal{FF}}}{2} f^{\mu\nu}F_{\mu\nu}f^{\alpha\beta}F_{\alpha\beta}+\frac{\mathcal{L}_{\mathcal{GG}}} {2}f^{\mu\nu} \tilde{F}_{\mu\nu}f^{\alpha\beta}\tilde{F}_{\alpha\beta}\right)\nonumber\\ &+\mathcal{L}_{\mathcal{F}} (F^{\gamma\alpha}f_{\alpha}^{\rho}+F^{\rho\alpha}f_{\alpha}^{\gamma})+\frac{\eta^{\gamma\rho}}{2}\mathcal{L}_{\mathcal{F}}f^{\mu\nu} F_{\mu\nu}. \label{defEMT1} \end{align} \noindent The obtained EMT is thus gauge invariant, conserved ($\partial_{\nu} t_H^{\mu\nu} = 0$) and symmetric ($t_H^{\mu\nu}=t_H^{\nu\mu}$). Under this form, it displays anisotropy and lack of tracelessness, i.e.
it has a non-vanishing trace \begin{equation} t^{\mu}_{H,\mu}=-\frac{1}{2} \left[\mathcal{L}_{\mathcal{FF}} (f_{\mu\nu}F^{\mu\nu})^2+\mathcal{L}_{\mathcal{GG}} (f_{\mu\nu}{\tilde F}^{\mu\nu})^2\right] \neq 0. \end{equation} Note that in this context the anisotropy is a consequence of the presence of a background magnetic field of fixed orientation, which breaks the rotational symmetry (as part of the Lorentz symmetry). In our case, physical quantities are invariant only under rotations around the $\pmb{\hat z}$-direction, and thus the vacuum becomes axisymmetric. At this point, and illuminated by conformal symmetry methods, we comment on the implications of the absence of {tracelessness} of $t_H^{\mu\nu}$. It is well known that a theory with a traceless EMT is invariant under scale transformations. This transformation changes neither the metric signature, nor the magnitude of four-vectors, nor the light cone, as shown in \cite{Cote:2019kbg,Wald}. As the second order Lagrangian $\mathcal{L}^{\rm ph-B}$ is not conformally invariant (see appendix \ref{apendice:3}) by the conformal scaling rules, $t_H^{\mu\nu}$ is not traceless. { The magneto-electric terms, proportional to $\mathcal{L}_{\mathcal{GG}}$ and $\mathcal{L}_{\mathcal{FF}}$, scale differently from the part of $t^{\gamma \rho}_{\rm H}$ which encloses the photon Maxwell term and the non-linear term proportional to $\mathcal{L}_{\mathcal{F}}$}, {explicitly, the transformed $\widetilde{t}^{\gamma \rho}_H$ reads} \begin{equation} \widetilde{t}^{\gamma \rho}_H \supset \Omega^{-6} t^{\gamma \rho}_{\rm ph}+\Omega^{-10}t^{\gamma \rho}_{\rm ph-B}\neq t^{\gamma \rho}_H, \end{equation} \noindent where $\Omega$ is the scale parameter. As expected, in the limit of vanishing external field the EH Lagrangian describes the Maxwell theory and its properties: a massless photon and a traceless EMT. Note that an analogy between the non-conformal Proca theory and NLED was made in \cite{Noble}, developing the concept of a ``quasi-photon'' that acquires a kind of effective mass.
However, the Proca EMT scales as $\widetilde{t}_{H}^{\rm Proca}\sim \Omega^{-4}t_{H}^{\rm Proca}$, different from the scaling of the NLED, so the analogy is not trivial. As it merits a more rigorous analysis, we leave it for future work \cite{Blaschke:2016ohs}. Further, the non-conformal invariance of the light cone, see \cite{Cote:2019kbg} and \cite{Wald}, indicates that the photon dispersion law in the magnetized vacuum described by the EH NLED model exhibits birefringence, corresponding to different values of the refraction indices in Eq.(\ref{indrefraccion}), as derived from the non-vanishing values of the magneto-electric terms. \noindent The explicit components of the photon contribution $t_{H}^{ij}$ obtained from Eq.(\ref{defEMT1}) can be written in terms of the photon fields ${\bf H}_{w}$, ${\bf B}_w$, ${\bf D}_w$ and ${\bf E}_w$ as \begin{equation} t_H^{00}=\frac{1}{2} (\mathbf{D}_w\cdot\mathbf{E}_w+ \mathbf{H}_w\cdot\mathbf{B}_w) +\mathcal{L}_{\mathcal{GG}}(\mathbf{B}_{e}\cdot\mathbf{E}_w)^2, \end{equation} while the components $i=j=1,2$ are given by \begin{equation} t^{ii}_H=\frac{1}{2}(\mathbf{D}_w\cdot\mathbf{E}_w+\mathbf{H}_w\cdot\mathbf{B}_w)-(D_{w,i}E_{w,i} +H_{w,i}B_{w,i})+\mathcal{L}_{\mathcal{FF}}(\mathbf{B}_{e}\cdot\mathbf{B}_w)^2.
\label{EMT-SWfield1} \end{equation} \noindent Instead, for $i=j=3$, we obtain \begin{equation} t_H^{33}=\frac{1}{2}(\mathbf{D}_w\cdot\mathbf{E}_w+\mathbf{H}_w\cdot\mathbf{B}_w) -(D_{w,3}E_{w,3} +H_{w,3}B_{w,3})-\mathcal{L}_{\mathcal{GG}} (\mathbf{B}_{e}\cdot\mathbf{E}_w)^2, \label{EMT-SWfield2} \end{equation} \noindent while for ${\bf B}_e=B_e{\bf \hat z}$ the only non-vanishing spatial off-diagonal component $i\ne j$ is \begin{equation} t_H^{13}=t_H^{31}=-\mathcal{L}_{\mathcal{GG}} (\mathbf{B}_{e}\cdot\mathbf{E}_w)E_{w,1}B_{e,3} +\mathcal{L}_{\mathcal{FF}} (\mathbf{B}_{e}\cdot\mathbf{B}_w)B_{w,1}B_{e,3}, \end{equation} and the non-vanishing components $0i$, corresponding to the momentum density $\pmb{ \mathcal{P}}_w$, are \begin{align} {t}_H^{02}&= {\frac{1}{2}}\left[ (E_{w,3}H_{w,1}-E_{w,1}H_{w,3})+(D_{w,3}B_{w,1}-D_{w,1}B_{w,3})\right. \nonumber\\ &\left .+{\mathcal L}_{\mathcal{FF} } E_{w,1} B_{e,3} (\mathbf{B_e}\cdot {\bf B}_w)+{\mathcal L}_{\mathcal{GG}} B_{w,1} B_{e,3} ({\bf B_e}\cdot {\bf E}_{w}) \right ]. \label{poyntingw} \end{align} \noindent { Besides, the AMT, a rank-3 tensor density, is connected to the symmetric EMT via the tensor relation \cite{Blaschke:2016ohs} \begin{align} \mathfrak{M}^{\mu\nu\lambda}&= x^{\mu}T_H^{\nu\lambda}(x)-x^{\nu}T_H^{\mu \lambda}(x), \label{cuatritorque1} \end{align} \noindent and is conserved, $\partial_{\lambda}\mathfrak{M}^{\mu\nu\lambda} =T_H^{\nu\mu}-T_H^{\mu \nu}=0,$ and antisymmetric in the $\mu \nu$ indices. } \noindent In \cite{Neves:2021tbt,Villalba-Chavez:2012pmx} the EMT has been calculated using the usual Noether method for the photon probe, starting from the same quadratic Lagrangian Eq.(\ref{Lagrafoton}); the result, however, is not symmetric. Special caution must be taken when comparing to those findings and proceeding to further physical interpretations. However, both formalisms retain some general properties of NLED in the presence of a fixed-orientation external field, i.e.
breaking of Lorentz symmetry, and thus the behavior of physical quantities can possibly be described, in the aforementioned cases, either ``\`a la Noether'' or ``\`a la Hilbert''. As pointed out in \cite{Baker:2020eqs}, the reason this occurs for the EH NLED described as in Eq.(\ref{Lagrafoton}) is the presence of the product of four metric tensors in the magneto-electric terms. Consequently, additional terms will appear in the Hilbert EMT when compared to the one obtained from Noether's method (see Lagrangian (\ref{Lagg})). Trivially, from the general non-linear theory in the limit $B_e\rightarrow 0$ one recovers the Maxwell classical contribution for the photon probe, whose derivation from Noether plus Belinfante symmetrization \cite{Belifante} or from the Hilbert technique reproduces the identical result and fulfills the requirements of gauge invariance, symmetry, conservation, and tracelessness. In our previous work \cite{Perez-Garcia:2022rzy}, the Hilbert-Einstein method was also used to obtain the diagonal components of the EMT in the weak-field limit of Eq. (\ref{nolinealL}). That calculation was performed considering the external plus wave field contributions to the strength tensor, i.e. $F^{\mu\nu} + f^{\mu\nu}$, but the two were neither explicitly separated nor was the latter approximated to its quadratic contribution. Hence, it is worth noting that the resulting EMT contains all possible interactions between the external magnetic field and the photon field probe, including self-interaction, and should nevertheless coincide with the $t^{\mu\nu}$ presented here once the approximations mentioned above are made. This result is not a priori straightforward to obtain \cite{Noble,Dereli et al.(2007)}. Once we have defined the tensorial quantities of interest, the EMT and AMT, which are space-time dependent and retain the dependence on the photon fields, we now proceed to provide their physical meaning.
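Since Eq.(\ref{defEMT1}) involves several contractions, a brute-force numerical evaluation is a useful sanity check. The sketch below is our own verification code: all field values and Lagrangian derivatives are arbitrary sample numbers (not values from this work), in units with $c=\mu_0=\epsilon_0=1$. It builds $F^{\mu\nu}$, $f^{\mu\nu}$ and $t_H^{\gamma\rho}$ component by component, and confirms that the resulting EMT is exactly symmetric and that its trace is governed solely by the magneto-electric couplings $\mathcal{L}_{\mathcal{FF}}$, $\mathcal{L}_{\mathcal{GG}}$.

```python
# Brute-force check of the Hilbert EMT: symmetry and trace.
# Conventions: metric (+,-,-,-); E_i = F_{0i}, B_i = -(1/2) eps_{ijk} F^{jk};
# all numerical inputs below are arbitrary samples, not values from the paper.
ETA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def eps4(i, j, k, l):
    """Totally antisymmetric symbol, eps4(0,1,2,3) = +1."""
    seq, sign = [i, j, k, l], 1
    if len(set(seq)) < 4:
        return 0
    for a in range(3):
        for b in range(3 - a):
            if seq[b] > seq[b + 1]:
                seq[b], seq[b + 1] = seq[b + 1], seq[b]
                sign = -sign
    return sign

def f_up(E, B):
    """Contravariant field-strength tensor from E and B."""
    F = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        F[i + 1][0], F[0][i + 1] = E[i], -E[i]      # F^{i0} = E_i
    F[1][2], F[2][1] = -B[2], B[2]                  # F^{jk} = -eps_{jkl} B_l
    F[2][3], F[3][2] = -B[0], B[0]
    F[3][1], F[1][3] = -B[1], B[1]
    return F

def lower(Au):
    return [[sum(ETA[m][a] * Au[a][b] * ETA[b][n] for a in range(4)
                 for b in range(4)) for n in range(4)] for m in range(4)]

def dual_up(Au):
    Al = lower(Au)
    return [[0.5 * sum(eps4(m, n, a, b) * Al[a][b] for a in range(4)
                       for b in range(4)) for n in range(4)] for m in range(4)]

def contract(Au, Bu):                     # A^{mu nu} B_{mu nu}
    Bl = lower(Bu)
    return sum(Au[m][n] * Bl[m][n] for m in range(4) for n in range(4))

def emt(fu, Fu, LF, LFF, LGG):
    """t_H^{gamma rho} assembled term by term from the text's expression."""
    Ftu = dual_up(Fu)
    fF, fFt, ff = contract(fu, Fu), contract(fu, Ftu), contract(fu, fu)
    def mix(Xu):                          # X^{gamma alpha} f_alpha^{rho}
        fmx = [[sum(ETA[a][m] * fu[m][r] for m in range(4))
                for r in range(4)] for a in range(4)]
        return [[sum(Xu[g][a] * fmx[a][r] for a in range(4))
                 for r in range(4)] for g in range(4)]
    ffm, Ffm, Ftfm = mix(fu), mix(Fu), mix(Ftu)
    return [[(1 - LF) * ffm[g][r]
             + 0.5 * LFF * fF * (Ffm[g][r] + Ffm[r][g])
             + 0.5 * LGG * fFt * (Ftfm[g][r] + Ftfm[r][g])
             + 0.25 * ETA[g][r] * ((1 - LF) * ff + 0.5 * LFF * fF**2
                                   + 0.5 * LGG * fFt**2)
             + LF * (Ffm[g][r] + Ffm[r][g])
             + 0.5 * ETA[g][r] * LF * fF
             for r in range(4)] for g in range(4)]

LF, LFF, LGG = 0.10, 0.03, 0.05                    # sample scalar derivatives
Fu = f_up([0.0, 0.0, 0.0], [0.0, 0.0, 2.0])        # background B along z
fu = f_up([0.3, 0.0, 0.1], [0.05, 0.2, 0.4])       # sample photon fields
t = emt(fu, Fu, LF, LFF, LGG)
fF = contract(fu, Fu)
fFt = contract(fu, dual_up(Fu))
asym = max(abs(t[g][r] - t[r][g]) for g in range(4) for r in range(4))
trace = sum(ETA[g][g] * t[g][g] for g in range(4))
print(asym)                                        # symmetric: -> 0
print(trace, -0.5 * (LFF * fF**2 + LGG * fFt**2))  # trace identity check
```

The same routine can be reused with other sample fields; in particular, setting $\mathcal{L}_{\mathcal{FF}}=\mathcal{L}_{\mathcal{GG}}=0$ makes the trace vanish, recovering the Maxwell-like behavior.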
To do that, we express the explicit components of the photon contribution to $T_H^{\mu\nu}$ in terms of the fields ${\bf B}={\bf B}_e+{\bf B}_w$, ${\bf E}={\bf E}_w$, obtaining magnitudes such as the angular momentum, energy density, Poynting vector or directional pressures. {On general grounds, conserved magnitudes stem from the dynamical equations the system obeys. Then, the EMT and AMT conservation laws lead to the standard procedure of defining physical quantities as integrals of {\it densities} over a spatial volume $V$ \cite{Villalba-Chavez:2012pmx,BirulaBook}. At the same time, the presence of oscillating fields makes it important to average over time $t$ and over the propagation coordinate, $y$, whose associated Fourier variables are the frequency $\omega$ and the momentum $\pmb{k}_\bot$, respectively. } It is worth recalling that the magneto-electric terms in $\mathcal{L}^{\rm (ph-B)}$ in Eq. (\ref{Lagrafoton}) can be rewritten as \begin{equation} \mathcal{L}^{\rm (ph-B)}= -\frac{1}{4}(1-{\mathcal{L}}_{\mathcal F}) f^{\mu\nu}f_{\mu\nu}- \frac{1}{2}(1-{\mathcal{L}}_{\mathcal F}) f^{\mu\nu}F_{\mu\nu}+ \frac{Q_B^{\kappa\lambda\mu\nu}}{8}f_{\mu\nu}f_{\kappa\lambda} , \label{Lagrafotonplebanski} \end{equation} \noindent where the tensor $Q_{B}^{\kappa\lambda\mu\nu}$ \cite{Neves:2021tbt} depends on the constant and uniform background field $B$, \begin{align} Q_B^{\kappa\lambda\mu\nu}=\mathcal{L}_{\mathcal{FF}}F^{\mu\nu}F^{\kappa\lambda} +\mathcal{L}_{\mathcal{GG}}\tilde{F}^{\mu\nu}\tilde{F}^{\kappa\lambda}, \end{align} \noindent being antisymmetric under the exchange $\mu\leftrightarrow\nu$ or $\kappa\leftrightarrow\lambda$ and symmetric under $\mu\nu\leftrightarrow \kappa\lambda$. Considering time independence, and keeping only the non-null components of the rank-4 tensor $Q_B$, it reduces to a rank-2 tensor in space.
If we define different contributions to $Q_B$ proportional to the wave fields, $Q_{HB}^{ij} \propto B^i_w B^j_w$, $Q_{DE}^{ij} \propto E^i_w E^j_w$, $Q_{DB}^{ij} \propto E^i_w B^j_w$ as in \cite{Villalba-Chavez:2012pmx}, one finds $Q_B^{0i0j}=-\frac{1}{2} Q_{DE}^{ij}$, $Q_{HB}^{ij}=\frac{1}{2}\epsilon^{ikl}\epsilon^{jpq}Q_B^{klpq}$ and $Q_{DB}^{ij}=-Q_{HE}^{ji}=\epsilon^{ipq}Q_B^{0jpq}$. { In the magnetic background case, the electric permittivity and magnetic permeability match those shown in Eqs. (\ref{epsilon})-(\ref{epsimu2}), see appendix (\ref{apendice:0}), with $Q_{DE}^{ij}=\mathcal{L}_{\mathcal{GG}}B^2\delta^{ij}$ and $Q_{HB}^{ij}=\mathcal{L}_{\mathcal{FF}}B^2\delta^{ij}$, while $Q_{DB}^{ij}=-Q_{HE}^{ji}=0$. } {In that way, the EH NLED vacuum would be described as a classical Maxwell anisotropic medium, and the photon's propagation would proceed via an effective metric fulfilling the null geodesic requirement \cite{novello}.} {Even though we have shown that in our case study, the EH NLED with the Lagrangian in Eq. (\ref{Lagrafoton}), the Hilbert EMT and AMT generally differ from the Noether ones, thus rendering the observables ambiguous, we now discuss those differences that may be relevant from the physical point of view.} From the AMT in Eq. (\ref{cuatritorque1}) we consider the angular momentum vector $\pmb{\mathcal{J}}$. As the obtained Hilbert EMT is symmetric, its conservation is straightforward \cite{Blaschke:2016ohs}. $\pmb{ \mathcal{J}}$ is given by volume integration as \begin{align} {\mathcal{J}}^k&=\frac{1}{2} \epsilon^{kij}\langle \mathfrak{M}^{ij0}\rangle_V =\frac{1}{2} \epsilon^{kij}\langle x^i T_H^{j0}-x^jT_H^{i0} \rangle_V, \label{torque} \end{align} \noindent with $i,j,k=1,2,3$. The mechanical relation between the EMT and AMT can then be established \cite{Blaschke:2016ohs}. 
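To make the effective-medium picture concrete, the following minimal numerical sketch (ours, not part of the derivation) evaluates the weak-field anisotropy of the permittivity and permeability of Eqs. (\ref{epsilon})-(\ref{epsimu2}), assuming the weak-field limits $\mathcal{L}_{\mathcal{GG}}\rightarrow\frac{7}{2}\xi$, $\mathcal{L}_{\mathcal{FF}}\rightarrow 2\xi$ with $\xi=\frac{4}{45}\frac{\alpha}{4\pi B_c^2}$ in Gaussian units, and neglecting the small non-linear part of $\mathcal{L}_{\mathcal{F}}$ (both assumptions on our part):

```python
import math

alpha = 1 / 137.036                # fine-structure constant
B_c = 4.414e13                     # Schwinger critical field [G]
xi = (4 / 45) * (alpha / (4 * math.pi)) / B_c**2   # weak-field EH coupling [G^-2]

B_e = 1.0e5                        # background field [G], deep weak-field regime
F = B_e**2 / 2                     # invariant F for a purely magnetic background

L_GG = (7 / 2) * xi                # weak-field limits of the Lagrangian derivatives
L_FF = 2 * xi

# Non-linear corrections to the diagonal medium tensors; the leading "1" is kept
# implicit so the tiny corrections are not lost to double-precision rounding.
d_eps_par = 2 * F * L_GG           # eps_33 minus the transverse permittivity
d_muinv_par = -2 * F * L_FF        # (mu^-1)_33 minus the transverse component

# First-order refractive indices of the two modes, n ~ 1 + L*B_e^2/2
n2_minus_1 = L_GG * B_e**2 / 2     # mode (2)
n3_minus_1 = L_FF * B_e**2 / 2     # mode (3)
delta_n = n2_minus_1 - n3_minus_1  # vacuum birefringence, (3/4) xi B_e^2
```

The resulting birefringence $\Delta n=\frac{3}{4}\xi B_e^2$ is minute for laboratory field strengths, which is why precision optical techniques are required to detect it.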
Hence, in our procedure a conserved angular momentum vector also leads to a zero torque, $\frac{d {\pmb {\mathcal J}}}{d t}=\pmb{ \tau}=0$, and a zero perpendicular component of the magnetization ($\pmb{ \tau}=\pmb{\mathcal M}_{\omega,\bot}\times{\bf B}_e$). However, as obtained in \cite{Villalba-Chavez:2012pmx}, a non-conserved angular momentum vector, a non-zero torque, and a perpendicular magnetization arise when a non-symmetric EMT is used, i.e. when the Noether or Hamiltonian method is selected. In such a case, an intrinsic photon spin could be defined \cite{Obukhov:2008dz}. Let us now write the expression for the photon energy density $\rho_w=t_H^{00}$ and its integrated value, the photon energy contribution, ${\mathcal P}_w^{0}=\langle \rho_w \rangle_V$. The space-time averaged form, denoted with $\langle .. \rangle$, for ${\mathcal P}_w^{0}$ is \begin{align} {\mathcal P}_w^{0}&=\left\langle \frac{D_{w}^2}{2\epsilon_{\bot}}+\frac{B_{w}^2}{2\mu_{\bot}}\right\rangle +\frac{1}{2}\left\langle\frac{3\mathcal{L}_{\mathcal{GG}} ({\bf B}_e\cdot {\bf D}_w){\bf D}_w}{\epsilon_{\bot}\epsilon_{\parallel}} -\mathcal{L}_{\mathcal{FF}}({\bf B}_e\cdot {\bf B}_w){\bf B}_w \right\rangle \cdot {\bf B}_e, \label{energy00} \end{align} { We note at this point that, in general, ${\mathcal P}_{w}^0$ is not the Hamiltonian $H$ because it does not fulfill the Legendre transformation $H={\bf D}_w \cdot {\bf E}_w({\bf D}_w,{\bf B}_w)-\mathcal{L}^{\rm (ph-B)}({\bf D}_w,{\bf B}_w)$. Only for the pure photon polarization mode (3) does ${\mathcal P}_w^0\equiv H$ hold. On the contrary, the Noether $T_N^{00}$ leads to the Hamiltonian \cite{Blaschke:2016ohs,Bialynicka-Birula:2014bja,Villalba-Chavez:2012pmx}. } Regarding the Poynting vector, $\pmb{{\mathcal P}}_w$, we can obtain it from the EMT components $t_H^{0i}$. 
With the background field fixed along the $\pmb{\hat{z}}$-direction, the only non-zero component is ${\mathcal P}_w^{2}=\langle t_H^{02}\rangle $ \begin{align} \pmb{{\mathcal P}}_w &= \left\langle D_{w,3}B_{w,1}\left(\frac{1}{\epsilon_{\parallel}\mu_{\bot}}+1\right)-D_{w,1}B_{w,3}\left(\frac{1}{\epsilon_{\bot}\mu_{\parallel}}+1\right)\right\rangle \pmb{\hat y} \nonumber\\ &+\langle E_{w,1} B_{e,3} (\mathbf{B_e}\cdot \mathbf{B}_w){\mathcal L}_{\mathcal{FF}}+B_{w,1}B_{e,3} ({\bf B_e}\cdot {\bf E}_{w}){\mathcal L}_{\mathcal{GG}} \rangle \pmb{\hat y}\label{Poynting}, \end{align} \noindent and it {does not} match the one obtained by the Noether method. { Due to the plane wave photon field propagating in the $\pmb{\hat{y}}$-direction, the only non-trivially fulfilled EMT conservation relation concerns the $i=2$ component.} The photon pressure components are related to the stress tensor (the spatial part of the EMT), $t_H^{ij}$. We thus consider the flux of the $i$-th component of momentum carried in the $j$-th direction and vice versa. As the background field points in the $\pmb{{\hat z}}$-direction, from Eq. (\ref{EMT-SWfield1}) the expression for $p_w^1=\langle t_H^{11}\rangle $, $p_w^2=\langle t_H^{22}\rangle$ reads \begin{equation} p_w^{i} =\frac{1}{2}\langle(\mathbf{D}_w\cdot\mathbf{E}_w+\mathbf{H}_w\cdot \mathbf{B}_w)-(D_{w,i}E_{w,i} +H_{w,i}B_{w,i})-\mathcal{L}_{\mathcal{FF}}(\mathbf{B}_{e}\cdot\mathbf{B}_w)^2\rangle, \label{EMT-SWfield11} \end{equation} where $i=1,2$, while for $p_w^3=\langle t_H^{33}\rangle$ we obtain \begin{equation} p_w^3 =\frac{1}{2}\langle(\mathbf{D}_w\cdot\mathbf{E}_w+\mathbf{H}_w\cdot\mathbf{B}_w) -(D_{w,3}E_{w,3} +H_{w,3}B_{w,3})-\mathcal{L}_{\mathcal{GG}} (\mathbf{B}_{e}\cdot\mathbf{E}_w)^2\rangle. 
\label{EMT-SWfield22} \end{equation} The expressions in Eqs. (\ref{EMT-SWfield11}), (\ref{EMT-SWfield22}) clearly show the anisotropy of photon propagation with respect to the fixed direction of the magnetic field $\mathbf{B}_{e}$, breaking the rotational symmetry that would otherwise exist in vacuum. This result reinforces the finding that, in a more refined treatment, the averaged pressure perpendicular to the magnetic field matches neither of the individual components $p_w^1$ and $p_w^2$. As the system becomes axially symmetric, it seems natural to define non-linear pressures along the parallel, $P_{w,\parallel}^{NL}$, and transverse, $P_{w,\bot}^{NL}$, directions with respect to the background external field under the form \begin{equation} P_{w,\bot}^{NL}=\frac{ p^1_{w} + p^2_{w}}{2},\quad P_{w,\parallel}^{NL}= p^3_{w} . \end{equation} Summarizing, the non-linear (NL) contributions to the photon energies, ${\mathcal P}^{0,NL}_w \equiv E_w^{NL}$, and directional pressures obtained with the Hilbert procedure take the following form for each light mode \begin{align} E_w^{NL,(2)}&\simeq ({-}\mathcal{L}_{\mathcal{F}}+{\frac{3}{2}}\mathcal{L}_{\mathcal{GG}}B_e^2)\langle E_{w}^2\rangle, \\ E_w^{NL,(3)}&\simeq -(\mathcal{L}_{\mathcal{F}}+\frac{1}{2}\mathcal{L}_{\mathcal{FF}}B_e^2)\langle E_{w}^2\rangle, \\ P^{NL,(2)}_{w,\parallel}&\simeq-\frac{3}{2}\mathcal{L}_{\mathcal{GG}} B_e^2\langle E_w^2\rangle,\\ P_{w,\bot}^{NL,(2)}&\simeq -\frac{1}{2}(\mathcal{L}_{\mathcal{F}}-\mathcal{L}_{\mathcal{GG}}B_e^2)\langle E_{w}^2\rangle,\\ P_{w,\parallel}^{NL,(3)}&\simeq\frac{1}{2}\mathcal{L}_{\mathcal{FF}} B_e^2\langle E_w^2\rangle,\\ P_{w,\bot}^{NL,(3)}&\simeq-\frac{1}{2}(\mathcal{L}_{\mathcal{F}}+3\mathcal{L}_{\mathcal{FF}}B_e^2)\langle E_{w}^2\rangle, \end{align} where we have used $\frac{D_w^2}{\epsilon_{(\parallel,\bot)}}=\frac{ B_w^2}{\mu_{(\bot,\parallel)}}$. 
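A quick numerical sketch of these expressions in the weak-field limit, under our reading that $\mathcal{L}_{\mathcal{F}}$ here denotes only its (negligible) non-linear part, which we set to zero, and with $\langle E_w^2\rangle$ normalized to unity:

```python
import math

alpha = 1 / 137.036
B_c = 4.414e13                          # Schwinger field [G]
xi = (4 / 45) * (alpha / (4 * math.pi)) / B_c**2

B_e = 0.1 * B_c                         # weak-field regime
Ew2 = 1.0                               # <E_w^2>, arbitrary normalization

L_F = 0.0                               # non-linear part of L_F, neglected (assumption)
L_GG = (7 / 2) * xi                     # weak-field limits
L_FF = 2 * xi

# Non-linear energies and pressures for modes (2) and (3)
E2 = (-L_F + 1.5 * L_GG * B_e**2) * Ew2
E3 = -(L_F + 0.5 * L_FF * B_e**2) * Ew2
P2_par = -1.5 * L_GG * B_e**2 * Ew2
P2_perp = -0.5 * (L_F - L_GG * B_e**2) * Ew2
P3_par = 0.5 * L_FF * B_e**2 * Ew2
P3_perp = -0.5 * (L_F + 3 * L_FF * B_e**2) * Ew2
```

The evaluation reproduces the hierarchy quoted in the text: $E^{NL,(2)}>E^{NL,(3)}$, $P^{NL,(2)}_{\bot}>P^{NL,(2)}_{\parallel}$ and $P^{NL,(3)}_{\parallel}>P^{NL,(3)}_{\bot}$.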
To further highlight the differences between the two procedures, we note that the Hilbert energies fulfill $E^{H,NL,(2)}-\mathcal{L}_{\mathcal{GG}}B_e^2=E^{N,NL,(2)}$ for mode (2), while for mode (3) the energy remains unchanged, $E^{H,NL,(3)} =E^{N,NL,(3)}$. Concerning pressures, we find that, depending on the photon polarization mode, differences appear in the parallel or perpendicular components. For polarization mode (2), $P^{N,NL, (2)}_{w,\bot}=P^{H,NL, (2)}_{w,\bot}$ while for the parallel component $P^{H,NL, (2)}_{w,\parallel}-\mathcal{L}_{\mathcal{GG}}B_e^2 =P^{N, NL, (2)}_{w,\parallel}$. On the contrary, for mode (3), the perpendicular pressure differs between the Hilbert and Noether methods, $P^{H,NL, (3)}_{w,\bot}-\mathcal{L}_{\mathcal{FF}}B_e^2=P^{N,NL, (3)}_{w,\bot}$, while the parallel one is the same, $P^{H,NL, (3)}_{w,\parallel}=P^{N,NL, (3)}_{w,\parallel}$. The photon pressure for monochromatic planar waves can alternatively be calculated from the stress tensor or the Poynting vector. As the propagation of the photon proceeds in the $\pmb{\hat{y}}$-direction, $P_{w,\bot}$ defines the target pressure. We have checked that for each polarization mode $P^{\rm ph}\equiv P^{NL,(2,3)}_{w,\bot}={{\mathcal P}_w^{02}}/2$, being therefore accessible to experimental setups \cite{radpres}. \begin{figure}[t] \centering \includegraphics[width=0.48\linewidth]{EHNversusB.pdf} \includegraphics[width=0.48\linewidth]{PressureNHversusB.pdf} \caption{Non-linear contribution to the energy $E^{NL}\equiv \langle \rho_w\rangle_V$ (left) and parallel and perpendicular pressures (right) as a function of magnetic field strength for both polarization modes. $E^{NL,(2)}>E^{NL,(3)}$ with increasing magnetic field strength. $E^{NL,(3)}=E^{N,NL,(3)}=E^{H,NL,(3)}$. For mode (2) $P^{NL,(2)}_{\bot}> P^{NL,(2)}_{\parallel}$ while it is the opposite, $P^{NL,(3)}_{\bot}< P^{NL,(3)}_{\parallel}$, for mode (3). We have also plotted $E^{N,NL,(2)}$, which would be derived from the Hamiltonian or Noether method. } \label{fig:energies} \end{figure} In Fig. 
(\ref{fig:energies}) we show the non-linear contribution to the photon energy $E^{NL}\equiv \langle \rho \rangle_V$ as obtained in the Hilbert (H) and Noether (N) procedures (left) and the parallel and perpendicular photon pressures (right) as a function of magnetic field strength for both polarization modes. For increasing magnetic field strength, { $E^{H,NL,(2)}>E^{H,NL,(3)}$} and $P^{NL,(2)}_{\bot}> P^{NL,(2)}_{\parallel}$, while for mode (3) it is the opposite, $P^{NL,(3)}_{\bot}< P^{NL,(3)}_{\parallel}$. Moreover, $E^{NL,(3)}=E^{N,NL,(3)}=E^{H,NL,(3)}$. \subsection{Magnetization} \label{sec:mag} The magnetic energy density associated with the photon can be obtained from the magnetization in the presence of an external magnetic field as \begin{equation} \mathcal{E}_{mag}=-\frac{1}{2} \pmb{\mathcal{M}}_w \cdot\mathbf{B}_e \label{Emag} \end{equation} \noindent where the magnetization is calculated generically from $\mathcal{L}^{(\rm ph-B)}$ in Eq. (\ref{Lagrafoton}) using ${\bf H}=-\frac{\partial \mathcal{L}^{(\rm ph-B)}}{\partial {\bf B}_e} = {\bf B}_e-\pmb{\mathcal{M}}$ or from the Hamiltonian $H^{(\rm ph-B)}$ \begin{equation} H^{(\rm ph-B)}=\int d^3x \left[\frac{D_{w}^2}{2\epsilon_{\bot}}+\frac{B_{w}^2}{2\mu_{\bot}} -\frac{1}{2}\left (\frac{\mathcal{L}_{\mathcal{GG}}}{\epsilon_{\bot}\epsilon_{\parallel}} ({\bf B}_e\cdot {\bf D}_w)^2 -\mathcal{L}_{\mathcal{FF}}({\bf B}_e\cdot {\bf B}_w)^2 \right)\right].\label{Hamiltonian} \end{equation} The photon contribution has the form \begin{equation} {\mathbf{\mathcal M}}_w= \mathcal{L}_{\mathcal{GG}} ( {\bf E}_w \cdot {\bf B}_e) {\bf E}_{w,3} +\mathcal{L}_{\mathcal{FF}} ({\bf B}_w \cdot {\bf B}_e ) {\bf B}_{w,3}. 
\label{magnetizationf1} \end{equation} {The magnetization depends on the polarization mode and, for the given values of the Lagrangian derivatives, it is always positive. This induces a paramagnetic nature for the photon.} Considering photon oscillating fields ${\bf B}_w={\bf E}_w={\bf E}_0 e^{i(\omega t-k_{\bot} y)}$ in Eq. (\ref{magnetizationf1}) and taking space-time averages $\langle \mathcal{M}_{w} \rangle$, only even powers of the wave fields give a non-vanishing contribution. Further, when considering effective measurable energy values we must integrate over volume, i.e. compute $\langle \mathcal{E}_{mag} \rangle_V$. This allows us to compute the effective magnetic moment of a photon probe defined as \begin{equation} \mid \pmb{\mu}_{ph}\mid=-\frac{d \langle \mathcal{E}_{mag} \rangle_V } {d B_e} \frac{1}{V\langle N_V\rangle}, \end{equation} where $V$ is the volume and $N_V^{(i)}$ is the number density of the $i$-th mode ($i=2,3$), given by $N_V^{(2,3)}= \frac{1}{2}\frac{E_0^2}{\omega^{(2,3)}}$ \cite{Villalba-Chavez:2012pmx}. $\pmb{\mu}_{ph}$ characterizes the interaction of propagating photons with the vacuum virtual pairs in the presence of a magnetic field. More specifically, for the two modes $\mid \pmb{\mu}_{ph}^{(2,3)} \mid$ reads \begin{align} \mid \pmb{\mu}_{ph}^{(2)}\mid &=\dfrac{\alpha}{16\pi}\dfrac{1}{b^3} \left( 3-12 \zeta^{(1,1)}\left( -1,\dfrac{1}{2b} \right )+3\psi\left( \dfrac{1}{2b} \right ) + b \left[-3 +\log \Gamma \left(\dfrac{1}{2b} \right ) \left (\dfrac{\pi}{b} \right )^2\right.\right.\nonumber\\ & +\left.\left.\psi^{(1)}\left (1+\dfrac{1}{2b}\right )+ 2b^2\right ]\right )\dfrac{\mid\bf{k}_{\bot}\mid}{B_c}, \end{align} \begin{align} \mid \pmb{\mu}_{ph}^{(3)}\mid &=\dfrac{\alpha}{8\pi}\dfrac{1}{b^4}\left ( -\psi^{(1)}\left (1+\dfrac{1}{2b}\right ) \right. + b \left [ 4- 4 \psi\left (1+\dfrac{1}{2b}\right )+2\psi\left (\dfrac{1}{2b}\right ) \right ] \nonumber\\ &+b^2 \left[4-2 \log(2\pi) +\left.\left. 
4\log\left (\Gamma \left( \dfrac{1}{2b} \right ) \left (\dfrac{\pi}{b} \right )^{1/2}\right ) \right]\right )\dfrac{\mid{\bf{k}_{\bot}}\mid}{B_c}, \right. \end{align} \noindent where now $b=B_e/B_c$, $\psi^{(1)}=\partial_h\psi[h]$, with $\psi$ the PolyGamma or Digamma function (the first derivative of ${\rm ln}\, \Gamma$), $\zeta^{(1,1)}[s,h]=\partial_h \zeta^{\prime}$ with $\zeta^{\prime}=\partial_s\zeta[s,h]$, and $\zeta[s,h]$ the Hurwitz zeta function \cite{Adam}. In appendix (\ref{apendice:1}) we provide expressions for $\psi$ and $\zeta^{\prime}$ in the weak and strong field limits. The photon effective magnetic moment inherits the orientation of the magnetization: for both modes it is parallel to the external magnetic field, as determined by the corresponding photon field component. In the weak field limit, the photon effective magnetic moment yields for modes (2) and (3) \begin{equation} \mid \pmb{\mu}_{ph}^{WF\,(2)}\mid =\frac{7}{2} \xi\mid \pmb{k}_{\bot}\mid B_e =\frac{\alpha}{4\pi}\frac{28}{45} \frac{B_e}{B_c^2}\mid \bf{k}_{\bot}\mid, \end{equation} \begin{equation} \mid \pmb{\mu}_{ph}^{WF\,(3)}\mid=2 \xi\mid \pmb{k}_{\bot}\mid B_e =\frac{\alpha}{4\pi}\frac{8}{45} \frac{B_e}{B_c^2}\mid{\bf k}_{\bot}\mid. \end{equation} Instead, in the strong field limit only mode (2) contributes, tending to a constant value found to be about a thousand times smaller than the anomalous electron magnetic moment, $\mu_e$, \begin{align} \mid \pmb{\mu}_{ph}^{SF\,\,(2)}\mid=\frac{\alpha}{3\pi}\frac{\langle E_w^2 \rangle}{B_c}\sim \frac{\alpha }{3\pi}\frac{e}{2m_e}\frac{\mid {\bf k}_{\bot}\mid}{m_e}\sim 10^{-3}\mu_e, \quad\quad \mid \pmb{\mu}_{ph}^{SF\,\,(3)} \mid=0. \end{align} \noindent As was pointed out in \cite{Villalba-Chavez:2012pmx}, drawing an analogy between the parallel magnetic moment of a photon probe, $\pmb{\mu}_{ph}$, and the spin magnetic moment of the electron, the former could be interpreted as a consequence of the existence of spin for a photon probe. 
However, it is merely an analogy, since spin for particles has to obey additional quantum mechanical properties that the bosonic photon field does not satisfy. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{mmoment2y3.pdf} \caption{Effective magnetic moment as a function of magnetic field strength, $b=B/B_c$, for mode (2) (mode (3)), shown as a red (blue) solid line. The weak (strong) field limit is depicted by the orange dashed (grey dot-dashed) line for mode (2). The weak field limit for mode (3) is depicted by the dashed green line. } \label{fig:MM} \end{figure} In Fig. (\ref{fig:MM}) we plot the photon effective magnetic moment as a function of the dimensionless parameter $b=B/B_c$ for light polarization modes (2) and (3). For comparison, we have depicted the magnetic moment in the weak and strong field limits as well. One can see that the effective magnetic moment for mode (2) (mode (3)) is over- (under-)estimated if we take the approximate values arising from the weak and strong limits in their range of validity. Then, for a magnetic field $B_e \gtrsim 2B_c$ the effective magnetic moment of photons polarized in mode (2) tends asymptotically to a constant value. Instead, the photon magnetic moment for mode (3) slowly decreases with the magnetic field strength, its strong field limit value being zero. As a complementary remark, let us note that the magnetic moment of the electron in the presence of a magnetic field, considering radiative corrections in QED, was studied in \cite{Ferrer:2015wca} in the weak and strong magnetic field limits, providing the expressions ($\frac{\alpha}{2\pi}\frac{e}{2m_e}$) and ($m\frac{\alpha}{4\pi} {\rm ln}\frac{ e B}{m_e^2}$), respectively. So far, there is no general expression for the whole range of magnetic fields. However, our study contributes to clarifying the physical implications of those limits. 
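Two elementary numerical checks of the limits above; the photon energy used in the strong-field estimate is an illustrative choice of ours, not fixed by the text:

```python
alpha = 1 / 137.036
m_e_eV = 510998.95                  # electron rest energy [eV]

# Weak-field limit: the ratio of the two mode moments is fixed by the 28/45 and
# 8/45 coefficients and is independent of B_e and |k_perp|:
ratio_wf = (28 / 45) / (8 / 45)     # = 7/2

# Strong-field limit, mode (2): mu ~ (alpha/3pi)(e/2m_e)(|k_perp|/m_e).
# Relative to the anomalous electron moment mu_e = (alpha/2pi)(e/2m_e) this is
# (2/3)(hbar*omega / m_e c^2); a ~0.8 keV probe (illustrative assumption, not
# stated in the text) yields the quoted ~10^-3 suppression:
hbar_omega_eV = 766.5
ratio_sf = (2 / 3) * (hbar_omega_eV / m_e_eV)
```

The strong-field suppression factor thus grows linearly with the probe energy, so harder photons would exhibit a comparatively larger effective moment.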
The physical picture that emerges for the effective magnetic moment of photons is thus a consequence of their propagation in a magnetized vacuum. Like photons in a medium, photons in vacuum behave as quasi-particles interacting with virtual pairs and external magnetic fields via this effective moment. \section{Experimental prospects for testing vacuum properties with photon propagation with EH NLED} \label{sec:exp} Indirect observation of the photon magnetization and magnetic moment is possible in the astrophysical scenario, as already shown in the literature \cite{Mielniczuk,Turolla:2017tqt}. When photons cross the magnetic field in the magnetosphere of a NS they are deflected and their trajectories are bent, giving rise to magnetic lensing effects in which a non-vanishing photon magnetic moment plays an important role, see \cite{PhysRevLett.94.161101}. Instead, on Earth, pulsating fields of much lower strength can be generated in modern laser facilities in tabletop experiments \cite{PVLAS:2007wzd,Tommasini:2009nh}, constituting yet another interesting opportunity to measure quantities such as the photon magnetization, the photon effective magnetic moment or the photon pressure in the weak field limit, as discussed. As we have shown, the photon magnetization and effective magnetic moment are mainly determined by the magneto-electric terms in the Lagrangian, $1/2\mathcal{L}_{\mathcal{GG}}({\bf B}_e \cdot {\bf E}_w)^2$ and $1/2\mathcal{L}_{\mathcal{FF}}({\bf B}_e\cdot {\bf B}_w)^2$, depending on the polarization mode. In the EH weak field limit ${\mathcal L}_{\mathcal {GG}} B_e^2 \rightarrow 7/2\, \xi B_e^2$ while ${\mathcal L}_{\mathcal {FF}} B_e^2\rightarrow 2\xi B_e^2$, respectively. Therefore, experiments aimed at extracting magnetization properties, even if challenging, would give us the $\mathcal{L}_{\mathcal{GG}}$ and $\mathcal{L}_{\mathcal{FF}}$ values independently. As already mentioned, the photon pressure, $P^{\rm ph}$, corresponds to the perpendicular component $P_{w,\bot}$. 
For mode (2) it is written as $P^{\rm ph}=P_0+\mathcal{M}B_e$, with $P_0$ the classical photon pressure, increased by a quantity proportional to the magnetization of the photon; i.e. it will differ for each polarization mode and is likely better suited to discriminate against the usually quoted Cotton-Mouton birefringence. The latter is sensitive to the refraction index difference $\Delta n$, related to the difference $(\mathcal{L}_{\mathcal{GG}}-\mathcal{L}_{\mathcal{FF}})B_e^2$. Experimental setups based on light scattering \cite{Ferrando:2007pgk} and Dirac materials \cite{diracmaterials} have already obtained measurements of magnetization that seem promising. For the latter, the associated values of the Schwinger fields are $E_c\sim 10^5$ V/cm and $B_c\sim 1$ T, both being experimentally accessible and providing a platform to explore the strong field regime of QED and to observe a new class of magneto-electric effects, such as a high electric field modulated magnetization and a very large enhancement of the dielectric constant. These effects are also highly anisotropic, as they depend on the relative orientation of the E/B fields and on the crystallographic orientation. Considering photon probes from a laser beam, whose intensity is related to the electric and magnetic fields of the photon by $\frac{I}{c}=\epsilon_0 E_w^2=\frac{B_w^2}{\mu_{0}}$, typical values of the magnetic field currently generated in the laboratory reach around $\sim 45 \,\rm T=4.5\times 10^5$ G, while future intensities of $I=10^{28}$ $\rm W/m^2$, in CGS units $I=10^{24}$ $\rm W/cm^2$, will require $\sim 10^2$ PW lasers. Very recent advances on this front \cite{megaT} include the possibility of generating ultrahigh magnetic fields of the MegaTesla order using microtube implosions driven by ultraintense and ultrashort laser pulses in a novel scheme. 
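For such benchmark laser parameters, the expected vacuum magnetization can be sketched numerically in Gaussian units; the identification $\langle E_w^2\rangle = 8\pi I/c$ and the weak-field value $\xi=\frac{4}{45}\frac{\alpha}{4\pi B_c^2}$ are our assumptions about the conventions in use:

```python
import math

alpha = 1 / 137.036
c = 2.998e10                     # speed of light [cm/s]
B_c = 4.414e13                   # Schwinger field [G]
xi = (4 / 45) * (alpha / (4 * math.pi)) / B_c**2   # [G^-2]

I = 1e24 * 1e7                   # 10^24 W/cm^2 converted to erg s^-1 cm^-2
B_e = 3e5                        # 30 T expressed in Gauss

u = I / c                        # beam energy density [erg/cm^3]
Ew2 = 8 * math.pi * u            # <E_w^2> in G^2 (assumed normalization)

M2 = (7 / 2) * xi * Ew2 * B_e    # mode (2) vacuum magnetization [G]
M3 = 2 * xi * Ew2 * B_e          # mode (3) vacuum magnetization [G]
```

Under these assumptions the sketch lands on magnetizations of order $10^{-4}$ G, with the mode ratio fixed at $7/4$.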
For a future benchmark intensity, $I=10^{24}$ $\rm W/cm^2$, and $B_e \sim 30 \,\rm T=3\times 10^5$ G, we can estimate the theoretical vacuum magnetization for both photon modes as $\mathcal{M}_w^{(2)}=\frac{7}{2} \frac{\xi I}{c} B_e \sim 2.3 \times 10^{-4}\,\rm G$ and ${\mathcal M}_w^{(3)}=2 \frac{\xi I}{c} B_e \sim 1.3 \times 10^{-4}\,\rm G$. To illustrate with numbers, at atmospheric conditions of pressure $P$ and temperature $T$, one can obtain the estimate of the Inverse Cotton-Mouton (ICM) magnetization, $\mathrm{M}_{ICM}^{\text {atom}}$, finding that the values above are two orders of magnitude higher than those for Helium magnetization \cite{Rizzo:2010di} within reach in current facilities \cite{Karbstein:2019oej,King}. However, we should keep in mind that ionization appears at different intensities, typically $I>10^{19}$ $\rm W/m^2$ depending on the noble gas, and the consequent ion current could somewhat perturb the ICM measurement. Recently, along this same line, it has been stated \cite{Roso} that performing photon-photon collision experiments using two counterpropagating laser beams could yield very promising results for intensities close to $I\sim 10^{24}$ $\rm W/cm^2$. \section{Conclusions} \label{conclusions} Starting from the EH NLED, the EMT and AMT associated with a photon probe propagating transverse to a constant and uniform background magnetic field have been calculated in vacuum using the Einstein-Hilbert method. Our calculations have been performed using NLED in the soft-photon regime, which assures the validity of our results up to $B_e \sim 430 B_c$. Thus, for applications in astrophysical scenarios or studies related to ultraintense laser experiments, concerning strengths below the critical field, $B_c$, our findings remain valid. 
In particular, the energy density, pressures, angular momentum, magnetization, and photon effective magnetic moment have been derived for those strengths in a more general and accurate fashion for the selected NLED, rather than using the usual weak and strong field limits. By construction, the robust and elegant Einstein-Hilbert method guarantees a symmetric, conserved, and gauge-invariant EMT. Besides, the EMT found is anisotropic and is not traceless. Using tools of conformal symmetry, we have analyzed the physical meaning of the non-vanishing trace and have found that neither the theory nor the light cone is invariant under conformal transformations; consequently, the vacuum displays birefringence. This can also be interpreted as a sort of effective mass acquired by the quasi-photon as it propagates through the magnetized vacuum. Besides, we have confirmed that, in general, the assumption of the equivalence of the EMT from the Hilbert method and that from the improved Noether technique used in \cite{Villalba-Chavez:2012pmx} is not fulfilled. We have also identified this effect as originating in the magneto-electric terms of the Lagrangian {and the coupling of an EM field with curved space-time. Usually, EH NLED is considered as a modified Maxwell theory in a medium, which reduces the coupling to curved space-time to a three-dimensional space, preserving the equivalence between the EMT obtained with the Noether and Hilbert methods for the Maxwell non-linear theory}. Despite finding that physical magnitudes thus differ, some general properties of the non-linear theory, like the EMT anisotropy and the non-vanishing trace, are valid in both formalisms. As the EMT is anisotropic, rotational symmetry is broken and replaced by an axial symmetry driven by the direction of the external magnetic field. We choose the Hilbert method and derive some observables from it. For the non-linear pressures we find spatial components that differ for each polarization mode. 
Physically, this could be interpreted in terms of magnetized electron-positron pairs that {\it transfer} momentum to photons depending on their specific polarization mode. In addition, averaged parallel and perpendicular pressure components can be defined. We can associate the transverse one with the photon effective pressure in the vacuum. Therefore, the photon probe feels that the EH NLED vacuum is shrunk in the direction transverse to the magnetic field while the parallel direction remains unaffected. In this qualitative sense, vacuum anisotropic pressures bear some analogy with the Casimir effect. Both effects lead to differences in pressures and share a common origin, axial symmetry: the former is determined by the external magnetic field and the latter by boundary conditions, i.e. the presence of plates in the Casimir effect \cite{faccio,Elizabeth,Marklund:2008gj}. In addition, we have also studied the magneto-electric properties of the magnetized vacuum, finding that it behaves as a paramagnetic medium. A photon effective magnetic moment has also been calculated. For mode (2) it agrees with the result obtained in \cite{Mielniczuk}. In the weak field limit, we reproduce the linear behavior in the magnetic field obtained in a complementary way from radiative corrections of QED and the corresponding dispersion equations \cite{Elizabeth}. Instead, for strengths $B_e\gg 2B_c$, the effective photon magnetic moment tends to a constant value, in accordance with results obtained in the strong magnetic field limit \cite{Villalba-Chavez:2012pmx,Mielniczuk}. Finally, based on previous works regarding the experiments proposed in \cite{Ferrando:2007pgk,Tommasini:2009nh} and promising measurements of the vacuum magnetization of Dirac materials, we have discussed the possibility of measuring the vacuum magnetization and the photon pressure in future experiments, although these present important challenges. \section*{Acknowledgments} The authors thank C. Albertus, L. Roso, L. Volpe, A. W. 
Romero Jorge for their invaluable remarks and discussions. A.P.G. and A.P.M. acknowledge the support of the Agencia Estatal de Investigaci\' on through the grant PID2019-107778GB-100 and from Junta de Castilla y Le\'on, SA096P20 project. The work of E.R.Q. was supported by the project of No. NA211LH500-002 from AENTA-CITMA, Cuba. \newpage \section{Appendix} \label{apendice} \subsection{Derivatives of the effective EH Lagrangian}\label{apendice:1} As mentioned, in section \ref{section:1} the Lagrangian derivatives can be calculated for arbitrary external magnetic field $B\equiv B_e$ and the result reads as follows \begin{align}\label{LFLFFLGG} \mathcal{L}_{\mathcal{F}}=-\mu_{0}^{-1}-\frac{\alpha}{2 \pi \mu_{0}} \int_{0}^{\infty} d z e^{-\frac{B_{\text {c }}}{B} z}\left[-\frac{2}{3 z}-\frac{1}{z \sinh ^{2}(z)}+\frac{\operatorname{coth}(z)}{z^{2}}\right],\\ \mathcal{L}_{\mathcal{FF}}=-\frac{\alpha}{2 \pi \mu_{0}B^2} \int_{0}^{\infty} d z e^{-\frac{B_{\text {c }}}{B} z}\left[2\frac{z\operatorname{coth}(z)-1}{z \sinh ^{2}(z)} +\frac{1}{z \sinh ^{2}(z)} -\frac{\operatorname{coth(z)}}{z^{2}}\right],\\ \mathcal{L}_{\mathcal{GG}}=-\frac{\alpha}{2 \pi \mu_{0}B^2} \int_{0}^{\infty} d z e^{-\frac{B_{\text {c }}}{B} z}\left[-\frac{2}{3}\operatorname{coth} (z) -\frac{1}{z \sinh ^{2}(z)}+\frac{\operatorname{coth}(z)}{z^{2}}\right]. \end{align} In the limit $E\rightarrow 0$, i.e. $b\rightarrow0$ we can solve these integrals using functional regularization so that using $\mathcal{L}^{EH}$ in Eq. 
(\ref{eqEH}) \begin{align} \label{derivativesB} \mathcal{L}_{\mathcal{F}}=\frac{\partial \mathcal{L}^{EH}}{\partial \mathcal{F}}&=-\mu_{0}^{-1}-\frac{\alpha}{2 \pi \mu_{0}}\left (\frac{1}{3}+2 h^{2}-8 \zeta^{\prime}(-1, h)+4 h {\rm ln} \Gamma(h)-2 h {\rm ln} h+\frac{2}{3} {\rm ln} h-2 h {\rm ln} 2 \pi\right ),\nonumber \\ \mathcal{L}_{\mathcal{FF}}=\frac{\partial^2 \mathcal{L}^{EH}}{\partial^2 \mathcal{F}}&=\frac{\alpha}{2\pi \mu_{0}B^2}\left (\frac{2}{3} + 4 h^2 \psi(1+h)-2h-4h^2-4h {\rm ln} \Gamma(h) +2h {\rm ln} 2\pi -2h{\rm ln} h\right ), \nonumber \\ \mathcal{L}_{\mathcal{GG}}=\frac{\partial^2 \mathcal{L}^{EH}}{\partial^2 \mathcal{G}}&=\frac{\alpha}{2\pi\mu_{0}B^2}\left (-\frac{1}{3}-\frac{2}{3}\left( \psi(1+h) -2h^2 +(3h)^{-1}\right ) +8\zeta^{\prime}(-1,h)\right . \nonumber \\ &\left .-4h {\rm ln} \Gamma(h) + 2h {\rm ln} 2\pi +2h {\rm ln} h \right ), \end{align} \noindent where $h=\frac{B_c}{2B_e}$, and the quantum corrections are proportional to the fine structure constant, $\alpha$. $\psi$ denotes the PolyGamma or Digamma function (the first derivative of ${\rm ln}\, \Gamma$) and $\zeta^{\prime}$ is the first derivative of the Hurwitz zeta function with respect to the first argument. { Let us remark that the expansion of the derivatives of the effective Lagrangian $\mathcal{L}^{EH}$ and their integration, Eq. (\ref{derivativesB}), for $\mathcal{G}=0$, could be extended to the study of photon propagation in a pure background electric field and/or in a background of orthogonal electric and magnetic fields, with $h\rightarrow \frac{B_c}{2\sqrt{2\mathcal{F}}}$. 
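As a numerical sanity check, the special functions entering Eq. (\ref{derivativesB}) can be evaluated exactly at integer argument, using ${\rm ln}\,\Gamma$, the harmonic-number form of the Digamma function, and the identity $\zeta^{\prime}(-1,h)=\zeta^{\prime}(-1)+\sum_{k=1}^{h-1}k\,{\rm ln}\,k$ with $\zeta^{\prime}(-1)=\frac{1}{12}-{\rm ln}\,A$ ($A$ being the Glaisher constant), and compared against their standard large-$h$ asymptotic expansions:

```python
import math

h = 5  # integer argument so exact reference values are available

# Exact reference values
gamma_e = 0.5772156649015329          # Euler-Mascheroni constant
glaisher = 1.2824271291006226         # Glaisher-Kinkelin constant A
psi_exact = sum(1 / k for k in range(1, h + 1)) - gamma_e   # psi(1+h) = H_h - gamma
lng_exact = math.lgamma(h)                                  # ln Gamma(h)
zp_exact = (1 / 12 - math.log(glaisher)
            + sum(k * math.log(k) for k in range(1, h)))    # zeta'(-1, h)

# Standard large-h (weak-field) expansions
psi_asym = math.log(h) + 1 / (2 * h) - 1 / (12 * h**2) + 1 / (120 * h**4)
lng_asym = ((h - 0.5) * math.log(h) - h + 0.5 * math.log(2 * math.pi)
            + 1 / (12 * h) - 1 / (360 * h**3))              # Stirling series
B2 = h**2 - h + 1 / 6                                       # second Bernoulli polynomial
zp_asym = 1 / 12 - h**2 / 4 + 0.5 * math.log(h) * B2 + 1 / (720 * h**2)
```

Already at $h=5$ the truncated expansions agree with the exact values to better than $10^{-4}$, supporting their use in the weak-field regime $h>1$.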
} For the weak field (WF) case $h>1$ the functions ${\zeta}^{\prime} (-1,h)$, ${\rm ln}\,\Gamma(h)$ and $\psi(1+h)$ have the asymptotic expressions \begin{align} &{\zeta}^{\prime} (-1,h)=\frac{1}{12}-\frac{h^2}{4}+\frac{{\rm ln} h}{2}B_2(h) +\underbrace{\int_{0}^{\infty}\frac{e^{-hx}}{x^2}\left ( \frac{1}{1-e^{-x}}-\frac{1}{x}-\frac{1}{2}-\frac{x}{12}\right )}_{\frac{1}{720}\frac{1}{h^2}}, \quad Re (h) >0\nonumber\\ &{\rm ln}\Gamma(h)=-\frac{{\rm ln} h}{2}+\frac{1}{12 h}-\frac{1}{360 h^3} +h{\rm ln} h -h+\frac{1}{2}{\rm ln} 2\pi,\nonumber\\ &\psi(1+h)={\rm ln} h +\frac{1}{2h}-\frac{1}{12h^2}+\frac{1}{120h^4}, \end{align} and $B_2(h)=h^2-h+\frac{1}{6}$ is the second Bernoulli polynomial. Then, $\mathcal{L}_{\mathcal{F}}$, $\mathcal{L}_{\mathcal{FF}}$ and $\mathcal{L}_{\mathcal{GG}}$ become \begin{equation}\label{LF} \mathcal{L}_{\mathcal{F}}=-1-\frac{\alpha}{2\pi}\left (\frac{1}{3} +2h^2-8(\frac{1}{12} -\frac{h^2}{2}+\frac{{\rm ln} h}{2}B_2(h)+\frac{1}{720h^2}) +4h\,{\rm ln}\,\Gamma(h)+2h {\rm ln} \,(2\pi) +2h{\rm ln} h\right ), \end{equation} \begin{align}\label{LFLFFLGGex} \mathcal {L}_{\mathcal{GG}}=\frac{\alpha}{2\pi B_e^2} \left ( \right.-\frac{1}{3}-\frac{2}{3}\left (\psi(1+h) +2h^2 +\frac{1}{3h}\right )-2h\left(\frac{1}{6h}-\frac{1}{180h^3}\right )-\frac{2}{3}{\rm ln}(h)\nonumber\\ -\left. 
4h\left (\frac{{\rm ln}(h)}{6}+\frac{1}{360h^2}\right )\right ),\nonumber\\ \mathcal {L}_{\mathcal{GG}}\xrightarrow{WF}\frac{\alpha}{2\pi B_e^2}\left (\frac{1}{18h^2}+\frac{1}{90h^2}+\frac{1}{360h^2 }\right ) =\frac{7}{2}\frac{8}{45}\frac{\alpha}{4\pi B_c^2}=\frac{7}{2}\xi,\\ \mathcal{L}_{\mathcal{FF}}=\frac{\alpha}{2\pi B_e^2}\left ( \frac{2}{3} -2h\left (\frac{1}{6h}-\frac{1}{180h^3}\right ) +6h^2\left ( \frac{2}{3}\psi(1+h) +2h^2 +\frac{1}{3h}\right ) \right ), \end{align} \begin{equation} \mathcal {L}_{\mathcal{FF}}\xrightarrow{WF}\frac{\alpha}{2\pi B_e^2}\left (\frac{1}{90h^2}+\frac{1}{90h^2}\right )=\frac{\alpha}{2\pi B_e^2}\left (\frac{4}{90h^2}\right )=\frac{8}{45}\frac{\alpha}{4\pi B_c^2}=2\xi, \end{equation} The asymptotic behavior for $h<1$, i.e. the strong field (SF) limit, of the functions ${\zeta}^{\prime} (-1,h)$, ${\rm ln}\,\Gamma(h)$ and $\psi(1+h)$ reads \begin{align} {\zeta}^{\prime} (-1,h)&=-\frac{h^2}{4}+\frac{{\rm ln} h}{2}B_2(h),\\ {\rm ln}\,\Gamma(h)&=-\frac{{\rm ln} h}{2}+h{\rm ln} h -h+\frac{1}{2}{\rm ln} 2\pi,\\ \psi(1+h)&=-\gamma +\frac{\pi^2}{6}h-\zeta(3)h^2..., \end{align} \noindent and in this limit $\mathcal{L}_{\mathcal{F}}$, $\mathcal{L}_{\mathcal{FF}}$ and $\mathcal{L}_{\mathcal{GG}}$ have the form \begin{align} \mathcal{L}_{\mathcal{F}}=\frac{\alpha}{3\pi} {\rm ln}\, (B/B_c),\\ \mathcal{L}_{\mathcal{GG}}=\frac{\alpha}{3\pi B_e B_c},\\ \mathcal{L}_{\mathcal{FF}}=\frac{1}{B_e^2}\frac{\alpha}{3\pi}. \end{align} \subsection{Maxwell equations for magnetized vacuum in EH NLED} \label{apendice:0} We summarize the equations of motion and dispersion laws for a photon probe in the presence of an external fixed background magnetic field. They can be extracted directly from the Lagrangian in Eq. (\ref{lagfF}) presented before. We first select the direction of a photon probe propagating in the $\pmb{\hat{y}}$-direction, transverse to a fixed constant magnetic field along the $\pmb{\hat{z}}$-direction, ${\bf B}=B(0,0,1)$. 
Therefore the equation of motion is determined using the Euler-Lagrange equations $\frac{1}{\sqrt{-g}}\partial_{\nu} \left[ \sqrt{-g}\, { F}^{\mu \nu}\right]=0$. We can express the non-linear Maxwell equations in the local flat geometry in the more familiar form, focusing on the wave fields, by using the electric displacement field, $\mathbf{D}_w$, and the magnetic field, $\mathbf{H}_w$, \begin{align} \frac{\partial\mathbf{D}_w}{\partial t}=-\nabla \times \mathbf{H}_w, \quad \nabla \cdot \mathbf{D}_w=0, \label{magpo1} \end{align} \noindent and a second pair for ${\bf E}_w$ and ${\bf B}_w$ \begin{align} \nabla \cdot \mathbf{B}_w=0,\quad \frac{\partial \bf{B}_w}{\partial t}=-\nabla \times \mathbf{\bf{E}}_w, \label{magpo2} \end{align} \noindent with the constitutive equations for the $\mathbf{D}_w$ and $\mathbf{H}_w$ components being \begin{align} D_{w,i}&=\frac{\partial}{\partial E_{w,i}}[ \mathcal{L}^{\rm (ph-B)}]=\epsilon_{ij}E_{w,j}=E_{w,i} + P_{w,i},\\ H_{w,i}&=-\frac{\partial}{\partial B_{w,i}}[ \mathcal{L}^{\rm (ph-B)}]=(\mu^{-1})_{ij}{B}_{w,j} =B_{w,i}-M_{w,i}, \label{mag} \end{align} with $i=1,2,3$. As in an optical medium, ${\bf P}_w$ and ${\bf M}_w$ are the resulting polarization and magnetization of the photon probe due to the magnetized vacuum. Note that the presence of an external magnetic field of arbitrary strength will impact not only the electric permittivity and magnetic permeability tensors, $\epsilon (B)$ and $\mu^{-1}(B)$, but also the solutions of the Maxwell equations for the photon fields ${\bf D}_w$ and ${\bf H}_w$, thus affecting their propagation in a magnetized vacuum.
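To make Eqs. (\ref{magpo1})--(\ref{mag}) concrete, the short sketch below applies diagonal permittivity and inverse-permeability tensors to sample wave fields. It is an illustration only: the numerical component values are the weak-field ones quoted later in this appendix, the units are $c=B_c=1$, and the code itself is not part of the derivation.

```python
import math

alpha = 1.0 / 137.036                        # fine-structure constant
xi = (8.0 / 45.0) * alpha / (4.0 * math.pi)  # xi = (8/45) alpha/(4 pi B_c^2), with B_c = 1

def constitutive(E_w, B_w, b_e):
    """Return (D_w, H_w) for wave fields E_w, B_w (3-tuples) in a vacuum
    magnetized along z with strength b_e (in units of B_c), using the
    diagonal weak-field permittivity / inverse-permeability components."""
    s = xi * b_e * b_e
    eps = (1.0 - s, 1.0 - s, 1.0 + 2.5 * s)     # eps_11, eps_22, eps_33
    inv_mu = (1.0 - s, 1.0 - s, 1.0 - 3.0 * s)  # (mu^-1)_11, _22, _33
    D_w = tuple(e * E for e, E in zip(eps, E_w))
    H_w = tuple(m * B for m, B in zip(inv_mu, B_w))
    return D_w, H_w
```

For $\mathbf{E}_w$ along $\pmb{\hat z}$ the induced polarization is positive while $H_{w,i}<B_{w,i}$, i.e. the magnetized vacuum responds as a polarizable, magnetizable medium.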
In our configuration, explicit expressions for the $\epsilon$ and $\mu^{-1}$ components can be expressed in terms of Lagrangian derivatives as \begin{align} \epsilon_{11}=(\mu^{-1})_{11}=1-\mathcal {L}_{\mathcal F},\label{epsilon}\\ \epsilon_{22}=(\mu^{-1})_{22}=1-\mathcal {L}_{\mathcal F},\\ \epsilon_{33}=(1-\mathcal {L}_{\mathcal F}+2{\mathcal F}\mathcal {L}_{\mathcal {GG}}),\\ (\mu^{-1})_{33}=(1-\mathcal {L}_{\mathcal F}-2{\mathcal F}\mathcal {L}_{\mathcal {FF}}),\label{epsimu2} \end{align} \noindent being zero otherwise. Further, we define in our system $\epsilon_{\parallel}=\epsilon_{33}$, $\epsilon_{11}=\epsilon_{22}=\epsilon_{\bot}$, $\mu_{33}=\mu_{\parallel}$, $\mu_{11}=\mu_{22}=\mu_{\bot}$. From Maxwell equations in Eqs.(\ref{magpo1}),(\ref{magpo2}) for ${\bf D}_w$ and ${\bf H}_w$ and assuming a plane wave photon field in the form ${\bf E}_w={\bf E}_0 \rm exp[-i(k_{\perp}y-\omega t)]$, propagating in the $\pmb{\hat{y}}-$direction we obtain the dispersion equation in Fourier space for a monochromatic wave as \begin{equation} (\epsilon_{ijk}\epsilon_{lab}k_{j}(\mu^{-1})_{kl}k_a+\omega^{2}\epsilon_{ib})E_{w\,b}=0,\label{ecdis} \end{equation} \noindent where $i,j,k,a,b,l=1,2,3$. Solutions to the previous Eq.(\ref{ecdis}) describe two physical polarization modes of the photon field, (2) and (3). If we now set the wave number $k_{\bot}=k_2$ assuming propagation in the $\pmb{\hat{y}}$-direction we obtain linear birefringence due to the linear polarization of the radiation \begin{align} \omega^{(2)}& \simeq c\mid {\bf k}_{\bot} \mid (1-\frac{{\mathcal L}_{\mathcal {GG}}\bf{B}_e^2}{2}),\\ \omega^{(3)}& \simeq c\mid{\bf k}_{\bot} \mid (1-\frac{{\mathcal L}_{\mathcal {FF}}\bf{B}_e^2}{2}). \end{align} \noindent in line with \cite{Elizabeth,Shabad:2011hf}. 
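For diagonal $\epsilon$ and $\mu^{-1}$ and ${\bf k}=k\pmb{\hat y}$, Eq. (\ref{ecdis}) decouples into the two modes, with $\omega^{2}\epsilon_{33}=k^2(\mu^{-1})_{11}$ for mode (2) and $\omega^{2}\epsilon_{11}=k^2(\mu^{-1})_{33}$ for mode (3). The sketch below (an illustration, with $c=B_c=1$ and the weak-field tensor components quoted in the next subsection) checks the resulting phase velocities against the expansions above.

```python
import math

alpha = 1.0 / 137.036                        # fine-structure constant
xi = (8.0 / 45.0) * alpha / (4.0 * math.pi)  # in units with B_c = 1

def mode_speeds(b_e):
    """Phase velocities omega/|k| of modes (2) and (3) for propagation
    along y, perpendicular to B_e = b_e z-hat, from the decoupled
    dispersion equation of the diagonal tensors."""
    s = xi * b_e * b_e
    eps_11, eps_33 = 1.0 - s, 1.0 + 2.5 * s
    inv_mu_11, inv_mu_33 = 1.0 - s, 1.0 - 3.0 * s
    v2 = math.sqrt(inv_mu_11 / eps_33)  # mode (2): E_w parallel to B_e
    v3 = math.sqrt(inv_mu_33 / eps_11)  # mode (3): E_w perpendicular to B_e
    return v2, v3
```

To leading order in $\xi B_e^2$ these reproduce $\omega^{(2)}/|{\bf k}_\bot|\simeq 1-\frac{7}{4}\xi B_e^2$ and $\omega^{(3)}/|{\bf k}_\bot|\simeq 1-\xi B_e^2$, i.e. $1-\mathcal{L}_{\mathcal{GG}}B_e^2/2$ and $1-\mathcal{L}_{\mathcal{FF}}B_e^2/2$.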
Besides, Cotton-Mouton birefringence \cite{Rizzo:2010di} appears, and a refraction index is associated with each of the two polarization modes: $n_{\parallel}$ for mode (2) and $n_{\bot}$ for mode (3), \begin{equation}\label{indrefraccion} n_{\parallel,\bot}=\frac{\mid \bf{k_{\bot}}\mid }{\omega^{(2,3)}}=\sqrt{\epsilon_{\parallel,\bot}\,\mu_{\bot,\parallel}}. \end{equation} \noindent We now define the difference of the two refraction indices as \begin{equation} \Delta n= n_{\parallel}-n_{\bot}=\frac{(\mathcal{L}_{\mathcal{GG}}-\mathcal{L}_{\mathcal{FF}})B_e^2}{2}. \end{equation} In the weak field limit (see appendix \ref{apendice:1}) it reduces to the well-known result $\Delta n^{\rm WF}_{\rm CM}=\frac{3}{4}\xi B_e^2$, while in the strong field limit $\Delta n^{\rm SF}_{\rm CM}=\frac{\alpha}{6\pi}(\frac{B_e}{B_c}-1)$. Since the velocity is $v_{\parallel,\bot}=1/n_{\parallel,\bot}$, in the strong field limit the condition $v_{\parallel}^{SF}=1-\frac{\alpha}{6\pi}\frac{B_e}{B_c}>0$ (with $\alpha\simeq 1/137$) fixes the validity of the one-loop approximation up to magnetic field values $B_e/B_c\leq \pi/\alpha\sim 430$ \cite{DittrichLibro}. Particularizing the previous expressions for the electric permittivity and magnetic permeability in the weak and strong field limits, we obtain the following.
In the weak field limit the electric permittivity and (inverse) magnetic permeability components in the presence of an external magnetic field $\textbf{B}_e=B_e \pmb{\hat{z}}$ are \begin{align}\label{epsilon2} \epsilon_{11}=(1-\xi B_e^2),\\ \epsilon_{33}=(1+\frac{5}{2}\xi B_e^2),\\ (\mu^{-1})_{11}=(1-\xi B_e^2),\\ (\mu^{-1})_{33}=(1-3\xi B_e^2), \end{align} while the refraction indices and squared velocities for the parallel and perpendicular modes are \begin{align} n_{\parallel}=1+\frac{7}{4}\xi B_e^2, \quad n_{\bot}=1+\xi B_e^2, \\ v^2_{\parallel}=(1-\frac{7}{2}\xi B_e^2),\quad v^2_{\bot}=(1-2\xi B_e^2). \end{align} For a strong magnetic field the electric permittivity and magnetic permeability are \begin{align}\label{epsilon3} \epsilon_{11}= (\mu^{-1})_{11}\simeq 1-\frac{\alpha}{3\pi}\left [ {\rm ln}\left (\frac{B_e}{B_c}\right )\right], \\ \epsilon_{33}\simeq 1-\frac{\alpha}{3\pi}\left [ {\rm ln} \left (\frac{B_e}{B_c}\right ) -\frac{B_e}{B_c} \right ],\\ (\mu^{-1})_{33}\simeq 1-\frac{\alpha}{3\pi}\left [{\rm ln} \left (\frac{B_e}{B_c}\right )+1 \right ], \end{align} while \begin{align} n_{\parallel}\simeq \left(1+\frac{\alpha}{6\pi}\frac{B_e}{B_c} \right), \quad n_{\bot}\simeq 1+\frac{\alpha}{6\pi},\\ v^2_{\parallel}\simeq \left(1-\frac{\alpha}{3\pi}\frac{B_e}{B_c} \right),\quad v^2_{\bot}\simeq \left(1-\frac{\alpha}{3\pi} \right). \end{align} In the weak magnetic field limit, for photon propagation perpendicular to the external magnetic field \cite{Elizabeth,Hugo1}, we obtain the dispersion relations \begin{align} \omega^{WF,(2)}\simeq\mid\boldsymbol{k_\perp}\mid \left(1-\frac{7}{4}\xi \boldsymbol{B}_e^2 \right),\quad \omega^{WF,(3)}\simeq\mid\boldsymbol{k_\perp}\mid (1-\xi \boldsymbol{B}_e^2),\label{Bperk23} \end{align} \noindent while in the strong magnetic field limit the result is \begin{align}\label{omegaS} \omega^{SF,(2)} \simeq \mid\boldsymbol{k_\perp}\mid \left (1-\frac{\alpha}{6\pi}\frac{B_e}{B_c}\right ),\quad \omega^{SF,\,(3)}\simeq\mid \boldsymbol{k_\perp}\mid \left (1-\frac{\alpha}{6\pi}\right).
\end{align} \subsection{Non conformal invariant Lagrangian and EMT} \label{apendice:3} In order to check if a Lagrangian $\mathcal {L}$ is invariant under a conformal transformation, we start from the rescaling properties of the metric tensor as well as of the fields. Therefore let $A_{\mu} \longrightarrow \widetilde{A}_{\mu}=e^{-w \sigma} A_{\mu}, \quad g_{\mu \nu} \longrightarrow \widetilde{g}_{\mu \nu}=e^{2 \sigma} g_{\mu \nu}=\Omega^{2} g_{\mu \nu}$ be the infinitesimal conformal transformation, see \cite{Wald}. It follows that the strength tensor with two covariant indices is conformally invariant, \begin{align} \tilde{g}^{ab}=\Omega^{-2}g^{ab},\quad\quad \sqrt{-\tilde{g}}=\Omega^{4}\sqrt{-g},\\ \tilde{F}_{ab}=F_{ab}, \quad\quad \tilde{F}^{ab}=\Omega^{-4}F^{ab},\quad\quad \tilde{F}_{a}^{b}=\Omega^{-2}F_{a}^{b},\\ \tilde{A}_{b}=A_{b},\quad\quad \tilde{A}^b=\Omega^{-2}A^{b}. \end{align} \noindent Transforming each contribution in the Lagrangian, metric and fields, we get \begin{align} \sqrt{-g}\mathcal {L}_{\mathcal{F}}f_{\mu\nu}f^{\mu\nu}&= \sqrt{-\tilde{g}}\mathcal {L}_{\mathcal{F}} \tilde{f}_{\mu\nu}\tilde{f}^{\mu\nu}, \label{Lconfor}\\ \sqrt{-g}{\mathcal{L}}_{\mathcal{FF}}(f^{\alpha\beta}F_{\alpha\beta})^2&=\Omega^{4}\sqrt{-\tilde{g}}{\mathcal{ L}}_{\mathcal{FF}}(\tilde{f}^{\alpha\beta}\tilde{F}_{\alpha\beta})^2,\\ \sqrt{-g}m^2 A_{\mu}A^{\mu}&= \sqrt{-\tilde{g}}\Omega^{-2}m^2 \tilde{A}_{\mu} \tilde{A}^{\mu}.
\end{align} Then, we can see that the rescaled Lagrangian $\widetilde{\mathcal {L}}$ yields \begin{align} \mathcal{L}\rightarrow\widetilde{\mathcal{L}}=\frac{(1-\mathcal{L}_{\mathcal{F}})}{4}\sqrt{-\tilde{g}} \tilde{f}_{\mu\nu}\tilde{f}^{\mu\nu} +\frac{\Omega^{4}{\mathcal{ L}}_{\mathcal{FF}}}{8}\sqrt{-\tilde{g}}(\tilde{f}^{\alpha\beta}\tilde{F}_{\alpha\beta})^2 + \frac{\Omega^{4}{\mathcal{ L}}_{\mathcal{GG}}}{8}\sqrt{-\tilde{g}} (\tilde{\tilde{f}}^{\alpha\beta}\tilde{\tilde{F}}_{\alpha\beta})^2, \label{lagranconf} \end{align} \noindent and only the first term of $\widetilde{\mathcal{L}}$, Eq. (\ref{Lconfor}), remains invariant; it corresponds to the Maxwell term (and topological term) and its correction for the photon probe. The conformal transformation of the EMT yields \begin{equation} t^{\gamma\rho} \xrightarrow{CT} \Omega^6\widetilde{t_{\rm ph}^{\gamma\rho}} +\Omega^{10}\widetilde{t_{\rm ph-B}^{\gamma\rho}}. \end{equation} \noindent As expected, $t^{\gamma\rho}$ is in general not conformally invariant due to the magneto-electric terms, although for $B_e\rightarrow 0$ we recover a conformal $t^{\gamma\rho}$.
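As a purely numerical cross-check of the large-$h$ expansion of ${\zeta}^{\prime}(-1,h)$ quoted at the beginning of these appendices, the sketch below evaluates $\zeta'(-1,h)$ from Hermite's integral representation of the Hurwitz zeta function (the representation is used only for this check and is not needed in the text) and compares it with the truncated expansion.

```python
import math

def zeta_prime_minus_one(a, t_max=12.0, n=4000):
    """zeta'(-1, a): s-derivative of the Hurwitz zeta function at s = -1,
    from Hermite's representation
      zeta(s,a) = a^{-s}/2 + a^{1-s}/(s-1)
                  + 2 Int_0^oo sin(s arctan(t/a)) / ((a^2+t^2)^{s/2} (e^{2 pi t}-1)) dt,
    differentiated analytically with respect to s."""
    closed = -0.5 * a * math.log(a) + 0.5 * a * a * math.log(a) - 0.25 * a * a

    def f(t):
        if t == 0.0:  # limiting value of the integrand at t -> 0
            return (1.0 + math.log(a)) / (2.0 * math.pi)
        r = math.hypot(a, t)
        th = math.atan2(t, a)
        return r * (th * math.cos(th) + math.sin(th) * math.log(r)) / math.expm1(2.0 * math.pi * t)

    h = t_max / n  # composite Simpson rule on [0, t_max]
    acc = f(0.0) + f(t_max)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(i * h)
    return closed + 2.0 * (h / 3.0) * acc

def expansion(a):
    """Truncated large-a expansion quoted in the text."""
    B2 = a * a - a + 1.0 / 6.0
    return 1.0 / 12.0 - a * a / 4.0 + 0.5 * math.log(a) * B2 + 1.0 / (720.0 * a * a)
```

At $h=1$ this reproduces $\zeta'(-1)=\frac{1}{12}-\ln A\simeq -0.16542$ (with $A$ the Glaisher constant), and for $h\gtrsim 5$ the truncated expansion agrees with the integral to the expected $O(h^{-4})$ accuracy.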
\section{Introduction} Let $\Gamma$ be a countably infinite discrete group and let $\bm{a}$ be a measure preserving action of $\Gamma$, i.e., $\bm{a} = \Gamma \cc ^a (X,\mu )$ where $X$ is a standard Borel space, $\mu$ is a Borel probability measure on $X$, and $a:\Gamma\times X\ra X$ is a Borel action of $\Gamma$ on $X$ that preserves $\mu$. In this note we examine how ergodicity and mixing properties of $\bm{a}$ can influence, and be influenced by, the freeness behavior of $\bm{a}$ and its factors. When $\bm{a}$ is not ergodic for example, the ergodic decomposition of $\bm{a}$ directly exhibits a \emph{non-trivial} action (i.e., with underlying measure not a point mass) that is a factor of $\bm{a}$ which is non-free. More generally, if $\Gamma$ contains some non-trivial normal subgroup $N$ for which the restriction $\bm{a}\resto N$ of $\bm{a}$ to $N$ is non-ergodic, then the action of $\Gamma$ on the set $Z$ of ergodic components of $\bm{a}\resto N$ corresponds to a non-trivial factor of $\bm{a}$ which is manifestly non-free. Indeed, this factor is not even faithful as $N$ acts trivially on $Z$. Working from the other direction, if $\pi : (X,\mu ) \ra (Y,\nu )$ factors $\bm{a}$ onto some non-trivial action $\bm{b} = \Gamma \cc ^b (Y,\nu )$ which is not faithful, then for any $B\subseteq Y$ with $0<\nu (B)<1$ the set $\pi ^{-1}(B)$ will be a non-trivial subset of $X$ witnessing that the kernel of $\bm{b}$ (i.e., the set of group elements fixing almost every point) does not act ergodically under the action $\bm{a}$. These observations are rephrased in the following proposition. \begin{proposition}\label{prop:normfree} The following are equivalent for a measure preserving action $\bm{a}$ of $\Gamma$: \begin{enumerate} \item All non-trivial factors of $\bm{a}$ are faithful. \item All non-trivial normal subgroups of $\Gamma$ act ergodically. 
\end{enumerate} \end{proposition} Note that when $\Gamma$ contains a finite normal subgroup $N$ then no non-trivial action $\bm{a}= \Gamma \cc ^a (X,\mu )$ of $\Gamma$ can have the property (2) (and therefore (1)) of Proposition \ref{prop:normfree}: if $\bm{a} \resto N$ is ergodic then $X$ is finite, so the kernel of $\bm{a}$ is non-trivial and does not act ergodically. However, the observations preceding Proposition \ref{prop:normfree} also show the following: \begin{proposition}\label{prop:normfree2} The following are equivalent for a measure preserving action $\bm{a}$ of $\Gamma$: \begin{enumerate} \item All non-trivial factors of $\bm{a}$ have finite kernel. \item All infinite normal subgroups of $\Gamma$ act ergodically. \end{enumerate} \end{proposition} Propositions \ref{prop:normfree} and \ref{prop:normfree2} express the equivalence of a freeness property on the one hand, and an ergodicity property on the other. In this note we show that by strengthening the ergodicity assumption on $\bm{a}$, an appropriately strong freeness results. \begin{definition} A measure preserving action $\bm{a}$ of $\Gamma$ is called \emph{totally ergodic} if the restriction of $\bm{a}$ to every infinite subgroup of $\Gamma$ is ergodic. \end{definition} There are many examples of totally ergodic actions. All mildly mixing actions are totally ergodic for example, since the restriction of a mildly mixing action to an infinite subgroup is again mildly mixing and hence ergodic. In particular, all mixing actions are totally ergodic. The following theorem says that totally ergodic actions are, up to a finite kernel, always free. \begin{theorem}\label{thm:ifree} Let $\bm{a} = \Gamma \cc ^a (X,\mu )$ be a non-trivial measure preserving action of the countably infinite group $\Gamma$. Suppose that $\bm{a}$ is totally ergodic. Then there exists a finite normal subgroup $N$ of $\Gamma$ such that the stabilizer of $\mu$-almost every $x\in X$ is equal to $N$. 
\end{theorem} \begin{corollary}\label{thm:allmix} All faithful totally ergodic actions of countably infinite groups are free. In particular, all faithful mildly mixing and all faithful mixing actions of countably infinite groups are free. \end{corollary} A totally ergodic action of particular importance is the \emph{Bernoulli shift} of $\Gamma$. This is the measure preserving action $\bm{s}_\Gamma$ of $\Gamma$ on $([0,1]^\Gamma ,\lambda ^\Gamma )$ (where $\lambda$ is Lebesgue measure) given by \[ (\gamma ^{s_\Gamma}f)(\delta ) = f(\gamma ^{-1}\delta ) \] for $\gamma ,\delta \in \Gamma$ and $f\in [0,1]^\Gamma$. By a \emph{Bernoulli factor} of $\Gamma$ we mean a factor of $\bm{s}_\Gamma$. One consequence of Theorem \ref{thm:ifree} is a particularly nice group theoretic characterization of groups all of whose non-trivial Bernoulli factors are free. \begin{corollary}\label{cor:tfae} Let $\Gamma$ be an infinite countable group. Then the following are equivalent \begin{enumerate} \item Every non-trivial totally ergodic action of $\Gamma$ is free. \item Every non-trivial mixing action of $\Gamma$ is free. \item Every non-trivial Bernoulli factor of $\Gamma$ is free. \item There exists a non-trivial measure preserving action $\bm{a}$ of $\Gamma$ such that every non-trivial factor of $\bm{a}$ is free. \item There exists a non-trivial measure preserving action $\bm{a}$ of $\Gamma$ such that every non-trivial factor of $\bm{a}$ is faithful. \item $\Gamma$ contains no non-trivial finite normal subgroup. \end{enumerate} \end{corollary} \begin{proof}[Proof of Corollary \ref{cor:tfae} from Theorem \ref{thm:ifree}] (6)$\Ra$(1) follows immediately from Theorem \ref{thm:ifree}. The implication (1)$\Ra$(2) is clear. (2)$\Ra$(3) holds since $\bm{s}_\Gamma$ is mixing and every factor of a mixing action is mixing. (3)$\Ra$(4) and (4)$\Ra$(5) are also clear. (5)$\Ra$(6) follows from the discussion following Proposition \ref{prop:normfree} above. 
\end{proof} \begin{corollary} Let $\Gamma$ be any infinite countable group that is either torsion free or ICC. Then every non-trivial totally ergodic action of $\Gamma$ is free and in particular every non-trivial Bernoulli factor of $\Gamma$ is free. \end{corollary} {\bf Acknowledgements.} I would like to thank Alekos Kechris for many useful comments and suggestions. I would also like to thank Benjy Weiss for his comments, and particularly for suggesting the term "total ergodicity." The research of the author was partially supported by NSF Grant DMS-0968710. \section{Preliminary Definitions and Notation} $\Gamma$ will always denote a countably infinite discrete group and $e$ will denote the identity element of $\Gamma$. Let $\bm{a} = \Gamma \cc ^a (X,\mu )$ be a measure preserving action of $\Gamma$. The \emph{stabilizer} of a point $x\in X$ is the subgroup $\Gamma _x$ of $\Gamma$ given by \[ \Gamma _x = \{ \gamma \in \Gamma \csuchthat \gamma ^a x = x \} . \] For a subset $C\subseteq \Gamma$ we let \[ \mbox{Fix}^a(C) = \{ x\in X \csuchthat \forall \gamma \in C \ \, \gamma ^ax =x \} . \] We write $\mbox{Fix} ^a(\gamma )$ for $\mbox{Fix} ^a(\{ \gamma \} )$. The \emph{kernel} of $\bm{a}$ is the set \[ \mbox{ker}(\bm{a}) = \{ \gamma \in \Gamma \csuchthat \mu (\mbox{Fix}^a(\gamma ) ) = 1 \} . \] It is clear that $\mbox{ker}(\bm{a})$ is a normal subgroup of $\Gamma$. The action $\bm{a}$ is called \emph{(essentially) free} if the stabilizer of $\mu$-almost every point is trivial, or equivalently, $\mu (\mbox{Fix}^a(\gamma )) = 0$ for each $\gamma \in \Gamma \setminus \{ e \}$. It is called \emph{faithful} if $\mbox{ker}(\bm{a}) = \{ e \}$, i.e., $\mu (\mbox{Fix}^a(\gamma ))<1$ for each $\gamma \in \Gamma \setminus \{ e\}$. Let $\bm{b} = \Gamma \cc ^b (Y,\nu )$ be another measure preserving action of $\Gamma$. 
We say that $\bm{b}$ is a \emph{factor} of $\bm{a}$ (or that $\bm{a}$ \emph{factors onto} $\bm{b}$) if there exists a measurable map $\pi : X \ra Y$ with $\pi _*\mu = \nu$ and such that for each $\gamma \in \Gamma$ the equality $\pi (\gamma ^a x )= \gamma ^b\pi (x)$ holds for $\mu$-almost every $x\in X$. \begin{definition} A measure preserving action $\bm{b}= \Gamma \cc ^b (Y, \nu )$ is called \emph{trivial} if $\nu$ is a point mass. Otherwise, $\bm{b}$ is called \emph{non-trivial}. \end{definition} \section{Proof of Theorem \ref{thm:ifree}} \begin{proof}[Proof of Theorem \ref{thm:ifree}] We begin with a lemma also observed by Darren Creutz and Jesse Peterson \cite{CP12}. \begin{lemma}\label{lem1} Let $\bm{b} = \Gamma \cc ^b (Y,\nu )$ be a non-trivial totally ergodic action of $\Gamma$. \begin{enumerate} \item[(i)] Suppose that $C \subseteq \Gamma$ is a subset of $\Gamma$ such that $\nu ( \{ y \in Y\csuchthat C\subseteq \Gamma _y \} ) >0$. Then the subgroup $\langle C \rangle$ generated by $C$ is finite. \item[(ii)] $\Gamma _y$ is almost surely locally finite. \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{lem1}] Beginning with (i), the hypothesis tells us that the set $\mbox{Fix}^b(C)$ is non-null. Since $\nu$ is not a point mass there is some $B\subseteq \mbox{Fix}^b(C)$ with $0<\nu (B) <1$. The set $B$ witnesses that $\bm{b}\resto \langle C \rangle$ is not ergodic. As $\bm{b}$ is totally ergodic we conclude that $\langle C\rangle$ is finite. For (ii), let $\mc{F}$ denote the collection of finite subsets $F$ of $\Gamma$ such that $\langle F \rangle$ is infinite and let $\mbox{NLF}\subseteq Y$ denote the set of points $y\in Y$ such that $\Gamma _y$ is not locally finite. Then \[ \mbox{NLF} = \bigcup _{F\in \mc{F}} \{ y\in Y \csuchthat F\subseteq \Gamma _y \} \] By part (i), $\nu ( \{ y\in Y\csuchthat F\subseteq \Gamma _y \} ) =0$ for each $F\in \mc{F}$. Since $\mc{F}$ is countable it follows that $\nu (\mbox{NLF} ) =0$. 
\qedhere[Lemma] \end{proof} Now let $\bm{a}=\Gamma \cc ^a (X,\mu )$ be a totally ergodic action as in the statement of Theorem \ref{thm:ifree}. Let $N= \{ \gamma \in \Gamma \csuchthat \mu (\mbox{Fix}^a(\gamma )) =1 \}$ denote the kernel of $\bm{a}$. Then $N$ is a normal subgroup of $\Gamma$ that is finite by Lemma \ref{lem1}.(i). Ignoring a null set, the action $\bm{a}$ descends to an action $\bm{b} = \Delta \cc ^b (X,\mu )$ of the quotient group $\Delta = \Gamma /N$ that is still totally ergodic, and which is moreover faithful. Thus, after replacing $\Gamma$ by $\Gamma /N$ and $\bm{a}$ by $\bm{b}$ if necessary, we may assume that $\bm{a}$ is faithful toward the goal of showing that $\bm{a}$ is free. For each $\gamma \in \Gamma$ let $C_\Gamma (\gamma )$ denote the centralizer of $\gamma$ in $\Gamma$. Observe that $\mbox{Fix}^a(\gamma )$ is an invariant set for $\bm{a}\resto C_\Gamma (\gamma )$, for if $\delta \in C_\Gamma (\gamma )$ then $\delta ^a \cdot \mbox{Fix}^a(\gamma ) = \mbox{Fix}^a(\delta \gamma \delta ^{-1}) = \mbox{Fix}^a(\gamma )$. Thus for $\gamma \neq e$, if $C_\Gamma (\gamma )$ is infinite then $\bm{a}\resto C_\Gamma (\gamma )$ is ergodic and the $\bm{a}\resto C_\Gamma (\gamma )$-invariant set $\mbox{Fix}^a(\gamma )$ must therefore be null since $\bm{a}$ is faithful. Letting $C_\infty$ denote the collection of elements of $\Gamma \setminus \{ e \}$ whose centralizers are infinite, this simply means that $\mu ( \{ x\in X \csuchthat \gamma \in \Gamma _x \} ) =0$ for all $\gamma \in C_\infty$, and so \begin{equation}\label{eqn2} \mu (\{ x\in X \csuchthat \Gamma _x \cap C_\infty \neq \emptyset \} ) =0 . \end{equation} By Lemma \ref{lem1}.(ii), $\Gamma _x$ is almost surely locally finite. By a theorem of Hall and Kulatilaka \cite{HK64} and Kargapolov \cite{K63}, every infinite locally finite group contains an infinite abelian subgroup. In particular, each infinite locally finite subgroup of $\Gamma$ intersects $C_\infty$. 
It follows from this and (\ref{eqn2}) that $\Gamma _x$ is almost surely finite. Since there are only countably many finite subgroups of $\Gamma$ there must be some finite subgroup $H_0 \leq \Gamma$ such that $\mu (A_0) > 0$ where \[ A_0 = \{ x\in X\csuchthat \Gamma _x = H_0 \} . \] Let $N_0$ denote the normalizer of $H_0$ in $\Gamma$. If $T$ is a transversal for the left cosets of $N_0$ in $\Gamma$ then $\{ t ^a A_0 \} _{t\in T}$ are pairwise disjoint non-null subsets of $X$ all of the same measure. It follows that $T$ is finite and therefore $N_0$ is infinite and $\bm{a}\resto N_0$ ergodic. The set $A_0$ is non-null and invariant for $\bm{a}\resto N_0$, so $\mu (A_0) =1$, i.e., $\Gamma _x = H_0$ almost surely. As $\bm{a}$ is faithful we conclude that $H_0 = \{ e \}$ and that $\bm{a}$ is therefore free. \end{proof} \section{An Example} In general, total ergodicity does not imply weak mixing, and weak mixing does not imply total ergodicity. For example, the action of $\Z$ corresponding to an irrational rotation of $\T = \R /\Z$ equipped with Haar measure is totally ergodic, but not weakly mixing. There are also many examples of weakly mixing measure preserving actions that lack total ergodicity. One such action is exhibited in \ref{ex1} below. Example \ref{ex1} also illustrates that total ergodicity of a measure preserving action is \emph{not} necessary to ensure that each non-trivial factor of that action is free. \begin{example}\label{ex1} Here is an example of a free weakly mixing action $\bm{s}$ that is not totally ergodic, but that still has the property that every non-trivial factor of $\bm{s}$ is free: Let $F$ denote the free group of rank 2 with free generating set $\{ u, v \}$ and let $H= \langle u \rangle$ be the cyclic subgroup of $F$ generated by $u$. 
The generalized Bernoulli shift action $\bm{s}=\bm{s}_{F,F/H} = F\cc ^s ([0,1]^{F/H}, \lambda ^{F/H})$ is weakly mixing (see \cite{KT08}) but not totally ergodic since $H$ fixes each set in the $\sigma$-algebra generated by the projection function $f\mapsto f(H)$. Given a subgroup $K\leq F$, if $\bm{s}\resto K \cong \bm{s}_{K, F /H}$ is not ergodic then $K\cc F /H$ has a finite orbit (see \cite{KT08}), say $K\gamma H$ is finite where $\gamma \in F$. Then for any $z\in K$ there is some $n>0$ such that $z^n \in \gamma H\gamma ^{-1}$, and therefore $z\in \gamma H \gamma ^{-1}$. This shows that $K\subseteq \gamma H \gamma ^{-1}$ so that $K$ is cyclic. The restriction of $\bm{s}$ to each non-cyclic subgroup of $F$ is therefore ergodic, so if $\bm{a} = F\cc ^a (X,\mu )$ is any factor of $\bm{s}$ then $\bm{a}$ also has this property and, assuming $\bm{a}$ is non-trivial, an argument as in the proof of Lemma \ref{lem1} shows that the stabilizer $F_x$ of $\mu$-almost every $x\in X$ is locally cyclic, hence cyclic. Arguing as in the last paragraph of the proof of Theorem \ref{thm:ifree} we see that there is some normal cyclic subgroup $H_0$ of $F$ such that $F _x = H_0$ almost surely. The only possibility is that $H_0= \{ e \}$, and thus $\bm{a}$ is free. \end{example} \section{A Question} The proof of Theorem \ref{thm:ifree} relies on Hall, Kulatilaka, and Kargapolov's result, whose only known proofs make use of the Feit-Thompson theorem from finite group theory. \begin{question} Is there a direct ergodic theoretic proof of Theorem \ref{thm:ifree}? \end{question}
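The following standard construction (included here only as an illustration, not drawn from the results above) makes condition (6) of Corollary \ref{cor:tfae} concrete: any non-trivial finite normal subgroup yields a mixing, non-free action.

```latex
\begin{example}
Suppose $N$ is a non-trivial finite normal subgroup of $\Gamma$. The quotient
$\Delta = \Gamma / N$ is infinite, so its Bernoulli shift
$\bm{s}_{\Delta} = \Delta \cc ^{s} ([0,1]^{\Delta}, \lambda ^{\Delta})$ is mixing
and essentially free. Let $\bm{a}$ be the lift of $\bm{s}_{\Delta}$ to $\Gamma$,
i.e., $\gamma ^{a} = (\gamma N)^{s}$. Since $N$ is finite, $\gamma \to \infty$
in $\Gamma$ forces $\gamma N \to \infty$ in $\Delta$, so $\bm{a}$ is still
mixing. On the other hand, $\Gamma _x$ is the preimage in $\Gamma$ of
$\Delta _x$, which is trivial almost surely, so $\Gamma _x = N \neq \{ e \}$
almost everywhere and $\bm{a}$ is not free. This is exactly the finite kernel
predicted by Theorem \ref{thm:ifree}.
\end{example}
```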
How Facebook Can Be Used To Stop Disease Outbreaks
Last Updated: January 4, 2018 6:36 pm

For over two decades now, the Internet has been at the forefront of pretty much everything – from search to news to entertainment to your friendships. But one area where it could have a positive impact has not been fully tested: disease outbreaks. So far, Facebook and the like have failed to step up to communicate important information or gather the right news during calamities, disease outbreaks or epidemics.

Along these lines, a study conducted by European researchers concluded that Facebook's unique mix of real and virtual interaction could actually help prevent the spread of diseases in the foreseeable future. They discovered that Facebook accounts and phone records can come in handy to pinpoint the individuals who need vaccination to stop an outbreak.

As the research paper published in the Journal of the Royal Society Interface asserts, the task of finding hosts in real time during a potential disease outbreak is challenging. However, if one could map the digital communication activity of all individuals in real time using phone and social media records, it would ultimately help prevent an outbreak.

The research, conducted with more than 500 college students over roughly two years, found that Facebook, the world's biggest online social network, could play a vital role in targeted vaccination for diseases spreading in the physical world. The paper's co-author Enys Mones describes Facebook's role as follows:

"If you are a hub for your friends in the sense that you have many contacts via phone calls or on Facebook, making you a bridge between diverse communities, chances are high that you are also likely to be a bridge to connect those communities in case of an epidemic, such as influenza."
The data collected through the study helped the researchers pinpoint individuals who are situated centrally in real-life human networks. The idea is that more effort is focused on them in times of a disease outbreak, especially when vaccination resources are limited. This would reduce the spread of pathogens, in turn limiting the deadliness of the disease.

The system looks like a plausible method for disease control, especially as the dangers of global warming and climate change threaten to unleash havoc on earth in the future. It is also a cheaper way to stop the spread, as central individuals can be identified well in advance of an outbreak.

Via: Rappler | Source: RSIF
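The hub-targeting idea can be illustrated with a toy simulation (purely illustrative — this is not the researchers' model or data): immunizing the best-connected person in a small contact network shrinks an outbreak far more than immunizing an arbitrary one.

```python
from collections import defaultdict, deque

# Toy contact network: node 0 is a "hub" bridging several small groups.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 5), (2, 6), (3, 7), (4, 8), (5, 9)]
graph = defaultdict(set)
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

def outbreak_size(immunized, seed=5):
    """Number of people reached by an infection starting at `seed`,
    assuming immunized individuals block transmission entirely."""
    if seed in immunized:
        return 0
    seen, queue = {seed}, deque([seed])
    while queue:
        person = queue.popleft()
        for contact in graph[person]:
            if contact not in seen and contact not in immunized:
                seen.add(contact)
                queue.append(contact)
    return len(seen)

# One vaccine dose: target the most-connected person vs. an arbitrary one.
hub = max(graph, key=lambda n: len(graph[n]))
targeted = outbreak_size({hub})
untargeted = outbreak_size({9})
```

In this toy network a single well-placed dose cuts the reach of the infection from nine people down to three.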
package org.apache.sshd.server.channel; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.nio.charset.StandardCharsets; import java.util.Collection; import java.util.Collections; import java.util.EnumSet; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.apache.sshd.client.SshClient; import org.apache.sshd.client.channel.ClientChannel; import org.apache.sshd.client.channel.ClientChannelEvent; import org.apache.sshd.client.session.ClientSession; import org.apache.sshd.common.PropertyResolverUtils; import org.apache.sshd.common.channel.Channel; import org.apache.sshd.common.channel.ChannelAsyncOutputStream; import org.apache.sshd.common.channel.Window; import org.apache.sshd.common.util.buffer.Buffer; import org.apache.sshd.common.util.buffer.ByteArrayBuffer; import org.apache.sshd.server.SshServer; import org.apache.sshd.util.test.BaseTestSupport; import org.apache.sshd.util.test.BogusChannel; import org.apache.sshd.util.test.CommandExecutionHelper; import org.apache.sshd.util.test.NoIoTestCase; import org.junit.FixMethodOrder; import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.runners.MethodSorters; @FixMethodOrder(MethodSorters.NAME_ASCENDING) @Category({ NoIoTestCase.class }) public class ChannelSessionTest extends BaseTestSupport { public ChannelSessionTest() { super(); } @Test public void testNoFlush() throws Exception { try (SshServer server = setupTestServer(); SshClient client = setupTestClient()) { server.setShellFactory(session -> new CommandExecutionHelper(null) { @Override protected boolean handleCommandLine(String command) throws Exception { OutputStream out = getOutputStream(); out.write((command + "\n").getBytes(StandardCharsets.UTF_8)); return !"exit".equals(command); } }); server.start(); client.start(); try (ClientSession session = 
client.connect(getCurrentTestName(), TEST_LOCALHOST, server.getPort()) .verify(7L, TimeUnit.SECONDS) .getSession()) { session.addPasswordIdentity(getCurrentTestName()); session.auth().verify(5L, TimeUnit.SECONDS); try (ClientChannel channel = session.createChannel(Channel.CHANNEL_SHELL)) { channel.open().verify(7L, TimeUnit.SECONDS); OutputStream invertedIn = channel.getInvertedIn(); String cmdSent = "echo foo\nexit\n"; invertedIn.write(cmdSent.getBytes()); invertedIn.flush(); long waitStart = System.currentTimeMillis(); Collection<ClientChannelEvent> result = channel.waitFor(EnumSet.of(ClientChannelEvent.CLOSED), TimeUnit.SECONDS.toMillis(11L)); long waitEnd = System.currentTimeMillis(); assertTrue("Wrong channel state after " + (waitEnd - waitStart) + " ms.: " + result, result.containsAll(EnumSet.of(ClientChannelEvent.CLOSED))); byte[] b = new byte[1024]; InputStream invertedOut = channel.getInvertedOut(); int l = invertedOut.read(b); String cmdReceived = (l > 0) ? new String(b, 0, l) : ""; assertEquals("Mismatched echoed command", cmdSent, cmdReceived); } } } } /* * Test whether onWindowExpanded is called from server session */ @Test public void testHandleWindowAdjust() throws Exception { Buffer buffer = new ByteArrayBuffer(); buffer.putInt(1234); try (ChannelSession channelSession = new ChannelSession() { { Window wRemote = getRemoteWindow(); wRemote.init(PropertyResolverUtils.toPropertyResolver(Collections.emptyMap())); } }) { AtomicBoolean expanded = new AtomicBoolean(false); channelSession.asyncOut = new ChannelAsyncOutputStream(new BogusChannel(), (byte) 0) { @Override public void onWindowExpanded() throws IOException { expanded.set(true); super.onWindowExpanded(); } }; channelSession.handleWindowAdjust(buffer); assertTrue("Expanded ?", expanded.get()); } } @Test // see SSHD-652 public void testCloseFutureListenerRegistration() throws Exception { AtomicInteger closeCount = new AtomicInteger(); try (ChannelSession session = new ChannelSession() { { Window 
wRemote = getRemoteWindow(); wRemote.init(PropertyResolverUtils.toPropertyResolver(Collections.emptyMap())); } }) { session.addCloseFutureListener(future -> { assertTrue("Future not marked as closed", future.isClosed()); assertEquals("Unexpected multiple call to callback", 1, closeCount.incrementAndGet()); }); session.close(); } assertEquals("Close listener not called", 1, closeCount.get()); } }
Q: Altering the value of an option (select) element? I'm attempting to take the selected value of a <select> form element and append -R to it. (This will be for regex matching later on.) Anyway, so far I've tried the following: Attempt 1: var country = $('select[name=g_country\\['+value+'\\]]').val(); $('select[name=g_country\\['+value+'\\]]').find('option[value=' + value +']').val(country + '-R'); Attempt 2: var country = $('select[name=g_country\\['+value+'\\]]').val(); $('select[name=g_country\\['+value+'\\]]').val(country + '-R'); I can tell that the selection of the correct form element (using delete_x, where x is a number) works fine, since the form elements are disabled when .select-delete is clicked; however, the value setting doesn't work. The commented portion down at the bottom is what I've been using to check the value of the <select> element post-value change (or what should be post-value change). Here's a link to my jsFiddle: http://jsfiddle.net/gPF8X/11/ Any help/edits/answers will be greatly appreciated! A: Try this: $('.select-delete').click( function() { var value = $(this).attr('id'); value = value.replace('delete_', ''); var country = $('select[name=g_country\\['+value+'\\]] option:selected').val() + '-R'; alert(country); $('select[name=g_country\\['+value+'\\]] option:selected').val(country); $('select[name=g_country\\['+value+'\\]]').attr('disabled','1'); $('input[name=g_url\\['+value+'\\]]').attr('disabled','1'); $(this).css('display', 'none'); var check = $('select[name=g_country\\['+value+'\\]]').val(); $('#test').append(check); }); There is an issue with your HTML as well.
Here's my final working code, that appropriately adds -R to the end of g_country[x]: $('.select-delete').click( function() { var value = $(this).attr('id'); value = value.replace('delete_', ''); var country = $('select[name=g_country\\['+value+'\\]]').val(); $('select[name=g_country\\['+value+'\\]] > option:selected').val(country + '-R'); $('select[name=g_country\\['+value+'\\]]').attr('disabled','1'); $('input[name=g_url\\['+value+'\\]]').attr('disabled','1'); $(this).css('display', 'none'); });
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,923
\section{Introduction} \label{sec:intro} Spin-orbit coupling (SOC) is a relativistic effect originating from the interaction between the spin and orbital motions of electrons. It has played a key role in various aspects of condensed-matter physics, including the electronic structure of solids and the transport properties in mesoscopic systems. \cite{winkler2003,nikolic2010oxford} It has been known since the 1950s that SOC can induce anisotropic spin splitting in some {\uppercase\expandafter{\romannumeral 3}}-{\uppercase\expandafter{\romannumeral 5}}\ semiconductors with the zinc-blende structure, known as the Dresselhaus splitting.\cite{winkler2003} In 2D and quasi-2D systems, the SOC resulting from the electric field perpendicular to the 2D plane gives rise to a Rashba splitting linear in $k$ with interesting ``helical'' spin textures.\cite{winkler2003,nikolic2010oxford} The SOC is also crucial in determining the transport behavior of low-dimensional electronic systems. One famous example is the weak antilocalization in spin-orbit-coupled 2D electronic systems, where the backscattering amplitudes interfere destructively due to a geometric Berry phase\cite{berry84} associated with the intrinsic SOC, leading to a suppressed resistivity when an external magnetic field is absent.\cite{hln} SOC is also responsible for spin precession in 1D and quasi-1D systems,\cite{nikolic2010oxford} the spin Hall effect in paramagnetic metals,\cite{Hirsch99} and numerous other effects. The SOC has received renewed attention recently because of its central role in the physics of topological insulators (TIs) and related topological states. Typically, the transition from a topologically trivial to a non-trivial phase is accomplished by a SOC-driven inversion of states of different symmetry at the conduction-band minimum (CBM) and valence-band maximum (VBM). 
For example, such a SOC-driven topological band inversion between $\Gamma_6$-derived ($s$-like) and $\Gamma_8$-derived ($p$-like) states at the zone center is responsible for the quantum spin Hall (QSH) state observed in HgTe/CdTe quantum wells.\cite{bhz06,konig06science} Similarly, the Kane-Mele model of 2D graphene-like systems \cite{kane-prl05-a,kane-prl05-b} enters the QSH state when two band inversions occur at the K and K$'$ points as the SOC strength is increased at a constant staggered potential. In 3D band insulators with time-reversal (TR) symmetry, a SOC-induced band inversion can transform the system from a trivial insulator into a strong TI displaying an odd number of gapless Dirac cones in the surface states, as occurs for Bi$_2$Se$_3$\ and Bi$_2$Te$_3$.\cite{kane-rmp10,zhang-rmp11,yao-review12,zhang-np09} In the case of a 3D strong TI with inversion symmetry such as Bi$_2$Se$_3$, the strong $\mathbb{Z}_2$\ index can be uniquely determined by the parities of the occupied bands at the TR-invariant momenta (TRIM) in the Brillouin zone (BZ).\cite{fu-prb07} If the highest occupied states and lowest unoccupied states at one of the TRIM possess opposite parities without SOC, and they are inverted by turning on SOC, then the system transforms from a normal to a topological insulator. For example, in Bi$_2$Se$_3$, two pairs of Kramers-degenerate occupied states at the BZ center ($\Gamma$) are inverted by SOC, resulting in the nontrivial $\mathbb{Z}_2$\ index. For TIs without inversion symmetry, the band inversion may happen at arbitrary points in the BZ, instead of at the TRIM. We can identify such band inversion points as the points where a band touching occurs between valence and conduction bands as the SOC is adiabatically turned on; TR symmetry implies that an inversion at $\mathbf k_0$ will always be accompanied by one at $-\mathbf k_0$. 
Even in the absence of inversion symmetry, therefore, a band inversion driven by SOC is typically a hallmark of the non-trivial topology in TIs with TR symmetry. The SOC also plays a crucial role in giving rise to the Chern insulator (CI) state, also known as the quantum anomalous Hall state, which can occur in 2D insulators lacking time-reversal symmetry. The possibility of a CI state was first introduced by Haldane,\cite{Haldane-model} who constructed an explicit model that demonstrates the effect. Although the Haldane model is a model of spinless Fermions on a honeycomb lattice, its key feature is the presence of complex second-neighbor hoppings, which can be regarded as arising from intrinsic atomic SOC through a second-order perturbation process in a more realistic system of spinor electrons.\cite{hongbin-prb13} An example is a Bi bilayer with an applied Zeeman field, as will be discussed below. The concept of topological band inversion has been much discussed in the topological-insulator literature, but in the absence of symmetry it may be difficult to recognize when a band inversion has actually occurred. The usual approach is to look at the symmetry or orbital character at a high-symmetry point in the BZ where a band inversion is suspected, but this only works if sufficient symmetry is present. Some authors have tried to deduce the presence of band-inversion behavior by studying other properties of the system, such as by looking at the qualitative shape of the bands near the symmetry point, \cite{klintenberg-archive10} or even more indirectly, by studying the variation of the band-energy differences with strain in the absence of SOC.\cite{yang-nm12} However, the reliability of such methods is questionable, as they do not give a direct and quantitative evaluation of the SOC-induced band inversion. 
In this paper, we propose that the calculation of spin-orbit spillage, which measures the degree of mismatch between the occupied band projection operators with and without SOC, provides a simple and effective measure of SOC-driven band inversion in insulators. We demonstrate that the mapping of this spin-orbit spillage in $k$-space easily allows a direct identification of any region in the BZ where band inversion has occurred, and that the maximum spillage is a useful indicator of topological character. We illustrate the method in the context of both tight-binding models and realistic first-principles calculations. The paper is organized as follows. In Sec.~\uppercase\expandafter{\romannumeral 2}\ the formal definition of SOC-induced spillage is introduced, and the correspondence between topological indices and spillage is also discussed. In Sec.~{\uppercase\expandafter{\romannumeral 3}}\ the formalism is applied to various systems, including the two-band Dirac model, 2D Kane-Mele model, a Bi bilayer with tunable SOC and exchange field, and realistic materials including Bi$_2$Se$_3$, In$_2$Se$_3$, and Sb$_2$Se$_3$. In Sec.~{\uppercase\expandafter{\romannumeral 4}}\ we make a brief summary. \section{Formalism} \label{sec:formal} \subsection{Definitions} \label{sec:def} Mathematically, the mismatch between two projection operators $P$ and $\widetilde{P}$, both of rank $N$, can be represented by a quantity \begin{equation} \gamma=N-\Tra{P\widetilde{P}}=\Tra{P\widetilde{Q}}=\Tra{Q\widetilde{P}} \label{equa:spillage} \end{equation} where $Q=1-P$ and $\widetilde{Q}=1-\widetilde{P}$ denote the complementary projections. This measure of mismatch is often referred to as ``spillage'' since it measures the weight of states that spill from $P$ into $\widetilde{Q}$, or equivalently, from $\widetilde{P}$ into $Q$. 
Clearly the spillage vanishes if $P=\widetilde{P}$ at one extreme, and rises to $N$ at the other extreme if there is no overlap at all between the subspaces associated with $P$ and $\widetilde{P}$. Thus, the spillage provides a measure of the degree of mismatch between the two subspaces. Here we apply this concept to the band projection operators \begin{equation} P(\mathbf{k})=\sum_{n=1}^{n_\mathrm{occ}} \ket{\psi_{n\kk}}\bra{\psi_{n\kk}} \label{equa:bandproj} \end{equation} associated with a given wavevector $\mathbf{k}$ in the BZ of a crystalline insulator with $N\!=\!n_\mathrm{occ}$ occupied bands. We assume an effective single-particle Hamiltonian such as that appearing in density-functional theory (DFT).\cite{dft1,dft2} Then the SOC-induced spillage $\gamma(\mathbf{k})$ is defined as \begin{equation} \gamma(\mathbf{k})=\Tra{P(\mathbf{k}) \widetilde{Q}(\mathbf{k})} \label{equa:def1} \end{equation} where $P$ and $\widetilde{P}$ (and their complements) refer to the system with and without SOC respectively. More explicitly, \begin{align} \gamma(\mathbf{k})& =n_\mathrm{occ}-\Tra{P(\mathbf{k}) \widetilde{P}(\mathbf{k})}\notag\\ & =n_\mathrm{occ}-\sum_{m,n=1}^{n_\mathrm{occ}}\vert M_{mn}(\mathbf{k})\vert^2 \label{equa:def2} \end{align} where \begin{equation} M_{mn}(\mathbf{k})= \ip{\psi_{m\kk}}{\tilde{\psi}_{n\kk}} \label{equa:Mmn} \end{equation} is the overlap between occupied Bloch functions with and without SOC at the same wavevector $\mathbf{k}$. Equivalently, this can be written as $M_{mn}(\mathbf{k})= \ip{u_{m\kk}}{\tilde{u}_{n\kk}}$ if one prefers to work in terms of the cell-periodic $\ket{u_{n\kk}}$ defined as $u_{n\kk}(\mathbf{r})=e^{-i\mathbf{k}\cdot\mathbf{r}}\psi_{n\kk}(\mathbf{r})$. 
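As a quick numerical check of these definitions (an illustrative sketch of ours, not part of the formalism), the spillage between two occupied subspaces specified by explicit coefficient vectors in a common orthonormal basis reduces to a single matrix product; note in particular that $\gamma$ is unchanged by a unitary ``gauge'' mixing of the occupied states:

```python
import numpy as np

def spillage(psi, psi_tilde):
    """gamma = n_occ - sum_mn |<psi_m|psi~_n>|^2.

    psi, psi_tilde: (n_occ, n_basis) arrays of occupied-state
    coefficients, each with orthonormal rows in a common basis.
    """
    M = psi.conj() @ psi_tilde.T          # overlap matrix M_mn
    return psi.shape[0] - np.sum(np.abs(M) ** 2)

rng = np.random.default_rng(0)
# two occupied states taken as rows of a random 4x4 unitary
psi = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))[0][:2]
# a random 2x2 unitary "gauge" mixing of the same subspace
U = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
print(spillage(psi, U @ psi))             # ~0: same subspace, different gauge
e = np.eye(4, dtype=complex)
print(spillage(e[:2], e[2:]))             # 2.0: fully disjoint subspaces
```

Any $n_\mathrm{occ}\times n_\mathrm{basis}$ coefficient arrays with orthonormal rows can be substituted for the random data here.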
In the case of realistic DFT calculations in a plane-wave basis, the overlap matrix elements are easily evaluated as \begin{equation} M_{mn}(\mathbf{k})= \sum_{\mathbf{G}} \ip{\psi_{m\kk}}{\mathbf{k}+\mathbf{G}} \ip{\mathbf{k}+\mathbf{G}}{\tilde{\psi}_{n\kk}} \,, \label{equa:plane-wave} \end{equation} where $\ket{\mathbf{k}+\mathbf{G}}$ is the plane wave $e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}}$ for reciprocal vector $\mathbf{G}$ normalized to the unit cell. The evaluation should also be straightforward in other first-principles basis sets. For simple lattice models the Hamiltonian is typically written in an orthonormal tight-binding basis, so that the wavefunctions are \begin{equation} \ket{\psi_{n\kk}}=\sum_j C_{nj,\mathbf{k}}\,\ket{\chi_{j\mathbf{k}}} \label{equa:psichi} \end{equation} where $\ket{\chi_{j\mathbf{k}}}$ are the Bloch basis states \begin{equation} \chi_{j\mathbf{k}}(\mathbf{r})=\sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}} \,\varphi_j(\mathbf{r}-\mathbf{R}) \label{equa:chi} \end{equation} and $\varphi_j(\mathbf{r}-\mathbf{R})$ is the $j$'th tight-binding basis orbital in unit cell $\mathbf{R}$. Then the spillage is trivially computed using \begin{equation} M_{mn}(\mathbf{k})=\sum_j C^*_{mj,\mathbf{k}} \widetilde{C}_{nj,\mathbf{k}} \,. \label{equa:tboverlap} \end{equation} Since the use of Wannier interpolation methods \cite{MLWF-2,yates-prb07,MLWF-rmp} is becoming increasingly frequent, we also comment on this case here. In this approach, the occupied Bloch states are again written as in Eq.~(\ref{equa:psichi}), but this time the Bloch basis states are \begin{equation} \chi_{j\mathbf{k}}(\mathbf{r})=\sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}} \,w_j(\mathbf{r}-\mathbf{R}) \label{equa:chiw} \end{equation} where $w_j(\mathbf{r}-\mathbf{R})$ is the $j$'th Wannier function in unit cell $\mathbf{R}$. Then the spillage is again computed using Eqs.~(\ref{equa:def2}) and (\ref{equa:tboverlap}). 
This will be accurate as long as the WFs for the systems with and without SOC are chosen to be the same, or as similar as possible. As we shall see in the following section, the results from the Wannier basis match those of the direct plane-wave calculation very closely for the cases studied here. In the case of complex unit cells or supercells with many bands near the gap, it may be difficult to identify precisely which bands have been inverted by the SOC. In this case it may be helpful to define a valence-band-resolved spillage as $ \gamma_{n}(\mathbf{k})=[L(\mathbf{k})L^{\dagger}(\mathbf{k})]_{nn}, $ where $L_{nm}(\mathbf{k})\!=\!\ip{\psi_{n\kk}}{\tilde{\psi}_{m\kk}}$ is the overlap matrix between the occupied states with SOC and the unoccupied states without SOC (i.e., $n$ runs over the occupied bands of the spin-orbit-coupled system, and $m$ over the unoccupied bands of the system without SOC). Then the total spillage is $\gamma(\mathbf{k})\!=\!\sum_{n=1}^{n_\mathrm{occ}}\gamma_{n}(\mathbf{k})$. Similarly, $\bar{\gamma}_{m}=(L^{\dagger}L)_{mm}$ provides a conduction-band-resolved spillage. However, it should be noted that $\gamma_n$ and $\bar{\gamma}_m$ are not gauge-invariant; they will change under a unitary transformation among the occupied or unoccupied states. A natural gauge choice is the one associated with the singular-value decomposition $L\!=\!V\Sigma W^{\dagger}$. Transforming the sets of occupied and unoccupied states according to the unitary matrices $V$ and $W$ respectively, the overlap matrix between the transformed states is just $\Sigma$, which is real and diagonal. The columns of $V$ ($W$) corresponding to the leading singular values indicate which linear combinations of valence (conduction) states contribute the most to the total spillage. We leave the exploration of these refinements for a future study. \subsection{Relation to topological character} \label{sec:topo} Here we argue that the presence of non-trivial topological indices will be reflected in certain features of the spillage distribution in the BZ.
We first consider the relatively simple case in which the SOC-driven band inversion is associated with the crossing of highest valence and lowest conduction states belonging to two different irreducible representations (irreps) at a high-symmetry point $\mathbf{k}\!=\!\Lambda_0$ in the BZ. Since the states belonging to different irreps have no overlap with each other, the spillage at $\Lambda_0$ must be greater than or equal to the irrep dimension. In TR-invariant Bi$_2$Se$_3$, for example, the four states around the Fermi level at $\Gamma$ consist of two Kramers doublets of opposite parity. In this case the dimension of the irreps is two, so we expect a peak in $\gamma(\mathbf{k})$ centered at $\Gamma$ whose height is $\gamma_{\rm max}\!\ge\!2$. As we shall show in Sec.~\ref{sec:materials}, this is exactly what we find in Bi$_2$Se$_3$. Next, we argue that a correspondence between topological character and spillage should also remain valid for more general cases without special lattice symmetry. Let us first consider the case of CIs (i.e., with broken TR symmetry). We assume the Bloch functions $\psi_{n\kk}$ are those of a normal system with Chern number $C\!=\!0$, while $\tilde{\psi}_{n\kk}$ are topologically nontrivial with a nonzero Chern number $\widetilde{C}$. We argue that this implies the existence of at least one point in the BZ where the spillage is $\ge1$. If we assume the contrary, i.e., $\gamma(\mathbf{k})<1$ everywhere in the BZ, then the determinant of the overlap matrix of Eq.~(\ref{equa:Mmn}) between $\psi_{n\kk}$ and $\tilde{\psi}_{n\kk}$ obeys $\det(M_\mathbf{k})\neq 0$ everywhere, since a singular $M$ would imply $\gamma\ge1$. Because the system $\ket{\psi_{n\kk}}$ is topologically normal, we know it is possible to choose a smooth and periodic gauge for it, and we assume without loss of generality that this has been done.
But if $M_\mathbf{k}$ is nowhere singular, the $\ket{\psi_{n\kk}}$ can be used as ``trial functions'' to construct a smooth and periodic gauge for the $\ket{\tilde{\psi}_{n\kk}}$, as follows. At each $\mathbf{k}$, carry out a singular value decomposition to express $M=V^\dagger\Sigma W$ ($V$ and $W$ are unitary and $\Sigma$ is real positive diagonal), and then use the unitary matrix $V^\dagger W$ to transform the original $\tilde{\psi}_{n\kk}$ to a new set $\tilde{\psi}'_{n\kk}$. Then $M=V^\dagger\Sigma V$, i.e., it is Hermitian and positive definite. Intuitively, this means that a smooth and periodic gauge has been chosen for the states $\tilde{\psi}'_{n\kk}$ to make them ``maximally aligned'' with the states $\psi_{n\kk}$. But a smooth and periodic gauge is inconsistent with a nonzero Chern number, completing the proof by contradiction. Thus, if $\gamma<1$ everywhere in the BZ, then $M_{mn,\mathbf{k}}$ is nonsingular everywhere, and the system $\ket{\tilde{\psi}_{n\kk}}$ is normal. Conversely, a topological system must have $\gamma(\mathbf{k})\ge1$ somewhere in the BZ, which provides both a signal for the topological phase and an indication of where in the BZ the band inversion has occurred. For the TR-invariant $\mathbb{Z}_2$\ TIs, similar arguments can be put forward that work even in the absence of inversion symmetry. If the system of $\ket{\psi_{n\kk}}$ is in the $\mathbb{Z}_2$-even\ phase, one can always make a smooth gauge choice over the entire BZ that respects TR symmetry. In the $\mathbb{Z}_2$-odd\ case, however, such a gauge choice does not exist.\cite{fu-prb06,soluyanov-prb12} Therefore, $\det(M_\mathbf{k})$ must vanish somewhere in BZ, or else the smooth gauge could be transferred to the $\ket{\tilde{\psi}_{n\kk}}$, resulting in a contradiction. Due to the TR symmetry, $\det(M_\mathbf{k})=\det(M_{-\mathbf{k}})$, so one would generically expect $\gamma(\mathbf{k})\!\geq\!1$ at two points ($\mathbf{k}_0$ and $-\mathbf{k}_0$) in the BZ. 
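The singular-value-decomposition step in this argument is easy to verify numerically. The fragment below is an illustrative sketch of ours (using NumPy's $M=U\Sigma W^{\dagger}$ convention rather than the $V^{\dagger}\Sigma W$ form above): rotating the second set of states by the appropriate unitary makes the overlap matrix Hermitian and positive definite whenever $M$ is nonsingular.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# a generic (almost surely nonsingular) complex overlap matrix M
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# NumPy's SVD convention: M = U @ diag(s) @ Vh
U, s, Vh = np.linalg.svd(M)

# rotate the second set of states by R = Vh^dagger U^dagger,
# so the transformed overlap is M' = M @ R = U diag(s) U^dagger
R = Vh.conj().T @ U.conj().T
M_aligned = M @ R

herm_err = np.max(np.abs(M_aligned - M_aligned.conj().T))
eigs = np.linalg.eigvalsh(M_aligned)
print(herm_err)        # ~0: the aligned overlap is Hermitian
print(eigs.min() > 0)  # True: positive definite for nonsingular M
```

The eigenvalues of the aligned overlap are just the singular values of $M$, which is the sense in which the two sets of states are ``maximally aligned.''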
For the case of inversion-symmetric TIs, $\mathbf{k}_0$ and $-\mathbf{k}_0$ merge at one of the TRIM, the two spillages add up, and one expects $\gamma\ge2$ at one of the TRIM. In the following section, we numerically test and confirm the above arguments by applying the formalism to systems in different topological phases. \section{Applications} \subsection{Application to two-band Dirac Hamiltonian} \label{sec:dirac} As a warm-up exercise, we first apply the spillage formula to a minimal model of a band inversion in 2D $(k_x,k_y)$ space, namely a Dirac model at half filling as described by the Hamiltonian \begin{align} H\!=\!m(1-\lambda)\sigma_z+k_x\sigma_x+k_y\sigma_y \end{align} where $\sigma_j$ are Pauli matrices. Here $m$ is a mass and $\lambda$ is a control parameter that inverts the bands at $\lambda\!=\!1$. Physically, such a model may describe the low-energy physics in the vicinity of a band touching event associated with the transition from a normal to a quantum anomalous Hall insulator, or at one of the band touching events (at $\mathbf{k}_0$ or $-\mathbf{k}_0$) in the transition to a spin-Hall insulator. The energy spectrum of the above Hamiltonian is $E_{\pm}\!=\!\pm\sqrt{m^2(1-\lambda)^2+k_x^2+k_y^2}$, where the gap closes at $\lambda\!=\!1$ at $\Gamma$ ($k_x\!=\!k_y\!=\!0$). The spillage is just $\gamma(\lambda,\mathbf{k})\!=\!1-\vert\ip{\psi_{1\mathbf{k}}^{0}}{\psi_{1\mathbf{k}}^{\lambda}}\vert^2$, where $\ket{\psi_{1\mathbf{k}}^{0}}$ ($\ket{\psi_{1\mathbf{k}}^{\lambda}}$) is the occupied eigenstate at zero (non-zero) $\lambda$. \begin{figure} \centering \includegraphics[width=7cm]{spillage_diracmodel.eps} \caption{The spillage of the half-filled Dirac Hamiltonian as $\lambda$ increases from 0.4 to 1.9. } \label{fig:dirac} \end{figure} Figure \ref{fig:dirac} shows the spillage \textit{vs.} $k_x$ at $k_y\!=\!0$ as $\lambda$ is increased from 0.4 to 1.9. 
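These curves are cheap to reproduce numerically; the sketch below (our own illustration, with $m\!=\!1$) evaluates $\gamma(\lambda,\mathbf{k})$ from the occupied eigenvector of the $2\times2$ Hamiltonian and recovers the jump of the spillage at $\Gamma$ across the critical point.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def occ_state(kx, ky, lam, m=1.0):
    """Occupied (lower) eigenvector of H = m(1-lam)*sz + kx*sx + ky*sy."""
    H = m * (1.0 - lam) * sz + kx * sx + ky * sy
    _, vecs = np.linalg.eigh(H)       # eigenvalues in ascending order
    return vecs[:, 0]

def gamma(kx, ky, lam):
    """Spillage 1 - |<psi^0|psi^lam>|^2 of the half-filled Dirac model."""
    a = occ_state(kx, ky, 0.0)
    b = occ_state(kx, ky, lam)
    return 1.0 - abs(np.vdot(a, b)) ** 2

print(gamma(0.0, 0.0, 0.5))   # 0.0: no band inversion yet at Gamma
print(gamma(0.0, 0.0, 1.5))   # 1.0: bands inverted at Gamma
```

Scanning `gamma` over a grid of $(k_x,k_y)$ for a sequence of $\lambda$ values reproduces the evolution plotted in Fig.~\ref{fig:dirac}.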
When $\lambda\!=\!0.4$, the spillage is negligible almost everywhere, and is exactly zero at $\Gamma$. On the other hand, when $\lambda\!=\!0.99$, which is very close to the gap closure point, one finds two peaks of spillage emerging on either side of $\Gamma$, with a peak value approaching 0.5 as $\lambda\rightarrow 1$. As $\lambda$ passes through the critical point at $\lambda\!=\!1$, one finds that the spillage at $\Gamma$ jumps from 0 to 1, and then gradually spreads out in the BZ as $\lambda$ is increased further. This interesting behavior can be interpreted as follows. When $\lambda\!=\!0$, the $\sigma_z$ term dominates around $\Gamma$, so that the pseudospin is mostly along the $z$ direction around $\Gamma$. On the other hand, if $\lambda$ is very close to 1, the $\sigma_x$ and $\sigma_y$ terms dominate near (but not exactly at) $\Gamma$, forcing the pseudospin direction to point in the $(x, y)$ plane and resulting in a spillage of 1/2. However, the $\sigma_x$ and $\sigma_y$ terms vanish at $\Gamma$, which means the pseudospin has to point along the $\pm z$ direction. Therefore, when $\lambda\!<\!1$ ($\lambda\!>\!1$), the pseudospin is parallel (anti-parallel) to the pseudospin direction at $\lambda\!=\!0$, such that the spillage jumps from 0 to 1 as $\lambda$ passes through the critical point. \subsection{Application to the Kane-Mele model} \label{sec:kane-mele} The Kane-Mele model is a four-band TB model on a graphene lattice, including nearest-neighbor (NN) spin-independent hoppings and both NN and next-NN spin-dependent hoppings: \begin{align} H\!=\!& \sum_{\langle ij\rangle} t c^{\dagger}_{i} c_{j}+ \sum_{\langle\!\langle ij\rangle\!\rangle}i \lambda_\mathrm{so}\nu_{ij}c^{\dagger}_{i}s_z c_{j}\notag\\ & +\sum_{\langle ij\rangle}i\lambda_\mathrm{R} c^{\dagger}_{i} (\mathbf{s}\times\hat{\mathbf{d}}_{ij})_{z}c_{j} +\sum_{i}{\epsilon(-1)^{i}c^{\dagger}_{i}c_{i}} \,.
\label{equa:kane-mele} \end{align} Here spin is implicit, $t$ is the NN spin-independent hopping amplitude, $\lambda_\mathrm{so}$ is the strength of the next-NN non-spin-flip SOC, $\lambda_\mathrm{R}$ is the NN Rashba-like SOC amplitude, and $\epsilon$ is the magnitude of on-site energy, with signs $\pm 1$ for A and B sublattices respectively. Also, $\nu_{ij}\!=\!\pm 1$ with the sign depending on the chirality of the next-NN bond from site $i$ to $j$, and $\hat{\mathbf{{d}}}_{ij}$ is the unit vector pointing from site $i$ to its NN $j$. In this model, $\lambda_\mathrm{so}$ competes with $\lambda_\mathrm{R}$ and $\epsilon$, in the sense that $\lambda_\mathrm{so}$ tends to drive the system to the QSH phase while $\lambda_\mathrm{R}$ and $\epsilon$ tend to retain the trivial band topology. For simplicity, we first drop the Rashba coupling, so that spin is a good quantum number. The system is in the QSH phase when $3\sqrt{3}\lambda_\mathrm{so}\!>\!\epsilon$, and in the normal phase otherwise. Without the Rashba term, the Kane-Mele model can be considered as a superposition of two copies of the Haldane model with opposite Chern numbers.\cite{Haldane-model} If one calculates the 2D Chern numbers for spin-up and spin-down electrons separately, one would find that the two Chern numbers are $\pm1$ in the QSH phase. While the Haldane-model system goes from a normal insulator to a CI via a band inversion at either the K or K$'$ point, the Kane-Mele model transitions to the QSH state via simultaneous band inversions at both K and K$'$, but for opposite spins at these two points. \begin{figure} \centering \subfigure{ \includegraphics[width=7.0cm,bb=18 251 559 552]{KM_spinspillage.eps}} \subfigure{ \includegraphics[width=7.0cm,bb=44 256 536 544]{KM_spillage.eps}} \caption{Spin-orbit spillage of the Kane-Mele model in the QSH phase, with $t\!=\!1$, $\lambda_\mathrm{so}\!=\!0.1t$, and $\epsilon\!=\!0.1t$. 
(a) Spin-resolved spillage without Rashba coupling; solid (green) and dashed (red) lines denote spin-up and spin-down spillage. Inset shows $\Gamma$-M-K-M-K$^\prime$-$\Gamma$ path used here (blue) and K-$\Gamma$-M-K path used in Fig.~\ref{fig:bi} (magenta). (b) Total spillage without (solid line) and with (dashed line) Rashba coupling.} \label{fig:km} \end{figure} The SOC-induced spillage without the Rashba term is shown in Fig.~\ref{fig:km}(a). In this case the spins act independently, so the spin-up and spin-down spillages $\gamma_\sigma(\mathbf{k})= n_\mathrm{occ}/2-\sum_{m,n=1}^{n_\mathrm{occ}/2} \vert M_{n\sigma,m\sigma}(\mathbf{k})\vert^2$ (where $\sigma\!=\!\{\uparrow,\downarrow\}$) are shown separately. Clearly the spin-up band inversion at K is responsible for $\gamma_{\uparrow}\!=\!1$, and conversely at K$'$. The total spillage $\gamma(\mathbf{k})=\gamma_{\uparrow}(\mathbf{k})+ \gamma_{\downarrow}(\mathbf{k})$ is shown by the solid line in Fig.~\ref{fig:km}(b). The symmetry between the behavior at K and K$'$ has been restored by summing over spins. Note that the peak values are $\gamma=\!1$ exactly; the fact that they do not exceed one is an artifact of the simplicity of the model. It is also interesting to note that in the absence of time-reversal symmetry, the spin-resolved spillage is closely related to the van Vleck paramagnetism in spin-orbit coupled systems. When the Rashba coupling is included, as shown by the dashed line in Fig.~\ref{fig:km}(b), spin is no longer a good quantum number, so that a spin decomposition is not well-defined. As expected, adding the Rashba term does not significantly change the results;\cite{comment_rashba} one still finds that the spillage reaches unity at K and K$'$ as before, providing an indication of the spin-Hall phase. 
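For $\lambda_\mathrm{R}\!=\!0$ this behavior can be checked in a few lines, since each spin sector is then a two-band problem. The sketch below is our own construction (it assumes one particular sign convention for $\nu_{ij}$, which only fixes at which of K or K$'$ a given spin inverts) and reproduces $\gamma\!=\!1$ at K and $\gamma\!=\!0$ at $\Gamma$ for the parameters $t\!=\!1$, $\lambda_\mathrm{so}\!=\!0.1t$, $\epsilon\!=\!0.1t$ used above.

```python
import numpy as np

# NN and next-NN displacement vectors of the honeycomb lattice (unit NN distance)
d1 = np.array([(0.0, 1.0), (-np.sqrt(3) / 2, -0.5), (np.sqrt(3) / 2, -0.5)])
d2 = np.array([d1[1] - d1[2], d1[2] - d1[0], d1[0] - d1[1]])

def h_spin(k, s, t=1.0, lso=0.1, eps=0.1):
    """2x2 Bloch Hamiltonian of one spin sector (s = +1 or -1), lambda_R = 0."""
    f = t * np.sum(np.exp(1j * d1 @ k))        # NN hopping
    g = 2.0 * lso * np.sum(np.sin(d2 @ k))     # next-NN SOC, one nu_ij convention
    return np.array([[eps + s * g, f], [np.conj(f), -eps - s * g]])

def gamma_total(k):
    """gamma(k) = gamma_up + gamma_down, comparing lso = 0.1 with lso = 0."""
    out = 0.0
    for s in (+1, -1):
        v0 = np.linalg.eigh(h_spin(k, s, lso=0.0))[1][:, 0]   # occupied, no SOC
        v1 = np.linalg.eigh(h_spin(k, s))[1][:, 0]            # occupied, with SOC
        out += 1.0 - abs(np.vdot(v0, v1)) ** 2
    return out

K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
print(round(gamma_total(K), 6))                    # 1.0: one band inverted at K
print(round(abs(gamma_total(np.zeros(2))), 6))     # 0.0: no inversion at Gamma
```

At $\Gamma$ the next-NN term vanishes identically, so the two calculations coincide there; at K and K$'$ only one spin sector inverts, giving a total spillage of exactly one at each valley.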
\subsection{Application to Chern insulators} \label{sec:ci} We now consider the case of broken TR symmetry, so that the $\mathbb{Z}_2$\ index is no longer well-defined, but the possibility of CI phases appears. As discussed in Sec.~\ref{sec:intro}, SOC is important here as well. Here we study a buckled honeycomb Bi bilayer with a Zeeman field applied normal to the plane, which can be regarded as having been cut from a 3D Bi crystal on a $(111)$ plane. The Bi $(111)$ bilayer has been proposed as a candidate for QSH insulator.\cite{murakami-prl06} If a Zeeman field is further applied, it is possible to obtain CI phases with Chern numbers $C\!=\!1$ or $C\!=\!-2$.\cite{hongbin-prb12,hongbin-prb13} To describe this system we use a TB model based on Bi 6$s$ and 6$p$ orbitals, where the first-neighbor $ss$, $sp$, $pp\sigma$, and $pp\pi$ hoppings, as well as the second-neighbor $pp\sigma$ hoppings, are included. The hopping parameters are taken from a TB model for 3D bulk Bi.\cite{bi-tb} In order to obtain non-zero Chern numbers, an on-site $p$-shell SOC ($\lambda_{\rm SOC}$) and a Zeeman field ($H_z$) are further applied. It turns out that if $H_z$ is fixed at 0.8\,eV, then the phases with $C\!=\!-2$ and $+1$ are realized at $\lambda_{\rm SOC}\!=\!2.4$\,eV and 0.6\,eV respectively. If the SOC is completely turned off, $C\!=\!0$. \begin{figure} \centering \subfigure{\includegraphics[height=4.0cm,bb=50 256 539 540]{bi_spillage.eps}} \subfigure{\includegraphics[height=4.2cm,bb=1 233 625 566]{Bi_spillage_2Dcart.eps} } \caption{(a) Spin-orbit spillage of the Bi bilayer for $C\!=\!1$ (dashed blue) and $C\!=\!-2$ (solid red) phases, plotted along the K-$\Gamma$-M-K path (magenta path in inset of Fig.~\ref{fig:km}(a)). (b) Spillage for $C\!=\!1$ phase plotted in the 2D BZ ($k_x$ and $k_y$ in units of \text{\normalfont\AA}$^{-1}$). 
(c) Same for $C\!=\!-2$ phase.} \label{fig:bi} \end{figure} The spillage for the Bi bilayer is shown along a high-symmetry $k$-path in Fig.~\ref{fig:bi}(a), and as a distribution in the 2D BZ in Figs.~\ref{fig:bi}(b-c), for the two parameter sets giving the $C\!=\!1$ and $C\!=\!-2$ phases. In both cases the spillage distribution is concentrated at $\Gamma$, indicating a band inversion there, although it is much more sharply peaked in the $C\!=\!1$ case. Clearly the spillages provide a signature of the presence of a Chern-insulator phase, including the location of the band inversion and the magnitude (but not the sign) of the Chern number. Here again the peak values of the spillage are exactly equal to the magnitude of the Chern number. For more realistic systems with more bands included, the spillage can be expected to exceed these values slightly, but a clear correlation between the peak values of spillage and the Chern number is still expected. \subsection{Application to 3D topological insulators} \label{sec:materials} \begin{figure} \centering \includegraphics[width=6cm]{lattice.eps} \caption{(a) Lattice structure of Bi$_2$Se$_3$. (b) The BZ of Bi$_2$Se$_3$; the spillage and bandstructures shown in Fig.~\ref{fig:materials}(a) and Fig.~\ref{fig:proj} are plotted along the black path. } \label{fig:lattice} \end{figure} In this subsection we apply our formalism to realistic first-principles calculations of Bi$_2$Se$_3$, In$_2$Se$_3$\ and Sb$_2$Se$_3$. Bi$_2$Se$_3$\ is a well-known strong TI,\cite{zhang-np09} where the SOC-induced band inversion takes place at $\Gamma$. We also consider In$_2$Se$_3$\ and Sb$_2$Se$_3$\ in the same crystal structure (known as $\beta$ phase for In$_2$Se$_3$\ and not realized experimentally for Sb$_2$Se$_3$), which are theoretically predicted (and experimentally confirmed for In$_2$Se$_3$) to be trivial insulators. 
\cite{zhang-np09,oh-prl12,armitage1,jpl-prb13} Here it is interesting to note that even though Sb and In have very similar atomic SOC strength, the substitution of In atoms tends to drive Bi$_2$Se$_3$ into a trivial-insulator phase much faster than does Sb substitution, due to the existence of In $5s$ orbitals.\cite{jpl-prb13} As shown in Fig.~\ref{fig:lattice}, the considered structure is rhombohedral, with two cations and three Se atoms in the primitive unit cell. The five 2D monolayers are stacked in an \hbox{$A$-$B$-$C$-$A$-...} sequence along the (111) direction to form quintuple layers (QLs). Experimentally, the in-plane hexagonal parameters are $a\!=\!4.138$ and 4.05\,\AA, and the QL size is $c\!=\!9.547$ and 9.803\,\AA, for Bi$_2$Se$_3$\ and In$_2$Se$_3$\ respectively. In our calculations, we take the experimental lattice parameters of Bi$_2$Se$_3$\ and In$_2$Se$_3$, but relax their internal atomic coordinates. As for Sb$_2$Se$_3$, because its rhombohedral structure is not realized in nature, both the lattice parameters and atomic positions are relaxed. The ground state of rhombohedral Sb$_2$Se$_3$\ is predicted to be a trivial insulator with $a\!=\!4.11$\,\AA\ and $c\!=\!10.43$\,\AA. We use the \textsc{Quantum ESPRESSO} package\cite{QE-2009} to carry out first-principles calculations on these systems both with and without SOC. The PBE generalized-gradient approximation (GGA) is used for the exchange-correlation functional,\cite{pbe-1,pbe-2} and norm-conserving pseudopotentials are generated with the OPIUM package.\cite{opium-web,opium-paper} The energy cutoff is taken as 65\,Ry for In$_2$Se$_3$\ and 55\,Ry for Bi$_2$Se$_3$\ and Sb$_2$Se$_3$, with an 8$\times$8$\times$8 Monkhorst-Pack $\bf k$ mesh.\cite{monkhorst-pack} The wavefunctions defined in the plane-wave basis are extracted from these calculations and Eq.~(\ref{equa:plane-wave}) is applied to evaluate the spillage.
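Schematically, this post-processing step is just a few matrix products per $k$-point. The sketch below is our own illustration, with mock coefficient arrays standing in for the plane-wave coefficients extracted from the DFT code; it computes the spillage over a list of $k$-points and locates its maximum, mirroring the workflow used here.

```python
import numpy as np

def spillage_pw(c_soc, c_nosoc):
    """gamma(k) = n_occ - sum_mn |M_mn|^2, with M_mn the plane-wave overlap.

    c_soc, c_nosoc: (n_occ, n_G) coefficient arrays (orthonormal rows)
    for the occupied states with and without SOC at the same k.
    """
    M = c_soc.conj() @ c_nosoc.T
    return c_soc.shape[0] - np.sum(np.abs(M) ** 2)

# mock data: 4 plane-wave components, 2 occupied bands; at "Gamma" a
# Kramers pair is inverted (occupied/unoccupied subspaces swapped),
# while elsewhere the two calculations agree.
e = np.eye(4, dtype=complex)
kpts = ["Gamma", "M", "K"]
c_nosoc = {k: e[:2] for k in kpts}
c_soc = {"Gamma": e[2:], "M": e[:2], "K": e[:2]}

gam = {k: spillage_pw(c_soc[k], c_nosoc[k]) for k in kpts}
print(max(gam, key=gam.get), max(gam.values()))   # Gamma 2.0
```

With real data, the `(n_occ, n_G)` arrays would simply be filled with the Fourier coefficients of the occupied Bloch states at each point of the $k$-mesh.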
As mentioned in Sec.~\ref{sec:def}, the spillage can also be calculated in the Wannier basis. Starting from the first-principles calculations, we use the \textsc{Wannier90} package\cite{wannier90} to construct Wannier functions (WFs) and a corresponding realistic TB model\cite{comment_wannier90} for each of the three materials. The basis WFs are constructed by projecting 30 atomic $p$ trial orbitals onto the Bloch subspace of $p$-like bands to generate a 30-band spinor model for Bi$_2$Se$_3$\ and Sb$_2$Se$_3$, whereas four additional In $5s$ projectors and bands are included in the model for In$_2$Se$_3$. So that they retain their atomic-like identity as much as possible, the projected WFs are not optimized to minimize the spread functional.\cite{MLWF-rmp} We find that the WFs generated by this projection method are almost the same for the systems with and without SOC, so that the matrix elements $M_{mn}(\mathbf{k})$ defined in Sec.~\ref{sec:def} can be evaluated with good accuracy. \begin{figure} \centering \subfigure{ \includegraphics[height=4.7cm,bb=34 236 551 556] {spillage_3.eps}} \subfigure{\includegraphics[width=7.0cm,bb=86 246 515 545]{bi2se3_spillage2D.eps} } \caption{(a) Spin-orbit spillage of rhombohedral Bi$_2$Se$_3$, Sb$_2$Se$_3$\ and In$_2$Se$_3$\ as indicated by blue, green and red lines respectively. Solid (dashed) lines show the spillage computed from direct plane-wave (Wannier-based) calculations. (b) Spillage of Bi$_2$Se$_3$\ in the $(k_x,k_y)$ plane at $k_z\!=\!0$ (units of \text{\normalfont\AA}$^{-1}$).} \label{fig:materials} \end{figure} The spillage from the direct plane-wave calculations is shown as the solid lines in Fig.~\ref{fig:materials}(a). For Bi$_2$Se$_3$, the spillage $\gamma(\mathbf{k})$ has a peak value of 2.12 at $\Gamma$, which is slightly larger than 2, indicating that two Kramers degenerate bands at $\Gamma$ have been inverted by SOC.
On the other hand, the effect of SOC in In$_2$Se$_3$\ and Sb$_2$Se$_3$\ seems to be negligible everywhere in the BZ, which is consistent with the fact that they are both trivial insulators. \begin{figure} \centering \includegraphics[width=8.5cm,bb=33 252 579 555]{proj_band.eps} \caption{ (a) Wannier-interpolated bandstructure of Sb$_2$Se$_3$. (b) Same for Bi$_2$Se$_3$. Color coding indicates weight of Sb $5p$ or Bi $6p$ orbitals. } \label{fig:proj} \end{figure} The calculations carried out in the Wannier basis are shown by the dashed lines in Fig.~\ref{fig:materials}(a). The spillage is typically slightly larger for the direct plane-wave calculations, since the fact that the WFs have a slightly different plane-wave representation with and without SOC is not taken into account in the Wannier-based calculations. Still, the qualitative features are the same, showing that the Wannier-based approach can successfully provide the same kind of information about the nature and location of the topological band inversion. In Fig.~\ref{fig:materials}(b) we also show the spillage of Bi$_2$Se$_3$\ in the ($k_x, k_y$) plane at $k_z\!=\!0$, as calculated in the Wannier basis, which again indicates a highly localized band inversion near $\Gamma$ and is fully consistent with the expected picture of the band inversion in Bi$_2$Se$_3$. To see the band inversion from another perspective, we plot in Fig.~\ref{fig:proj} the bulk bandstructures of Sb$_2$Se$_3$\ and Bi$_2$Se$_3$\ projected onto Sb $5p$ and Bi $6p$ orbitals respectively. It is clear that for Sb$_2$Se$_3$, the Sb $5p$ orbitals are almost all concentrated in the conduction bands, whereas in Bi$_2$Se$_3$\ there is a localized region around $\Gamma$ where the corresponding Bi $6p$ orbitals contribute mostly to the top valence band. This is precisely the region of the band inversion corresponding to the peak at $\Gamma$ in Fig.~\ref{fig:materials}. 
\section{Summary} To summarize, we have introduced the SOC-induced spillage $\gamma(\mathbf{k})$ as a useful quantitative tool for evaluating the degree of band inversion driven by SOC and mapping it as a function of $\mathbf{k}$ in the BZ. We have applied this method to the two-band Dirac model, the 2D Kane-Mele model and a tight-binding model of a Bi bilayer with applied Zeeman field, as well as to realistic materials including both trivial and topological insulators. A clear correspondence between non-trivial topological indices and non-trivial spillage distributions is evident. In the two-band Dirac model, one observes interesting behavior in the distribution of spillage through a topological phase transition process. In the Kane-Mele model, one gets two peaks of spillage at K and K$'$ with the peak value of 1, which indicates that a single band is inverted at these two points corresponding to an odd 2D $\mathbb{Z}_2$\ index. In the Bi bilayer with applied Zeeman field, a peak of spillage shows up at $\Gamma$, with the peak value corresponding to the absolute value of the Chern number. In Bi$_2$Se$_3$, the spillage is slightly greater than 2 at one of the TRIM, namely $\Gamma$, implying that two bands are inverted by SOC there and signaling the presence of a nontrivial strong $\mathbb{Z}_2$\ index. As mentioned above, other methods exist for the direct computation of topological Chern and $\mathbb{Z}_2$\ indices, with or without inversion symmetry,\cite{fukui-jpsj05,fu-prb07,soluyanov-prb11} and we still recommend these if a direct and definitive determination of the topological indices is needed. However, the present spillage-based approach has the advantage of providing a BZ map of the strength, position, and degree of localization of the band inversion responsible for the topological character, thus giving valuable physical intuition about the origin of the topological properties of the material in question. 
In addition, compared with direct methods for topological index calculation, the spillage calculation only requires the evaluation of overlaps between two wavefunctions at the same $\mathbf{k}$ point, which is easy to implement and numerically very efficient. Therefore, it is our hope that the calculation of SOC spillage will prove to be a widely useful tool that can be applied both for high-throughput screening for topological materials and for obtaining a deeper understanding of the critical features of the bandstructures in known topological materials. \acknowledgments This work was supported by NSF Grant DMR-10-05838. We appreciate valuable discussions with Hongbin Zhang and Huaqing Huang.
Erin and Chris chose View Point and Buckhorn Creek for their beautiful Wedding. The day was like a fairytale, and we were so honored to capture all the Wedding Photography! The first look started the day, and both looked amazing. We loved the looks on both of their faces when they saw each other for the first time. We then went right into formals around the gorgeous property. The details were superb, including the custom arch at the ceremony. Our friends from Jumping Jukebox kept the party bumping all evening, leading up to the Grand Exit, where sparklers and a limo awaited their departure. Enjoy some of our favorites of the day!
## PhD thesis, chapter 5

2020-02-16

This is the fifth post in a series of posts about my PhD thesis. In the fifth and final chapter of my thesis, we look at how boundary conditions can be weakly imposed on the Helmholtz equation.

### Analysis

As in chapter 4, we must adapt the analysis of chapter 3 to apply to Helmholtz problems. The boundary operators for the Helmholtz equation satisfy less strong conditions than the operators for Laplace's equation (for Laplace's equation, the operators satisfy a condition called coercivity; for Helmholtz, the operators satisfy a weaker condition called Gårding's inequality), making proving results about Helmholtz problems harder. After some work, we are able to prove an a priori error bound (with $$a=\tfrac32$$ for the spaces we use):

$$\left\|u-u_h\right\|\leqslant ch^{a}\left\|u\right\|$$

### Numerical results

As in the previous chapters, we use Bempp to show that computations with this method match the theory.

The error of our approximate solutions of a Dirichlet (left) and mixed Dirichlet–Neumann problem in the exterior of a sphere with meshes with different values of $$h$$. The dashed lines show order $$\tfrac32$$ convergence.

### Wave scattering

Boundary element methods are often used to solve Helmholtz wave scattering problems. These are problems in which a sound wave is travelling through a medium (eg the air), then hits an object: you want to know what the sound wave that scatters off the object looks like.

If there are multiple objects that the wave is scattering off, the boundary element method formulation can get quite complicated. When using weak imposition, the formulation is simpler: this is one advantage of this method. The following diagram shows a sound wave scattering off a mixture of sound-hard and sound-soft spheres. Sound-hard objects reflect sound well, while sound-soft objects absorb it well.

A sound wave scattering off a mixture of sound-hard (white) and sound-soft (black) spheres.

If you are trying to design something with particular properties (for example, a barrier that absorbs sound), you may want to solve lots of wave scattering problems on some objects, with various values taken for their reflective properties. This type of problem is often called an inverse problem. For this type of problem, weakly imposing boundary conditions has advantages: the discretisation of the Calderón projector can be reused for each problem, and only the terms due to the weakly imposed boundary conditions need to be recalculated. This is an advantage, as the boundary condition terms are much less expensive (ie they use much less time and memory) to calculate than the Calderón term that is reused.

This concludes chapter 5, the final chapter of my thesis. Why not celebrate reaching the end by cracking open the following figure before reading the concluding blog post.

An acoustic wave scattering off a sound-hard champagne bottle and a sound-soft cork.

## PhD thesis, chapter 4

2020-02-13

This is the fourth post in a series of posts about my PhD thesis. The fourth chapter of my thesis looks at weakly imposing Signorini boundary conditions on the boundary element method.

### Signorini conditions

Signorini boundary conditions are composed of the following three conditions on the boundary:

$$\begin{align*} u &\leqslant g\\ \frac{\partial u}{\partial n} &\leqslant \psi\\ \left(\frac{\partial u}{\partial n} -\psi\right)(u-g) &=0 \end{align*}$$

In these equations, $$u$$ is the solution, $$\frac{\partial u}{\partial n}$$ is the derivative of the solution in the direction normal to the boundary, and $$g$$ and $$\psi$$ are (known) functions.

These conditions model an object that is coming into contact with a surface: imagine an object that is being pushed upwards towards a surface. $$u$$ is the height of the object at each point; $$\frac{\partial u}{\partial n}$$ is the speed the object is moving upwards at each point; $$g$$ is the height of the surface; and $$\psi$$ is the speed at which the upwards force will cause the object to move. We now consider the meaning of each of the three conditions.

The first condition ($$u \leqslant g$$) says that the height of the object is less than or equal to the height of the surface. Or in other words, no part of the object has been pushed through the surface. If you've ever picked up a real object, you will see that this is sensible.

The second condition ($$\frac{\partial u}{\partial n} \leqslant \psi$$) says that the speed of the object is less than or equal to the speed that the upwards force will cause. Or in other words, there is no extra hidden force that could cause the object to move faster.

The final condition ($$\left(\frac{\partial u}{\partial n} -\psi\right)(u-g)=0$$) says that either $$u=g$$ or $$\frac{\partial u}{\partial n}=\psi$$. Or in other words, either the object is touching the surface, or it is not and so it is travelling at the speed caused by the force.

### Non-linear problems

It is possible to rewrite these conditions as the following, where $$c$$ is any positive constant:

$$u-g=\left[u-g + c\left(\frac{\partial u}{\partial n}-\psi\right)\right]_-$$

The term $$\left[a\right]_-$$ is equal to $$a$$ if $$a$$ is negative or 0 if $$a$$ is positive (ie $$\left[a\right]_-=\min(0,a)$$). If you think about what this will be equal to if $$u=g$$, $$u\lt g$$, $$\frac{\partial u}{\partial n}=\psi$$, and $$\frac{\partial u}{\partial n}\lt\psi$$, you can convince yourself that it is equivalent to the three Signorini conditions.

Writing the condition like this is helpful, as it can easily be added into the matrix system arising from the boundary element method to weakly impose it. There is, however, a complication: due to the $$[\cdot]_-$$ operator, the term we add on is non-linear and cannot be represented as a matrix. We therefore need to do a little more than simply use our favourite matrix solver to solve this problem.

To solve this non-linear system, we use an iterative approach: first make a guess at what the solution might be (eg guess that $$u=0$$). We then use this guess to calculate the value of the non-linear term, then solve the linear system obtained when this value is substituted in. This gives us a second guess of the solution: we can follow the same approach to obtain a third guess. And a fourth. And a fifth. We continue until one of our guesses is very close to the following guess, and we have an approximation of the solution.

### Analysis

After deriving formulations for weakly imposed Signorini conditions on the boundary element method, we proceed as we did in chapter 3 and analyse the method. The analysis in chapter 4 differs from that in chapter 3 as the system in this chapter is non-linear. The final result, however, closely resembles the results in chapter 3: we obtain an a priori error bound:

$$\left\|u-u_h\right\|\leqslant ch^{a}\left\|u\right\|$$

As in chapter 3, the value of the constant $$a$$ for the spaces we use is $$\tfrac32$$.

### Numerical results

As in chapter 3, we used Bempp to run some numerical experiments to demonstrate the performance of this method.

The error of our approximate solutions of a Signorini problem on the interior of a sphere with meshes with different values of $$h$$ for two choices of combinations of discrete spaces. The dashed lines show order $$\tfrac32$$ convergence.

These results are for two different combinations of the discrete spaces we looked at in chapter 2. The plot on the left shows the expected order $$\tfrac32$$. The plot on the right, however, shows a lower convergence than our a priori error bound predicted. This is due to the matrices obtained when using this combination of spaces being ill-conditioned, and so our solver struggles to find a good solution. This ill-conditioning is worse for smaller values of $$h$$, which is why the plot starts at around the expected order of convergence, but then the convergence tails off.

These results conclude chapter 4 of my thesis. Why not take a break and snack on the following figure before reading on.

The solution of a mixed Dirichlet–Signorini problem on the interior of an apple, solved using weakly imposed boundary conditions.

## PhD thesis, chapter 3

2020-02-11

This is the third post in a series of posts about my PhD thesis. In the third chapter of my thesis, we look at how boundary conditions can be weakly imposed when using the boundary element method. Weak imposition of boundary conditions is a fairly popular approach when using the finite element method, but our application of it to the boundary element method, and our analysis of it, is new. But before we can look at this, we must first look at what boundary conditions are and what weak imposition means.

### Boundary conditions

A boundary condition comes alongside the PDE as part of the problem we are trying to solve. As the name suggests, it is a condition on the boundary: it tells us what the solution to the problem will do at the edge of the region we are solving the problem in. For example, if we are solving a problem involving sound waves hitting an object, the boundary condition could tell us that the object reflects all the sound, or absorbs all the sound, or does something in between these two.

The most commonly used boundary conditions are Dirichlet conditions, Neumann conditions and Robin conditions. Dirichlet conditions say that the solution has a certain value on the boundary; Neumann conditions say that the derivative of the solution has a certain value on the boundary; Robin conditions say that the solution and its derivative are in some way related to each other (eg one is two times the other).

### Imposing boundary conditions

Without boundary conditions, the PDE will have many solutions. This is analogous to the following matrix problem, or the equivalent system of simultaneous equations:

$$\begin{bmatrix} 1&2&0\\ 3&1&1\\ 4&3&1 \end{bmatrix}\mathbf{x} = \begin{pmatrix} 3\\4\\7 \end{pmatrix} \qquad\text{or}\qquad \begin{array}{c} 1x+2y+0z=3\\ 3x+1y+1z=4\\ 4x+3y+1z=7 \end{array}$$

This system has an infinite number of solutions: for any number $$a$$, $$x=a$$, $$y=\tfrac12(3-a)$$, $$z=4-\tfrac52a-\tfrac32$$ is a solution.

A boundary condition is analogous to adding an extra condition to give this system a unique solution, for example $$x=a=1$$. The usual way of imposing a boundary condition is to substitute the condition into our equations. In this example we would get:

$$\begin{bmatrix} 2&0\\ 1&1\\ 3&1 \end{bmatrix}\mathbf{y} = \begin{pmatrix} 2\\1\\3 \end{pmatrix} \qquad\text{or}\qquad \begin{array}{c} 2y+0z=2\\ 1y+1z=1\\ 3y+1z=3 \end{array}$$

We can then remove one of these equations to leave a square, invertible matrix. For example, dropping the middle equation gives:

$$\begin{bmatrix} 2&0\\ 3&1 \end{bmatrix}\mathbf{y} = \begin{pmatrix} 2\\3 \end{pmatrix} \qquad\text{or}\qquad \begin{array}{c} 2y+0z=2\\ 3y+1z=3 \end{array}$$

This problem now has one unique solution ($$y=1$$, $$z=0$$).

### Weakly imposing boundary conditions

To weakly impose a boundary condition, a different approach is taken: instead of substituting (for example) $$x=1$$ into our system, we add $$x$$ to one side of an equation and we add $$1$$ to the other side. Doing this to the first equation gives:

$$\begin{bmatrix} 2&2&0\\ 3&1&1\\ 4&3&1 \end{bmatrix}\mathbf{x} = \begin{pmatrix} 4\\4\\7 \end{pmatrix} \qquad\text{or}\qquad \begin{array}{c} 2x+2y+0z=4\\ 3x+1y+1z=4\\ 4x+3y+1z=7 \end{array}$$

This system now has one unique solution ($$x=1$$, $$y=1$$, $$z=0$$).

In this example, weakly imposing the boundary condition seems worse than substituting, as it leads to a larger problem which will take longer to solve. This is true, but if you had a more complicated condition (eg $$\sin x=0.5$$ or $$\max(x,y)=2$$), it is not always possible to write the condition in a nice way that can be substituted in, but it is easy to weakly impose the condition (although the result will not always be a linear matrix system, but more on that in chapter 4).

### Analysis

In the third chapter of my thesis, we wrote out the derivations of formulations of the weak imposition of Dirichlet, Neumann, mixed Dirichlet–Neumann, and Robin conditions on Laplace's equation: we limited our work in this chapter to Laplace's equation as it is easier to analyse than the Helmholtz equation.

Once the formulations were derived, we proved some results about them: the main result that this analysis leads to is the proof of a priori error bounds. An a priori error bound tells you how big the difference between your approximation and the actual solution will be. These bounds are called a priori as they can be calculated before you solve the problem (as opposed to a posteriori error bounds that are calculated after solving the problem to give you an idea of how accurate you are and which parts of the solution are more or less accurate). Proving these bounds is important, as proving this kind of bound shows that your method will give a good approximation of the actual solution.

A priori error bounds look like this:

$$\left\|u-u_h\right\|\leqslant ch^{a}\left\|u\right\|$$

In this equation, $$u$$ is the solution of the actual PDE problem; $$u_h$$ is our approximation; the $$h$$ that appears in $$u_h$$ and on the right-hand side is the length of the longest edge in the mesh we are using; and $$c$$ and $$a$$ are positive constants. The vertical lines $$\|\cdot\|$$ are a measurement of the size of a function.

Overall, the equation says that the size of the difference between the solution and our approximation (ie the error) is smaller than a constant times $$h^a$$ times the size of the solution. Because $$a$$ is positive, $$h^a$$ gets smaller as $$h$$ gets smaller, and so if we make the triangles in our mesh smaller (and so have more of them), we will see our error getting smaller.

The value of the constant $$a$$ depends on the choices of discretisation spaces used: using the spaces in the previous chapter gives $$a$$ equal to $$\tfrac32$$.

### Numerical results

Using Bempp, we approximated the solution on a series of meshes with different values of $$h$$ to check that we do indeed get order $$\tfrac32$$ convergence. By plotting $$\log\left(\left\|u-u_h\right\|\right)$$ against $$\log h$$, we obtain a graph with gradient $$a$$ and can easily compare this gradient to a gradient of $$\tfrac32$$. Here are a couple of our results:

The errors of a Dirichlet problem on a cube (left) and a Robin problem on a sphere (right) as $$h$$ is decreased. The dashed lines show order $$\tfrac32$$ convergence.

Note that the $$\log h$$ axis is reversed, as I find these plots easier to interpret this way round. Pleasingly, in these numerical experiments, the order of convergence agrees with the a priori bounds that we proved in this chapter.

These results conclude chapter 3 of my thesis. Why not take a break and refill the following figure with hot liquid before reading on.

The solution of a mixed Dirichlet–Neumann problem on the interior of a mug, solved using weakly imposed boundary conditions.
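The 3×3 weak-imposition example from the chapter 3 post can be checked numerically. This sketch (in Python/NumPy; not part of the original post) adds the condition $$x=1$$ weakly and solves the resulting system:

```python
import numpy as np

# The system x + 2y = 3, 3x + y + z = 4, 4x + 3y + z = 7.
# It is singular (row 3 = row 1 + row 2, and 3 + 4 = 7), so it has
# infinitely many solutions until an extra condition is imposed.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [4.0, 3.0, 1.0]])
b = np.array([3.0, 4.0, 7.0])

# Weakly impose x = 1: add x to the left-hand side of the first
# equation and add 1 to its right-hand side.
A_weak = A.copy()
A_weak[0, 0] += 1.0
b_weak = b.copy()
b_weak[0] += 1.0

# The modified matrix is invertible, so this gives the unique
# solution (x, y, z) = (1, 1, 0).
solution = np.linalg.solve(A_weak, b_weak)
```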
Under the Table Tennis is Tim Fite's ninth album. Like a number of his previous albums, it was released free of charge on his website.

Track listing
1. "For-Closure" (2:57)
2. "Oversight" (1:09)
3. "Someone Threw the Baby Out" (4:20)
4. "Not Covred" (3:08)
5. "Jobs" (2:57)
6. "ExExEx" (1:57)
7. "No Notice" (2:26)
8. "Never Lay Down" (3:46)
9. "Money Back" (3:36)
10. "Napkin" (1:38)
11. "Go Sell It On the Mountain" (3:19)
12. "WYNPM" (3:17)
13. "Support Tim Fite" (1:28)
14. "We Didn't Warn You" (3:29)
Q: How to use multiple replace in R without changing the original dataset? I have a dataset where I need to replace, for one of the variables, all the values above the 0.99 percentile and below 0 with NA. Since I need to plot multiple variables, I am trying to create a template where I can just input the variables I need to plot and then have it saved without changing the original dataset, since I need to make different kinds of graphs. How do I nest two replace functions though?

na.omit(replace(data$Sodio, which(data$Sodio < 0), NA))

This is the first one I used, but I also need to replace the values above this number:

quantile(data$Sodio, probs = c(0.99), na.rm = TRUE)

So I'd need something like:

na.omit(replace(data$Sodio, which(data$Sodio > quantile(data$Sodio, probs = c(0.99), na.rm = TRUE)), NA))

Is it possible to just write one expression and achieve both?

A: You can combine the two conditions with OR (|):

new_data <- transform(data, Sodio = replace(Sodio, Sodio > quantile(Sodio, probs = 0.99, na.rm = TRUE) | Sodio < 0, NA))
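For comparison, the same idea (mask values that are negative or above the 99th percentile, returning a new object so the original data are untouched) can be sketched in Python/pandas. This translation is mine, not part of the original answer; the column name `Sodio` is kept from the question.

```python
import pandas as pd

# Toy data standing in for the question's dataset.
data = pd.DataFrame({"Sodio": [5.0, -2.0, 7.0, 1000.0, 3.0]})

# Upper cutoff: the 0.99 quantile, ignoring missing values.
cutoff = data["Sodio"].quantile(0.99)

# mask() sets entries matching the condition to NaN and returns a new
# Series, so the original data frame is left unchanged.
sodio_clean = data["Sodio"].mask((data["Sodio"] < 0) | (data["Sodio"] > cutoff))
```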
package com.seagate.imageadapter.adapters; import android.content.Context; import android.view.View; import android.view.ViewGroup; import com.facebook.drawee.backends.pipeline.Fresco; import com.facebook.imagepipeline.core.ImagePipelineConfig; import com.seagate.imageadapter.holders.BaseViewHolder; import com.seagate.imageadapter.holders.FrescoHolder; import com.seagate.imageadapter.instrumentation.InstrumentedDraweeView; import com.seagate.imageadapter.instrumentation.PerfListener; /** * RecyclerView Adapter for Fresco */ public class FrescoAdapter extends Adapter { public FrescoAdapter(Context context, PerfListener perfListener, ImagePipelineConfig imagePipelineConfig, Delegate ad) { super(context, perfListener, ad); Fresco.initialize(context, imagePipelineConfig); } @Override public BaseViewHolder<?> onCreateViewHolder(ViewGroup parent, int viewType) { View holderView = getDelegate().getHolderView(parent, viewType); final InstrumentedDraweeView instrView = (InstrumentedDraweeView) super.getInstrumentedView(holderView); return new FrescoHolder(getContext(), holderView, instrView, getPerfListener()); } @Override public void dispose() { Fresco.shutDown(); } }
\section*{Introduction} Experiments that detect the physical contacts between DNA loci in the nucleus, such as Hi-C~\cite{Lieberman-Aiden2009,Rao2014}, show that DNA in the nucleus is not a randomly folded polymer. Rather, across cell types and organisms, Hi-C experiments reveal that chromosomes are built up by a network of three dimensional (3D) compartments. At the mega($10^6$)-base-pair scale, two types of coexisting structures stand out. In one, all chromosome loci belong to one of the two so-called A/B compartments, where the chromatin in one compartment is generally more open, accessible, and actively transcribed than the other. In the second type, linear subsections of the genome assemble into topological domains~\cite{Dixon2012,Nora2012}, often referred to as topologically associated domains (TADs)~\cite{Dixon2012,Nora2012,Rao2014}. Plotting Hi-C data as a heat map, TADs show up as local regions with sharp borders with more internal than external contacts. The positions of these borders are correlated with several genetic processes, such as transcription, localization of some epigenetic marks, and DNA-binding profiles of several proteins---most notably CTCF and cohesin~\cite{Dixon2012,Nora2012}. The methods that detect TADs are not the same as those that find A/B compartments. Therefore, TADs and A/B compartments are treated as different 3D structures that are only weakly related to each other. Just as for TADs, there are several algorithms tailored for detecting A/B compartments~\cite{Lieberman-Aiden2009,dixon2015chromatin}, each with their strengths and weaknesses. To algorithmically detect TAD-like structures, there exists by now a menagerie of network~\cite{Boulos2013,Cabreros2015,YXRWang2017,Sarnataro2017,Belyaeva2017} and clustering approaches~\cite{yu2017identifying,weinreb2015identification,haddad2017ic,KKYan2017,Norton2018,Ball2011,Gopalan2013}. Arguably these methods yield overlapping results, but it is unclear by how much. 
In particular, some methods cannot deal with TAD-within-TAD hierarchies that become apparent when zooming in on TADs in highly resolved Hi-C maps. This means that there is no universal definition of what a TAD really is. Some network approaches are based on community detection methods that are related to what we use here. In Cabreros \emph{et al.}~\cite{Cabreros2015}, the authors suggest a method based on the stochastic block model~\cite{Ball2011,Gopalan2013}, which is another side of the network community detection field compared to the modularity maximization method that we use here (but as shown recently~\cite{Newman2016}, they are connected). However, there are some limitations in their approach. For example, they binarize the Hi-C data (`no connection' or `connection', with a rather arbitrarily chosen threshold), thereby discarding contact frequency variations in the Hi-C data. Furthermore, there is no comparative study connecting their communities to biological factors or any mechanistic models. Wang \emph{et al}.~\cite{YXRWang2017} take a step in this direction, but the method to detect the TADs itself relies on biological factor data and nontrivial threshold criteria. To overcome some of these problems, we start by acknowledging that the chromosomes have a richer 3D organization than simply TADs and A/B compartments. These are just two examples. To capture this, we develop a network-based method that allows us to scan through 3D structures on all scales with a resolution parameter. In particular, our approach is based on the GenLouvain method, originally designed for network community detection. For a specific value of the resolution parameter, the method finds the optimal community structure with respect to a null model of the network that has to be specified beforehand. Based on the physics of compact polymer globules, we put forward a null model that is consistent with the average contact probabilities in real Hi-C data~\cite{Lieberman-Aiden2009}.
This goes beyond previous Louvain-like studies~\cite{KKYan2017,Norton2018} that treat the Hi-C data as a network with random connections. Furthermore, most studies, such as Yan \emph{et al}. and Norton \emph{et al}.~\cite{KKYan2017,Norton2018}, treat TADs as linear contiguous sequences of chromatin. This restriction overrides the GenLouvain algorithm's ability to find the (not necessarily contiguous) optimal community structure in the data set~\cite{Porter2009,Fortunato2010}. We remove this restriction in our study. Therefore, to reduce confusion, we will not use the term TAD, but rather the \emph{3D community} for the cluster of nodes that comes out of the GenLouvain algorithm, since they are not necessarily linear contiguous sequences. \section*{Methods} We use the GenLouvain algorithm~\cite{GenLouvain} to detect communities in the Hi-C maps (this is an extension of the original Louvain method~\cite{Blondel2008}). This algorithm offers the possibility to find communities at several scales with a single resolution parameter $\gamma$. In contrast to other methods with similar features, the spectrum of communities that we detect is not necessarily hierarchical or nested, as in e.g. Wang \emph{et al}., Fraser \emph{et al}., and Bianco \emph{et al}.~\cite{YXRWang2017,Fraser2015,Bianco2017}. Instead, two different values of $\gamma$ give two different collections of communities, and these do not necessarily have anything to do with each other. To find the communities, the GenLouvain method tries putting the nodes into different communities to maximize the so-called modularity function $Q$. This function quantifies how much denser the connections are within the communities than they would be in a particular type of network, say a random network.
GenLouvain's modularity function is \begin{equation} Q = \frac{1}{2m} \sum_{i \ne j} \left[ \left( A_{ij} - \gamma P_{ij}^{\rm null} \right) \delta(g_i,g_j) \right ] \,, \label{eq:modularity} \end{equation} where the sum runs over all pairs of nodes in the network, in our case genomic loci, and $A_{ij}$ is the number of contacts between nodes $i$ and $j$ ($A_{ij}$ is the adjacency matrix in standard network terminology). The resolution parameter $\gamma$ controls the overall scale of communities we would like to detect, where larger values of $\gamma$ correspond to smaller communities. Furthermore, $P_{ij}^{\rm null}$ is the null-model term that is network-type-specific, and the difference $A_{ij} - P_{ij}^{\rm null}$ therefore measures how strongly nodes $i$ and $j$ are connected in the real network, compared to how strongly we expect them to be given $P_{ij}^{\rm null}$. Finally, $\delta(g_i,g_j)$ is the Kronecker delta that is unity only if nodes $i$ and $j$ belong to the same community (otherwise it is zero), and $m$ is a normalization factor so that $Q$ goes between $-1$ and $+1$. One of the most popular choices for $P_{ij}^{\rm null}$ is the Newman-Girvan (NG) null-model term for a random network, $P_{ij}^{\mathrm{NG}} = k_i k_j/(2m)$~\cite{Newman2004}. For an unweighted network where $A_{ij}$ is either zero or one, $k_i$ and $k_j$ are the number of links for nodes $i$ and $j$ ($k_i = \sum_j A_{ij}$). Simply put, the NG null-model assumes that the probability that $i$ and $j$ are connected is proportional to the product of their number of links. The same interpretation holds for weighted networks where $k_i$ becomes the sum of weights on the edges connected to $i$ (``strength'' in network terminology). However, the NG null-model is too rough an approximation to find communities in Hi-C maps, because it does not obey the well-established contact patterns that we know exist in DNA, or in fact any polymer system. 
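To make Eq.~\eqref{eq:modularity} concrete, the following is a minimal, illustrative sketch (our own code, not part of the GenLouvain package) that evaluates $Q$ for a given partition under the NG null model. GenLouvain itself additionally searches over partitions to maximize $Q$, which this sketch does not attempt.

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Evaluate the modularity Q of a partition under the NG null model.

    A      : symmetric weighted adjacency matrix (contact map)
    labels : community label g_i for each node
    gamma  : resolution parameter (larger -> smaller communities)
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                 # node strengths k_i = sum_j A_ij
    two_m = k.sum()                   # 2m: total edge weight, both directions
    P_ng = np.outer(k, k) / two_m     # NG null model: k_i k_j / (2m)
    same = np.equal.outer(labels, labels)   # delta(g_i, g_j)
    np.fill_diagonal(same, False)     # the sum in Eq. (1) runs over i != j
    return ((A - gamma * P_ng)[same]).sum() / two_m
```

For two disconnected triangles partitioned into their natural communities, this evaluates to $Q = 2/3$ at $\gamma = 1$ (the familiar value $1/2$ arises only if the $i=j$ terms are included, which Eq.~\eqref{eq:modularity} excludes).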
For example, DNA is a long polymer where nodes are arranged in a linear sequence and then folded in 3D. This sets limitations for how frequently two pieces of DNA, or nodes, can join in 3D space (this is usually not a restriction in most networks). As was first discovered in Lieberman-Aiden \emph{et al}.~\cite{Lieberman-Aiden2009}, and many papers thereafter~\cite{burton2014species,yu2012spatial}, the contact probability between two nodes $i$ and $j$ decays as a power law with the linear distance between them, that is, $\propto|i-j|^{-\alpha}$. Based on this, we propose the following null-model: \begin{equation} P_{ij}^{\mathrm{FG}} = \frac{2m k_i k_j |i-j|^{-\alpha}}{\sum_{i' \ne j'} k_{i'} k_{j'} |i' - j'|^{-\alpha}} \,. \label{eq:Q_FG} \end{equation} The value of the decay exponent $\alpha$ is debatable, but we use $\alpha=1$. There are two main reasons. First, at the mega-base-pair scale of human DNA, the Hi-C data suggest that it is close to one~\cite{Lieberman-Aiden2009} ($\alpha$ is also close to one in mice~\cite{yu2012spatial}). Second, in the next section we will study community detection in the fractal globule polymer (hence the superscript FG in $P_{ij}^{\rm FG}$) where $\alpha=1$~\cite{Tamm2015,Mirny2011}. Nonetheless, we point out that our method does not rely on this choice, and $\alpha$ is in principle a free parameter. \subsection*{3D communities in fractal globules} \label{sec:fractal_globule} Before we investigate the human Hi-C data, we wish to better understand the types of 3D structures that our community detection method picks up. To do this, we use computer-generated fractal globule polymers (called the ``crumpled globule'' in the original article~\cite{grosberg1993crumpled}). This is a compact polymer that mimics the large-scale structure of human chromosomes, in particular the scaling relations of the end-to-end distance and contact frequency~\cite{Lieberman-Aiden2009}. 
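Before moving on, Eq.~\eqref{eq:Q_FG} can be illustrated with a short sketch (again our own, with $\alpha = 1$ as the default) that builds the matrix $P^{\rm FG}$ from a contact map. Note that the normalization in Eq.~\eqref{eq:Q_FG} guarantees $\sum_{i\ne j} P^{\rm FG}_{ij} = 2m$, matching the total weight of the map.

```python
import numpy as np

def fg_null_model(A, alpha=1.0):
    """Build the fractal-globule null model P^FG of Eq. (2).

    Expected contacts between loci i and j are weighted by the polymeric
    contact-probability decay |i - j|^(-alpha) along the linear sequence.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    k = A.sum(axis=1)                    # strengths k_i
    two_m = k.sum()
    idx = np.arange(n)
    d = np.abs(np.subtract.outer(idx, idx)).astype(float)
    np.fill_diagonal(d, np.inf)          # exclude i == j (weight -> 0)
    w = np.outer(k, k) * d ** (-alpha)   # k_i k_j |i - j|^(-alpha)
    return two_m * w / w.sum()           # normalized: sum_{i != j} P = 2m
```

For a map with uniform strengths, $P^{\rm FG}_{ij}$ decreases monotonically with the sequence separation $|i-j|$, as it should.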
The advantage of this approach is that we have the explicit 3D coordinates for every part of the polymer (because we made it), which allows us to visualize and analyze the 3D structure of the communities that we detect. For the Hi-C map, we only have pairwise contact frequencies. \begin{figure}[ht] \centering \begin{tabular}{ll} (\textbf{a}) & (\textbf{b}) \\ \includegraphics[width=0.3\textwidth]{FG_model_community_s1_g0p6.pdf} & \includegraphics[width=0.3\textwidth]{FG_louvain_heatmap_s1_kikj_g0p4.pdf} \\ (\textbf{c}) & (\textbf{d}) \\ \includegraphics[width=0.3\textwidth]{FG_louvain_heatmap_s1_kikj_g0p6.pdf} & \includegraphics[width=0.3\textwidth]{FG_louvain_heatmap_s1_kikj_g0p8.pdf} \\ (\textbf{e}) & (\textbf{f}) \\ \includegraphics[width=0.3\textwidth]{s1_kikj_etoe.pdf} & \includegraphics[width=0.3\textwidth]{s1_kikj_etoe_div_rg.pdf} \\ \end{tabular} \caption{3D communities in simulated fractal globules. (\textbf{a}, left) A 3D representation of a fractal globule from the CDP algorithm. The colors highlight communities that we detect using Eq.~\eqref{eq:Q_FG}, $\gamma = 0.6$. (\textbf{a}, right) The communities are not contiguous: the small globule is a subsection of the lower left part of the polymer, and the stretched version shows the alternating communities (ABCAB). (\textbf{b})-(\textbf{d}) The contact maps of simulated fractal globules after KR-normalization, with various resolution parameters: (\textbf{b}) $\gamma = 0.4$, (\textbf{c}) $\gamma = 0.6$, and (\textbf{d}) $\gamma = 0.8$. To show non-contiguous communities, we superimpose them as squares; the same color indicates that they belong to the same 3D community. (\textbf{e}) The end-to-end distance for the fractal (FG, blue) and the equilibrium globules (EG, red) averaged over $200$ polymer realizations. The triangles denote the end-to-end distance for community boundaries, and dashed lines represent the chain as a whole. To find the communities, we use $\gamma = 0.4$. 
The data are obtained from the simulation of $200$ sample globules for each polymer model. The error bars show the standard error of the mean, and the two guide slopes ($1/2$ and $1/3$) show the known scaling of equilibrium and fractal globules at intermediate length scales. (\textbf{f}) The same data as in panel (\textbf{e}), where we scale the vertical axis with the radius of gyration $R_g\left(=\sqrt{\frac{1}{2N^2}\sum_{i,j}\lVert\mathbf{r}_i-\mathbf{r}_j\rVert^2}\right)$, where $N$ is the total number of polymer segments and $\mathbf{r}_i$ is the coordinate of the $i$th segment.} \label{fig:FG_louvain_heatmap} \end{figure} \subsubsection*{Generating fractal globules with the conformation-dependent polymerization algorithm} As discussed in~\cite{Grosberg2016}, there are several variants of fractal-globule-like structures~\cite{Rao2014,Sanborn2015,Goloborodko2016,Goloborodko2016a}. For simplicity and speed, we use the conformation-dependent polymerization (CDP) model~\cite{Tamm2015}. In a nutshell, this is a Monte Carlo method that produces a fractal globule by simulating a biased random walk on a lattice where the propagation probability depends on the entire walk's trajectory over the lattice~\cite{Tamm2015}. This yields on-lattice space-filling polymers. To generate off-lattice fractal globules, the structure is randomized with simulated annealing where the position of a monomer is randomly displaced under the constraint of a fixed inter-monomer distance. With properly chosen parameters, the CDP method produces a fractal globule with contact frequency (probability) that decays as $\sim s^{-1}$ (as it should~\cite{Mirny2011}), where $s$ is the contour distance along the polymer. In the notation of Tamm \emph{et al}.~\cite{Tamm2015}, the propagation probability to an unoccupied site follows $p \propto (1 + An)$ with $A = 10^4$, while the probability of stepping onto an occupied site is given by $\epsilon = 10^{-4}$. 
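To give a flavor of the growth rule, here is a schematic toy version of such a conformation-dependent walk. This is our simplified reading of the rule above, in which we take $n$ to be the number of occupied nearest neighbors of a candidate site; the actual CDP algorithm of Tamm \emph{et al}. and the subsequent annealing stage involve further details that are not reproduced here.

```python
import random

def cdp_walk(steps, A=1e4, eps=1e-4, seed=0):
    """Toy conformation-dependent walk on a cubic lattice (schematic only).

    An unoccupied neighbor site is chosen with weight ~ (1 + A*n), where n
    counts its already-occupied nearest neighbors, biasing the walk toward
    compact growth; stepping onto an occupied site carries the tiny weight eps.
    """
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    pos = (0, 0, 0)
    occupied = {pos}
    path = [pos]
    for _ in range(steps):
        cands, weights = [], []
        for dx, dy, dz in moves:
            c = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            if c in occupied:
                weights.append(eps)
            else:
                n = sum((c[0] + mx, c[1] + my, c[2] + mz) in occupied
                        for mx, my, mz in moves)
                weights.append(1.0 + A * n)
            cands.append(c)
        pos = rng.choices(cands, weights=weights)[0]
        occupied.add(pos)
        path.append(pos)
    return path
```

With $A \gg 1$ and $\epsilon \ll 1$ the walk strongly prefers fresh sites with many occupied neighbors, which is the mechanism behind the compact, space-filling conformations described above.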
This result is consistent with the Hi-C data for the human genome at the mega-base-pair scale~\cite{Lieberman-Aiden2009}. We show a realization of a CDP in Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{a}). From every simulated CDP polymer, we construct a contact map by counting the number of contact events between polymer beads $i$ and $j$. A contact refers to the case where the Euclidean distance between two beads is shorter than three lattice spacings. To obtain good statistics, we first generate an on-lattice polymer, and then during the annealing stage we register all contacts in $10^3$ random variations of the on-lattice structure. In addition, we have confirmed that different threshold values for what we consider as a contact do not qualitatively alter our results presented below. Just as in the Hi-C experiments, in the final step we normalize the contact map with the KR-norm~\cite{Knight2013} so that each row and column sums to unity. As a reference case, we use the equilibrium globule. This is a self-avoiding polymer in a closed spherical volume. When we generate equilibrium globules, we confine them in a volume with the same diameter as the fractal globules' radius of gyration. \subsubsection*{Structure of 3D communities in fractal globules} \label{sec:polymer_scailing_properties} Figure~\ref{fig:FG_louvain_heatmap}(\textbf{b}) shows the contact map for one simulated CDP, where high pixel intensity indicates many contacts. Just as in real Hi-C maps, the simulated map contains locally concentrated contact domains along the diagonal, that is, along the polymer chain. To find these domains algorithmically, we put the contact map into our modified GenLouvain method. By varying the resolution parameter $\gamma$, we detect communities on various scales. 
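A note on the normalization step above: the balanced (doubly stochastic) form of a symmetric contact map can be reached with a simple Sinkhorn-style iteration, sketched below for illustration. The KR algorithm~\cite{Knight2013} reaches the same fixed point with a faster Newton-type scheme; the code here is our own minimal stand-in, not the KR implementation.

```python
import numpy as np

def balance(A, tol=1e-10, max_iter=10000):
    """Rescale a symmetric nonnegative matrix so each row/column sums to 1.

    Iterates x <- x / sqrt(r), where r_i = x_i * (A x)_i are the current
    row sums of diag(x) A diag(x); returns the balanced matrix.
    """
    A = np.asarray(A, dtype=float)
    x = np.ones(A.shape[0])
    for _ in range(max_iter):
        r = x * (A @ x)               # row sums of diag(x) A diag(x)
        if np.max(np.abs(r - 1.0)) < tol:
            break
        x = x / np.sqrt(r)            # symmetric rescaling step
    return x[:, None] * A * x[None, :]
```

Because the same scaling vector is applied on both sides, the result stays symmetric, which is what we want for a contact map.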
On top of the contact map in Figs.~\ref{fig:FG_louvain_heatmap}(\textbf{b})--(\textbf{d}), we overlay examples of 3D communities for $\gamma=0.4$ (\textbf{b}), $\gamma=0.6$ (\textbf{c}), and $\gamma=0.8$ (\textbf{d}), where the boxes represent community boundaries. Based on these, we ask what the 3D structure of these communities is, and if and how they are different from the polymer as a whole. In contrast to current views on TADs, we find that 3D communities do not have to be contiguous polymer segments. Rather, linearly distant parts of the polymer can fold in 3D to form a community. We show this in Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{a}) (left), where we mark the polymer segments belonging to the same community with the same color ($\gamma=0.6$). In Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{a}) (right), we cut out a subsection with three communities and stretch it out. Labeling the communities as A, B, and C, they are clearly ordered in a non-contiguous sequence: they appear as A-B-C-A-B rather than A-B-C. Furthermore, because of the above-average contact frequencies inside a community, we would like to quantify how its 3D structure differs from the fractal globule polymer as a whole. To do this, we examine the scaling relation of the end-to-end distance---the Euclidean distance between the two boundary monomers defining that (contiguous) community---with respect to the subchain length $s$. In Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{e}), we show this relation for the community subchains (the blue triangles) and for all subchains (the blue dashed lines). It shows that the end-to-end distance for the entire globule grows as $\sim s^{1/3}$, as is expected for a space-filling curve (deviations at large $s$ come from finite-size effects and insufficient statistics). For the communities, we notice that the end-to-end distances are systematically smaller than for a randomly chosen subchain. 
Our simulations even suggest that the scaling exponent is smaller than $1/3$. These properties are cross-checked in Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{f}), where the end-to-end distance divided by the average chain size ($2\times$ the radius of gyration) is plotted. Overall, this shows that our method detects 3D communities that are compact substructures of the fractal globule. This observation supports the picture that some TADs in chromosomes are end-closed loop structures~\cite{Rao2014}. For comparison, we performed the same analysis for the equilibrium globule (EG). In Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{e}), we see that the end-to-end distances for our 3D communities and all subchains have the same scaling: $\sim s^{1/2}$ for $s \lesssim N^{2/3}$ and $\sim s^0$ for $s \gtrsim N^{2/3}$, as we expect from an $N$-monomer ideal chain. Concurrently, the end-to-end distance normalized by the radius of gyration [Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{f})] is almost the same for both 3D communities and all subchains, showing a sharp contrast to the FG. We further investigate the asphericity of the communities' 3D structure in the Supplementary Information (SI). We find that the community subchains in FGs tend to be more sphere-like than both the FG itself and the subchains in EGs. This leads to enhanced contact frequency between monomers within a community (Supplementary Fig.~\ref{fig:AsphericityEffectiveDistance}). We refer to the SI for further discussion. All these analyses support that our community detection method successfully identifies strongly interacting subchains in an FG from the contact map, and that these communities have polymeric properties that cannot be explained by the globally expected behavior. Rather, they are more similar to features of TADs. Additionally, in the following section, we corroborate our method by comparing the communities it detects in Hi-C maps to the TAD data reported in the literature~\cite{Rao2014}. 
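To illustrate the scaling analysis itself, the sketch below estimates the end-to-end exponent from an ensemble of chains. As a stand-in we generate ideal (freely jointed) chains, for which the fitted slope should be close to the ideal-chain value $1/2$ at intermediate $s$; fractal-globule conformations would instead give a slope near $1/3$. All function and parameter names here are our own.

```python
import numpy as np

def end_to_end_exponent(n_chains=200, N=512,
                        lengths=(8, 16, 32, 64, 128), seed=0):
    """Fit the scaling exponent of mean end-to-end distance vs subchain length.

    Uses freely jointed (ideal) chains with Gaussian steps as a stand-in,
    so the fitted log-log slope should be close to 1/2.
    """
    rng = np.random.default_rng(seed)
    steps = rng.normal(size=(n_chains, N, 3))
    coords = np.cumsum(steps, axis=1)            # chain conformations
    mean_r = []
    for s in lengths:
        # distances between monomers separated by s, over all start points
        d = np.linalg.norm(coords[:, s:] - coords[:, :-s], axis=2)
        mean_r.append(d.mean())
    slope = np.polyfit(np.log(lengths), np.log(mean_r), 1)[0]
    return slope
```

The same routine can be fed community-subchain endpoints instead of all subchains, which is how the blue triangles in Fig.~\ref{fig:FG_louvain_heatmap}(\textbf{e}) differ from the dashed reference lines.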
\subsection*{Community detection for the Hi-C map} \label{sec:results} \begin{figure}[ht] \centering \begin{tabular}{lll} (\textbf{a}) & (\textbf{b}) & (\textbf{c}) \\ \includegraphics[width=0.25\textwidth]{chr1_normalized_heat_map_s1_kikj_g0p6.pdf} & \includegraphics[width=0.25\textwidth]{chr1_normalized_heat_map_s1_kikj_g0p7.pdf} & \includegraphics[width=0.25\textwidth]{chr1_normalized_heat_map_s1_kikj_g0p8.pdf} \\ (\textbf{d}) & (\textbf{e}) & (\textbf{f}) \\ \includegraphics[width=0.25\textwidth]{PPV_100kb_s1_kikj_chr1.pdf} & \includegraphics[width=0.3\textwidth]{chr1_num_TAD_per_community_stat.pdf} & \includegraphics[width=0.2\textwidth]{chr1_coloring_communities.pdf} \\ (\textbf{g}) & (\textbf{h}) & (\textbf{i}) \\ \includegraphics[width=0.25\textwidth]{RNAexp_for_communities_s1_kikj_NullModel_Normalized_100kb_chr1_g0p6.pdf} & \includegraphics[width=0.25\textwidth]{RNAexp_for_communities_s1_kikj_NullModel_Normalized_100kb_chr1_g0p65.pdf} & \includegraphics[width=0.25\textwidth]{RNAexp_for_communities_s1_kikj_NullModel_Normalized_100kb_chr1_g0p7.pdf} \\ \end{tabular} \caption{3D communities in real Hi-C data (chromosome 1). (\textbf{a})--(\textbf{c}): Normalized Hi-C data with squares showing the structure of 3D communities. The black regions are the unmappable regions. The resolution parameter is $\gamma = 0.6$ in (\textbf{a}), $\gamma = 0.7$ in (\textbf{b}), and $\gamma = 0.8$ in (\textbf{c}). As in Figs.~\ref{fig:FG_louvain_heatmap}(\textbf{b})--(\textbf{d}), we assign the same colors to those squares that belong to the same 3D community. It is clear that they are not contiguous sequences. (\textbf{d}) The fraction of community boundary points predicted by our method that coincide with the ones in Rao \emph{et al}.~\cite{Rao2014} (the squares), and binding positions for CTCF (the circles) for different values of $\gamma$. (\textbf{e}) The average number of TADs per community, and the number of communities, as functions of $\gamma$. 
(\textbf{f}) The community division along chromosome 1 for three different values of $\gamma$. The purple squares represent the largest community in panels (\textbf{g})--(\textbf{i}), while the other colors indicate smaller communities. (\textbf{g})--(\textbf{i}) Communities' gene activity sorted by their relative size for different values of $\gamma$: (\textbf{g}) $\gamma = 0.6$, (\textbf{h}) $\gamma = 0.65$, and (\textbf{i}) $\gamma = 0.7$. The circles show the median RNA expression levels, and vertical lines are quartiles. We omit communities that are smaller than $50$ nodes. We find the communities using Eqs.~\eqref{eq:modularity} and \eqref{eq:Q_FG}. } \label{fig:real_HiC_results} \end{figure} Based on our community detection approach, we now proceed to analyze Hi-C data from human cells~\cite{Rao2014}. The data comes in the form of matrices where each entry represents the number of contacts between two chromosome loci $i$ and $j$. As is standard in the field, we normalized the data with the KR-norm~\cite{Knight2013}, which balances the matrix such that every row and column sums to unity (we also used the KR-norm for the fractal globule contact data). The data is available in various resolutions, from $10^3$ base pairs (1 kbp) to $10^6$ base pairs (1 Mbp), but we used 100 kbp, which is the scale where both TADs and A/B compartments can be detected~\cite{Rao2014}. In Figs.~\ref{fig:real_HiC_results}(\textbf{a})--(\textbf{c}), we show that our algorithm detects differently sized contiguous blocks, or TADs, along the chromosome arms as we change the resolution parameter $\gamma$. Similar to the simulated data, it is clear that several TADs form 3D communities. To investigate how these TADs correspond to TADs defined in other studies, we compared the border locations of each contiguous block to TAD borders defined by Rao \emph{et al}.~\cite{Rao2014} (the group that produced the dataset we use in this study) in Fig.~\ref{fig:real_HiC_results}(\textbf{d}). 
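The comparison in Fig.~\ref{fig:real_HiC_results}(\textbf{d}) amounts to a positive-predictive-value score over boundary bins. A minimal sketch of such a score is given below; the one-bin matching tolerance is our assumed convention, chosen only for illustration.

```python
def boundary_overlap(predicted, reference, tol=1):
    """Fraction of predicted boundary bins lying within `tol` bins of some
    reference boundary (a simple positive-predictive-value score)."""
    ref = set(reference)
    hits = sum(any((p + d) in ref for d in range(-tol, tol + 1))
               for p in predicted)
    return hits / len(predicted) if predicted else 0.0
```

For example, predicted boundaries at bins 10, 20, and 31 against reference boundaries 10, 19, and 50 score $2/3$ with a one-bin tolerance.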
At $\gamma\approx 0.7$, about 70\% of the borders overlap. Moreover, several studies have shown that binding sites for the insulator protein CTCF are strongly correlated with TAD borders (e.g., in Dixon \emph{et al}.~\cite{Dixon2012}). We therefore check the overlap between CTCF binding sites (mapped by ENCODE~\cite{ENCODE}) and all borders at different $\gamma$ values. We note that CTCF has the highest overlap also at $\gamma\approx 0.7$ [Fig.~\ref{fig:real_HiC_results}(\textbf{d})]. As we discussed in the introduction, there are different algorithms that detect TADs. However, regardless of the definition that is used, the TADs that come out have substructures that we can interpret as TAD-within-TADs~\cite{weinreb2015identification} with new sets of borders. Our algorithm lets researchers scan through all these TAD-within-TADs. According to Fig.~\ref{fig:real_HiC_results}(\textbf{d}), CTCF correlates well with TAD borders at $\gamma\approx 0.7$. This level also coincides with TADs from Rao \emph{et al}.~\cite{Rao2014}. Our method opens the possibility to investigate which family of proteins is important for different hierarchical levels. We next investigate properties of the 3D communities. First, we count the number of communities and average number of TADs within each community at different $\gamma$ [Fig.~\ref{fig:real_HiC_results}(\textbf{e})] as well as their localization along the chromosome arms [Fig.~\ref{fig:real_HiC_results}(\textbf{f})]. The highest number of TADs per community occurs at $\gamma = 0.6$ and $\gamma = 0.75$ (about $17$ TADs per community). Interestingly, for $\gamma < 0.6$, the chromosome consists of only two communities (but with several TADs within them). Second, we map gene activity within each community using RNA-seq data from ENCODE~\cite{ENCODE} (GEO Acc. Nr: GSE88583). 
In Figs.~\ref{fig:real_HiC_results}(\textbf{g})--(\textbf{i}) and Supplementary Figs.~\ref{fig:RNAexp_for_communities_chr1_6}--\ref{fig:RNAexp_for_communities_chr19_X}, we show the average coverage of RNA-seq reads within communities for different $\gamma$. At small $\gamma$ values where only two communities are defined, one community is clearly more active than the other. The active and less active communities are then split up into smaller communities as $\gamma$ increases. Already in the original Hi-C paper, Lieberman-Aiden \emph{et al}.~\cite{Lieberman-Aiden2009} used principal component analysis on the Hi-C data to partition the chromosomes into two classes. Denoting them by A/B compartments, they found that the A compartment contains transcriptionally active chromatin, whereas the B compartment is less active. We took one step in this direction by looking at different types of chromatin. We use 15 chromatin states defined by ENCODE~\cite{Ernst2010,Ernst2011} to see what types of chromatin states are enriched in different communities at different $\gamma$ values (Supplementary Fig.~\ref{fig:comm_state}). These results show that at low values of $\gamma$ one of the two large communities that are identified is enriched in chromatin states associated with transcription, while the other community is not. This again confirms that these represent the A and B compartments, respectively. A very small third compartment is also visible that consists of centromere-proximal repetitive regions. With increasing $\gamma$ values, we can then see that first the active compartment (A) starts to split up into smaller compartments and then the less active parts of the genome (B) also start to split into smaller communities. It is clear that the genome does not split up into evenly sized sub-compartments, but rather the 3D space is dominated by a few large communities. 
It is striking and unprecedented that by tuning a single parameter, we detect both TADs and A/B compartments with the same algorithm. We therefore argue that TADs and A/B compartments are not two conceptually different organizational structures in the nucleus, but rather different ends of the same organizational spectrum. \section*{Discussion} From Hi-C experiments, it is clear that inter-phase chromosomes are built up by a network of 3D compartments on various scales---from kilo($10^3$)-base-pair sized loops to mega($10^6$)-base-pair sized 3D structures. This pattern is consistent across organisms and cell types. To let researchers scan through the spectrum of 3D compartments, we have tailored the GenLouvain community detection method to find 3D communities in fractal globule polymer systems. Apart from verifying our method on computer-generated polymers, we have applied it to analyze human Hi-C data. First, we have found that chromatin segments belonging to the same 3D community do not have to be next to each other along the DNA. In other words, several TADs can belong to the same 3D community. Second, we have found that CTCF---a loop-stabilizing protein that is ascribed a big role in TAD formation---only correlates well with community borders at one level of organization. It remains to find what other factors are important at higher or lower levels. Third, just by adjusting a single parameter ($\gamma$), our method picks up the two most prominent 3D compartments, TADs and A/B compartments, which are traditionally treated as two weakly related 3D structures and detected with different algorithms. Rather than seeing them as different, our work puts them on an equal footing, and we argue that they represent two ends of a continuous spectrum of 3D communities of different sizes.
We are pained to learn that Tommie Bedell, a dear young friend and relative of ours, died of meningitis on the 25th ult., at Gainesville, Ga. He was engaged in business in Atlanta, when attacked, and was sick forty-six days. He was the eldest son of the Capt. P. E. Bedell, who was killed near the close of the late war, and was the pride, the hope and the stay of his widowed mother. He was just verging into manhood, was a youth of noble impulses, and was entering upon the duties of life with a fair prospect of success and usefulness; but alas! His morning sun has gone down in the darkness of death, and his manly form sleeps in the cold confines of the grave! But that life shall be restored, that body shall be reanimated, and our dear Tommie shall live again. Shall live again, we trust, in that beautiful world of which he has heard so much from the lips of a devoted mother, and whose Redeemer and King she taught him to love and praise. Words are too feeble to express the sympathy we feel for the widowed mother and little orphaned brothers and sister in their bereavement and bitter sorrow. We can only weep with them, and pray Him who raised the widow's son of Nain, who wept with Martha and Mary at the grave of Lazarus, and who raised him also from the dead, to deal mercifully with the stricken ones, sanctify their sorrow and lead them gently and safely through the dangers and toils of life, and take them at last to the mansions above, where the happy circle shall be formed anew, to be broken no more forever!
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!--NewPage--> <HTML> <HEAD> <!-- Generated by javadoc (build 1.6.0_21) on Fri May 03 11:00:22 CEST 2013 --> <TITLE> PopupPlugin </TITLE> <META NAME="date" CONTENT="2013-05-03"> <LINK REL ="stylesheet" TYPE="text/css" HREF="../../stylesheet.css" TITLE="Style"> <SCRIPT type="text/javascript"> function windowTitle() { if (location.href.indexOf('is-external=true') == -1) { parent.document.title="PopupPlugin"; } } </SCRIPT> <NOSCRIPT> </NOSCRIPT> </HEAD> <BODY BGCOLOR="white" onload="windowTitle();"> <HR> <!-- ========= START OF TOP NAVBAR ======= --> <A NAME="navbar_top"><!-- --></A> <A HREF="#skip-navbar_top" title="Skip navigation links"></A> <TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A NAME="navbar_top_firstrow"><!-- --></A> <TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/PopupPlugin.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD> <TD 
BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM> </EM> </TD> </TR> <TR> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;<A HREF="../../gui/control/PickingPlugin.html" title="class in gui.control"><B>PREV CLASS</B></A>&nbsp; &nbsp;<A HREF="../../gui/control/QuadTreeShapePickSupport.html" title="class in gui.control"><B>NEXT CLASS</B></A></FONT></TD> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="../../index.html?gui/control/PopupPlugin.html" target="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="PopupPlugin.html" target="_top"><B>NO FRAMES</B></A> &nbsp; &nbsp;<SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="../../allclasses-noframe.html"><B>All Classes</B></A>'); } //--> </SCRIPT> <NOSCRIPT> <A HREF="../../allclasses-noframe.html"><B>All Classes</B></A> </NOSCRIPT> </FONT></TD> </TR> <TR> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> SUMMARY:&nbsp;NESTED&nbsp;|&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_summary">CONSTR</A>&nbsp;|&nbsp;<A HREF="#methods_inherited_from_class_edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin">METHOD</A></FONT></TD> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> DETAIL:&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_detail">CONSTR</A>&nbsp;|&nbsp;METHOD</FONT></TD> </TR> </TABLE> <A NAME="skip-navbar_top"></A> <!-- ========= END OF TOP NAVBAR ========= --> <HR> <!-- ======== START OF CLASS DATA ======== --> <H2> <FONT SIZE="-1"> gui.control</FONT> <BR> Class PopupPlugin</H2> <PRE> java.lang.Object <IMG SRC="../../resources/inherit.gif" ALT="extended by ">edu.uci.ics.jung.visualization.control.AbstractGraphMousePlugin <IMG SRC="../../resources/inherit.gif" ALT="extended by ">edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin <IMG SRC="../../resources/inherit.gif" ALT="extended 
by "><B>gui.control.PopupPlugin</B> </PRE> <DL> <DT><B>All Implemented Interfaces:</B> <DD>edu.uci.ics.jung.visualization.control.GraphMousePlugin, java.awt.event.MouseListener, java.util.EventListener</DD> </DL> <HR> <DL> <DT><PRE>public class <B>PopupPlugin</B><DT>extends edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin</DL> </PRE> <P> Menu shown after right-clicking on a vertex. <P> <P> <DL> <DT><B>Author:</B></DT> <DD>Lukas Sekerak</DD> </DL> <HR> <P> <!-- ======== CONSTRUCTOR SUMMARY ======== --> <A NAME="constructor_summary"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TH ALIGN="left" COLSPAN="2"><FONT SIZE="+2"> <B>Constructor Summary</B></FONT></TH> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE><B><A HREF="../../gui/control/PopupPlugin.html#PopupPlugin(edu.uci.ics.jung.visualization.VisualizationViewer, graph.GraphHolder, gui.tooltip.IStatus, render.DisplayControl, graph.util.TranslatingSreen)">PopupPlugin</A></B>(edu.uci.ics.jung.visualization.VisualizationViewer&lt;<A HREF="../../graph/objects/Vertex.html" title="class in graph.objects">Vertex</A>,java.lang.Integer&gt;&nbsp;vv, <A HREF="../../graph/GraphHolder.html" title="class in graph">GraphHolder</A>&nbsp;holder, <A HREF="../../gui/tooltip/IStatus.html" title="interface in gui.tooltip">IStatus</A>&nbsp;status, <A HREF="../../render/DisplayControl.html" title="class in render">DisplayControl</A>&nbsp;dp, <A HREF="../../graph/util/TranslatingSreen.html" title="class in graph.util">TranslatingSreen</A>&nbsp;ts)</CODE> <BR> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</TD> </TR> </TABLE> &nbsp; <!-- ========== METHOD SUMMARY =========== --> <A NAME="method_summary"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TH ALIGN="left" COLSPAN="2"><FONT SIZE="+2"> <B>Method 
Summary</B></FONT></TH> </TR> </TABLE> &nbsp;<A NAME="methods_inherited_from_class_edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor"> <TH ALIGN="left"><B>Methods inherited from class edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin</B></TH> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE>mouseClicked, mouseEntered, mouseExited, mousePressed, mouseReleased</CODE></TD> </TR> </TABLE> &nbsp;<A NAME="methods_inherited_from_class_edu.uci.ics.jung.visualization.control.AbstractGraphMousePlugin"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor"> <TH ALIGN="left"><B>Methods inherited from class edu.uci.ics.jung.visualization.control.AbstractGraphMousePlugin</B></TH> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE>getCursor, getModifiers, checkModifiers, setCursor, setModifiers</CODE></TD> </TR> </TABLE> &nbsp;<A NAME="methods_inherited_from_class_java.lang.Object"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#EEEEFF" CLASS="TableSubHeadingColor"> <TH ALIGN="left"><B>Methods inherited from class java.lang.Object</B></TH> </TR> <TR BGCOLOR="white" CLASS="TableRowColor"> <TD><CODE>equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait</CODE></TD> </TR> </TABLE> &nbsp; <P> <!-- ========= CONSTRUCTOR DETAIL ======== --> <A NAME="constructor_detail"><!-- --></A> <TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY=""> <TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor"> <TH ALIGN="left" COLSPAN="1"><FONT SIZE="+2"> <B>Constructor Detail</B></FONT></TH> </TR> </TABLE> <A NAME="PopupPlugin(edu.uci.ics.jung.visualization.VisualizationViewer, graph.GraphHolder, gui.tooltip.IStatus, render.DisplayControl, 
graph.util.TranslatingSreen)"><!-- --></A><H3> PopupPlugin</H3> <PRE> public <B>PopupPlugin</B>(edu.uci.ics.jung.visualization.VisualizationViewer&lt;<A HREF="../../graph/objects/Vertex.html" title="class in graph.objects">Vertex</A>,java.lang.Integer&gt;&nbsp;vv, <A HREF="../../graph/GraphHolder.html" title="class in graph">GraphHolder</A>&nbsp;holder, <A HREF="../../gui/tooltip/IStatus.html" title="interface in gui.tooltip">IStatus</A>&nbsp;status, <A HREF="../../render/DisplayControl.html" title="class in render">DisplayControl</A>&nbsp;dp, <A HREF="../../graph/util/TranslatingSreen.html" title="class in graph.util">TranslatingSreen</A>&nbsp;ts)</PRE> <DL> </DL> <!-- ========= END OF CLASS DATA ========= --> <HR> <!-- ======= START OF BOTTOM NAVBAR ====== --> <A NAME="navbar_bottom"><!-- --></A> <A HREF="#skip-navbar_bottom" title="Skip navigation links"></A> <TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A NAME="navbar_bottom_firstrow"><!-- --></A> <TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/PopupPlugin.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" 
CLASS="NavBarCell1"> <A HREF="../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM> </EM> </TD> </TR> <TR> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;<A HREF="../../gui/control/PickingPlugin.html" title="class in gui.control"><B>PREV CLASS</B></A>&nbsp; &nbsp;<A HREF="../../gui/control/QuadTreeShapePickSupport.html" title="class in gui.control"><B>NEXT CLASS</B></A></FONT></TD> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="../../index.html?gui/control/PopupPlugin.html" target="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="PopupPlugin.html" target="_top"><B>NO FRAMES</B></A> &nbsp; &nbsp;<SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="../../allclasses-noframe.html"><B>All Classes</B></A>'); } //--> </SCRIPT> <NOSCRIPT> <A HREF="../../allclasses-noframe.html"><B>All Classes</B></A> </NOSCRIPT> </FONT></TD> </TR> <TR> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> SUMMARY:&nbsp;NESTED&nbsp;|&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_summary">CONSTR</A>&nbsp;|&nbsp;<A HREF="#methods_inherited_from_class_edu.uci.ics.jung.visualization.control.AbstractPopupGraphMousePlugin">METHOD</A></FONT></TD> <TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2"> DETAIL:&nbsp;FIELD&nbsp;|&nbsp;<A HREF="#constructor_detail">CONSTR</A>&nbsp;|&nbsp;METHOD</FONT></TD> </TR> </TABLE> <A NAME="skip-navbar_bottom"></A> <!-- ======== END OF BOTTOM NAVBAR ======= --> <HR> </BODY> </HTML>
3. A verified account is one with the User's e-mail address and date of birth provided.
3. A verified account is one with the User's e-mail address provided.
3.1. Filing complaints about the Game's functioning is impossible without submitting a valid e-mail address and date of birth.
3.1. Filing complaints about the Game's functioning is impossible without submitting a valid e-mail address.
6. The service of refilling Draconites can be purchased only by adult persons, or by persons aged 16 or older with permission from their legal representatives (parent, guardian, custodian).
6. The Draconites service can be purchased only by adults, or by persons aged 16 or younger with permission from their legal representatives (parent, guardian, custodian).
7. The Service Provider or Administrators have the right to demand an identity document from the Player proving that the Player's age meets the Agreement's requirement: age 13.
7. The Service Provider or Administrators have the right to demand an identity document from the Player proving that the Player's age meets the Agreement's requirement: age 16.
19. Asking another player to provide any data saved in their Account's configuration (login, password, date of birth etc. in particular). It is also forbidden to share the aforementioned data with third parties in order to bypass the Substitutes System.
19. Asking another player to provide any data saved in their Account's configuration (login, password etc. in particular). It is also forbidden to share the aforementioned data with third parties in order to bypass the Substitutes System.
The Orville is an American science-fiction comedy-drama series created by Seth MacFarlane. The first season aired from September to December 2017. In November 2017 Fox announced that there would be a second season, whose first (double) episode aired in the US on 30 December 2018. In May 2019 Fox announced a third season, which ultimately only premiered in June 2022, on the streaming service Hulu, under the title The Orville: New Horizons. The first two seasons of the series were broadcast in the Netherlands by Comedy Central and in Flanders by VTM 2.

Plot

The Orville follows the adventures of the starship "The Orville" and its crew under the command of Captain Ed Mercer. The story is set 400 years in the future, in the 25th century. Earth is part of the Planetary Union, a peaceful interplanetary alliance with its own fleet. The crew, both human and alien, face the surprises and dangers of the universe, but also familiar, often comical problems of everyday life. The series is a light parody of the science-fiction world, in particular of the Star Trek franchise.

Cast

Main roles, supporting roles, recurring guest roles and guest roles (cast tables not reproduced here).

Episodes

Season 1 (2017), Season 2 (2018/2019), Season 3 (2022) (episode tables not reproduced here).

Connections with other TV series

Star Trek

The Orville has several connections with the Star Trek series:
Penny Johnson Jerald, who plays the lead role of Doctor Claire Finn, previously had a supporting role in Star Trek: Deep Space Nine as Kasidy Yates.
Creator, executive producer, director, screenwriter and lead actor Seth MacFarlane himself played a guest role in two episodes of Star Trek: Enterprise.
Scott Grimes, who plays the lead role of pilot Gordon Malloy, had a guest role in an episode of Star Trek: The Next Generation.
Executive producer, director and screenwriter Brannon Braga previously worked as a producer and screenwriter on Star Trek: The Next Generation, Star Trek: Voyager and Star Trek: Enterprise, as well as on Star Trek: Generations and Star Trek: First Contact.
Executive producer and screenwriter David A. Goodman previously worked as a screenwriter on Star Trek: Enterprise.
Producer and screenwriter Joe Menosky previously worked as a producer and screenwriter on Star Trek: The Next Generation, Star Trek: Deep Space Nine and Star Trek: Voyager.
Producer André Bormanis previously worked as a producer and screenwriter on Star Trek: Voyager and Star Trek: Enterprise.
Director James L. Conway previously worked as a director on Star Trek: The Next Generation, Star Trek: Deep Space Nine, Star Trek: Voyager and Star Trek: Enterprise.
Director of photography Marvin V. Rush previously worked as a cinematographer and director on Star Trek: The Next Generation, Star Trek: Deep Space Nine, Star Trek: Voyager and Star Trek: Enterprise.
Star Trek lead actors Jonathan Frakes (Star Trek: The Next Generation) and Robert Duncan McNeill (Star Trek: Voyager) both directed episodes in seasons 1 and 2.
To date, four Star Trek lead actors have played guest roles: Robert Picardo (Star Trek: Voyager), John Billingsley (Star Trek: Enterprise), Marina Sirtis (Star Trek: The Next Generation) and Tim Russ (Star Trek: Voyager).
Guest actors Jason Alexander, F. Murray Abraham, Ron Canada, Steven Culp, Tony Todd, Brian George, Robert Knepper, John Rubinstein, James Horan, Molly Hagan, Derek Mears, Brian Thompson, Robert Curtis Brown, John Fleck, JD Cullum, J. Paul Boehmer, Joel Swetow, D. Elliot Woods and James Read also previously played guest roles in Star Trek.
Family Guy

The Orville is co-produced by Seth MacFarlane's own production company Fuzzy Door Productions. Several producers, directors, screenwriters and actors working on The Orville have also worked on MacFarlane's other shows Family Guy, American Dad! and The Cleveland Show; besides MacFarlane himself, these include David A. Goodman, Cherry Chevapravatdumrong, Wellesley Wild, Scott Grimes, Mike Henry and Rachael MacFarlane.
Source: http://quant.stackexchange.com/questions?page=51&sort=votes (retrieved 2014-04-18)

# All Questions

### How to determine the equity interest of a target company if there is circular ownership?

I would like to ask whether there is any way to determine the equity interest of a target company if there is circular ownership. For example, suppose company A owns 50% of company B, company B owns 100% of ...

### Backtesting with fundamentals

Recently I've read some books about the quantitative approach to fundamental investing: What Works on Wall Street (James O'Shaughnessy), Quantitative Value (Wesley Gray, Tobias Carlisle), Quantitative ...

### Forward yield curve for an arbitrary company

Let's say I am analyzing a company XYZ. The credit rating for this company is BB. Now I need the 6-month forward yield curve for this company. Can somebody help me find this information from ...

### List of financial derivatives Ito's Lemma does not apply to

According to Ito's Lemma there is no restriction on the continuity of the stochastic process. The restrictions are on the continuity of the pay-off, so that second derivatives with respect to ...

### Detrending before cointegration

When checking for cointegration, is it necessary to detrend the time series? What is the best way to go about it?

### How to set up QuantLib

I installed QuantLib using VC11 but couldn't make it work. I did everything as the tutorial said (https://quantcorner.wordpress.com/2012/11/13/installing-quantlib-for-vc11-windows/) and the ...

### What is the arbitrage opportunity in the Arrow-Debreu one-period market model?

The one-period market model is made of 4 securities (A, B, C, D) and has 4 future states. Assume the market model is complete and the state prices are (-2, 2, 4, 8). Given that I don't know the payoff ...

### Swaps valuation

I am asked to solve the marking-to-market value (MtM) of a swap; unfortunately I'm having big trouble finding the solution. It's a 5.5% (vs. LIBOR) 10-year swap, the notional is 500 mio USD and LIBOR ...

### Transaction costs of lending and borrowing

What are the transaction costs of lending and borrowing? How do financial intermediaries reduce the transaction costs of lending, and how do they reduce the costs of borrowing? Thank you!

### How to rightfully balance the share of the organization between departments after variable changes?

This is an abstracted version of the problem I'm facing, and I have to tell you first that my question might not be precise or even correct, so I hope you understand and in that case can improve the ...

### Hull's method for the optimal hedge ratio: why?

To illustrate my question, let's assume that the owner of one unit of an asset (unit spot price variable S) needs to sell this asset at time T in the future. In order to hedge against a possible fall ...

### Realized volatility: create an index from individual stock data?

I am about to estimate realized volatility on high-frequency stocks. I am aware of the implications of the model (e.g. microstructure etc.), but what I don't know is how I can create an index to estimate ...

### Why are short expiries associated with more pronounced volatility skews?

I've noticed that for a given strike price, the shorter expiration dates of options have more pronounced volatilities; why is that?

### Can we model components in a set of multivariate multi-period time-series data?

There are N data sets in periods occurring weekly/monthly, across a 10-year historical timeline. In each period, five dates are observed (labelled a to e), where a denotes the day the period ...

### Stock price question

Can anyone show me how to answer this, please? A stock has a beta of 2.0 and stock-specific daily volatility of 0.04. Suppose that yesterday's closing price was 95 and today the market goes up by 3%. ...

### Is there any way to adjust the average cumulative credit loss rates (Exhibit 22) in Moody's Annual Default Study 1920-2012 for country risk?

I would like to know whether we can adjust the average cumulative credit loss rates (Exhibit 22) in Moody's Annual Default Study 1920-2012 for country risk (e.g. the Damodaran country risk premium). ...

### Data feed for 10-year government bond yields [duplicate]

I am trying to access a data feed for 10-year sovereign bond yields for countries, say the G20. I have tried World Bank and IMF data API sources but to no avail. The data feed is used to update an ...

### Interpolation on CDS rates

I am just wondering if there is any way we could calculate a CDS spread (not a hazard rate) on a CDS curve. Most of the papers that I have come across so far discuss interpolating the hazard ...

### Calculating index arbitrage

I have a day's worth of level 2 market data. I am calculating S&P 500 index arbitrage. I have a few questions about the calculation: 1) Should I be summing all the bids and asks from the stocks ...

### Fair price for a call option

I am struggling with the following: an investor is considering a call option on the shares of XYZ. The strike price is 510p and the option can only be exercised in exactly 2 months' time, at the ...

### Data vendor providing end-of-day equity data for private use including US and main European countries

I'm looking for a data vendor with reliable data (and an affordable price) that provides end-of-day data for at least US, German, French, Italian and Spanish equities. The data export should be automated ...

### Calculating the sum of squared deviations between two normalized price series

How can I calculate the sum of squared deviations between two normalized price series according to Gatev et al. (2006)? My normalized price series of stocks $X$ and $Y$ consist of the cumulative total ...

### Step-by-step PCA algorithm (checking correctness without math packages)

I would appreciate it if someone could correct me if I am wrong in my suggestion. I am using PCA to: find a measure of cointegration between selected assets; find the eigenvector and its portfolio with ...

### Rational expectation meaning

What does rational expectation (RE) mean in an agent-based modeling context? What are the relationships between RE and expectation and outcome?
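One of the computations asked about above, the sum of squared deviations between two normalized price series (the pairs-trading distance measure of Gatev et al. 2006), is short enough to sketch in plain Python (function names are illustrative, not from any particular library):

```python
def normalize(prices):
    """Rebase a price series so it starts at 1 (a cumulative total-return index)."""
    base = float(prices[0])
    return [p / base for p in prices]

def sum_squared_deviations(prices_x, prices_y):
    """Pairs-trading distance: sum of squared gaps between the two rebased series."""
    xs, ys = normalize(prices_x), normalize(prices_y)
    return sum((a - b) ** 2 for a, b in zip(xs, ys))
```

In the Gatev et al. setup, pairs with the smallest distance over a formation window become the candidate pairs to trade in the subsequent trading window.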
module Booker
  module V4
    module Models
      # BusinessType is a bare subclass that inherits all behavior from Type.
      class BusinessType < Type; end
    end
  end
end
Q: Estimating a model with two unobserved components that has a measurement equation, signal equations and three transition equations

I have a dataset containing CPI inflation, the 10-year breakeven rate, the output gap, relative import price inflation, the 2-year breakeven rate, 2-year firm inflation expectations, 2-year household inflation expectations and 2-year market inflation expectations. I am trying to make an updated version of a research paper that has been done before. The researcher estimates a model containing two unobserved components, or two latent variables: an "unseen" wedge between expectations from the bond market, and the risk premium from the bond market. Here is the equation: [equation image missing]

So the dependent variable is CPI inflation. The first independent variable is CPI inflation lagged one quarter; the second is the expectations wedge (the 2-year breakeven rate minus the unobserved risk premium); the third is the output gap; and the fourth is relative imported price inflation. The researcher also has two other signal equations:

1. The 2-year breakeven rate as the dependent variable, with the "unobserved" inflation expectations plus the risk premium as the independent variable. I have no data on this wedge, nor on the risk premium.
2. The 2-year inflation expectations of households, firms and the market as the dependent variables, with the sum of the unobserved 2-year inflation expectations and the risk premium as the independent variable. [equation image missing] Here j indexes household, firm and market expectations, which I get from surveys.

There are three additional transition equations: the first two specify the two expectations as random walks, while the final one specifies the risk premium as an AR(1) process. [equation images missing] The errors are independent white-noise errors with variances [not shown in the source].

The equations are estimated in a time-varying-parameter system for the sample period 2003Q1 to 2016Q4 with T=56.
The researcher imposes an additional smoothing on the variance of [symbol missing] so that the variance is divided by 10. The result is an estimate of the first equation with its two unobserved components. After estimating the Phillips curve, the researcher extracts the expectations wedge, Pi^uc, as defined above, and uses a smoothed Kalman estimate over the time period. [results figure missing]

How do I replicate this research if I use the same data set, just with updated data? I want to estimate the risk premium and the expectations wedge as the researcher does, but I am stuck on the "unobserved components": how do I estimate an equation that contains variables that are latent, with no observed data points? I have looked into state-space modelling, but how would I estimate such a model without those data points?
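A minimal sketch of the core mechanism behind such estimation: a Kalman filter recursively infers a latent state from noisy observations using only the assumed transition and signal equations, so no direct data on the latent variable is needed. Below is a pure-Python illustration for a single random-walk latent component observed with noise (function and parameter names are illustrative; the full model in the question would stack several states and signal equations):

```python
def kalman_filter_local_level(observations, q=0.1, r=1.0, m0=0.0, p0=10.0):
    """Scalar Kalman filter for a local-level model:
        state: x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (random walk, like the latent expectations)
        obs:   y_t = x_t + v_t,      v_t ~ N(0, r)   (noisy signal, like the breakeven rate)
    Returns the filtered means and variances of the latent state."""
    m, p = m0, p0                    # prior mean and variance of the latent state
    means, variances = [], []
    for y in observations:
        p_pred = p + q               # predict: the random walk adds variance q
        k = p_pred / (p_pred + r)    # Kalman gain: how much to trust the new signal
        m = m + k * (y - m)          # update the latent-state estimate
        p = (1.0 - k) * p_pred      # update its uncertainty
        means.append(m)
        variances.append(p)
    return means, variances
```

In practice one would write the full multi-state model in state-space form (for example with statsmodels' state-space tools), let maximum likelihood pick the variances, and then run a smoother (a backward pass over these filtered estimates) to obtain smoothed Kalman estimates like those reported in the paper.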
Gabriel Castellón, full name Gabriel Jesús Castellón Velazque (born 8 September 1993 in Valparaíso), is a Chilean football goalkeeper.

Career

Club

Gabriel Castellón joined the youth academy of Santiago Wanderers at the age of 7. He spent his youth years there and in 2012 was loaned to Deportes Colchagua to gain playing time in the Tercera División; there, however, the goalkeeper made only one appearance. In 2017 Santiago Wanderers, with Castellón in goal, were relegated to the Primera B, but won the Chilean cup that same year. The goalkeeper stayed with the club until 2018 and then moved to CD Huachipato for the new season.

National team

For the 2013 South American U-20 Championship, Castellón was called up to the Chilean U-20 squad as backup to Lawrence Vigouroux. He was first called up to Chile's senior national team in 2017 for the China Cup, which his team won after victories over Croatia and Iceland; Castellón was not fielded. In 2021 he was again called up, for the Copa América 2021 and the 2022 World Cup qualifiers, but once more remained without an appearance.

Honours

Santiago Wanderers: Chilean cup winner 2017
International: China Cup winner 2017
\section{Introduction}\label{sec:intro} Multi-agent reinforcement learning (MARL) has been the focus of research across a range of research communities~\citep{shapley1953stochastic,littman1994markov}. The case of two-player Markov Games (MG) has been of particular interest. In this case, two players select their actions based on the current state simultaneously and independently. One player (the max-player) aims to maximize the return based on the reward provided by the environment, while the other (the min-player) aims to minimize it. A series of recent results have established polynomial sample complexity/regret guarantees that depend on the cardinality of state/action spaces for two-player zero-sum MGs~\citep{wei2017online,bai2020provable,bai2020near,liu2020sharp, jia2019feature,sidford2020solving,cui2020minimax, lagoudakis2012value,perolat2015approximate,perolat2016softened,perolat2016use,perolat2017learning, jin2021v}. Meanwhile, most of the recent successful applications of MARL deal with \emph{large state/action spaces} that may be continuous or fine-grained discretizations of a continuous space. Examples include Go \citep{silver2016mastering}, autonomous driving \citep{shalev2016safe}, Texas Hold'em poker \citep{brown2019superhuman}, and AlphaStar for the game StarCraft \citep{vinyals2019grandmaster}. In order to tackle problems with large state/action spaces, researchers have designed MARL algorithms based on \emph{function approximation}, which approximate the original high-dimensional value function/policy by a function approximator. For instance, \citet{xie2020cce} and \citet{chen2021almost} studied RL for two-player zero-sum MGs with \emph{linear function approximation}, where it is assumed that there is a set of \emph{linear features} that span the transition kernel and reward function spaces.
In contrast to RL with linear function approximation, RL with \emph{nonlinear function approximation} (e.g., kernel and neural network approximation) aims to take advantage of the superior representational power of nonlinear functions compared to linear parameterizations. For example, \citet{jin2021power} studied neural-network-based RL in the setting of \emph{MGs with low multi-agent Bellman eluder dimension}, obtaining algorithms that have polynomial dependence on the complexity of the underlying function class. Although this yields a strong theoretical guarantee, the algorithm that they propose is not computationally efficient due to the nonconvexity of the confidence sets that are constructed. The following question remains open:% \emph{ Can we design a computationally and statistically efficient RL algorithm for learning two-player Markov Games with nonlinear function approximation? } In this paper, we give an affirmative answer to this question for a class of episodic Markov Games, dubbed \emph{mixture Markov Games}, when using nonlinear approximations from a Reproducing Kernel Hilbert Space (RKHS). We propose a novel kernel-based MARL algorithmic framework for general episodic two-player zero-sum MGs which provides provable regret guarantees. We summarize the contributions of our work as follows: \begin{enumerate}[label=(\roman*)] \item We propose a $\texttt{KernelCCE-VTR}\xspace$ algorithm for two-player zero-sum MGs. In particular, at each episode, $\texttt{KernelCCE-VTR}\xspace$ uses kernel function approximation to approximate the optimal value function and constructs corresponding confidence sets, following the \emph{``Optimism-in-Face-of-Uncertainty''} principle \citep{abbasi2011improved} to select an action based on the current state. 
In contrast to the algorithms in \citet{jin2021power}, which construct implicit confidence sets that are in general computationally intractable, our $\texttt{KernelCCE-VTR}\xspace$ algorithm crafts a computationally efficient exploration bonus based on the Gram matrix of the kernel function. \item Under the assumption that the transition dynamics belongs to some RKHS, we show that our $\texttt{KernelCCE-VTR}\xspace$ algorithm is able to find a Nash equilibrium of the game with an $\tilde{O}( d_{\cF} H^2 \sqrt{T})$ regret bound on the duality gap, where $H$ is the horizon, $T$ is the number of the episodes, and $d_{\cF}$ represents the complexity of the function class $\cF$. We also propose an extension of $\texttt{KernelCCE-VTR}\xspace$ that utilizes \emph{weighted kernel ridge regression} and a \emph{Bernstein-type bonus} to achieve $\tilde O(d_{\cF} H^{3/2} \sqrt{T})$ regret. When $\cF$ reduces to the $d$-dimensional linear function class, our regret reduces to $\tilde{O}(dH^{3/2}\sqrt{T})$, which almost matches the lower bound in~\citet{chen2021almost}. \item We also study the general case where the transition dynamics belongs to some RKHS up to a misspecification error. We show that $\texttt{KernelCCE-VTR}\xspace$ can achieve a similar regret as in the well-specified case. In particular, we study the neural network function approximation case which can be regarded as a special instance of the misspecified RKHS case and derive the corresponding regret bound. \end{enumerate} \paragraph{Notation.} We use lower-case letters to denote scalars, and lower- and upper-case bold letters to denote vectors and matrices. We use $\| \cdot \|$ to indicate Euclidean norm, and for a semi-positive definite matrix $\bSigma$ and any vector $\xb$, we define $\| \xb \|_{\bSigma} := \| \bSigma^{1/2} \xb \| = \sqrt{\xb^{\top} \bSigma \xb}$. 
For real $t$ and interval $[a,b]$, we use $\Pi_{[a,b]}[t]$ to indicate the projection of $t$ onto $[a,b]$, i.e.~$ \Pi_{[a,b]}[t] = \max\left(a,\min(b,t)\right) $. For positive integer $N$ we define $[N] = \{1,\dots,N\}$. We also adopt the standard big-$O$ and big-$\Omega$ notation: we say $a_n = O(b_n)$ if and only if there exist $C > 0$ and $N > 0$ such that $a_n \le C b_n$ for all $n > N$, and $a_n = \Omega(b_n)$ if $a_n \ge C b_n$ for all $n > N$. The notations $\tilde{O}$ and $\tilde{\Omega}$ are used when the bound additionally hides a polylogarithmic factor. \section{Related Work}\label{sec_related} \paragraph{Online RL with function approximation.} MARL with function approximation can be seen as an extension of RL with function approximation on MDPs. There are several lines of work studying RL with function approximation. The first line of work studies the so-called linear MDP, which assumes the reward function and transition dynamics are linear functions of a feature mapping defined on the state and action spaces \citep{yang2019reinforcement, jin2019provably, zanette2020learning}. These works propose model-free algorithms with sublinear regret with respect to the number of episodes $K$. The second line of work studies the linear mixture MDP, which assumes the transition kernel is a linear combination of several base models \citep{modi2019sample, jia2020model, zhou2021provably, zhou2020nearly}. These studies propose model-based RL algorithms that estimate the transition kernel with finite sample complexity or sublinear regret guarantees. The third line of work studies general function approximation for either the value function or the transition kernel \citep{osband2014model, jiang2017contextual, sun2019model, wang2020reinforcement, yang2020function, du2021bilinear, jin2021bellman}.
Algorithms proposed in this vein enjoy finite regret or sample complexity bounds that depend on general complexity measures such as Eluder dimension \citep{russo2013eluder, osband2014model}, Bellman rank \citep{jiang2017contextual}, witness rank \citep{sun2019model}, information gain \citep{yang2020function}, bilinear class \citep{du2021bilinear} and Bellman eluder dimension \citep{jin2021bellman}. \paragraph{Learning two-player MGs with function approximation.} There is a large body of literature on MARL for two-player MGs with function approximation. These works can be generally categorized into MARL with \emph{linear function approximation} and MARL with \emph{general function approximation}. For example, for linear function approximation, \citet{xie2020cce} studied zero-sum simultaneous-move MGs where both the reward and transition kernel can be parameterized as linear functions of feature mappings. They proposed an OMVI-NI algorithm with an $\tilde O(\sqrt{d^3 H^3 T})$ regret, where $d$ is the number of the feature dimension, $H$ is the episode length and $T$ is the total number of rounds. \citet{chen2021almost} studied the linear mixture MGs and proposed a nearly minimax optimal Nash-UCRL-VTR algorithm with an $\tilde O(dH\sqrt{T})$ regret and an $\Omega(dH\sqrt{T})$ matching lower bound. In contrast to this work, our $\texttt{KernelCCE-VTR}\xspace$ does not assume the underlying transition dynamic or reward function have a linear structure. For MARL with general function approximation, \citet{jin2021power} studied the two-player zero-sum MGs with low multi-agent Bellman Eluder dimension and proposed a ``Golf with Exploiter'' algorithm using a general function class. They showed their algorithm enjoys an $\tilde O(H\sqrt{dK\log N})$ regret, where $d$ is the multi-agent Bellman eluder dimension, and $K$ is the number of episodes. 
\citet{huang2021towards} studied two-player MGs with a finite minimax Eluder dimension and proposed an ONEMG method with an $\tilde O(H\sqrt{dK\log N})$ regret, where $d$ is the minimax Eluder dimension. To obtain the desired function approximator, both Golf with Exploiter and ONEMG need to solve a constrained optimization problem which is computationally intractable even in the linear function approximation setting. In contrast to \citet{jin2021power} and \citet{huang2021towards}, our proposed algorithms are computationally efficient and nearly optimal when using the Bernstein-type bonus. \citet{qiu2021reward} also studied kernel function approximation for two-player MGs. However, there are two key differences between our work and theirs. First, \citet{qiu2021reward} studied MGs where the expectation of the value function is in some RKHS; we, on the other hand, assume that the transition dynamics of the MG lies in an RKHS. Second, while the regret result in \citet{qiu2021reward} depends on the covering number of the function space, our regret is \emph{independent} of the covering number. \section{Preliminaries}\label{sec_prelim} In this section, we present the necessary definitions that will be adopted throughout the paper. Section~\ref{sec_prelim_twoplayer} describes simultaneous-move games in the setting of zero-sum two-player Markov Games (MG) and recaps the concepts of equilibrium and duality gap that are employed in the game theory literature. Section~\ref{sec_prelim_rkhs} provides necessary definitions and notation for approximations based on a reproducing kernel Hilbert space (RKHS). 
\subsection{Two-player Markov Games}\label{sec_prelim_twoplayer} Turn-based games, a simple instance of Markov Games, can be seen as a special case of simultaneous-move games.%
\footnote{We present a discussion of the implications of our results for turn-based games in the supplementary materials.} In a zero-sum two-player simultaneous-move Markov Game, the dynamical structure is captured by an MG, denoted $(\cS, \cA_1, \cA_2, r, \PP, H)$, where $\cS$ is the space of states of the environment, $\cA_1$ is the action space of the first player and $\cA_2$ is the action space of the second player. $H$ is the time horizon, i.e., the maximum number of steps in each round of play. The reward function $r := \left\{r_h(x, a, b): h \in [H]\right\}$ is a sequence of mappings from $\cS \times \cA_1 \times \cA_2$ to $[-1, 1]$. The transition kernel $\PP := \left\{\PP_h(\cdot | x, a, b): h \in [H] \right\}$ specifies, for each triplet $(x, a, b)$ and each step $h$, the distribution of the next state $x' \in \cS$. Here by ``simultaneous move'' we refer to the setting where at each round of the game the two players $P_1$ and $P_2$ take actions $a \in \cA_1, b \in \cA_2$ simultaneously at a given state $x \in \cS$, in contrast with the turn-based game where $r_h$ and $\PP_h$ are defined for a state-action pair $(x, a)$ where the action can be taken by either player. In the context of this paper, for simplicity of notation we let $\cA_1 = \cA_2 = \cA$, noting that the results can be easily generalized to the case where $\cA_1 \neq \cA_2$. Similar definitions of a zero-sum two-player simultaneous-move episodic Markov Game can be found in~\citet{wei2017online,perolat2018actor,xie2020cce}. In the above setting, two players $P_1$ and $P_2$ take actions according to their individual strategies. We use $\pi := \{\pi_h\}_{h \in [H]}$ to denote the stochastic policy of $P_1$ and use $\nu := \{\nu_h\}_{h \in [H]}$ to denote the stochastic policy of $P_2$.
We note that at time $h$, $\pi_h: \cS \mapsto \Delta_{\cA}$ maps the current state $x_h$ to a probability distribution over the actions, and similarly for $\nu_h$. Given the two agents' policies $\pi, \nu$, the state value function at step $h$ is defined as the expected total reward accumulated from step $h$ through step $H$, where at each step $h \in [H]$ player $P_1$ follows policy $\pi_h(\cdot | x_h)$ and player $P_2$ follows policy $\nu_h(\cdot | x_h)$,
\begin{align} V_h^{\pi, \nu}(x) := \EE_{\pi, \nu} \left[\sum_{t = h}^H r_t(x_t, a_t, b_t) \mid x_h = x\right] ,\quad \blue{ V_{H+1}^{\pi, \nu}(x):=0 },\notag \end{align}
and we write $V^{\pi, \nu}(x) := V_1^{\pi, \nu}(x)$. Note that the expectation is taken over all stochasticity in $\pi_h, \nu_h$ and $\PP_h$. The action-value function is defined as
\begin{align} Q_h^{\pi, \nu}(x, a, b) := \EE_{\pi, \nu} \left[\sum_{t = h}^H r_t(x_t, a_t, b_t) \,\bigg|\, x_h = x, a_h = a, b_h = b\right] ,\quad \blue{ Q_{H+1}^{\pi, \nu}(x,a,b):=0 },\notag \end{align}
and $ Q^{\pi, \nu}(x, a, b) := Q_1^{\pi, \nu}(x, a, b) . $ From the definitions of the two value functions, we observe that for any $x \in \cS$, the state value function given the policy pair $(\pi, \nu)$ is the expectation of the corresponding action-value function
\begin{align} V_h\genpo(x) := \EE_{(a, b) \sim (\pi, \nu)} Q_h\genpo(x, a, b) ,\notag \end{align}
where the expectation is taken over the action distribution induced by the policy pair. {\blue Throughout this paper, we use superscripts to index episodes and subscripts to index steps within an episode. } \paragraph{Nash equilibrium and duality gap.} In a zero-sum two-player Markov Game, $P_1$ wants to maximize the expected reward $V^{\pi, \nu}(x)$ via the choice of the policy $\pi$. On the other hand, $P_2$ wants to minimize $V^{\pi, \nu}(x)$ by the choice of $\nu$.
For fixed $\nu$, we define the best-response policy with respect to $V$ and $\nu$ as $\text{br} (\nu)$, and define $V_h^{*, \nu} := V_h^{\text{br}(\nu), \nu}$ and $Q_h^{*, \nu} := Q_h^{\text{br}(\nu), \nu}$. We define $V_h^{\pi, *} := V_h^{\pi, \text{br}(\pi)}$ and $Q_h^{\pi, *} := Q_h^{\pi, \text{br}(\pi)}$ similarly. A Nash equilibrium is a pair of policies $(\pi^*, \nu^*)$ that are best responses to each other, which we write as $V^{\pi^*, *}(x) = V^{\pi^*, \nu^*}(x) = V^{*, \nu^*}(x)$. For notational simplicity we write $V^* := V^{\pi^*, \nu^*}, Q^* := Q^{\pi^*, \nu^*}$. By definition of the best-response policy, we obtain weak duality:
\begin{align} V_h^{\pi, *}(x) \leq V_h^*(x) \leq V_h^{*, \nu}(x).\notag \end{align}
For any policy pair $(\pi, \nu)$ and initial state $x_1$, we define the duality gap as $V_1^{*, \nu}(x_1) - V_1^{\pi, *}(x_1)$. We call the pair an \emph{$\epsilon$-approximate Nash equilibrium (NE)} if $V_1^{*, \nu}(x_1) - V_1^{\pi, *}(x_1) \leq \epsilon$. We also define the regret in the MG setting as follows:
\begin{align} \textrm{Regret}(T) := \sum_{t = 1}^T V_1^{*, \nu^t}(x_1^t) - V_1^{\pi^t, *}(x_1^t) .\notag \end{align}
\paragraph{Coarse Correlated Equilibrium.} We introduce the \emph{Coarse Correlated Equilibrium (CCE)} solution concept.
Given payoff matrices $Q_1, Q_2 : \cS \times \cA \times \cA \mapsto \RR$ and the state $x$, we define the CCE of the game as a joint distribution $\sigma$ on $\cA \times \cA$ satisfying:
\begin{align} \mathbb{E}_{(a, b) \sim \sigma}\left[Q_{1}(x, a, b)\right] &\geq \mathbb{E}_{b \sim \mathcal{P}_{2} \sigma}\left[Q_{1}\left(x, a^{\prime}, b\right)\right] ,\quad \forall a^{\prime} \in \mathcal{A}\label{cce:1} ,\\ \mathbb{E}_{(a, b) \sim \sigma}\left[Q_{2}(x, a, b)\right] & \leq \mathbb{E}_{a \sim \mathcal{P}_{1} \sigma}\left[Q_{2}\left(x, a, b^{\prime}\right)\right] ,\quad \forall b^{\prime} \in \mathcal{A} ,\label{cce:2} \end{align}
where $\cP_1 \sigma$ denotes the marginal of $\sigma$ on the first coordinate (the max-player) and $\cP_2 \sigma$ denotes the marginal of $\sigma$ on the second coordinate (the min-player). {\blue We use $\texttt{FIND\_CCE}(Q_1, Q_2, x)$ to denote $\sigma$. When $\sigma$ can be written as a product of two policies over the action space $\cA$, it is a Nash equilibrium \citep{xie2020cce}. For the computation of a CCE given $Q_1, Q_2, x$, we refer the reader to Appendix \ref{app:cce}.} \subsection{Nonlinear function approximation by reproducing kernel Hilbert spaces}\label{sec_prelim_rkhs} For simplicity of notation, we use $z = (x, a, b)$ to denote a state-action triplet in $\cZ := \cS \times \cA \times \cA$. An RKHS $\cH$ with kernel $K(\cdot, \cdot): \cZ \times \cZ \mapsto \RR$ is a general form of a linear function class. Every RKHS $\cH$ consists of functions on $\cZ$, with a feature mapping, $\phi: \cZ \mapsto \cH$, such that $\forall f \in \cH$ and $\forall z \in \cZ$, $f(z) = \hprod{f}{\phi(z)}$. The kernel $K$ is thus defined for every $x, y \in \cZ$ as $K(x, y) = \hprod{\phi(x)}{\phi(y)}$. We call $\phi$ the feature mapping induced by the RKHS $\cH$ with kernel $K$. In the following sections, we use $f^\top g$ as a simplification of $\hprod{f}{g}$ when $f, g \in \cH$.
We make no distinction in notation between the vector product and the product $\hprod{\cdot}{\cdot}$; the distinction can be read out from the nature of the two objects in the product. For every RKHS $\cH$, there exists a natural eigenvalue decomposition in $\cL^2(\cZ)$. RKHS approximation generalizes finite-dimensional linear function approximation, since an RKHS can be infinite dimensional. In the following, we define the so-called \emph{kernel mixture MG}, which can be regarded as an extension of the linear mixture MDP \citep{jia2020model, ayoub2020model, zhou2020nearly} and the linear mixture MG \citep{chen2021almost} to their kernel counterparts. \paragraph{Kernel mixture MG.} In a kernel mixture MG model, we model the transition probability $\PP_h(s' | z): \cZ \mapsto \Delta(\cS)$ as an element of an RKHS $\cH$ with feature mapping $\phi(s' | z): \cZ \times \cS \rightarrow \cH$, such that for an unknown parameter $\btheta_h^* \in \cH$, we have $\PP_h(s' | z) = \left\langle \phi(s' | z), \btheta_h^* \right\rangle_{\cH}$ for all $ s' \in \cS$ and $z \in \cZ$. A similar MG structure called the kernel MG has been studied by \citet{qiu2021reward}, which assumes that the transition probability satisfies $\PP_h(s'|z) = \la \phi(z), \mu_h(s')\ra$ for some $\phi(\cdot), \mu_h(\cdot) \in \cH$. The single-agent counterparts of kernel MGs and kernel mixture MGs are linear MDPs and linear mixture MDPs, respectively. \citet{zhou2021provably} have shown that linear MDPs and linear mixture MDPs are different classes of MDPs and one cannot be covered by the other. Following a similar argument, we can also show that kernel mixture MGs and kernel MGs are different classes of MGs, and neither class contains the other.
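As a concrete toy instance of a kernel mixture MG, one can take the RKHS to be Euclidean, so that the transition kernel is a convex combination of $d$ known basis kernels (the linear mixture case). The sketch below, with purely illustrative sizes and names not taken from the paper, builds such a mixture and checks that it remains a valid transition kernel.

```python
import numpy as np

# Toy kernel mixture MG with a Euclidean "RKHS": the transition kernel is a
# mixture of d known basis kernels, P(s'|z) = <phi(s'|z), theta*>, where
# phi(s'|z) = (P_1(s'|z), ..., P_d(s'|z)). All sizes are illustrative.
rng = np.random.default_rng(0)
n_states, n_z, d = 4, 6, 3  # |S|, number of (x, a, b) triplets, mixture size

# d basis transition kernels: each row basis[i, z] is a distribution over s'
basis = rng.random((d, n_z, n_states))
basis /= basis.sum(axis=2, keepdims=True)

theta_star = rng.random(d)
theta_star /= theta_star.sum()          # convex weights keep P a valid kernel

# phi(s'|z) stacked as a (n_z, n_states, d) tensor; P(s'|z) = phi . theta*
phi = np.moveaxis(basis, 0, -1)
P = phi @ theta_star                    # shape (n_z, n_states)

assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)  # each P(.|z) is a probability vector
```

With non-convex weights the inner product $\langle \phi(s'|z), \btheta_h^* \rangle$ need not be a distribution, which is why kernel mixture MGs and kernel MGs are genuinely different model classes.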
At step $h$, for any estimate $V_{h+1}(\cdot): \cS \mapsto \RR$ of the value function at step $h+1$, its expectation under the transition, $\PP_h V_{h+1}$, is an element of the RKHS: $\PP_h V_{h+1}(z) = \left\langle \phi_{V_{h+1}}(z), \btheta_h^* \right\rangle_{\cH}$, where $ \phi_{V_{h+1}}(z) := \sum_{s' \in \cS} \phi(s' | z) V_{h+1}(s') $ aggregates the feature mapping weighted by the estimated value of each $s' \in \cS$. It is worth noting that the quantity $\phi_V(\cdot)$ plays an important role in previous linear mixture model-based algorithms \citep{jia2020model, ayoub2020model, zhou2020nearly, chen2021almost}. We assume that for any bounded value function $V(\cdot): \cS \mapsto [-1, 1]$ and any $z \in \cZ$, we have $\|\phi_{V}(z)\|_{\cH}\leq 1$. Given that the reward function $r_h(z)$ is known, we obtain through the Bellman equation that
\begin{align} Q_h^{*, \nu}(\cdot) = r_h(\cdot) + (\PP_h V_{h+1}^{*, \nu})(\cdot) &= r_h(\cdot) + \left\langle \phi_{V_{h+1}^{*, \nu}}(\cdot), \btheta_h^* \right\rangle_{\cH} , \\ Q_h^{\pi, *}(\cdot) = r_h(\cdot) + (\PP_h V_{h+1}^{\pi, *})(\cdot) &= r_h(\cdot) + \left\langle \phi_{V_{h+1}^{\pi, *}}(\cdot), \btheta_h^* \right\rangle_{\cH} . \end{align}
\paragraph{Weighted kernel function.} In this work, we consider a general RKHS $\cH$ and do not assume that we can access the feature mapping $\phi$ directly.
Instead, we assume that we can access the \emph{weighted kernel function} $k_{V_1, V_2}(\cdot, \cdot)$, which is defined as follows:
\begin{definition}\label{def:weighkernel} For any pair of functions $V_1, V_2: \cS \rightarrow [0,1]$ which map states to real numbers, the weighted kernel function $k_{V_1, V_2}(\cdot, \cdot)$ is defined as follows:
\begin{align} \forall z_1, z_2 \in \cZ,\ k_{V_1, V_2}(z_1, z_2) := \sum_{s_1, s_2 \in \cS}V_1(s_1)V_2(s_2) \left\langle\phi(s_1|z_1), \phi(s_2|z_2)\right\rangle_{\cH} .\notag \end{align} \end{definition}
It is easy to see from Definition \ref{def:weighkernel} that
\begin{align} &k_{V_1, V_2}(z_1, z_2) = \bigg\la\sum_{s_1 \in \cS}V_1(s_1)\phi(s_1|z_1),\sum_{s_2 \in \cS}V_2(s_2)\phi(s_2|z_2) \bigg\ra_{\cH} = \la \phi_{V_1}(z_1), \phi_{V_2}(z_2)\ra_{\cH},\notag \end{align}
which shows that the weighted kernel function $k_{V_1, V_2}(\cdot, \cdot)$ captures the inner product between $\phi_{V_1}(z_1)$ and $\phi_{V_2}(z_2)$. We assume access to an integration oracle that can calculate $k_{V_1, V_2}(z_1, z_2)$ efficiently for any functions $V_1, V_2$ and state-action tuples $z_1, z_2$. \section{Algorithm}\label{sec_algo} In this section, we introduce our value-targeted iteration algorithm for the zero-sum two-player Markov Game setting with RKHS function approximation. We follow the \emph{value-targeted regression} framework and the confidence set design as in UCRL~\citep{jia2020model,ayoub2020model}, and combine the CCE technique~\citep{xie2020cce} to deal with the zero-sum sub-game induced by the upper confidence bound (UCB) and lower confidence bound (LCB) value functions. These techniques enable us to adapt the results from the linear setting to the nonlinear RKHS regime~\citep{chowdhury2017kernelized,yang2020function, zhou2020neural} and obtain an algorithm that is computationally simple and statistically efficient, with a structure-dependent regret bound.
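Before presenting the algorithm, we note that the identity $k_{V_1, V_2}(z_1, z_2) = \la \phi_{V_1}(z_1), \phi_{V_2}(z_2)\ra_{\cH}$ behind the integration oracle can be checked numerically for a toy finite state space with an explicit finite-dimensional feature map. The sketch below (all sizes illustrative, not from the paper) compares the double sum in the definition of the weighted kernel function with the inner product of the aggregated features.

```python
import numpy as np

# Numerical check of the weighted kernel identity
#   k_{V1,V2}(z1, z2) = <phi_{V1}(z1), phi_{V2}(z2)>_H
# for a toy finite state space and a finite-dimensional feature map
# phi(s'|z) in R^p (all sizes here are illustrative).
rng = np.random.default_rng(1)
n_states, n_z, p = 5, 4, 7
phi = rng.standard_normal((n_z, n_states, p))   # phi(s'|z)
V1 = rng.random(n_states)
V2 = rng.random(n_states)
z1, z2 = 0, 3

# Double sum over (s1, s2), exactly as in the definition of k_{V1,V2}
k_direct = sum(
    V1[s1] * V2[s2] * phi[z1, s1] @ phi[z2, s2]
    for s1 in range(n_states) for s2 in range(n_states)
)

# Inner product of the aggregated features phi_V(z) = sum_s V(s) phi(s|z)
phi_V1 = V1 @ phi[z1]    # shape (p,)
phi_V2 = V2 @ phi[z2]
assert np.isclose(k_direct, phi_V1 @ phi_V2)
```

The double sum costs $O(|\cS|^2)$ kernel evaluations, whereas the oracle is assumed to return the same quantity in one call; this is precisely what makes the feature-free implementation below tractable only through $k_{V_1, V_2}$.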
\begin{algorithm}[!tb] \caption{\texttt{KernelCCE-VTR}\xspace}\label{alg:base} \begin{algorithmic}[1] \STATE \textbf{Input:} bonus parameter $ \beta>0 $. \FOR {episode $t=1,2,\ldots,T$} \FOR {step $h=H,H-1,\ldots,1$} \STATE {\blue Calculate $\QU{\cdot, \cdot, \cdot}, \QL{\cdot, \cdot, \cdot}$ as in \eqref{eq:Q_neural_update}} \STATE Let $\sigma_h^t(\cdot) = \texttt{FIND\_CCE}(\overline{Q}_h^t, \underline{Q}_h^t, \cdot)$ \STATE Let $\overline{V}_h^t(\cdot) = \EE_{(a, b) \sim \sigma_h^t(\cdot)} \overline{Q}_h^t(\cdot, a, b)$ and $\underline{V}_h^t(\cdot) = \EE_{(a, b) \sim \sigma_h^t(\cdot)} \underline{Q}_h^t(\cdot, a, b)$ \ENDFOR \STATE Receive initial state $x_{1}^{t}$ \FOR {step $h=1,2,\ldots,H$} \STATE Sample $(a_{h}^{t},b_{h}^{t})\sim\sigma_{h}^{t}(x_{h}^{t})$. \STATE $P_1$ takes action $a_h^t$, $P_2$ takes action $b_h^t$ \STATE Observe next state $x_{h+1}^{t}$. \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm}
To find an equilibrium $(\pi^*, \nu^*)$ of the value function $V_1^{\pi, \nu}(x_1)$, we design an algorithm using value-targeted regression (VTR) and upper/lower confidence bound-based exploration. As the min-player aims to minimize the value function while the max-player wishes to maximize it, we use an upper confidence bound to encourage the exploration of the max-player and a lower confidence bound to encourage the exploration of the min-player. Thus we need to define two pairs of value functions for the max- and min-players respectively, i.e., $\overline{Q}_h^t, \underline{Q}_h^t, \overline{V}_h^t, \underline{V}_h^t$, where we adopt the overline notation for the over-estimates of the max-player and the underline notation for the under-estimates of the min-player. In the following, we only describe how to estimate the value functions for the max-player; the value functions for the min-player can be estimated analogously.
At each round of the game, we solve the following ridge regression problem for minimizing the Bellman error:
\begin{align} \overline{\btheta}_h^t &= \argmin_{\btheta \in \cH } \sum_{\tau = 1}^{t - 1} \left[ \overline{V}_{h+1}^\tau(x_{h+1}^\tau) - \left\langle \phi_{\overline{V}_{h+1}^\tau}(z_h^\tau), \btheta \right\rangle_{\cH} \right]^2 + \lambda \norm{\btheta }^2_{\cH} .\label{eq:rkhs_ridge} \end{align}
Note that in~\eqref{eq:rkhs_ridge}, $\overline{V}_{h+1}^\tau$ only depends on the previous trajectories $ \left\{x_i^j, a_i^j, b_i^j: j \in [\tau - 1], i \in [H]\right\} $. We denote the corresponding $\sigma$-algebra by $\cF_{\tau - 1}$; thus $\overline{V}_{h+1}^\tau$ is $\cF_{\tau - 1}$-measurable. As each $\overline{V}_{h+1}^\tau(x_{h+1}^\tau)$ can be seen as a stochastic sample of $(\PP_h \overline{V}_{h+1}^\tau)(z_h^\tau)$, the regularized regression problem of the max-player in~\eqref{eq:rkhs_ridge} can be seen as solving a linear bandit problem with context $\phi_{\overline{V}_{h+1}^\tau}(z_h^\tau)$, reward function $(\PP_h \overline{V}_{h+1}^\tau)(z_h^\tau)$ and noise term $\overline{V}_{h+1}^\tau(x_{h+1}^\tau) - (\PP_h \overline{V}_{h+1}^\tau)(z_h^\tau)$. From the solution to the ridge regression problem~\eqref{eq:rkhs_ridge}, we can define upper/lower confidence bounds of the action-value functions $Q_h^{*, \nu}, Q_h^{\pi, *}$, respectively. For simplicity of notation, we define the vectors $\overline{\bm{\Psi}}_h^t := \left( \phi_{\overline{V}_{h+1}^1}(z_h^1), \ldots, \phi_{\overline{V}_{h+1}^{t - 1}}(z_h^{t - 1}) \right)^\top \in \cH^{t - 1}$. For a positive parameter $\beta_t>0$ that will be chosen in the later analysis, the confidence region centered at $\overline{\btheta}_h^t$ in the RKHS $\cH$ is defined as
\begin{align} \overline{\cC}_h^t = \bigg\{\btheta: \sqrt{ \lambda \norm{\btheta - \overline{\btheta}_h^t}_{\cH}^2 + \left\|\left\langle \overline{\bm{\Psi}}_h^t , \btheta - \overline{\btheta}_h^t \right\rangle_{\cH}\right\|^2} \leq \beta_t\bigg\}.
\label{eq:region} \end{align}
We omit the definition of $\underline{\cC}_h^t$, which is the analogue of Eq.~\eqref{eq:region} obtained by changing all overline symbols to underline ones. Based on the confidence regions, we construct optimistic and pessimistic estimates of $Q_h^{*, \nu}$ and $Q_h^{\pi, *}$, respectively, as
\begin{align} \overline{Q}_h^t := \Pi_{[-H, H]}\bigg[r_h + \max_{\btheta \in \overline{\cC}_h^t} \left\langle \phi_{\overline{V}_{h+1}^t}, \btheta \right\rangle_{\cH} \bigg] ,\quad \underline{Q}_h^t := \Pi_{[-H, H]}\left[r_h + \min_{\btheta \in \underline{\cC}_h^t} \left\langle \phi_{\underline{V}_{h+1}^t}, \btheta \right\rangle_{\cH} \right] ,\label{eq:Q_neural_update} \end{align}
where $\Pi_{[-H, H]}$ is the projection operator onto $[-H, H]$, which is by definition the range of the value functions. {\blue For the convenience of the induction argument, we define $\overline{V}^{t}_{H+1} = \underline{V}^{t}_{H+1} = 0$, as well as $V_{H+1}^{\pi, \nu}(x) = 0$ and $V_{H+1}^{*, \nu^t} = V_{H+1}^{\pi^t, *} = 0$, since there are no future steps starting from $h = H+1$}. Given the estimates $\overline{Q}_h^t, \underline{Q}_h^t$, the next step is to estimate the corresponding state value functions $\overline{V}_h^t, \underline{V}_h^t$. We utilize the \texttt{FIND\_CCE} algorithm in~\citet{xie2020cce} to find a coarse correlated equilibrium of the payoff pair $(\overline{Q}_h^t(z), \underline{Q}_h^t(z))$. \paragraph{Computational efficiency.} By substituting the closed-form solutions of the maximization/minimization problems in \eqref{eq:Q_neural_update}, we can derive the analytic form of $\overline{Q}_h^t$ and $\underline{Q}_h^t$.
Taking $\overline{Q}_h^t$ as an example, we have
\begin{align} \QU{z} &= \Pi_{[-H, H]}\bigg[ r_h(z) + \overline{k}_h^t(z)^\top (\overline{K}_h^t + \lambda I)^{-1} \overline{y}_h^t + \beta_t \cdot \overline{w}_h^t(z) \bigg] , \end{align}
where the Gram matrix $\overline{K}_h^t$ and vector-valued function $\overline{k}_h^t$ are defined as
\begin{align} &\overline{K}_h^t = \left(\overline{\bm{\Psi}}_h^t\right)\left(\overline{\bm{\Psi}}_h^t\right)^\top \in \RR^{(t - 1) \times (t - 1)} ,\quad \overline{k}_h^t = \left(\overline{\bm{\Psi}}_h^t\right) \phi_{\overline{V}_{h+1}^t}(z) = \left(k_{\overline{V}_{h+1}^i, \overline{V}_{h+1}^t}(z_h^i, z)\right)_i \in \RR^{t-1}.\notag \end{align}
Also, we have $\overline{y}_h^t := \left[ \overline{V}_{h+1}^1(x_{h+1}^1), \ldots, \overline{V}_{h+1}^{t - 1}(x_{h+1}^{t - 1}) \right]^\top$ and $\overline{w}_h^t(z) = \lambda^{-1/2}\bigg[ k_{\overline{V}_{h+1}^t, \overline{V}_{h+1}^t}(z, z) - \overline{k}_h^t(z)^\top \big(\overline{K}_h^t + \lambda \cdot \Ib \big)^{-1} \overline{k}_h^t(z) \bigg]^{1/2}$. Therefore, by the assumption that the weighted kernel function $k_{V_1, V_2}$ can be evaluated efficiently, $\overline{Q}_h^t$ and $\underline{Q}_h^t$ can also be computed efficiently. Furthermore, given $\overline{Q}_h^t$ and $\underline{Q}_h^t$, \texttt{FIND\_CCE} can also be implemented efficiently \citep{xie2020cce}. Thus, Algorithm \ref{alg:base} is computationally efficient. \section{Main Results}\label{sec_RKHS} In this section, we present the regret bound of our algorithm for the kernel mixture Markov Game. Recall that for the linear function class, the regret upper bound is characterized by the dimension of the linear function class, the horizon of the game, and the number of episodes \citep{chen2021almost}. Our analysis in the RKHS function approximation setting aligns with the linear function approximation setting when $K(z, z') = \phi(z)^\top \phi(z')$.
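As a concrete illustration of this correspondence, the closed-form optimistic update derived above can be sketched with explicit finite-dimensional features, in which case the kernel is exactly $K(z, z') = \phi(z)^\top \phi(z')$. The function below is a minimal stand-in, not the authors' implementation; all names and shapes are illustrative.

```python
import numpy as np

# Sketch of the closed-form optimistic value from the kernel ridge solution,
# using explicit finite-dimensional features as a stand-in for the RKHS
# (so k(z, z') = phi(z) . phi(z')). Shapes and names are illustrative.
def optimistic_q(Psi, y, phi_z, r_z, lam, beta, H):
    """Psi: (t-1, p) rows phi_{V^i}(z_h^i); y: (t-1,) regression targets;
    phi_z: (p,) feature of the query point; returns the clipped UCB value."""
    t1 = Psi.shape[0]
    K = Psi @ Psi.T                          # Gram matrix, (t-1, t-1)
    k = Psi @ phi_z                          # kernel vector k_h^t(z)
    A = np.linalg.solve(K + lam * np.eye(t1), np.column_stack([y, k]))
    mean = k @ A[:, 0]                       # k^T (K + lam I)^{-1} y
    var = phi_z @ phi_z - k @ A[:, 1]        # k(z,z) - k^T (K + lam I)^{-1} k
    w = np.sqrt(max(var, 0.0) / lam)         # exploration bonus width
    return float(np.clip(r_z + mean + beta * w, -H, H))

rng = np.random.default_rng(2)
Psi = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
q = optimistic_q(Psi, y, rng.standard_normal(5), r_z=0.1, lam=1.0, beta=1.0, H=5)
assert -5 <= q <= 5
```

The pessimistic value $\underline{Q}_h^t$ follows by subtracting the bonus instead of adding it; everything is expressed through inner products, so swapping the explicit features for the weighted-kernel oracle changes nothing structurally.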
When considering a nonlinear function class as an approximator of the value function, we need to develop a new concept analogous to the dimension $d$ that characterizes the intrinsic complexity of the function class $\cF$. We do so by making use of the maximal information gain, $\Gamma_K(T, \lambda)$ \citep{srinivas2009gaussian}, where $T$ is the number of episodes and $\lambda > 0$ is the regularization parameter. In particular, we define the \emph{effective dimension} of the RKHS $\cH$ with respect to the mixture MG as follows:
\begin{definition}\label{def:eff} We define the effective dimension $\Gamma_K(T, \lambda)$ as follows:
\begin{align} \Gamma_K(T, \lambda): = \sup_{(V_i)_i, (z_i)_i }\frac{1}{2}\log \det (\Ib + K(\{V_i\}_i, \{z_i\}_i)/\lambda), \notag\end{align}
for any $1 \leq i \leq T,\ V_i: \cS \rightarrow [-H,H],\ z_i \in \cZ$, where the $V_i$'s are functions mapping from $\cS$ to $[-H,H]$ and the $z_i$'s are state-action tuples. Here, $K(\{V_i\}_i, \{z_i\}_i) \in \RR^{T \times T}$ and its $(p,q)$-th entry for any $1 \leq p,q \leq T$ is $[K(\{V_i\}_i, \{z_i\}_i)]_{p,q} = k_{V_p, V_q}(z_p, z_q)$. \end{definition}
By the boundedness of $\phi_V$ as in Section~\ref{sec_prelim_rkhs}, it is easy to verify that both the tabular MG and the linear mixture MG enjoy a finite effective dimension. Specifically, for an RKHS $\cH$ of finite rank $d$, we have $\Gamma_K(T, \lambda) = O(d \cdot \log T)$, so the effective dimension recovers the rank of $\cH$ up to a logarithmic factor. Via a concentration argument, we first present our main lemma for bounding the estimation error when choosing $\beta_t = \beta$ for all $t\geq 1$.
\begin{lemma}\label{lem:main} Assume that for any $h \in [H]$, $\norm{\btheta_h^*}_{\cH} \leq B$. Let $\lambda = 1 + 1/T$ and let $\beta$ satisfy $\left(\beta/H\right)^2 \geq 2\Gamma_K(T, \lambda) + 2 + 4\cdot \log \left( 1/\delta \right) + 2 \lambda \left(B/H\right)^2$.
Then, for any $\delta > 0$, with probability at least $1 - \delta$, the following holds for any $(t, h) \in [T] \times [H]$ and any $(x, a, b) \in \cS \times \cA \times \cA$:
\begin{align} \left| \left\langle \phi_{\overline{V}_{h+1}^t}(x, a, b) , \overline{\btheta}_h^t - \btheta_h^*\right\rangle_{\cH} \right| &\leq \beta\cdot \overline{w}_h^t(x, a, b) ,\ \left| \left\langle \phi_{\underline{V}_{h+1}^t}(x, a, b), \underline{\btheta}_h^t - \btheta_h^*\right\rangle_{\cH} \right| \leq \beta\cdot \underline{w}_h^t(x, a, b) .\notag \end{align} \end{lemma}
We are now ready to present our main theorem.
\begin{theorem}[RKHS function approximation]\label{thm:main} Under the same conditions as Lemma \ref{lem:main}, with probability at least $1 - \delta$, $\texttt{KernelCCE-VTR}\xspace$ has the following regret:
\begin{align} \operatorname{Regret}(T) = O \left(\beta H \sqrt{T \cdot \Gamma_{K}(T, \lambda)} + 1 \right) .\notag \end{align} \end{theorem}
\begin{remark} Theorem \ref{thm:main} suggests that by treating the norm $B$ as a constant, $\texttt{KernelCCE-VTR}\xspace$ achieves an $\tilde O(\Gamma_{K}(T, \lambda) H^{2}\sqrt{T})$ regret bound. When the RKHS degenerates to the Euclidean space, the regret bound reduces to $\tilde O(dH^{2}\sqrt{T})$, which matches the $\tilde O(dH^{3/2}\sqrt{T})$ regret for linear mixture MGs presented by \citet{chen2021almost} up to a $\sqrt{H}$ factor. \end{remark}
Similar to \citet{xie2020cce}, by using a standard online-to-batch conversion technique, we can convert the regret bound in Theorem \ref{thm:main} to a PAC bound. For simplicity, let the initial state of each episode be the same, i.e., $x_1^t = x_1$. After $T$ episodes, we select $t_0 \in [T]$ satisfying
\begin{align} t_0 = \argmin_{t \in [T] }\{\overline{V}_1^t(x_1) - \underline{V}_1^t(x_1)\},\label{help:444} \end{align}
which yields the following sample complexity guarantee for finding an $\epsilon$-approximate NE policy pair $(\pi^{t_0}, \nu^{t_0})$.
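For a given Gram matrix, the effective dimension of Definition \ref{def:eff} reduces to evaluating $\frac{1}{2}\log \det (\Ib + K/\lambda)$. A minimal numerical sketch with rank-$d$ features (sizes illustrative) shows the information gain staying far below $T$, consistent with the $O(d \cdot \log T)$ scaling discussed above.

```python
import numpy as np

# Sketch of the effective dimension Gamma_K(T, lam) = (1/2) log det(I + K/lam)
# for a given Gram matrix K, using rank-d features as an illustrative stand-in;
# in that case the gain grows like O(d log T), matching the linear setting.
def information_gain(K, lam):
    sign, logdet = np.linalg.slogdet(np.eye(K.shape[0]) + K / lam)
    return 0.5 * logdet

rng = np.random.default_rng(3)
d, T = 3, 200
Phi = rng.standard_normal((T, d))        # rank-d features phi_{V_i}(z_i)
K = Phi @ Phi.T
gain = information_gain(K, lam=1.0)

# A rank-d Gram matrix has at most d nonzero eigenvalues, so the gain is
# bounded by d * log(1 + largest eigenvalue / lam), far below T.
assert 0 < gain <= d * np.log(1 + np.linalg.eigvalsh(K).max())
```

Computing a `slogdet` rather than `det` avoids overflow for large $T$; the supremum over $\{V_i\}_i, \{z_i\}_i$ in the definition is, of course, not computable in general and is only a stand-in here.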
\begin{corollary}[Sample complexity]\label{coro:main} Under the same conditions as Theorem \ref{thm:main}, by setting $T = O\big(\beta^2H^2\Gamma_{K}(T, \lambda)/\epsilon^2\big) = \tilde O\big(H^4\Gamma_{K}^2(T, \lambda)/\epsilon^2\big)$ and selecting $t_0$ as in \eqref{help:444}, the policy pair $(\pi^{t_0}, \nu^{t_0})$ is an $\epsilon$-approximate NE. \end{corollary}
\section{Bernstein-type Bonus, Misspecification, and Neural Function Approximation}\label{sec_RKHSmis} In this section, we propose several extensions of \texttt{KernelCCE-VTR}\xspace. Section~\ref{sec:bern11} introduces $\texttt{KernelCCE-VTR}\xspace$ with a Bernstein-type bonus. Section~\ref{subsec_RKHSmis} discusses kernel function approximation with misspecification. Section~\ref{subsec:NNmis} specializes kernel function approximation with misspecification to the neural function approximation setting. \subsection{$\texttt{KernelCCE-VTR}\xspace$ with a Bernstein-type bonus}\label{sec:bern11} Recall that in \texttt{KernelCCE-VTR}\xspace, we need to choose $\beta$ in order to calculate the optimistic and pessimistic state-action value functions $\QU{\cdot}, \QL{\cdot}$ defined in \eqref{eq:Q_neural_update}. The theoretical value of $\beta$ is given in Lemma~\ref{lem:main}, and it controls the uncertainty of the action-value estimate. This choice of $\beta$ stems from a Hoeffding-type concentration inequality used in the proof of Lemma~\ref{lem:main}. It has been shown in \citet{zhou2020nearly} that by using a Bernstein-type bonus and a sharp analysis based on the total variance lemma, one can obtain an improved algorithm with a tighter regret bound. Following this idea, we propose a $\texttt{KernelCCE-VTR+}\xspace$ algorithm, which replaces the Hoeffding-type bonus with a Bernstein-type bonus. To demonstrate the construction of the Bernstein-type bonus, we take the max-player as an example.
In particular, we solve the following weighted kernel ridge regression problem:
\begin{align} \overline{\btheta}_{h, 1}^t &= \argmin_{\btheta \in \cH } \sum_{\tau = 1}^{t - 1} \Big[ \overline{V}_{h+1}^\tau(x_{h+1}^\tau) - \left\langle \phi_{\overline{V}_{h+1}^\tau}(z_h^\tau), \btheta \right\rangle_{\cH} \Big]^2 / \left(\overline{R}_h^\tau\right)^2 + \lambda_1 \norm{\btheta }^2_{\cH} ,\label{eq:rkhs_ridge_weighted} \end{align}
where the input is the normalized feature mapping $\phi_{\overline{V}_{h+1}^\tau}(z_h^\tau)/\overline{R}_h^\tau$, the output is the normalized value function $\overline{V}_{h+1}^\tau(x_{h+1}^\tau)/\overline{R}_h^\tau$, and the normalization factor $\overline{R}_h^\tau$ is an upper bound on the conditional variance of $\overline{V}_{h+1}^\tau(x_{h+1}^\tau)$. It is straightforward to verify that \eqref{eq:rkhs_ridge_weighted} admits a closed-form solution. Given that solution, we can compute the upper confidence bound of the action-value function $Q_h^{*, \nu}$. In detail, we define $\overline{\bm{\Psi}}_{h, 1}^t := \left( \phi_{\overline{V}_{h+1}^1}(z_h^1)/\overline{R}_h^1 ,\ldots, \phi_{\overline{V}_{h+1}^{t - 1}}(z_h^{t - 1})/\overline{R}_h^{t - 1} \right)^\top \in \cH^{t - 1}$. The Gram matrix $\overline{K}_{h, 1}^t$, the vector-valued function $\overline{k}_{h, 1}^t$ and the confidence region centered at $\overline{\btheta}_{h, 1}^t$ in the RKHS $\cH$ can be calculated in the same way as in Algorithm \ref{alg:base}, except that $\overline{\bm{\Psi}}_{h}^t$ is replaced by $\overline{\bm{\Psi}}_{h, 1}^t$.
Then the optimistic estimate of the action-value function $Q_h^{*, \nu}$ has the following form:
\begin{align} \QU{z} &= \Pi_{[-H, H]}\left[ r_h(z) + \overline{k}_{h, 1}^t(z)^\top (\overline{K}_{h, 1}^t + \lambda_1 I)^{-1} \overline{y}_{h, 1}^t + \beta_t \cdot \overline{w}_{h, 1}^t(z) \right],\label{eq:Q_neural_update_weighted} \end{align}
where $\overline{y}_{h, 1}^t := \left[ \overline{V}_{h+1}^1(x_{h+1}^1)/\overline{R}_h^1, \ldots, \overline{V}_{h+1}^{t - 1}(x_{h+1}^{t - 1})/\overline{R}_h^{t - 1} \right]^\top $ and
\begin{align} \overline{w}_{h, 1}^t(z) &= \lambda_1^{-1/2}\cdot \bigg[ k_{\overline{V}_{h+1}^t, \overline{V}_{h+1}^t}(z, z) - \overline{k}_{h, 1}^t(z)^\top \left(\overline{K}_{h, 1}^t + \lambda_1 \cdot \Ib \right)^{-1} \overline{k}_{h, 1}^t(z) \bigg]^{1/2} .\notag \end{align}
We defer the presentation of the conditional variance estimator $\overline{R}_h^t$ to Appendix \ref{sec:bern}. Similarly, we can construct the pessimistic estimate of the action-value function $Q_h^{\pi, *}$ for the min-player. We have the following informal result for $\texttt{KernelCCE-VTR+}\xspace$. The full algorithm and its formal guarantee can be found in Appendix \ref{sec:bern}.
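The variance-weighted regression in \eqref{eq:rkhs_ridge_weighted} is ordinary weighted least squares in disguise: dividing each feature and target by $\overline{R}_h^\tau$ is equivalent to weighting the squared residuals by $1/(\overline{R}_h^\tau)^2$. The sketch below checks this equivalence numerically in a finite-dimensional stand-in for the RKHS (all sizes and names illustrative).

```python
import numpy as np

# Sketch: dividing each row phi_tau and target v_tau by its variance upper
# bound R_tau is the same as solving weighted least squares with weights
# 1/R_tau^2. All sizes and names here are illustrative.
rng = np.random.default_rng(4)
t1, p, lam = 10, 4, 1.0
Phi = rng.standard_normal((t1, p))       # rows phi_{V_{h+1}^tau}(z_h^tau)
v = rng.standard_normal(t1)              # targets V_{h+1}^tau(x_{h+1}^tau)
R = 0.5 + rng.random(t1)                 # conditional-variance upper bounds

# (a) normalized-feature form, as in the weighted ridge regression
Pn = Phi / R[:, None]
theta_norm = np.linalg.solve(Pn.T @ Pn + lam * np.eye(p), Pn.T @ (v / R))

# (b) explicit weighted normal equations with weights 1/R^2
W = np.diag(1.0 / R**2)
theta_wls = np.linalg.solve(Phi.T @ W @ Phi + lam * np.eye(p), Phi.T @ W @ v)

assert np.allclose(theta_norm, theta_wls)
```

Episodes whose target has high conditional variance are thus down-weighted, which is exactly what lets the Bernstein-type analysis exploit the total variance lemma.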
\begin{theorem}[Informal]\label{thm:inform_bernstein} Let $d_{\text{eff}} = \Gamma_K(T, \lambda)$. With proper choices of $\overline{R}_h^t, \underline{R}_h^t$ and $\beta_t$, with probability at least $1 - \delta$, $\texttt{KernelCCE-VTR+}\xspace$ has the following regret:
\begin{align} \operatorname{Regret}(T) = \tilde{O}\bigg( & d_{\text{eff}}^2 H^3 + \sqrt{d_{\text{eff}} H^4 + d_{\text{eff}}^2 H^3} \sqrt{T} + \left(d_{\text{eff}}^7 H^7 + d_{\text{eff}}^4 H^9 \right)^{1/4} T^{1/4} \bigg) .\notag \end{align} \end{theorem}
\begin{remark} When $T$ is sufficiently large and $\Gamma_K(T, \lambda)$ is larger than $H$, the regret bound in Theorem \ref{thm:inform_bernstein} is dominated by $\tilde O(\Gamma_K(T, \lambda) H^{3/2}\sqrt{T})$, which improves the $\tilde O(\Gamma_K(T, \lambda) H^2\sqrt{T})$ regret derived in Theorem \ref{thm:main} by a factor of $\sqrt{H}$. Compared with the $\tilde \Omega(d H^{3/2}\sqrt{T})$ lower bound proposed in \citet{chen2021almost}, our $\texttt{KernelCCE-VTR+}\xspace$ algorithm is nearly optimal when it reduces to the linear mixture MG setting. \end{remark}
\subsection{Kernel function approximation with misspecification}\label{subsec_RKHSmis} In this subsection, we consider the case where the transition model may not lie exactly in an RKHS, but its distance to one can be bounded. This can be formulated as kernel function approximation with misspecification. We assume that there exists a misspecification error between the RKHS $\cH$ and the true transition probability $\PP_h(s' | z)$.
\begin{assumption}\label{assu:misspecification} There exists an $\iota_{\mis} > 0$, an RKHS $\cH$ with feature mapping $\phi: \cZ \times \cS \mapsto \cH$, and an unknown parameter $\btheta_h^* \in \cH$ satisfying $\left\|\btheta_h^*\right\|_{\cH} \leq B$ such that for any $h \in [H]$ and any $z \in \cZ$, the distance of the transition probability $\PP_h$ to $\cH$ can be bounded by $\iota_{\mis}$, i.e., $\left\|\PP_h(\cdot | z) - \left\langle \phi(\cdot | z), \btheta_h^* \right\rangle_{\cH} \right\|_{\text{TV}} \leq \iota_{\mis}$. \end{assumption}
In order to deal with model misspecification, the key idea is to enlarge $\beta_t$ in the definition of the optimistic action-value function in \eqref{eq:Q_neural_update}. More specifically, we add an extra $\cO(H \iota_{\mis} \sqrt{t})$ term, accounting for the misspecification error, to the $\beta$ specified in Lemma~\ref{lem:main}. We can show that $\texttt{KernelCCE-VTR}\xspace$ with such an enlarged $\beta$ still has a sublinear regret in the presence of misspecification.
\begin{theorem}[RKHS function approximation with misspecification]\label{thm:main_RKHSmis} Assume that for any $h \in [H]$, $\norm{\btheta_h^*}_{\cH} \leq B$. Set $\lambda = 1 + 1/T$ in the $\texttt{KernelCCE-VTR}\xspace$ algorithm. For any $\delta > 0$ and any $\beta_t$ satisfying $\left(\beta_t/H\right)^2 \geq 2\Gamma_K(T, \lambda) + 3 + 6\cdot \log \left(1/\delta \right) + 3 \lambda \left(B/H\right)^2 + 3 \iota_{\mis}^2 t$, there exists a global constant $c > 0$ such that with probability at least $1 - \delta$, we have
\begin{align} \operatorname{Regret}(T) \leq c \left(\beta_T H \sqrt{T \cdot \Gamma_{K}(T, \lambda)} + 1 + H^2 T \iota_{\mis} \right) .\notag \end{align} \end{theorem}
In words, Theorem \ref{thm:main_RKHSmis} suggests that in the misspecified case, $\texttt{KernelCCE-VTR}\xspace$ can achieve the same regret as in the well-specified case up to an $O(\sqrt{\Gamma_{K}(T, \lambda)}H^2T\iota_{\mis})$ error.
Such a linear dependence on $\iota_{\mis}$ matches the results for single-agent RL in the finite-dimensional case \citep{jin2019provably, zanette2020learning}.
\begin{algorithm}[!tb] \caption{NeuralCCE-VTR} \begin{algorithmic}[1] \STATE \textbf{Input:} bonus parameter $ \beta_t>0 $. \FOR {episode $t=1,2,\ldots,T$} \STATE Receive initial state $x_{1}^{t}$ \FOR {step $h=H,H-1,\ldots,1$} \STATE Solve the optimization problem~\eqref{eq:nn_ridge} \STATE Calculate $\QU{\cdot}, \QL{\cdot}$ as in Eq.~\eqref{eq:Q_nn_update} \STATE For each $x$, let $\sigma_h^t(x) = \texttt{FIND\_CCE}(\overline{Q}_h^t, \underline{Q}_h^t, x)$ \STATE For each $x$, let $\overline{V}_h^t(x) = \EE_{(a, b) \sim \sigma_h^t(x)} \overline{Q}_h^t(x, a, b)$ and $\underline{V}_h^t(x) = \EE_{(a, b) \sim \sigma_h^t(x)} \underline{Q}_h^t(x, a, b)$ \ENDFOR \FOR {step $h=1,2,\ldots,H$} \STATE Sample $(a_{h}^{t},b_{h}^{t})\sim\sigma_{h}^{t}(x_{h}^{t})$. \STATE $P_1$ takes action $a_h^t$, $P_2$ takes action $b_h^t$ \STATE Observe next state $x_{h+1}^{t}$. \ENDFOR \ENDFOR \end{algorithmic} \label{algo:neural} \end{algorithm}
\subsection{Neural function approximation}\label{subsec:NNmis} In this subsection, we provide details for the application of our algorithm to the neural function approximation setting, showing that neural network (NN) function approximation can be treated as a special case of kernel function approximation with misspecification. We denote by $z:= (x, a, b)$ a vector in $\RR^d$ that satisfies $\norm{z} = 1$ and represent the parameters of an $L$-layer fully connected neural network $f$ by $\btheta := \left[\mathop{\text{vec}}(\mathbf{W}_1)^\top, \mathop{\text{vec}}(\mathbf{W}_2)^\top, \ldots, \mathop{\text{vec}}(\mathbf{W}_L)^\top \right]^\top$, where $\Wb_1 \in \RR^{m \times d}$, $\Wb_l \in \RR^{m \times m}$ for $2 \leq l \leq L-1$ and $\Wb_L \in \RR^{1 \times m}$.
The neural network $f(z; \btheta)$ with parameter set $\btheta$ can be defined as: \begin{align*}f(z; \btheta)= \sqrt{m} \mathbf{W}_{L} G\left(\cdots G\left( \mathbf{W}_2 G\left(\mathbf{W}_1 z \right) \right)\right), \end{align*} where $G(\cdot): \RR \mapsto \RR $ is an activation function. For $1 \leq l \leq L-1$, $\Wb_l$ is initialized as $\Wb_l = (\Wb, \zero; \zero, \Wb)$, where each entry of $\Wb$ is generated independently from the normal distribution $N(0, 4/m)$; $\Wb_L$ is initialized as $\Wb_L = (\bw^\top, -\bw^\top)$, where each entry of $\bw$ is generated independently from $N(0, 2/m)$. Given the initialized parameter $\btheta^{(0)}$, we choose the feature map as the gradient of $f$ at $\btheta^{(0)}$: \begin{align*}\phi(z) = \nabla_{\btheta} f(z;\btheta^{(0)})/\sqrt{m}.\end{align*} Then we define the weighted kernel function $k_{V_1, V_2}(\cdot, \cdot)$ in Definition \ref{def:weighkernel} with $\phi(z)$. Similarly, we define the effective dimension $\Gamma_K(T, \lambda)$ with respect to the kernel function $k_{V_1, V_2}(\cdot, \cdot)$, in the same fashion as Definition \ref{def:eff}. Our assumption is that for all $h \in [H]$ the transition probability $\PP_h$ can be modeled by the neural network with parameter $\btheta_h^*$ satisfying $\left\|\btheta_h^* - \btheta^{(0)}\right\|_2 \leq B$: \begin{align*} \PP_h(x' | z) = f(x', z; \btheta_h^*) . \end{align*} We now explicate our algorithm, shown formally in Algorithm~\ref{algo:neural}.
As in Eq.~\eqref{eq:rkhs_ridge}, we solve penalized ridge regression problems for the min-player and the max-player respectively: \begin{align} \overline{\btheta}_h^t &= \mathop{\text{argmin}}_{\btheta \in \RR^{p}} \sum_{\tau = 1}^{t - 1} \left[ \overline{V}_{h+1}^\tau(x_{h+1}^\tau) - f_{\overline{V}_{h+1}^\tau}(z_h^\tau; \btheta) \right]^2 + \lambda \cdot \norm{\btheta - \btheta^{(0)}}^2 ,\notag \\ \underline{\btheta}_h^t &= \mathop{\text{argmin}}_{\btheta \in \RR^{p}} \sum_{\tau = 1}^{t - 1} \left[ \underline{V}_{h+1}^\tau(x_{h+1}^\tau) - f_{\underline{V}_{h+1}^\tau}(z_h^\tau; \btheta) \right]^2 + \lambda \cdot \norm{\btheta - \btheta^{(0)}}^2 ,\label{eq:nn_ridge} \end{align} where $p = md + m^2(L-2) + m$ is the dimension of the parameter space, and $f_{\overline{V}_{h+1}^\tau}, f_{\underline{V}_{h+1}^\tau}$ are defined analogously to $\phi_{\overline{V}_{h+1}^\tau}$ as follows: \begin{align*} f_{\overline{V}_{h+1}^\tau}(z; \btheta) &= \sum_{s' \in \cS}\overline{V}_{h+1}^\tau(s') f(s', z; \btheta) ,\qquad f_{\underline{V}_{h+1}^\tau}(z; \btheta)= \sum_{s' \in \cS} \underline{V}_{h+1}^\tau(s') f(s', z; \btheta) . \end{align*} For given $\overline{\btheta}_h^t, \underline{\btheta}_h^t$, we define \begin{align} &\overline{\bm{\Psi}}_h^t := \left( \phi_{\overline{V}_{h+1}^1}(z_h^1; \overline{\btheta}_h^{2}), \ldots, \phi_{\overline{V}_{h+1}^{t -1}}(z_h^{t - 1}; \overline{\btheta}_h^t) \right)^\top , \notag \\ &\underline{\bm{\Psi}}_h^t := \left( \phi_{\underline{V}_{h+1}^{1}}(z_h^1; \underline{\btheta}_h^{2}), \ldots, \phi_{\underline{V}_{h+1}^{t -1}}(z_h^{t - 1}; \underline{\btheta}_h^t) \right)^\top .
\end{align} Furthermore, \begin{align*} \overline{\Lambda}_h^t := \lambda \cdot \Ib + (\overline{\bm{\Psi}}_h^t)^\top \overline{\bm{\Psi}}_h^t , \qquad \underline{\Lambda}_h^t := \lambda \cdot \Ib + (\underline{\bm{\Psi}}_h^t)^\top \underline{\bm{\Psi}}_h^t , \end{align*} and \begin{align} &\overline{w}_h^t(z) := \left[ \phi_{\overline{V}_{h+1}^t}(z; \overline{\btheta}_h^t)^\top (\overline{\Lambda}_h^t)^{-1} \phi_{\overline{V}_{h+1}^t}(z; \overline{\btheta}_h^t) \right]^{1/2} , \notag \\ &\underline{w}_h^t(z) := \left[ \phi_{\underline{V}_{h+1}^t}(z; \underline{\btheta}_h^t)^\top (\underline{\Lambda}_h^t)^{-1} \phi_{\underline{V}_{h+1}^t}(z; \underline{\btheta}_h^t) \right]^{1/2} . \end{align} Using the $\overline{\Lambda}_h^t, \underline{\Lambda}_h^t, \overline{w}_h^t, \underline{w}_h^t$, we estimate the optimal value functions as \begin{align} &\QU{z} = \Pi_{[-H,H]}\{r_h(z) + f_{\overline{V}_{h+1}^t}(z;\overline{\btheta}_h^t) + \beta \cdot \overline{w}_h^t(z)\} ,\notag \\ & \QL{z} = \Pi_{[-H,H]}\{r_h(z) +f_{\underline{V}_{h+1}^t}(z; \underline{\btheta}_h^t) - \beta \cdot \underline{w}_h^t(z)\} .\label{eq:Q_nn_update} \end{align} Combining with the procedures for finding a CCE, we obtain the full version of our algorithm as in Algorithm~\ref{algo:neural}. We have the following result on the neural network at initialization. 
\begin{lemma}\label{lemm:initiallinear} There exist constants $C_i >0$ such that for any $\delta \in (0,1)$, if $B$ satisfies \begin{align} &B \geq C_1m^{-1}L^{-3/2}\max\{\log^{-3/2}m, \log^{3/2}(|\cZ|HL^2/\delta)\},\notag \\ &B \leq C_2 L^{-6}(\log m)^{-3/2},\notag \end{align} then with probability at least $1-\delta$, we have for all $z \in \cZ$, $h \in [H]$ and $V_h: \cS\rightarrow [-1, 1]$, \begin{align} & |\PP_h V_h(z) - \la \phi_{V_h}(z), \btheta_h^* - \btheta^{(0)}\ra| \leq C_3 |\cS|B^{4/3} m^{-1/6}L^3\sqrt{\log m} ,\notag \end{align} and \begin{align} & \|\phi_{V_h}(z)\|_2 \leq C:=C_4|\cS|\sqrt{L} .\notag \end{align} \end{lemma} \noindent Lemma \ref{lemm:initiallinear} suggests that in the NN approximation setting, Assumption \ref{assu:misspecification} for the misspecified kernel approximation setting is satisfied with $\iota_{\mis} = C_3 |\cS|B^{4/3} m^{-1/6}L^3\sqrt{\log m}$, with probability at least $1-\delta$. The misspecification error is sufficiently small when $m$ is large. We note that the definition of $\phi(z)$ in the NN setting does not match the boundedness assumption in Section~\ref{sec_prelim_rkhs}. We balance the scale of $\phi(z)$ by the constant $C$ in Lemma~\ref{lemm:initiallinear}, which goes into the choice of $\lambda = C^2(1 + 1/T)$. With these choices in hand, we are ready to present our main result for NN approximation. \begin{theorem}[NN approximation]\label{theo:misspecification} Let $C$ be the constant in Lemma \ref{lemm:initiallinear}. Assume that for any $h \in [H]$, $\norm{\btheta_h^* - \btheta^{(0)}}_2 \leq B$. Set $\lambda = C^2 \left(1 + 1/T\right)$ in the \algname Algorithm.
For any $\delta > 0$ and any $\beta_t$ satisfying \begin{align*} \left(\frac{\beta_t}{H}\right)^2 &\geq 2\Gamma_K(T, \lambda) + 3 + 6\cdot \log \left( \frac{1}{\delta} \right) + 3 \lambda \left(\frac{B}{H}\right)^2 + 3 \cdot C^2 \cdot B^{8/3} \cdot m^{-1/12} \cdot t\cdot \log m , \end{align*} there exists a global constant $c > 0$ such that with probability at least $1 - 2\delta$, we have \begin{align*} \operatorname{Regret}(T) &\leq c \left(\beta_T H \sqrt{T \cdot \Gamma_{K}(T, \lambda)} + 1 + B^{4/3} H^2 T m^{-1/6} \sqrt{\log m}\right) . \end{align*} \end{theorem} \noindent Theorem \ref{theo:misspecification} suggests that when we use an overparameterized deep neural network ($m \gg 1$) to approximate the transition dynamics, $\texttt{KernelCCE-VTR}\xspace$ achieves an $\tilde O(\Gamma_{K}(T, \lambda) H^{2}\sqrt{T})$ regret, which is of the same order as that in Theorem \ref{thm:main}. \section{Conclusions}\label{sec_conclu} In this work, we studied learning for two-player mixture MGs using kernel function approximation. We introduced a new formulation of kernel mixture MGs and proposed an algorithm $\texttt{KernelCCE-VTR}\xspace$ that exploits the kernel function of the MG. We show that our $\texttt{KernelCCE-VTR}\xspace$ is able to achieve a sublinear $\tilde O(d_K H^2\sqrt{T})$ regret.
We further improve our algorithm with a \emph{Bernstein-type bonus} and \emph{weighted kernel ridge regression}, which enjoys a better $\tilde O(d_K H^{3/2}\sqrt{T})$ regret and nearly matches the regret lower bound in~\citet{chen2021almost} when reducing to linear mixture MGs. Finally, we extend our analysis of the basic RKHS setting to a more general nonlinear function approximation setting with misspecification errors and demonstrate that neural networks can be treated as a special instance of this misspecification framework. We believe our framework and analysis greatly broaden the applicability of these function classes for game-theoretic problems.
Q: Remove substring if number exists before keyword

I have strings with the form:

    5 dogs = 1 medium size house
    4 cats = 2 small houses
    one bird = 1 bird cage

What I am trying to do is remove the substring that exists before the equals sign, but only if the substring contains a keyword and the data before that keyword is an integer. So in this example my keywords are: dogs, cats, bird. In the above example, the ideal output of my process would be:

    1 medium size house
    2 small houses
    one bird = 1 bird cage

My code so far looks like this (I am hard-coding the keyword values/strings for now):

    var originalString = "5 dogs = 1 medium size house";
    int equalsIndex = originalString.IndexOf('=');
    var prefix = originalString.Substring(0, equalsIndex);
    if (prefix.Contains("dogs"))
    {
        var modifiedString = originalString.Replace(prefix, string.Empty).Replace("=", string.Empty);
        return modifiedString;
    }
    return originalString;

The issue here is that I am removing the whole substring regardless of whether or not the data preceding the keyword is a number. Would somebody be able to help me with this additional logic? Thanks so much as always for anybody who takes a few minutes to read this question.

Mick

A: You can do it with a simple regex of the form \d+\s+(?:kw1|kw2|kw3|...)\s*=\s* where kwX is the corresponding keyword.

    var data = new[]
    {
        "5 dogs = 1 medium size house",
        "4 cats = 2 small houses",
        "one bird = 1 bird cage"
    };
    var keywords = new[] { "dogs", "cats", "bird" };
    var regexStr = string.Format(@"\d+\s+(?:{0})\s*=\s*", string.Join("|", keywords));
    var regex = new Regex(regexStr);
    foreach (var s in data)
    {
        Console.WriteLine("'{0}'", regex.Replace(s, string.Empty));
    }

In the example above the call of string.Format pastes the list of keywords joined by | into the "template" of the expression at the top of the post, i.e.
    \d+\s+(?:dogs|cats|bird)\s*=\s*

This expression matches

* One or more digits \d+, followed by
* One or more spaces \s+, followed by
* A keyword from the list dogs, cats, bird, i.e. (?:dogs|cats|bird), followed by
* Zero or more spaces \s*, followed by
* An equal sign =, followed by
* Zero or more spaces \s*

The rest is easy: since this regex matches the part that you wish to remove, you need to call Replace and pass it string.Empty. Demo.

A: You can use regex (System.Text.RegularExpressions) to identify whether or not there is a number in the string.

    Regex r = new Regex("[0-9]"); // Look for a number between 0 and 9
    bool hasNumber = r.IsMatch(prefix);

This Regex simply searches for any number in the string. If you want to search for a number-space-letter sequence you could use [0-9] ([a-z]|[A-Z]). The | is an "or", so that both upper and lower case letters result in a match.

A: You can try something like this:

    int i;
    if (int.TryParse(prefix.Substring(0, 1), out i)) // try to get an int from the first char of prefix
    {
        // remove prefix
    }

This will only work for single-digit integers, however.
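For readers outside .NET, the same pattern drops straight into any regex engine with comparable syntax. Here is a quick Python sketch of the regex approach from the accepted answer (the keyword list and sample strings are taken from the question; the Python translation itself is mine, not from the thread):

```python
import re

# Keywords from the question; the pattern mirrors the accepted answer:
# digits, whitespace, a keyword, then "=" with optional surrounding spaces.
keywords = ["dogs", "cats", "bird"]
pattern = re.compile(r"\d+\s+(?:%s)\s*=\s*" % "|".join(keywords))

data = [
    "5 dogs = 1 medium size house",
    "4 cats = 2 small houses",
    "one bird = 1 bird cage",
]

# "one bird" has no integer before the keyword, so that line is left untouched.
cleaned = [pattern.sub("", s) for s in data]
```

As in the C# version, lines whose prefix lacks a leading integer fall through unchanged.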
'use strict';

angular.module('todos').controller('TodosController', ['$scope', '$location', 'Todos',
  function($scope, $location, Todos) {
    // Create new Todo
    $scope.create = function() {
      // Create new Todo object
      var todo = new Todos({
        task: this.task
      });

      // Redirect after save
      todo.$save(function(response) {
        //$location.path('todos');
        $scope.find();

        // Clear form fields
        $scope.task = '';
      }, function(errorResponse) {
        $scope.error = errorResponse.data.message;
        console.log(errorResponse);
      });
    };

    $scope.toggleCompleted = function(todo) {
      todo.$update(function() {
        $location.path('todos');
      }, function(errorResponse) {
        $scope.error = errorResponse.data.message;
        console.log(errorResponse.data.message);
      });
    };

    $scope.remove = function(todo) {
      if (todo) {
        todo.$remove();

        for (var i in $scope.todos) {
          if ($scope.todos[i] === todo) {
            $scope.todos.splice(i, 1);
          }
        }
      } else {
        $scope.todo.$remove(function() {
          $location.path('todos');
        });
      }
    };

    // Find a list of Todos
    $scope.find = function() {
      $scope.todos = Todos.query();
    };
  }
]);
Gladkara cellularis is a species of insect described by Irena Dworakowska in 1995. Gladkara cellularis belongs to the genus Gladkara and the family Cicadellidae (leafhoppers). The species' range is Brunei. No subspecies are listed in the Catalogue of Life.
# Rotational Spectrum, Conformational Composition, and Quantum Chemical Calculations of Cyanomethyl Formate (HC(O)OCH2C≡N), a Compound of Potential Astrochemical Interest

journal contribution, posted on 27.08.2015 by Svein Samdal, Harald Møllendal, Sophie Carles

Supporting information: jp5b05285_si_001.pdf (996.64 kB)

The rotational spectrum of cyanomethyl formate (HC(O)OCH2C≡N) has been recorded in the 12–123 GHz spectral range. The spectra of two conformers were assigned. The rotamer denoted I has a symmetry plane and two out-of-plane hydrogen atoms belonging to the cyanomethyl (CH2CN) moiety. In the conformer called II, the cyanomethyl group is rotated 80.3° out of this plane. Conformer I has an energy that is 1.4(6) kJ/mol lower than the energy of II according to relative intensity measurements. A large number of rotational transitions have been assigned for the ground and vibrationally excited states of the two conformers, and accurate spectroscopic constants have been obtained. These constants should predict frequencies of transitions outside the investigated spectral range with a very high degree of precision. It is suggested that cyanomethyl formate is a potential interstellar compound. This suggestion is based on the fact that its congener methyl formate (HC(O)OCH3) exists across a large variety of interstellar environments and the fact that cyanides are very prevalent in the Universe. The experimental work has been augmented by high-level quantum chemical calculations. The CCSD/cc-pVQZ calculations are found to predict structures of the two forms that are very close to the Born–Oppenheimer equilibrium structures. MP2/cc-pVTZ predictions of several vibration–rotation interaction constants were generally found to be rather inaccurate. A gas-phase reaction between methyl formate and the cyanomethyl radical CH2CN to produce a hydrogen atom and cyanomethyl formate was mimicked using MP2/cc-pVTZ calculations. It was found that this reaction is not favored thermodynamically. It is also conjectured that the possible formation of cyanomethyl formate might be catalyzed and take place on interstellar particles.
Fifteen days before accepting her high school diploma, Rhegan Humphrey walked across the stage at North Idaho College and received a college degree. The St. Maries senior had been working toward the associate's degree since her sophomore year of high school.

Rhegan Humphrey graduated from North Idaho College with her associate's degree 15 days before receiving her high school diploma from St. Maries High School.

Rhegan earned 65 college credits to obtain the associate's degree, all while working toward the high school diploma. While some of the college credits counted toward the 49 credits required to graduate high school, not all did.

Rhegan admitted it took "a lot of hard work, time and dedication" and "a lot of planning" to achieve what she wanted as a dual-enrollment student.

With her general classes finished, Rhegan expects to be able to immediately begin classes related to her field of study when she transfers to Lewis Clark State College to study pre-med.

Rhegan thanked several of her teachers and advisors at St. Maries High School for supporting her and helping her to succeed, including Merri Jo Gilmore, Ashley Tate, Megan Sindt, Jim Broyles, Kippy Silflow, Loy Felix and Kathy Kahn. She also thanked her family for their support.

"I hope this opens up the opportunity for other students from St. Maries to be able to do this program as well," Rhegan added.

Rhegan is the daughter of Darcy and Dale Humphrey of St. Maries.
Six Reflections After Mars Hill

The wreckage that's caused when the role of the law is misunderstood in the life of the church.

David Zahl / 12.7.21

"When I try to hold together both the beautiful and the sad, I confess I feel a deep melancholy. And I kind of think that's right. That's how it's supposed to be when we hold together goodness and brokenness."

Mike Cosper offers that admission in the closing minutes of The Rise and Fall of Mars Hill podcast, and I have a hunch his feeling is shared by many. I know I share it, not only when it comes to church but life itself. So much love, so much pain. It's too much sometimes.

I felt this melancholy particularly strongly after finishing the podcast. The sadness had to do with the individual suffering that Mike uncovered, yes, but it also had to do with the strange mixture of vindication, regret, and exhaustion it conjured in me.

People will say that Mars Hill is a cautionary tale about narcissism in the church.[1] About what happens when narcissism meets high anthropology meets demoralized young men meets the Internet. Or you could hear it as a tragic argument in favor of denominational hierarchies. And it is those things. But it is more than that. I almost hate to say it: The Rise and Fall of Mars Hill may first and foremost be a cautionary tale about what happens when law and gospel get confused in Christian ministry.

I've grown sheepish about wielding that critique in recent years, mainly out of fear of sounding like a broken record or, worse, uncreative. A person gets tired of banging the same drum, especially one that so routinely gets dismissed as a Lutheran eccentricity. But it is unavoidable here, and illustrates the stakes of our Mockingbird project better than I would have ever wished.
In one sense, Mike and his team took us on a 20-hour-long journey into the wreckage caused when the role of the law is misunderstood in the life of the church — something Mockingbird was founded in large part to address. (Probably why I was invited to weigh in).

The gist of the confusion is this: Before conversion, a person is presumed powerless to fulfill the commands of God. The law serves (rightly) to highlight our hypocrisy and frailty, to reveal how self-serving even our best deeds tend to be, how much in need of forgiveness we all are. After conversion, however, believers adopt a selectively high anthropology, claiming that Christians hear the law differently than they once did. They now hear it instructively, albeit less like a strict elementary school teacher and more like good advice from a friend.

It sounds reasonable, but like a job review where you remember the one criticism and not the twenty praises, soon the law is all that's left. A gentle guideline for living out one's faith turns into a roadmap — or worse, a battle plan — for redeeming the city, or redeeming the family, or masculinity, or America. "You suck, do better" is how one Mars Hill refugee summed up the operating theology.

When our problems don't go away, and temptation continues to hold us captive, the legalistic preacher's response is often to yell the law more loudly at his audience. Inwardly, we in the audience start to wonder: What is wrong with me that I still can't get it together? Maybe God lied. Maybe the whole thing is a lie.

So the confusion, as I see it, has to do with how we understand human nature (anthropology) when it comes to Christians. Someone really ought to write a book.

The selectively high anthropology described above is not just hogwash, it is dangerous. In the short term, using the law this way can produce staggering results, especially when married to serious charisma — as people witnessed in Seattle.
In the long term, however, this approach produces meltdown, despair, and all manner of bad feeling. It does harm. It pits people against themselves and short-circuits compassion, opening up a wellspring of resentment, unbelief, and despair.

Yet we gravitate toward the law because we love it. At least, we love the control it offers. Do this, don't do that – and good things will follow. We love the butts in the seats and the dollars in the coffers. We much prefer it to the open-endedness of the Gospel, which is not a means to some different end but an end in itself. Genuinely good news, too.

In the early days of Mockingbird, we would hear from fellow believers fairly regularly that we were making too much of this distinction between the law and the gospel. Overdoing it and not providing "balance." Looking back, we were underdoing it. We were underselling the urgency — probably out of an apprehension of adopting the same tools as those we were critiquing. Of course, it feels icky to leverage stories of other people's pain for your own justification. I suppose you have to "balance"(!) that tendency with the many stories of people experiencing this distinction as a lifeboat from toxic semi-pelagianism.

Along Those Lines, Six Reflections:

We are not so different from those we find off-putting, and that includes Mark Driscoll. When we started Mockingbird, for example, we were just as idealistic as those guys in Seattle. We were simply idealistic about different things. They wanted to transform their city and create an army of Christian subversives. We wanted to transform the entire religious landscape away from… talking about transformation so much. But make no mistake: we had our own kind of hubris, and it wreaked its own kind of havoc. Listening to all those sweet people I felt convicted, not just of a failure of nerve, but of humility. Lord have mercy.

The age-old conundrum of ecclesiology remains as pressing as ever.
What I mean is that the difference between conceiving of church as a hospital for sinners vs a schoolhouse for saints is often the difference between hope and hurt. I know this sometimes sounds like a false dichotomy — why can't church be both a place for the wounded to find healing and the healthy to be trained in providing that healing? — but the contrast feels more concrete to me with each passing year. Which is to say, if growth is to be incorporated as a virtue into our designs for church, it must be radically de-emphasized in favor of relief, so as not to become primary. Such is our fatal attraction to the law.

When it comes to ministry, disposition isn't incidental. It is essential. You cannot convey a message of grace in a non-gracious or overbearing way. Maybe the tone of this post attests as much. The circuits simply don't match up. This is not a matter of "should" or "shouldn't" but "can't," not dissimilar to what Marilynne Robinson meant when she quipped that "Nothing true can be said about God from a posture of defense." (The 'people are ______' word association told us pretty much all we needed to know…). There is no such thing as a gospel-centered bully.

The distinction between the law and gospel cannot be overstated. When these things get confused, religion invariably reduces to some form of the law or "glawspel". And law divorced from gospel devastates, full stop. The liberal end of this can produce anxiety (am I doing enough?) but the conservative end, because it ties obedience to eternal destinations, will wreck people. That wreckage is not theoretical. Nor is it confined to Ivory Tower squabbling. It looks like estrangement, unemployment, divorce, panic attacks, nihilism, and self-harm.

One of my favorite moments in the final episode came when counselor Colleen Ramser confided that the trauma from these situations often runs so deep that "recovery looks like not pressuring a person to have a relationship with God (period)."
I have found this to be true in my own encounters with refugees from similarly toxic churches. As well as much, much easier said than done. Allowing a sufferer space to walk away from what you hold dear is an act of faith, albeit one that runs counter to so much religious programming. It is nonetheless deeply in line with the grace of God. The fruit of this posture — what has often felt like a reckless endorsement of self — never ceases to amaze me.

Alongside the melancholy, I'd be lying if I didn't cop to some gratitude. Not just that Mike was inspired to tell this story with such care, and that so many people have heard their experience acknowledged. I am grateful Mbird has never experienced that degree of "success," as I doubt I'd fare too much better with the attendant curses. I am also grateful that my own father devoted his adult life to identifying and healing the damage done when law is taken as gospel — and painting a picture of God's grace so compelling that, results or no, everything else pales in its light.

Most of all, I am grateful that God uses broken things (and churches) in the lives of broken people, and that Jesus is in the business of redeeming traumatic experiences, even religious ones. Especially religious ones.
Q: How to define the Cartesian Product on $A = \{ x \ | \ P(x)\}$ and $B = \{y \ | \ Q(y)\}$

Let's say I have two sets $A = \{ x \ | \ P(x)\}$ and $B = \{y \ | \ Q(y)\}$. How does one define the Cartesian product $A \times B$ on these two sets? Is it simply the following?
$$A \times B = \{(x, y) \ | \ P(x) \land Q(y) \}$$

A: Recall that ordered pairs have the following property.
$$(a,b)=(x,y)\iff a=x\quad\text{and}\quad b=y$$
That is to say, an ordered pair is well-defined once we know what the first coordinate and second coordinate are. Given sets $A$ and $B$, the Cartesian product, denoted $A\times B$, is the collection of all ordered pairs $(a,b)$, where $a\in A$ and $b\in B$.

In your case, you have $A=\{x:P(x)\}$ and $B=\{y:Q(y)\}$. Therefore $A\times B$ is the collection of all ordered pairs $(a,b)$, where $P(a)$ holds and $Q(b)$ holds. This can indeed be denoted as
$$\{(a,b):P(a)\,\wedge\,Q(b)\}$$
as you proposed.
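The definition is easy to sanity-check computationally. A small Python sketch (the predicates standing in for P and Q are made-up examples, restricted to a finite universe so the sets are enumerable):

```python
from itertools import product

# Made-up predicates standing in for P(x) and Q(y).
def P(x):          # A = {x | P(x)}
    return x % 2 == 0

def Q(y):          # B = {y | Q(y)}
    return y > 3

universe = range(10)
A = {x for x in universe if P(x)}
B = {y for y in universe if Q(y)}

# The proposed definition: {(x, y) | P(x) and Q(y)}.
proposed = {(x, y) for x in universe for y in universe if P(x) and Q(y)}

# It coincides with the usual "all ordered pairs" construction.
assert proposed == set(product(A, B))
```

A pair belongs to the product exactly when both predicates hold, matching the answer above.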
Source: https://www.physicsforums.com/threads/velocity-relation.66644/

# Velocity relation

### 1. Wiz — Mar 9, 2005

I have a relation for velocity as v = sqrt[(1-x)/x]. I need to find the time the particle takes to reach x = 0.25. Initially it is at x = 1. I had problems integrating further... to find time. Can you help?

### 2. Jameson — Mar 9, 2005

I did this integral on my calculator and it's gross. Integration by parts perhaps? See what you can do with that..

$$\int \sqrt{\frac{1-x}{x}} dx = \int (\sqrt{(1-x)}) * (\sqrt{x})^{-1} dx$$

### 3. dextercioby — Mar 9, 2005

From the definition of velocity

$$v(t)=:\frac{dx}{dt}$$

separation of variables and corresponding limits of integration give:

$$\int_{0}^{\tau} dt=\int_{1}^{0.25} \frac{dx}{\sqrt{\frac{1-x}{x}}}$$

The integral in the first member is really simple; it is the time you need, denoted by me with "\tau"...

For the integral in the RHS, use 2 substitutions. First

$$x=t^{2}$$

and second

$$t=\sin y$$

Pay attention to:
1. Signs.
2. Transformations of the limits of integration.

EDIT: I'm getting a "tau" smaller than 0. It's weird. Are you sure the problem is stated correctly?

### 4. Wiz — Mar 10, 2005

I'm not sure... here's the original problem from which I derived that velocity relation.

There is a particle initially at x = 1. It is acted upon by a force which varies as F(x) = -0.5 * x^-2. What is the time it takes to reach x = 0.25?

### 5. Wiz — Mar 10, 2005

I forgot... the mass of the particle is 10^-2 kg.

### 6. dextercioby — Mar 10, 2005

That's another problem, man, totally different. The advice is the same: separate variables & integrate with corresponding limits. (EDIT 1: It doesn't work, it's second order.) EDIT 2: See next post for further comments.

### 7. dextercioby — Mar 10, 2005

Nope, I'm afraid this problem (the second you posted) is not completely integrable. It requires solving the ODE

$$\frac{d^{2}x(t)}{dt^{2}}=-|C|\frac{1}{x^{2}(t)}$$

but subject only to the initial condition

$$x(0)=1.$$

Unfortunately, the closure of the Cauchy problem requires 2 independent initial conditions to determine the 2 coefficients (a second-order ODE). P.S. Look it up again.

### 8. Wiz — Mar 10, 2005

The problem I posted is the same. All I did was find the acceleration from the force, then integrate it to find the velocity... the constant in that case was c = -1. Btw, I have mentioned that the particle is initially at x = 1. The problem is the same as given in the book. This is an IIT-JEE question meant for students who have cleared the 12th grade, and I am in 11th right now. Thanks for the help, but I think you are getting a bit too advanced for me.

### 9. dextercioby — Mar 10, 2005

Could you please post your work? I'm really curious what you've done...

### 10. Wiz — Mar 10, 2005

Oh my god... I am so sorry. The correct equation is F(x) = -k * 0.5 * x^-2, where k = 10^-2 N m^2, and the mass of the body is 10^-2 kg. Now it's 100% correct.

What I did was: I got a = -0.5 * x^-2, as k and m cancelled out. So dv/dt = -0.5 * x^-2, hence (dv/dx)(dx/dt) = -0.5 * x^-2, hence v dv = (-0.5 * x^-2) dx. Integrating, I got c = -1, and simplification led to the velocity equation I posted. Could you please check it now?

### 11. dextercioby — Mar 10, 2005

It looks correct. (In general, the constants are less worrying.) P.S. Can you uniquely determine v = v(x)?

### 12. Yegor — Mar 10, 2005

You can be sure. I also got v = sqrt[(1-x)/x]. It's time just to integrate. The substitution sqrt[(1-x)/x] = t leads to a rather simple integral.

### 13. dextercioby — Mar 10, 2005

Are you sure it's not

$$\int a(x) \ dx=\int v \ dv$$

Plug in the values & integrate. The RHS is simple:

$$\int v \ dv =\frac{1}{2}v^{2}+C$$

### 14. Yegor — Mar 10, 2005

I hope we shall see it after integration. There must be "tau" > 0. If we get the opposite we can just change the sign. Am I right?

### 15. Yegor — Mar 10, 2005

Sorry, I didn't notice the edit. I think you are right in both posts.

### 16. dextercioby — Mar 10, 2005

The acceleration is force divided by mass:

$$a(x)=\frac{F(x)}{m}=\frac{-5\cdot 10^{-3} x^{-2}}{10^{-2}}=-0.5\cdot x^{-2}$$

Therefore:

$$\int a(x) \ dx=-0.5\int x^{-2} \ dx =0.5 x^{-1}$$

What initial condition does the velocity satisfy?

### 17. Wiz — Mar 12, 2005

Hey, I solved the problem. The fact we forgot was that since the body is moving towards the origin due to the negative value of the acceleration, the correct relation would be

v = -sqrt[(1-x)/x]  ——(1)

Now put x = sin^2(θ), hence dx/dθ = sin 2θ, hence dx = sin 2θ dθ  ——(2)

By (1), -dx / sqrt[(1-x)/x] = dt, i.e. -dx / cot(θ) = dt.

By (2), dt = -sin 2θ dθ / cot(θ), i.e. dt = -2 sin^2(θ) dθ.

Now integrate the LHS from 0 to T and the RHS from π/2 to π/6. The final answer I got was

T = π/3 + √3/4,

which happens to be correct. I hope I haven't made any typing mistakes this time.
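The closed-form answer in the last post can be checked numerically. A quick Python sketch, using the same substitution x = sin^2(θ) (which removes the endpoint singularity at x = 1); the composite Simpson rule below is just one convenient choice:

```python
import math

# Time for the particle to travel from x = 1 to x = 0.25 with
# v = -sqrt((1-x)/x):  T = integral_{0.25}^{1} sqrt(x/(1-x)) dx.
# The substitution x = sin^2(theta) turns this into
# T = integral_{pi/6}^{pi/2} 2 sin^2(theta) d(theta), which is smooth.

def integrand(theta):
    return 2.0 * math.sin(theta) ** 2

def simpson(f, a, b, n=10_000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

T = simpson(integrand, math.pi / 6, math.pi / 2)
closed_form = math.pi / 3 + math.sqrt(3) / 4
print(T, closed_form)  # both ~1.48021
```

The numerical value agrees with T = π/3 + √3/4 ≈ 1.48021, confirming post 17.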
\section{Introduction} \label{sec:intro} \vspace*{-1.7mm} \IEEEPARstart{O}{ver} time, original methods from various branches of science have enriched the literature on digital image processing, and specifically the fundamental problem of signal or image denoising, such as statistics \cite{Hamza2001image}, probability theory \cite{Pizurica2006estimating, Lebrun2013anonlocal}, graph theory \cite{Shuman2013the, Sandryhaila2013discrete, Pang2017graph} or differential equations \cite{Kim2006pde, Liu2012remote}. For the particular case of image restoration addressed herein, a number of methods are based on sparse representations in a given basis, with most of the true image described by the projections on a few basis vectors. This enables the image to be stored and restored efficiently. Such sparse representations \cite{donoho1994ideal, starck2002curvelet} depend on both the transformation chosen and the nature of the image. Traditionally, all these methods exploit a few explicit or underlying hypotheses about the image to restore, for example piece-wise smoothness, but these hypotheses are not strong enough to capture the complex textures present in the true image. With the growth of computing power, data-driven strategies that increase sparsity and overcome the limitations of general transforms have become more prominent in recent decades. One such approach is to learn overcomplete dictionaries from training image sets \cite{Aharon2006an, Elad2006image}. Another is based on patch-based schemes, using the patch neighborhood as a feature vector. For example, block-matching and 3D filtering, known as BM3D, creates 3D data arrays by grouping similar image fragments before computing a sparse representation by applying 3D transformations \cite{Dabov2007Image, dabov2009bm3d}.
The non-local means (NLM) algorithm brought a different perspective to the image denoising problem, where each estimated image pixel intensity is a weighted average of pixels centered at patches that are similar to the patch centered at the estimated pixel \cite{Buades2005areview}. An alternative patch-based NLM approach projects image patches into a lower dimensional subspace using principal component analysis (PCA) before performing the weighted average for denoising \cite{tasdizen2009principal, deledalle2011image}. Later on, various schemes were proposed in the literature to accelerate or improve the NLM performance, such as a fast NLM algorithm with probabilistic early termination \cite{Vignesh2010fast}, quadtree-based NLM with locally adaptive PCA \cite{Zuo2016image}, fast processing using a statistical nearest neighbors strategy \cite{Frosio2019statistical}, adaptive neighborhoods \cite{Kervrann2006optimal}, patch-based locally optimal Wiener filtering \cite{Chatterjee2012patch} and others \cite{Mahmoudi2005fast, Van2009sure, Dong2013nonlocally}. These NLM-based schemes are known to be a powerful way of denoising, exploiting similar patches from the whole image. Hence, the patch neighborhood gives an effective way of preserving the local structures of an image, with neighborhood similarity as the key ingredient. This paper explores such an approach of exploiting the image neighborhood by borrowing tools from quantum mechanics, more precisely, quantum interactions. Quantum theory is the underlying theory of nature, which governs our world at a fundamental level, and classical mechanics is merely a limiting behavior of quantum mechanics. In classical theory, the position and momentum of a particle are determined precisely, whereas in quantum theory they are given by a probability distribution encoded in the wave function. The wave function in turn can be computed as the solution of a wave equation known as the Schr\"odinger equation.
Over the past few years, several proposals to use such wave functions as a basis have appeared in the imaging literature, for applications such as image feature extraction \cite{Aytekin2013Quantum, youssry2015quantum, youssry2019continuous}, denoising \cite{kaisserli2015novel, dutta2021quantum}, deconvolution \cite{dutta2021plug, dutta2021poisson} or others \cite{Eldar2002quantum}. However, all the existing approaches have been built on the theory of single-particle quantum systems. In this paper, we propose a novel image representation algorithm well adapted for denoising, based on the theory of quantum many-body interaction. In a system containing two or more quantum particles, the particles can influence each other's quantum state through quantum interactions. The main idea of this work is to extend this concept of interaction to imaging problems. More precisely, the proposed framework consists of quantum interactions between image patches, where interactions reflect patch similarity measures in a local neighborhood. In this way, each patch acts as a single-particle system, and the whole collection, that is the entire image, behaves as a many-body system where interactions describe regional similarities to neighboring patches. Preliminary results were presented in \cite{dutta2021image}. Herein, we show that this method constitutes a robust generalized formalism for image-independent and image-dependent noise models. The extension of \cite{dutta2021image} primarily lies in: (i) the characterization of the hyperparameters and automated ways to predict their optimal values with limited knowledge about the original image, (ii) the exploration of denoising possibilities beyond Gaussian statistics without any modification, and (iii) a detailed discussion of denoising performance compared to sophisticated methods for both noise scenarios.
Earlier proposed single-particle based schemes \cite{Aytekin2013Quantum, youssry2015quantum, youssry2019continuous, kaisserli2015novel, dutta2021quantum, dutta2021plug, dutta2021poisson} have proven their good restoration abilities for different noise models, but are too simple to take advantage of the structural properties of the image and are computationally costly at large scale. As we will show, the proposed generalized framework based on quantum many-body physics improves the previous methods on both counts, building a more versatile, computationally efficient adaptive basis that considers similarities between neighboring image patches. At first sight, there may seem to be a close architectural resemblance between NLM and the proposed many-body scheme, since the similarity measure is key in both cases. However, the two methods differ from several perspectives. The NLM image denoising algorithm exploits the self-similarities among image patches to obtain the similarity weights, resulting in a non-local weighted average scheme for denoising. The proposed approach brings non-local characteristics within the quantum framework, where interactions between neighboring patches preserve the local structural similarities. For each patch, these interactions convey the structural information into a quantum adaptive basis offering a good sparsifying transformation at the patch level, which is further used for denoising. It turns out that such a theory can be elegantly written using multi-particle quantum theory instead of the single-particle one. In this paper, we first briefly present the previously proposed decomposition concept using a quantum adaptive basis \cite{dutta2021quantum} based on single-particle theory, with its limitations, in Section~\ref{sec:qtsingparti}, and then introduce its generalization using many-body quantum theory for imaging problems in Section~\ref{sec:qtmanybodyth}.
Our image denoising algorithm is described in detail in Section~\ref{sec:qmpiimadecom}. We then turn to the numerical implementation of the method on several examples in Section~\ref{sec:simuresu}. We first explore automated rules for hyperparameter selection, and then display numerical results showing the ability of the proposed method to reduce low- and high-intensity noise regardless of the noise statistics. We also show its good performance in real-life medical ultrasound (US) image despeckling applications. Finally, we end with conclusions and future perspectives in Section~\ref{sec:conclusion}. \begin{figure*}[t!] \centering \includegraphics[width=.78\textwidth]{figures/flow.pdf} \vspace*{-5mm} \caption{A simple example of the construction of adaptive vectors from many-patch interaction.} \vspace*{-6mm} \label{fig:flow} \end{figure*} \vspace*{-2mm} \section{Quantum many-body theory for imaging} \label{sec:quantheo} \vspace*{-1mm} \subsection{Quantum theory for a single-particle system} \label{sec:qtsingparti} \vspace*{-1mm} \subsubsection{Quantum theory} \label{sec:qt_sub1} Before detailing the proposed method, we briefly review, for self-consistency, the quantum mechanical method for denoising built on single-particle theory introduced in \cite{dutta2021quantum}. For more details on quantum theory, one may refer to one of the many textbooks on this subject, e.g. \cite{feynman1977feynman, landau1991quantum, cohen1977quantum}. In a non-relativistic single-particle quantum system, the wave function $\psi(z)$ describes a particle with energy $E$ in a potential $V(z)$ and satisfies the stationary Schr\"odinger equation: \vspace*{-3mm} \begin{equation} - \frac{\hbar ^2}{2m} \nabla^2 \psi (z) + V(z) \psi (z) = E \psi (z), \label{eq:schroedinger} \end{equation} \noindent where $m$, $\hbar$, $\nabla$, and $z$ are respectively the mass of the quantum particle, the Planck constant, the gradient operator, and the spatial coordinate.
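For intuition, \eqref{eq:schroedinger} can be solved numerically by discretizing it on a 1-D grid and diagonalizing the resulting finite-difference Hamiltonian. The following NumPy sketch is illustrative only: the grid size, the ramp potential and the value used for $\hbar^2/2m$ are assumptions, not parameters from this work.

```python
import numpy as np

# 1-D finite-difference discretization of the stationary Schroedinger
# equation H = -(hbar^2/2m) d^2/dz^2 + V(z), using the 3-point stencil.
n = 200
coef = 0.5                    # plays the role of hbar^2 / (2m); illustrative
V = np.linspace(0.0, 1.0, n)  # illustrative ramp potential

H = (np.diag(V + 2.0 * coef)
     - coef * np.diag(np.ones(n - 1), 1)
     - coef * np.diag(np.ones(n - 1), -1))

E, psi = np.linalg.eigh(H)    # eigen-energies (ascending) and wave vectors

def nodes(v):
    """Number of sign changes of a vector (its oscillation count)."""
    s = np.sign(v[np.abs(v) > 1e-12])
    return int(np.sum(s[1:] != s[:-1]))

# Eigenvectors associated with higher energies oscillate faster:
print(nodes(psi[:, 0]), nodes(psi[:, 5]), nodes(psi[:, 20]))
```

Counting sign changes of the eigenvectors confirms the oscillatory behavior discussed next: the ground state is nodeless, and the oscillation count grows with the energy.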
The wave function $\psi(z)$ is an element of the Hilbert space of $L^2$-integrable functions, and its modulus square, \textit{i.e.}, $|\psi (z)|^2$, gives the probability of presence of the particle at a point $z$ on the potential $V(z)$. The wave function solutions of \eqref{eq:schroedinger} form a complete set of basis vectors of the Hilbert space with the following properties: i) Wave vectors are oscillating functions. ii) The oscillation frequency increases with increasing energy $E$. iii) The basis vectors oscillate with a local frequency proportional to $\sqrt{E - V(z)}$; thus, for the same wave function, the frequency differs locally depending on the local value of $E - V(z)$. iv) The hyperparameter $\hbar ^2/2m$ controls the dependence of the local frequency on $E - V(z)$. \noindent These properties of the basis vectors are the key features allowing their use as an adaptive basis for an imaging problem. For a more detailed illustration of these features, we refer readers to \cite{dutta2021quantum}. \subsubsection{Application to imaging problems} \label{sec:imaappli} To adapt these concepts to image processing applications, the wave equation \eqref{eq:schroedinger} is rewritten in operator notation, leading to ${\boldsymbol{H}} {\boldsymbol{\psi}} (z) = E {\boldsymbol{\psi}} (z) $ with Hamiltonian operator ${\boldsymbol{H}} = -(\hbar ^2/2m) \nabla^2 + {\boldsymbol{V}}(z)$. The eigenvectors of the Hamiltonian operator are the stationary solutions of \eqref{eq:schroedinger}. For imaging applications, the space is finite and discretized, and the potential ${\boldsymbol{V}}$ of the system may be defined as the image pixel values ${\boldsymbol{x}}$. This leads to a discretized problem, where the Hamiltonian operator becomes a finite matrix and can be used as a tool for constructing an adaptive basis \cite{dutta2021quantum}.
This discretized Hamiltonian operator reads: \vspace*{-2mm} \begin{eqnarray} {\boldsymbol{H}}[i,j]= \left \{ \begin{array}{c c l} {\boldsymbol{x}}[i]+ 4 \frac{\hbar ^2}{2m} & & \mbox{for} \; i=j,\\ -\frac{\hbar ^2}{2m} & & \mbox{for} \; i = j \pm 1,\\ -\frac{\hbar ^2}{2m} & & \mbox{for} \; i = j \pm n,\\ 0 & & \mbox{otherwise}, \vspace*{-1mm} \end{array} \right. \label{eq:H} \end{eqnarray} \noindent where ${\boldsymbol{x}} \in \mathbb{R}^{n^2}$ is an image (\textit{i.e.}, ${\boldsymbol{V}} = {\boldsymbol{x}}$), and ${\boldsymbol{x}}[i]$ and ${\boldsymbol{H}} [i,j]$ represent respectively the $i$-th component of the image ${\boldsymbol{x}}$, vectorized in lexicographical order, and the $(i,j)$-th component of the operator. Note that standard zero padding is used to handle the boundary conditions. A more detailed description of the Hamiltonian construction can be found in \cite{dutta2021quantum, dutta2021plug}. The corresponding set of eigenvectors of the Hamiltonian operator \eqref{eq:H} serves as the quantum adaptive basis on which the image is decomposed, before denoising is performed by thresholding the coefficients in energy. \subsubsection{Shortcomings of the single-particle theory in image processing} \label{sec:shortcomsingle} This method of constructing an adaptive basis using quantum principles in a single-particle setting has already been studied in some of our previous works, notably for image denoising \cite{dutta2021quantum} and deconvolution \cite{dutta2021plug, dutta2021poisson}. This adaptive method is not only effective for handling different noise statistics (\textit{e.g.}, Gaussian, Poisson) but also equally efficient for different levels of noise (low as well as high-intensity noise). Nevertheless, there are some technical and intrinsic challenges, such as: \begin{enumerate}[i)] \item Structural features are crucial for imaging applications, but this adaptive approach does not take advantage of them.
\item The random noise present in the system leads to the well-known phenomenon of quantum localization \cite{Anderson1958absence} of the wave vectors. The presence of this subtle quantum phenomenon gives additional structures to the adaptive basis and makes it less effective for image denoising. This problem was cured in \cite{dutta2021quantum} by adding an additional step of low-pass filtering of the noisy image, for example through a Gaussian filter with an appropriate standard deviation. This complicates the method and in particular entails the integration of a new hyperparameter (the standard deviation) in the algorithm, which increases the complexity of hyperparameter tuning. \item The computational burden of such a method can be quite large compared to other sophisticated state-of-the-art methods, thus preventing its implementation on large-scale images. \end{enumerate} In the following, we will show that these drawbacks can be addressed by constructing a new adaptive basis exploiting quantum many-body theory, more precisely the physics of quantum interactions. \vspace*{-3.0mm} \subsection{Quantum many-body theory for image processing} \label{sec:qtmanybodyth} \subsubsection{Quantum theory for many particles} \label{sec:qtmbodyth_sub1} The quantum theory described above is modified for a system with more than one particle. In particular, particle-to-particle interactions take place inside the quantum system. For a system with $w$ particles, the Hamiltonian operator for the many-body system becomes \cite{mahan2013local}: \vspace*{-1mm} \begin{equation} H = - \sum_{a=1}^w \dfrac{\hbar ^2}{2m_a} \nabla^2 + \dfrac{1}{2} \sum_{a=1}^w \sum_{b=1, b\neq a}^w V_{ab}, \label{eq:H_mb} \vspace*{-2mm} \end{equation} where the potential $V_{ab}$ is a function of $ z_1,z_2,\cdots,z_w $, the spatial coordinates of the $w$ particles.
Thus, for a given energy $E$ the associated wave function $\psi$ depends on $ z_1,z_2,\cdots,z_w $, and satisfies a new Schr\"odinger equation: \begin{equation} H \psi( z_1,z_2,\cdots,z_w) = E \psi( z_1,z_2,\cdots,z_w). \label{eq:schro_mb} \end{equation} \vspace*{-1mm} \subsubsection{Application to image processing} \label{sec:mabothimapro} We propose to extend this many-body theory to build an adaptive basis for imaging applications by assimilating similarities between patches into the quantum framework. Similar to non-local means filter-based approaches, the proposed algorithm splits the image or a local region into small patches indexed from $1$ to $w$. Each of these patches acts as a single-particle quantum system, which allows the Hamiltonian operator to be defined for each patch as follows: \vspace*{-2mm} \begin{equation} {\boldsymbol{H}}_a = \rlap{$\underbrace{\phantom{-\dfrac{\hbar ^2}{2m_a} \nabla^2 + {\boldsymbol{V}}(z_a)~}}_{{\boldsymbol{H}}_{0_a}}$} -\dfrac{\hbar ^2}{2m_a} \nabla^2 + \rlap{$\overbrace{\phantom{{\boldsymbol{V}}(z_a) + \sum_{b=1, b\neq a}^w {\boldsymbol{I}}_{ab}}}^{{\boldsymbol{V}}_a^{effective}}$} {\boldsymbol{V}}(z_a) + \underbrace{ \sum_{b=1, b\neq a}^w {\boldsymbol{I}}_{ab} }_{{\boldsymbol{H}}_{I_a}}, ~ a = 1,\cdots,w, \label{eq:H_hfmf} \vspace*{-1mm} \end{equation} where ${\boldsymbol{H}}_a$ and ${\boldsymbol{H}}_{0_a}$ are the Hamiltonians in the patch ${\boldsymbol{A}}$ respectively for the many-body system and a single-particle system (as discretized in \eqref{eq:H}). ${\boldsymbol{I}}_{ab}$ and ${\boldsymbol{H}}_{I_a}$ represent respectively the interaction between the ${\boldsymbol{A}}$ and ${\boldsymbol{B}}$ patches and the total interaction between the patch ${\boldsymbol{A}}$ and the other patches in the system.
Thus, inside the patch ${\boldsymbol{A}}$ the effective potential ${\boldsymbol{V}}_a^{effective}$ is \vspace*{-3mm} \begin{equation} {\boldsymbol{V}}_a^{effective} = {\boldsymbol{V}}(z_a) + \sum_{b=1, b\neq a}^w {\boldsymbol{I}}_{ab} = {\boldsymbol{V}}(z_a) + {\boldsymbol{H}}_{I_a}. \label{eq:V_eff} \vspace*{-2mm} \end{equation} Therefore, we have a different adaptive basis for each patch, containing a unique effective potential ${\boldsymbol{V}}_a^{effective}$. Thus the problem of finding the adaptive basis is transformed into the solution of the system of $w$ equations: \vspace*{-2mm} \begin{equation} {\boldsymbol{H}}_a {\boldsymbol{\psi}}(z_a) = E_a {\boldsymbol{\psi}}(z_a), ~~~ a = 1,2,\cdots,w, \label{eq:schro_hfmf} \vspace*{-2mm} \end{equation} where, in each patch, discretization procedures similar to \eqref{eq:H} should be applied. \subsubsection{Definition of the quantum interaction between two image patches} \label{sec:defintre} Interaction between two or more objects is a universal phenomenon that governs the world at a very basic level; it is fundamentally classified into four groups: gravitational, electromagnetic, strong, and weak interactions. The gravitational and electromagnetic interactions have long-range properties characterized by power laws. We extend this concept to an imaging problem by introducing the interaction between two image patches, as follows: \begin{itemize} \item The interaction is inversely proportional to the square of the Euclidean distance (\textit{i.e.}, the physical distance) between the patches, \textit{i.e.,} ${\boldsymbol{I}}_{ab} \propto \frac{1}{D_{ab}^2}$, where $D_{ab}$ is the Euclidean distance between two patches denoted by ${\boldsymbol{A}}$ and ${\boldsymbol{B}}$. \item The interaction is linearly proportional to the absolute value of the pixel-wise difference between the patches.
This process is defined pixel-wise, \textit{i.e.,} ${\boldsymbol{I}}_{ab}^i \propto |{\boldsymbol{A}}^i - {\boldsymbol{B}}^i|$, $i = 1,2,\cdots,P_{dim}$, where the superscript $i$ and $P_{dim}$ denote the $i$-th pixel and the number of pixels in each image patch, respectively. \end{itemize} Hence, within the proposed image processing framework, the power law for an interacting many-patch system can be defined as \vspace*{-2mm} \begin{equation} {\boldsymbol{I}}_{ab}^i = p \frac{|{\boldsymbol{A}}^i - {\boldsymbol{B}}^i|}{D_{ab}^2} ,~~ i = 1,2,\cdots,P_{dim}, \label{eq:invsqulaw} \vspace*{-1mm} \end{equation} where the proportionality constant $p$ acts as a hyperparameter of the proposed formalism. \subsubsection{Interaction and patch similarity in image processing} \label{sec:interpre} In our many-patch model, the proposed mathematical formalism of the power law interaction can be interpreted in the following way: i) two patches with similar pixel values have a smaller interaction than ones with very different values; ii) patches located far from each other have a small interaction regardless of their pixel values. In other words, neighboring patches show high interactions if they are very different from each other in terms of pixel values, while distant patches are always less interactive despite their possible dissimilarity. Based on these principles, the power law determines the effective potential ${\boldsymbol{V}}_a^{effective}$ of the patch ${\boldsymbol{A}}$, obtained by combining the initial potential (\textit{i.e.}, the target patch itself) with the total interaction between the target patch and its neighboring patches, exploiting the concept of patch similarity in the local neighborhood. This local similarity is a fundamental building block of real images that preserves structural features \cite{Buades2005areview}.
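As a concrete illustration, the pixel-wise interaction law \eqref{eq:invsqulaw} can be sketched in a few lines of Python; the patch-center coordinates used to define $D_{ab}$ and the value of $p$ below are illustrative assumptions:

```python
import numpy as np

# Pixel-wise interaction between two equally sized patches A and B,
# following the inverse-square law:
#   I_ab[i] = p * |A[i] - B[i]| / D_ab**2,
# where D_ab is the Euclidean distance between the patch centers.
def interaction(patch_a, patch_b, center_a, center_b, p=1.0):
    d2 = float(np.sum((np.asarray(center_a) - np.asarray(center_b)) ** 2))
    return p * np.abs(patch_a - patch_b) / d2

A = np.array([[10., 12.], [11., 13.]])
B = np.array([[10., 15.], [11., 13.]])
I_ab = interaction(A, B, center_a=(0, 0), center_b=(0, 2))  # D_ab^2 = 4
print(I_ab)  # [[0.   0.75] [0.   0.  ]] -- only dissimilar pixels interact
```

Identical pixels produce zero interaction, while the one differing pixel yields a nonzero term damped by the squared distance, in line with the interpretation above.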
We note that power laws other than the inverse square law could be used, thus modifying the importance of distant patches compared to nearby ones in the proposed methodology. \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{figures/hyper/IPRvsPatchSize_zoomed.pdf} \vspace*{-3mm} \caption{Average inverse participation ratio (IPR) of all the adaptive basis vectors as a function of the signal to noise ratio for the \textit{Lena} image degraded by AWGN, using different sizes of the image patch.} \vspace*{-7mm} \label{fig:iprVSsnr} \end{figure} \subsubsection{Why the many-patch theory avoids the quantum localization problem} \label{sec:lolizprobqmpi} The presence of random fluctuations in the potential of a quantum system leads to the phenomenon of quantum localization, also known as Anderson localization \cite{Anderson1958absence}. This is a property of wave functions in a disordered potential which makes them exponentially localized due to destructive interference. As a consequence, the adaptive basis vectors for various imaging problems are localized at different positions of the potential in the presence of random noise, which makes the adaptive basis less suitable for image decomposition tasks. In \cite{dutta2021quantum}, this challenge was solved by adding a cumbersome first step of image low-pass filtering, with an additional hyperparameter involved. A more detailed discussion of this phenomenon, in particular for image decomposition and denoising, can be found in our previous work \cite{dutta2021quantum}. In the framework of the many-patch theory described above, the decomposition is done at the level of the individual patch, much smaller than the full image. The inverse participation ratio (IPR) of the wave functions, defined as $1/\sum_i |{\boldsymbol{\psi}}(i)|^4$ for a normalized wave function ${\boldsymbol{\psi}}$, gives a measure of the localization. For a vector uniformly spread over $L$ indices and zero elsewhere, the IPR is exactly $L$.
More generally, the localization length of localized wave functions is proportional to the IPR. It is known from localization theory that this localization length decreases with the intensity of the disorder. Thus, unless the noise is extremely strong, the localization length may be larger than the patch size, making the localization irrelevant for our problem. Fig.~\ref{fig:iprVSsnr} shows the average IPR (measuring the localization length) of all the adaptive basis vectors for the \textit{Lena} image degraded by additive white Gaussian noise (AWGN) with increasing signal to noise ratio (SNR), using different patch sizes. This illustration confirms that the IPR decreases with decreasing SNR, but this effect weakens with decreasing patch size. For example, for an $80 \times 80$ patch, the IPR decreases rapidly with decreasing SNR (increasing noise intensity) and becomes smaller than the patch size for SNR $\leq 12$ dB, making the system extremely localized. However, for smaller patches like $7 \times 7$, almost no such effect is visible for similar noise intensities. In other words, the localization effect is much weaker in a small patch than in a large one and becomes irrelevant below a certain patch size. We found that even for fairly strong noise it is always possible to find a patch size smaller than the average IPR, making the localization effect irrelevant and avoiding the need for low-pass filtering to create the adaptive basis.
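The IPR used in this discussion is straightforward to compute. A minimal NumPy sketch (normalizing the input vector is an assumption added to handle unnormalized inputs):

```python
import numpy as np

# Inverse participation ratio: IPR = 1 / sum_i |psi_i|^4 for a
# unit-norm vector psi.  Uniform spread over L components -> IPR = L;
# full localization on one component -> IPR = 1.
def ipr(psi):
    psi = np.asarray(psi, dtype=float)
    psi = psi / np.linalg.norm(psi)  # enforce unit norm (assumption)
    return 1.0 / np.sum(np.abs(psi) ** 4)

L = 49                               # e.g. a flat 7 x 7 patch mode
flat = np.ones(L)
spike = np.zeros(L)
spike[0] = 1.0
print(ipr(flat), ipr(spike))         # ~49.0 and 1.0
```

The two extremes reproduce the statements above: a uniformly spread vector gives IPR equal to its support size $L$, and a fully localized one gives IPR equal to 1.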
\vspace*{-3.5mm} \section{Quantum many-patch interaction for imaging applications: The problem of image decomposition} \label{sec:qmpiimadecom} \subsection{Key principles of the proposed many-patch model} \label{sec:genfram} The objective of this work is to propose a methodology for the explicit construction of an adaptive basis related to the many-body interaction theory, under the principles recalled here \cite{dutta2021image}: \begin{itemize} \item Every small patch extracted from an image corresponds to a quantum particle; each of these image patches, or potential surfaces with a quantum particle, acts like a single-particle system. \item These single-particle systems are not isolated from each other; on the contrary, interactions between patches occur within the whole image, like in a quantum many-body system, where particle-to-particle interactions take place. \item As a consequence of these interactions, the effective potential (see \eqref{eq:V_eff}) of the quantum particles changes; thus the local oscillation frequency of the wave function depends on these interactions. \item These interactions transmit structural features to the wave functions through the effective potential. \item The effective potentials are used to construct an adaptive basis for each individual patch, in particular used for the decomposition of that patch. \item As elements of the set of oscillatory functions, these basis functions use low oscillation frequencies to probe higher values of the effective potential and vice-versa, \textit{i.e.}, local frequencies depend on the effective potential, and thus on the pixel values and inter-patch interactions. \end{itemize} \vspace*{-5mm} \subsection{Denoising algorithm using quantum many-patch interactions} \label{sec:algorithm} This subsection details the application of the proposed many-patch scheme to image denoising.
In this application, the construction of an adaptive basis for each individual image patch is the primary objective, which leads to a three-step denoising strategy: decomposition of the patch using the adaptive basis, thresholding of the projection coefficients, and finally recovery of the denoised patch by back-projection. These basis vectors are the eigenvectors of the Hamiltonian matrix \eqref{eq:H}, constructed from the effective potential \eqref{eq:V_eff}. These adaptive vectors belong to the Hilbert space of oscillatory functions with: i) the frequency of oscillation increasing with increasing energy value (\textit{i.e.}, the eigenvalue in \eqref{eq:schro_hfmf}), and ii) a given basis vector using low oscillation frequencies to probe higher values of the effective potential and vice-versa. It is now assumed that the noise primarily governs the high-frequency components of the image, \textit{i.e.}, the eigenvectors corresponding to higher energy eigenvalues. Therefore, as in the single-particle algorithm, thresholding in energy should be performed to eliminate the image components associated with the high-energy eigenvectors. In the proposed interaction framework, the structural similarity between neighboring image patches is assumed to be an innate property of the image. Hence two neighboring patches are assumed to be similar up to the random noise. Following the definition \eqref{eq:invsqulaw}, two adjacent patches show a high interaction if they are pixel-wise dissimilar (\textit{i.e.}, random noise is present), thus further contributing to the effective potential \eqref{eq:V_eff}. In other words, the interaction term, and ultimately the effective potential, increases with the noise intensity, which eventually shifts the high-frequency noise components of the image to even higher energy eigenvectors.
Thus, in order to obtain a denoised patch, the noisy patch is projected onto a $d$-dimensional subspace spanned by the lowest energy solutions of \eqref{eq:schro_hfmf}, and the denoised patch is rebuilt from these projection coefficients. In this way, a lack of similarity between pixels leads to a stronger denoising, since for the same value of the energy these regions will have lower frequencies than the ones with more similarity. Here, $d$ acts as a thresholding hyperparameter. Combining all the denoised patches, following a path similar to the one proposed in the non-local means architecture, one can obtain the final denoised image. Hereafter, this proposed adaptive quantum denoiser, which integrates the quantum theory of interactions into imaging problems, is called Denoising by Quantum Interactive Patches (De-QuIP). The whole denoising process is displayed in Algorithm~\ref{Algo:QMBI}\footnote{The Matlab code of the proposed denoising algorithm is available at \href{https://github.com/SayantanDutta95/}{github.com/SayantanDutta95/}}. \subsection{Computational complexity} \label{sec:compucomplx} In the previously developed algorithm based on single-particle quantum physics \cite{dutta2021quantum}, the computational complexity was essentially driven by the diagonalization of a large Hamiltonian matrix and the identification of its eigenvectors. For an image of size $n\times n$, this matrix is $n^2\times n^2$. In general, for an arbitrary matrix, the diagonalization process would require $O(n^6)$ operations and $O(n^4)$ storage space. However, for a highly sparse matrix (like the Hamiltonian matrix), efficient iterative methods such as the Lanczos method reduce the computational complexity to $O(n^4)$ operations, with $O(n^4)$ space complexity required for the diagonalization.
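Such a sparse partial diagonalization need not be hand-coded: SciPy's Lanczos-based \texttt{eigsh}, for instance, computes only the $d$ algebraically smallest eigenpairs. The sketch below assumes a 1D finite-difference discretization of the kinetic term for simplicity, which is only an illustrative stand-in for the operator of \eqref{eq:H}:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def lowest_eigenpairs(v_eff, d, hbar2_2m=1.0):
    """d lowest-energy eigenpairs of H = -(hbar^2/2m) * Laplacian + diag(V_eff)."""
    n = v_eff.size
    lap = diags([np.full(n - 1, 1.0), np.full(n, -2.0), np.full(n - 1, 1.0)],
                offsets=[-1, 0, 1])                   # 1D finite-difference Laplacian
    H = -hbar2_2m * lap + diags(v_eff.astype(float))  # sparse symmetric Hamiltonian
    w, U = eigsh(H, k=d, which='SA')                  # Lanczos, smallest algebraic
    order = np.argsort(w)                             # energy-ordered basis vectors
    return w[order], U[:, order]
```

Only the $d$ retained eigenvectors are ever computed, which is what yields the $O(dP_h^2)$ time complexity quoted below.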
In the case of the many-patch algorithm, the denoising is performed patch-wise (on patches of size $P_h \times P_h$), so the time and space complexities become $O(P_h^4)$ for each denoised region, much smaller than the previous ones since $P_h \ll n$. Moreover, the best time complexity one can achieve is $O(dP_h^2)$ if one computes only the $d$ eigenvectors used for the restoration task (with $d \leq P_h^2$), with a space complexity also in $O(dP_h^2)$. The second major contribution comes from the computation of the transform coefficients using an iterative scheme, which requires $O(dP_h)$ operations for each denoised region. The computation of the interactions for each denoised region gives a total complexity of $O((S_{patch}+1) P_h)$ if there are $S_{patch}$ patches inside the $W_h \times W_h$ search window. Therefore, if the image consists of $T_{patch}$ regions (patches), the dominant computational cost of the proposed denoising algorithm is $ O( T_{patch}d P_h^2 )$. Additionally, parallel computation can be used to speed up the process even further. \begin{algorithm}[h!] \begin{footnotesize} \label{Algo:QMBI} \KwIn{ ${\boldsymbol{y}}$ , $P_h$, $W_h$, $d$, $p$, $\hbar ^2/2m$} {Divide the noisy image ${\boldsymbol{y}}$ into small patches of size $P_h$; say the total number is $T_{patch}$.
So, the patch dimension $P_{dim} = {P_h}^2$}\\ \For{ $w = 1 : T_{patch}$}{ {Choose one small image patch ${\boldsymbol{J}}_w$}\\ {Create a search window of size $W_h$ centered at ${\boldsymbol{J}}_w$ and using cyclic boundary conditions}\\ {Collect all the small image patches inside this search window; say the total number is $S_{patch}$}\\ \For{ $l = 1 : S_{patch}$}{ {Calculate the Euclidean distance $D_{wl}$ between the patches ${\boldsymbol{J}}_w$ and ${\boldsymbol{J}}_l$ inside the search window}\\ {Calculate the interaction ${\boldsymbol{I}}_{wl}$ between the patches ${\boldsymbol{J}}_w$ and ${\boldsymbol{J}}_l$ inside the search window as ${\boldsymbol{I}}_{wl}^k = p \dfrac{ | {\boldsymbol{J}}_w^k - {\boldsymbol{J}}_l^k | }{ D_{wl}^2}, ~~k = 1,\cdots, P_{dim}$}\\ } {Calculate the total interaction ${\boldsymbol{I}}^{~total}_w$ between the patch ${\boldsymbol{J}}_w$ and the patches inside the search window by summing over all $l$; \textit{i.e.}, ${\boldsymbol{I}}^{{~total}^k}_w = \sum_{l = 1}^{S_{patch}} {\boldsymbol{I}}_{wl}^k , ~~ k = 1,\cdots,P_{dim}$}\\ {The effective potential for the patch ${\boldsymbol{J}}_w$ is $ {\boldsymbol{V}}_w^{{~effective}^k} = {\boldsymbol{J}}_w^k + {\boldsymbol{I}}^{{~total}^k}_w , ~~ k = 1,\cdots,P_{dim}$}\\ {Construct the Hamiltonian matrix ${\boldsymbol{H}}_w$ using the effective potential $ {\boldsymbol{V}}_w^{~effective}$}\\ {Calculate the eigenvalues and eigenvectors of ${\boldsymbol{H}}_w$}\\ {Construct the adaptive basis ${\boldsymbol{B}}_w^{adaptive}$ using the eigenvectors ${\boldsymbol{\psi}}_w^k , ~~ k = 1,\cdots,P_{dim}$}\\ {Project the noisy patch ${\boldsymbol{J}}_w$ onto this adaptive basis ${\boldsymbol{B}}_w^{adaptive}$}\\ {Calculate the projection coefficients ${\boldsymbol{c}}_w$ in the $P_{dim}$-dimensional space.
Note that $P_{dim} > d$}\\ {Redefine the projection coefficients in the $d$-dimensional subspace as ${\boldsymbol{c}}^{{new}^k}_{w} = {\boldsymbol{c}}^k_{w} , k = 1,\cdots,d$}\\ {Reconstruct the patch by ${\boldsymbol{R}}_w = \sum_{k = 1}^d {\boldsymbol{c}}^{{new}^k}_{w} {\boldsymbol{\psi}}_w^k$}\\ } {Combining all $T_{patch}$ small denoised image patches ${\boldsymbol{R}}_w$ restores the full denoised image $\hat{{\boldsymbol{x}}}$}\\ \KwOut{$\hat{{\boldsymbol{x}}}$} \caption{De-QuIP algorithm} \DecMargin{1em} \end{footnotesize} \end{algorithm} \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{figures/all_samples.pdf} \vspace{-4mm} \caption{Sample images (sizes in parentheses).} \label{fig:samples} \vspace{-6.2mm} \end{figure} \vspace{-3.5mm} \section{Simulation Results} \label{sec:simuresu} This section illustrates the interest of the proposed approach in image denoising problems and explores ways to choose suitable hyperparameters. At the outset, Subsection~\ref{sec:hypara} explains the reliance of the proposed denoising scheme on the optimal choice of the hyperparameters $P_h$, $W_h$, $p$, $\hbar ^2/2m$ and $d$, and explores rules for their estimation. For a thorough investigation, we explore cases of four different noise intensities (low to high) with image-independent (\textit{e.g.}, Gaussian) and image-dependent (\textit{e.g.}, Poisson) noise models. The subsequent Subsection~\ref{sec:comparison} provides denoising results and a comparison between the proposed approach and several standard state-of-the-art methods. Finally, the section ends with a real medical application in Subsection~\ref{sec:usdesp}, which highlights the potential of the proposed scheme for the despeckling of real-world ultrasound (US) images.
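For a single patch, once the adaptive basis has been obtained as the energy-ordered eigenvectors of its Hamiltonian, the projection, hard-thresholding and reconstruction steps of Algorithm~\ref{Algo:QMBI} reduce to a few matrix products (NumPy sketch; the function name is an assumption):

```python
import numpy as np

def denoise_patch(patch, basis, d):
    """Project a noisy patch on the adaptive basis, keep the d lowest-energy
    coefficients, and back-project.

    patch: (P_h, P_h) noisy patch
    basis: (P_dim, P_dim) orthonormal eigenvectors, columns ordered by
           increasing energy eigenvalue, with P_dim = P_h**2
    """
    y = patch.ravel()
    c = basis.T @ y      # projection coefficients in the full P_dim space
    c[d:] = 0.0          # hard threshold: drop high-energy (noise) components
    return (basis @ c).reshape(patch.shape)
```

Aggregating the $T_{patch}$ patches denoised this way, as in the non-local means architecture, yields the final estimate $\hat{{\boldsymbol{x}}}$.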
\subsection{Influence of hyperparameters $P_h$, $W_h$, $p$, $\hbar ^2/2m$ and $d$ and how to select them} \label{sec:hypara} \vspace{-.8mm} \subsubsection{Effect of patch size $P_h$} \label{sec:hy_ph} The localization of the basis vectors is associated with the length of the image patch, as explained in Subsection~\ref{sec:lolizprobqmpi}. The respective localization length, or IPR, decreases with increasing noise intensity. To deal with this quantum localization phenomenon, the patch size should always be less than or equal to the localization length of the basis vectors for the different noise levels. If the localization length is greater than the size of the patch, the basis vectors probe the entire region of the image patch with different ranges of oscillation frequencies depending on the intensity of the image pixels. On the contrary, a smaller localization length leads to an exponential localization of the basis vectors on a specific part of the image patch. These localized vectors thus no longer exhibit different frequencies at different pixel values and lose a key asset of this formalism. The drastic effect of this localization phenomenon on image denoising was shown in our previous work \cite{dutta2021quantum}, where an additional Gaussian smoothing was necessary before computing the quantum adaptive basis (QAB) used as a denoiser in that process. In contrast, the current formalism eliminates this issue without any additional computational requirements. Furthermore, a smaller patch size helps to reduce the computational complexity, as discussed in the section above. As a consequence, the De-QuIP denoiser is computationally more efficient than the previously proposed QAB denoiser in \cite{dutta2021quantum}. Table~\ref{tab:tab_com_De-QuIPvsQAB} summarizes the run time of the QAB and De-QuIP denoisers for increasing patch size.
The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), used as denoising quality metrics, are also given to provide a quantitative analysis with respect to the patch size. All the algorithms have been implemented in Matlab and tested on a computer with an Intel(R) Core(TM) i7-10510U CPU with 4 cores at 1.80 GHz, 16 GB of memory and Windows 10 Pro version 20H2 as operating system. From Table~\ref{tab:tab_com_De-QuIPvsQAB}, one can see that the computational time for both denoisers increases with the patch size, but the denoising performance (\textit{i.e.}, PSNR and SSIM values) of De-QuIP first increases with the patch size and then begins to decrease gradually after size $11 \times 11$. In contrast, QAB requires much larger patches to achieve a similar performance, which essentially imposes a huge computational burden on the process. The gradual decrease in the performance of the De-QuIP denoiser for increasing patch size is expected due to the localization phenomenon discussed above. Therefore, a smaller patch size preserves the fundamental features of these adaptive vectors while reducing the computational complexity and run time. Herein, we will only focus on the patch sizes $5 \times 5$, $7 \times 7$ and $11 \times 11$ for further investigations. \begin{table}[t!] \setlength\tabcolsep{1.6pt} \begin{scriptsize} \begin{center} \caption{Simulation data with different patch sizes for the \textit{Lake} image contaminated by AWGN (SNR = 16dB).
For the proposed De-QuIP method hyperparameters $\hbar^2/2m = 1.5$, and $p$ and $d$ are estimated from the equations \eqref{eq:curfit1} and \eqref{eq:curfit2} respectively.} \vspace{-2mm} \label{tab:tab_com_De-QuIPvsQAB} \begin{tabular}{cc cccccccc} \thickhline \multirow{2}{*}{} & \multirow{2}{*}{Data} & \multicolumn{8}{c}{Patch size}\\ & & $1 \times 1$ & $3 \times 3$ & $5 \times 5$ & $7 \times 7$ & $11 \times 11$ & $17 \times 17$ & $27 \times 27$ & $63 \times 63$\\ \thickhline \multirow{3}{*}{\begin{turn}{90}QAB\end{turn}} & PSNR(dB) & 11.36 & 12.78 & 21.56 & 24.40 & 26.54 & 27.12 & 27.33 & 28.09 \\ & SSIM & 0.43 & 0.46 & 0.48 & 0.48 & 0.63 & 0.70 & 0.74 & 0.79 \\ \vspace*{2mm} & Time(sec) & 30.56 & 17.09 & 41.31 & 70.32 & 161.96 & 328.97 & 881.69 & 5800.72 \\ \multirow{3}{*}{\begin{turn}{90}De-QuIP\end{turn}} & PSNR(dB) & 22.12 & 28.16 & 28.73 & 28.84 & 28.58 & 28.23 & 28.16 & 27.77\\ & SSIM & 0.37 & 0.78 & 0.83 & 0.83 & 0.82 & 0.81 & 0.80 & 0.79\\ \vspace*{1mm} & Time(sec) & 21.93 & 22.75 & 82.61 & 108.01 & 490.52 & 3829.31 & 5644.90 & 22765.18\\ \thickhline \end{tabular}\end{center} \end{scriptsize} \vspace{-8mm} \end{table} \subsubsection{Effect of the search window size $W_h$} \label{sec:hy_wh} The search window is the image region around the current patch regrouping all the patches interacting with it. Following the discussion in Subsection~\ref{sec:defintre}, the size of the search window plays an important role in preserving the structural similarities in a local neighborhood. This search window is usually defined as a square window of limited size so that the implementation is restricted to a small neighborhood centered on the target patch (to be denoised) instead of the whole image. In the literature, mostly two types of approaches are used, based on a fixed search window size \cite{tasdizen2009principal, Mahmoudi2005fast, deledalle2011image, Vignesh2010fast} or an adaptive approach \cite{Kervrann2006optimal}. 
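For concreteness, the patch collection step inside a fixed-size search window with the cyclic boundary conditions of Algorithm~\ref{Algo:QMBI} can be sketched as follows (NumPy illustration; the function name and the top-left-corner indexing convention are assumptions):

```python
import numpy as np

def window_patches(img, center, P_h, W_h):
    """All P_h x P_h patches whose top-left corners lie in the W_h x W_h
    search window centred at `center`, with cyclic (wrap-around) boundaries."""
    rows = np.arange(center[0] - W_h // 2, center[0] + W_h // 2 + 1) % img.shape[0]
    cols = np.arange(center[1] - W_h // 2, center[1] + W_h // 2 + 1) % img.shape[1]
    patches = []
    for r in rows:
        for c in cols:
            rr = (r + np.arange(P_h)) % img.shape[0]   # wrap row indices
            cc = (c + np.arange(P_h)) % img.shape[1]   # wrap column indices
            patches.append(img[np.ix_(rr, cc)])
    return patches
```

The returned list contains the $S_{patch}$ patches whose interactions with the target patch enter the effective potential.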
In this work, we concentrate on the fixed-size approach to examine the effect of the search window on De-QuIP. Fig.~\ref{fig:hywindow} shows the denoising performance of De-QuIP in terms of PSNR as a function of the search window size for the Gaussian and Poisson noise cases. For these simulations, the patch size is kept fixed at $7 \times 7$ for all images. Note that in these simulations patches overlap. In Fig.~\ref{fig:hywindow}, one can see that in both cases the denoising ability increases with the size of the search window before roughly stabilizing beyond a size of $20 \times 20$. These observations show that the patch neighborhood is important to increase the denoising performance, but larger search windows do not bring additional information about the neighborhood due to the inverse square nature of the interaction term. It is also important to notice that the computation time increases with the search window size, as shown on the right y-axis of Fig.~\ref{fig:hywindow}. The use of a relatively moderate-size search window is thus computationally more efficient while preserving the image attributes through the proposed interaction framework. Note that these results are consistent for other patch sizes. Therefore, for simplicity, in this work we choose search window sizes of $15 \times 15$, $21 \times 21$ and $33 \times 33$ for the patch sizes of $5 \times 5$, $7 \times 7$ and $11 \times 11$, respectively. \subsubsection{Influence of the proportionality constant $p$} \label{sec:hy_p} As mentioned above, the proportionality constant $p$ regulates the interaction term in the effective potential, and consequently the shape of the basis vectors. Hence, for a given noisy image, there exists an optimal choice of $p$, depending on the patch size, that maximizes the performance of De-QuIP.
Fig.~\ref{subfig:hy_pVSpsnr} presents the denoising performance in terms of PSNR as a function of $p$ for the \textit{house} image corrupted with AWGN (SNR = 16dB), for three different patch sizes. These optimal values also depend on the level of noise present in the image. The $p$ values maximizing the output PSNR were computed for the first seven sample images of Fig.~\ref{fig:samples}, corrupted with different noise intensities. These optimal $p$ values are shown as a function of SNR in Fig.~\ref{fig:hy_p_fit} using box-plots for a fixed patch size. The observations confirm a tendency of the optimal values to decrease as the noise level increases. For explicit details of these optimal values, we refer readers to the Supp. Mat. file Table~\ref{tab:tab_proporconst}. A possible explanation for this phenomenon comes from the fact that dissimilarities in a local neighborhood increase with the noise intensity. Hence, to balance the original potential (patch pixels) and the interactions in the effective potential, the hyperparameter $p$ decreases. \begin{figure}[t!] \centering \includegraphics[width=.48\textwidth]{figures/hyper/hyper_searchwindow.pdf} \vspace*{-3.mm} \caption{Denoising performance in terms of PSNR (left y-axis) and average run time (right y-axis) of De-QuIP as a function of the search window size for the first seven sample images in Fig.~\ref{fig:samples}. The patch size is $P_h = 7$ and the other hyperparameters are estimated from equations~\eqref{eq:curfit1}-\eqref{eq:curfit4}.} \label{fig:hywindow} \vspace*{-8mm} \end{figure} \begin{figure*}[t!]
\centering \subfigure[PSNR vs proportionality constant $p$] {\label{subfig:hy_pVSpsnr} \includegraphics[width=.32\textwidth]{figures/hyper/hy_pVSpsnr_diff_patch.jpg}} \subfigure[PSNR vs $F_{\mbox{factor}}$] {\label{subfig:hy_factVSpsnr} \includegraphics[width=.318\textwidth]{figures/hyper/hy_factVSpsnr_diff_patch.jpg}} \subfigure[PSNR vs subspace dimensionality $d$] {\label{subfig:hy_dVSpsnr} \includegraphics[width=.327\textwidth]{figures/hyper/hy_dVSpsnr_diff_patch.jpg}} \vspace{-5.2mm} \caption{Denoising performance of De-QuIP in terms of PSNR as a function of the hyperparameters for the \textit{house} image corrupted with AWGN (SNR = 16dB) using three different patch sizes. All hyperparameters are estimated using equations \eqref{eq:curfit1}-\eqref{eq:curfit4}.} \label{fig:hyVSpsnr} \vspace{-6.5mm} \end{figure*} \begin{figure}[t!] \centering \includegraphics[width=.48\textwidth]{figures/hyper/hy_p.pdf} \vspace{-3mm} \caption{Optimal proportionality constant $p$ value as a function of SNR for three patch sizes, where the top and the bottom rows are associated with the case of Gaussian and Poisson noise models. The bars indicate the minimum and maximum values of the optimal $p$. The bottom and top edges of the blue boxes indicate the $25^{th}$ and $75^{th}$ percentiles and the central mark and green star indicate the median and mean values. The red line is the best linear curve fitted to the data points corresponding to the mean of the optimal $p$ values.} \label{fig:hy_p_fit} \vspace{-6.5mm} \end{figure} The data in Fig.~\ref{fig:hy_p_fit} enables rules to fix the $p$ value closer to its optimal values. The distribution of the data gives an intuition about a possible linear relationship between the optimal $p$ and the SNR. Therefore, the proportionality constant $p$ can be chosen from the following rule: \vspace*{-1.5mm} \begin{equation} p = m_1 \times (\mbox{SNR}) + c_1. 
\label{eq:curfit1} \end{equation} In Fig.~\ref{fig:hy_p_fit}, the best linear fits to the optimal $p$ as a function of SNR are shown for three different patch sizes and for both the Gaussian and Poisson noise models. These linear fits give a robust way of choosing a suitable $p$ for a given patch size and noise level. The linear fit parameters are summarized in the Supp. Mat. file Table~\ref{tab:tab_proporconstfit}, together with the $\ell_2$ error and the resulting average loss in denoising performance in terms of PSNR and SSIM. One may notice that the loss in denoising performance with rule \eqref{eq:curfit1} rather than the optimal choice is negligible. This is expected due to the smooth nature of the PSNR curve with a broad maximum shown in Fig.~\ref{subfig:hy_pVSpsnr}, which makes De-QuIP resilient to small suboptimal choices of $p$. Hence, it is anticipated that the parameters learned from the sample images to estimate $p$ using \eqref{eq:curfit1} will be effective for a large set of images. These conclusions are valid for various noise models and patch sizes, as shown in the simulation results. Furthermore, an adaptive way of tuning $p$ that depends on the image patch provides an alternative to the above rules and opens an interesting perspective for future investigation. \subsubsection{Influence of $\hbar ^2/2m$ and the subspace dimensionality $d$} \label{sec:hy_hbar_d} The last two hyperparameters to be analyzed are $\hbar ^2/2m$ and the subspace dimensionality $d$. Although their roles seem different, the first one being used in the construction of the Hamiltonian operator and the other one acting as a threshold, there is a deep connection between them. In this subsection, we explain this connection with experimental validation and propose rules for the automated estimation of their optimal choices.
As stated above in Subsection~\ref{sec:qt_sub1}, the hyperparameter $\hbar ^2/2m$ controls how the local frequencies of the basis vectors change with the image pixel values. For low values of $\hbar ^2/2m$, the oscillation frequencies are very high regardless of the pixel values, and the correspondingly high maximal oscillation in this limit prevents the wave vectors from properly exploring the higher pixel values. Conversely, excessively large values of $\hbar ^2/2m$ decrease the ability of the basis vectors to distinguish between high- and low-valued pixels. For more illustrations of the effect of the hyperparameter $\hbar ^2/2m$ on the basis vectors, we refer readers to our previous work \cite{dutta2021quantum}. Therefore, the optimal $\hbar ^2/2m$ value strongly depends on the maximum and minimum pixel values present in the image patch. It is thus more convenient to select $\hbar ^2/2m$ in an adaptive way that depends on the image patch to obtain the optimal performance of De-QuIP. Herein, it is possible to write the hyperparameter in terms of the difference between these maximum and minimum pixel values multiplied by a factor $F_{\mbox{factor}}$; for example, for the patch ${\boldsymbol{A}}$, \vspace*{-2mm} \begin{equation} \hbar ^2/2m = F_{\mbox{factor}} \times ( {\boldsymbol{A}}_{\mbox{max}} - {\boldsymbol{A}}_{\mbox{min}} ), \label{eq:curfit2} \vspace*{-1.6mm} \end{equation} where ${\boldsymbol{A}}_{\mbox{max}}$, ${\boldsymbol{A}}_{\mbox{min}}$ are the maximum and minimum pixel values of the patch ${\boldsymbol{A}}$. Hence, the optimal choice of $F_{\mbox{factor}}$ is needed to obtain the best possible output. In the proposed scheme, the subspace dimensionality $d$ is used as the threshold truncating the high energy wave solutions, which mostly carry noise information. Hence, for a noisy image, an optimal choice of $d$ exists that yields the best denoising output depending on the patch size.
$\hbar ^2/2m$, or equivalently $F_{\mbox{factor}}$, controls the frequency distribution across the basis vectors since the maximal frequency of a vector with energy $E$ at the local pixel value $V$ is $\sqrt{(E-V)/(\hbar ^2/2m)}$. Hence, the maximal frequency decreases with increasing $F_{\mbox{factor}}$. As a consequence, low-energy basis vectors become more effective at distinguishing low and high pixel regions using different levels of frequency. Thus, the optimal subspace dimensionality $d$ decreases as $F_{\mbox{factor}}$ increases. These optimal choices vary with the image patch size and the noise statistics. In Fig.~\ref{fig:hy_hfac_fit}, all the optimal values that give the best output PSNRs for the first seven sample images are shown as a scatter-plot of $F_{\mbox{factor}}$ vs $d$, which clearly shows their inverse relationship, \textit{i.e.}, $d$ decreases as $F_{\mbox{factor}}$ grows and vice-versa, validating the above arguments. More details on these optimal values can be found in the Supp. Mat. file Table~\ref{tab:tab_planckdata} and Table~\ref{tab:tab_subspadim}. \begin{figure}[t!] \centering \includegraphics[width=.48\textwidth]{figures/hyper/hy_hfac.pdf} \vspace{-3.3mm} \caption{$F_{\mbox{factor}}$ vs $d$ scatter plot and the respective best-fitted curve of the form $(F_{\mbox{factor}}-l_1) = l_3/(d-l_2)$.} \label{fig:hy_hfac_fit} \vspace{-4.2mm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.48\textwidth]{figures/hyper/hy_d_G.pdf} \vspace{-3.3mm} \caption{Optimal subspace dimensionality $d$ as a function of SNR for three patch sizes, for the Gaussian noise model. Similar to Fig.~\ref{fig:hy_p_fit}, a box-plot diagram is used for the optimal $d$.
The red line is the best linear curve fitted to the data points corresponding to the mean of the optimal $d$.} \label{fig:hy_d_fit} \vspace{-7.5mm} \end{figure} These experimental data enable an automated way of selecting the values of $d$ and $F_{\mbox{factor}}$ close to their optimal values. To do so, the optimal $d$ values are shown in Fig.~\ref{fig:hy_d_fit} as a function of SNR using box-plots for a fixed patch size, in the Gaussian case. The observation shows a very predictable behavior of this optimal $d$ as a function of SNR, which is expected since a stronger thresholding is needed as the noise increases. A similar observation can be made for the Poisson model, whose data are available in the Supp. Mat. file Fig.~\ref{fig:hy_d_fit_P}. For a specific patch size, the optimal $d$ and the SNR follow a linear relationship. Therefore, the subspace dimensionality $d$ and $F_{\mbox{factor}}$ can be inferred from the following two rules, \vspace{-2mm} \begin{equation} d = m_2 \times (\mbox{SNR}) + c_2 , \label{eq:curfit3} \vspace{-1.5mm} \end{equation} \begin{equation} F_{\mbox{factor}} - l_1 = l_3 / ( d - l_2 ). \label{eq:curfit4} \vspace{-1mm} \end{equation} Fig.~\ref{fig:hy_hfac_fit} and Fig.~\ref{fig:hy_d_fit} show the best-fitted curves to the optimal $F_{\mbox{factor}}$ and $d$, and the respective fit parameters are gathered in the Supp. Mat. file Table~\ref{tab:tab_hbar_d_fit}. These rules give an efficient way of selecting the hyperparameters close to their optimal values depending on the size of the given patch and the intensity of the noise. Our data show that the respective costs in terms of performance loss are minimal, since the output PSNR curves are smooth with broad maxima, as shown in Fig.~\ref{subfig:hy_factVSpsnr}-\ref{subfig:hy_dVSpsnr} for the choice of $F_{\mbox{factor}}$ and $d$, and as discussed in Subsection~\ref{sec:hy_p} for the hyperparameter $p$.
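Once the fit parameters have been learned from the sample images, the selection rules \eqref{eq:curfit1}, \eqref{eq:curfit3} and \eqref{eq:curfit4} (with \eqref{eq:curfit2} then fixing $\hbar ^2/2m$ per patch) reduce to a few lines of code. In the sketch below, all numerical coefficients are illustrative placeholders, not the fitted values reported in the Supp. Mat.:

```python
def select_hyperparameters(snr, m1=-0.02, c1=0.9, m2=1.5, c2=8.0,
                           l1=0.1, l2=0.0, l3=4.0):
    """Hyperparameter selection rules; every fit coefficient here is an
    illustrative placeholder, not a value learned from the sample images."""
    p = m1 * snr + c1                  # linear rule for the proportionality constant
    d = max(1, round(m2 * snr + c2))   # linear rule for the subspace dimensionality
    f_factor = l1 + l3 / (d - l2)      # inverse relationship between F_factor and d
    return p, d, f_factor
```

$\hbar ^2/2m$ then follows from $F_{\mbox{factor}}$ and the patch dynamics via \eqref{eq:curfit2}.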
Hence, the rules for automated hyperparameter selection are expected to remain valid for other images as well. \vspace{-4mm} \subsection{Denoising efficiency of the proposed scheme in comparison with standard methods} \label{sec:comparison} \vspace{-1mm} This subsection presents the denoising performance of the De-QuIP algorithm depending on the noise statistics and intensity, as well as how this performance varies with the patch size for the sample images. For the first seven images in Fig.~\ref{fig:samples}, the denoising outputs using three patch sizes are summarized in Table~\ref{tab:tab_denoi_diffpat}. The numerical simulations show that $11 \times 11$ is the most suitable patch size in most cases, but for low-level noise smaller sizes give a small advantage. A better result with a large patch is indeed expected in strong noise scenarios, since high noise intensity corresponds to a highly random system and a larger patch captures the similarity measures more efficiently from this strong randomness. Obviously, the patch should not be too large either, since it would then be affected by the localization phenomenon, as illustrated in Subsection~\ref{sec:lolizprobqmpi}. \begin{table*}[t!]
\begin{scriptsize} \setlength\tabcolsep{4pt} \begin{center} \caption{Comparison of denoising performance of De-QuIP with different patch sizes for different noise levels.} \vspace*{-2mm} \resizebox{.8\hsize}{!}{ \label{tab:tab_denoi_diffpat} \begin{tabular}{cc cc c cc c cc cc cc c cc c cc} \thickhline \multirow{3}{*}{Sample} & \multirow{3}{*}{Input} & \multicolumn{8}{c}{Gaussian case} &&& \multicolumn{8}{c}{Poisson case}\\ & & \multicolumn{2}{c}{$5 \times 5$} && \multicolumn{2}{c}{$7 \times 7$} && \multicolumn{2}{c}{$11 \times 11$} &&& \multicolumn{2}{c}{$5 \times 5$} && \multicolumn{2}{c}{$7 \times 7$} && \multicolumn{2}{c}{$11 \times 11$}\\ & SNR(dB) & PSNR(dB) & SSIM && PSNR(dB) & SSIM && PSNR(dB) & SSIM &&~~~~& PSNR(dB) & SSIM && PSNR(dB) & SSIM && PSNR(dB) & SSIM\\ \thickhline \multirow{4}{*}{house} & 22 & 35.30 & 0.88 & & 35.45 & \bd{0.89} & & \bd{35.58} & \bd{0.89} &&& 34.94 & 0.87 && 35.10 & \bd{0.88} && \bd{35.14} & \bd{0.88} \\ & 16 & 31.91 & \bd{0.83} & & 32.15 & \bd{0.83} & & \bd{32.29} & \bd{0.83} &&& 31.49 & \bd{0.82} && \bd{31.78} & \bd{0.82} && 31.73 & \bd{0.82} \\ & 8 & 26.85 & 0.72 & & 27.45 & 0.75 & & \bd{27.91} & \bd{0.76} &&& 26.45 & 0.72 && 27.02 & 0.74 && \bd{27.27} & \bd{0.75} \\ \vspace*{1mm} & 2 & 23.05 & 0.60 & & 23.92 & 0.68 & & \bd{24.66} & \bd{0.72} &&& 22.65 & 0.59 && 23.48 & 0.66 && \bd{24.09} & \bd{0.70} \\ \multirow{4}{*}{lake} & 22 & \bd{33.23} & \bd{0.92} & & 33.16 & 0.91 & & 32.80 & 0.90 &&& \bd{33.09} & \bd{0.91} && 33.04 & 0.90 && 32.72 & 0.90 \\ & 16 & 28.81 & \bd{0.83} & & \bd{28.85} & 0.82 & & 28.63 & 0.81 &&& 28.54 & \bd{0.81} && \bd{28.60} & \bd{0.81} && 28.42 & \bd{0.81} \\ & 8 & 24.05 & 0.69 & & 24.19 & \bd{0.71} & & \bd{24.37} & 0.69 &&& 23.75 & 0.66 && 23.98 & \bd{0.68} && \bd{24.11} & \bd{0.68} \\ \vspace*{1mm} & 2 & 21.09 & 0.57 & & 21.59 & 0.62 & & \bd{21.75} & \bd{0.63} &&& 20.90 & 0.56 && 21.33 & 0.61 && \bd{21.48} & \bd{0.63} \\ \multirow{4}{*}{lena} & 22 & 35.05 & 0.89 & & 35.21 & 0.89 & & \bd{35.34} & 
\bd{0.90} &&& 34.86 & 0.88 && 35.05 & \bd{0.89} && \bd{35.16} & \bd{0.89} \\ & 16 & 31.73 & 0.84 & & 32.00 & \bd{0.85} & & \bd{32.17} & \bd{0.85} &&& 31.49 & 0.83 && 31.78 & \bd{0.84} && \bd{32.34} & \bd{0.84} \\ & 8 & 26.17 & 0.71 & & 27.67 & \bd{0.78} & & \bd{28.00} & \bd{0.78} &&& 26.93 & 0.74 && 27.40 & \bd{0.77} && \bd{27.61} & 0.76 \\ \vspace*{1mm} & 2 & 23.60 & 0.63 & & 24.53 & 0.71 & & \bd{25.04} & \bd{0.74} &&& 23.36 & 0.63 && 24.30 & \bd{0.71} && \bd{24.71} & \bd{0.71} \\ \multirow{4}{*}{hill} & 22 & 31.54 & 0.82 & & \bd{31.58} & \bd{0.83} & & 31.55 & \bd{0.83} &&& 32.01 & 0.82 && \bd{32.16} & \bd{0.83} && 32.13 & \bd{0.83} \\ & 16 & 27.95 & 0.69 & & 28.06 & \bd{0.70} & & \bd{28.10} & \bd{0.70} &&& 28.25 & \bd{0.70} && 28.37 & \bd{0.70} && \bd{28.39} & \bd{0.70} \\ & 8 & 24.42 & \bd{0.55} & & 24.49 & \bd{0.55} & & \bd{24.61} & \bd{0.55} &&& 24.58 & \bd{0.55} && \bd{24.63} & \bd{0.55} && 23.58 & 0.54 \\ \vspace*{1mm} & 2 & 21.97 & 0.46 & & 22.41 & 0.48 & & \bd{22.61} & \bd{0.49} &&& 22.08 & 0.46 && 22.46 & 0.48 && \bd{22.54} & \bd{0.49} \\ \multirow{4}{*}{fingerprint} & 22 & 32.35 & 0.93 & & 32.50 & 0.93 & & \bd{32.54} & \bd{0.94} &&& 33.39 & 0.94 && 32.15 & 0.93 && \bd{33.49} & \bd{0.95} \\ & 16 & 28.12 & 0.86 & & 28.46 & \bd{0.87} & & \bd{28.65} & \bd{0.87} &&& \bd{28.63} & \bd{0.87} && 28.16 & 0.86 && 28.24 & 0.86 \\ & 8 & 23.36 & 0.72 & & 23.31 & 0.72 & & \bd{23.63} & \bd{0.73} &&& \bd{23.65} & \bd{0.74} && 23.07 & 0.72 && 23.40 & 0.73 \\ \vspace*{1mm} & 2 & \bd{20.03} & \bd{0.59} & & 19.80 & 0.57 & & 20.01 & 0.58 &&& \bd{19.90} & \bd{0.59} && 19.58 & 0.56 && 19.56 & 0.56 \\ \multirow{4}{*}{saturn} & 22 & 38.94 & 0.89 & & 39.36 & 0.92 & & \bd{39.53} & \bd{0.94} &&& 40.64 & 0.97 && 40.85 & \bd{0.98} && \bd{40.87} & \bd{0.98} \\ & 16 & 34.67 & 0.79 & & 35.27 & 0.83 & & \bd{35.63} & \bd{0.87} &&& 36.00 & 0.94 && 36.31 & \bd{0.95} && \bd{36.42} & 0.94 \\ & 8 & 28.94 & 0.61 & & 29.87 & 0.67 & & \bd{30.60} & \bd{0.74} &&& 30.44 & 0.89 && 31.00 & \bd{0.90} && 
\bd{31.40} & 0.89 \\ \vspace*{1mm} & 2 & 24.45 & 0.46 & & 25.97 & 0.55 & & \bd{27.03} & \bd{0.62} &&& 26.40 & 0.82 && 27.26 & \bd{0.86} && \bd{27.46} & 0.85 \\ \multirow{4}{*}{flintstones} & 22 & \bd{32.20} & \bd{0.87} & & 32.16 & \bd{0.87} && 31.97 & 0.86 &&& 33.20 & \bd{0.88} && 33.08 & \bd{0.88} && \bd{32.99} & \bd{0.88} \\ & 16 & 28.65 & \bd{0.80} & & \bd{28.69} & 0.79 && 28.47 & 0.78 &&& \bd{29.04} & \bd{0.80} && 29.00 & 0.78 && 28.77 & 0.78 \\ & 8 & 23.48 & 0.67 & & \bd{23.78} & \bd{0.68} && 23.70 & 0.66 &&& 23.44 & 0.65 && \bd{23.84} & \bd{0.68} && 23.64 & 0.64 \\ & 2 & 19.74 & 0.52 & & 19.87 & \bd{0.56} && \bd{20.03} & \bd{0.56} &&& 19.50 & 0.51 && \bd{19.69} & \bd{0.53} && 19.63 & \bd{0.53} \\ \thickhline \end{tabular} } \end{center} \end{scriptsize} \vspace*{-6mm} \end{table*} \begin{figure*}[h!] \centering \includegraphics[width=.75\textwidth]{figures/toshow/slice_1G.pdf} \includegraphics[width=.75\textwidth]{figures/toshow/slice_1P.pdf} \vspace{-3mm} \caption{Noisy images and the corresponding denoised estimations by De-QuIP.} \label{fig:result_1GP} \vspace{-4mm} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=.7\textwidth]{figures/toshow/result1G.pdf} \vspace{-3mm} \caption{Zoomed segments of the denoised estimations of the \textit{Flintstones} image while corrupted with AWGN (SNR $16$dB) using different methods.} \label{fig:result_1G} \vspace{-5mm} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=.8\textwidth]{figures/toshow/result1P.pdf} \vspace{-3mm} \caption{Zoomed segments of the denoised estimations of the \textit{Lake} image while corrupted with Poisson noise (SNR $16$dB) using different methods.} \vspace*{-3.7mm} \label{fig:result_1P} \end{figure*} \begin{figure*}[t!] 
\centering \vspace*{-1.2mm} \subfigure[Recovery of Gaussian corrupted images] {\label{subfig:rescomp_boxplotG} \includegraphics[width=.4\textwidth]{figures/toshow/result_psnr_diffMethods_boxplots_G.jpg} \includegraphics[width=.4\textwidth]{figures/toshow/result_ssim_diffMethods_boxplots_G.jpg} } \vspace*{-3.5mm} \subfigure[Recovery of Poisson corrupted images] {\label{subfig:rescomp_boxplotP} \includegraphics[width=.4\textwidth]{figures/toshow/result_psnr_diffMethods_boxplots_P.jpg} \includegraphics[width=.4\textwidth]{figures/toshow/result_ssim_diffMethods_boxplots_P.jpg} } \vspace*{-3mm} \caption{Quantitative denoising results using different methods for Gaussian and Poisson corrupted images with four different noise levels. The bottom and top edges of the boxes indicate the $25^{th}$ and $75^{th}$ percentiles, and the central black line and circle indicate the median and mean relative to the data points.} \label{fig:rescomp_boxplot} \vspace*{-6.5mm} \end{figure*} As explained earlier, De-QuIP follows a principle similar to the NLM approach. Comparisons with NLM-based state-of-the-art methods are thus provided in order to prove the efficiency of the proposed algorithm. Moreover, for a comprehensive survey of the denoising ability of De-QuIP, rigorous comparisons with contemporary noise removal methods from the literature are also presented. For the restoration of Gaussian corrupted images, the following methods were used for comparison: the NLM method using PCA, called PND, in \cite{tasdizen2009principal}, two patch-based PCA for NLM denoising methods referred to as PGPCA (global approach) and PLPCA (local approach) in \cite{deledalle2011image}, BM3D \cite{Dabov2007Image}, the dictionary learning (DL) method \cite{Elad2006image}, the graph signal processing (GSP) method \cite{Pang2017graph}, and finally, our earlier implementation of the quantum adaptive basis (QAB) for image denoising based on the single-particle theory \cite{dutta2021quantum}.
For the recovery of Poisson corrupted images, comparisons have been carried out with recent algorithms dedicated to the Poissonian model, such as Poisson non-local PCA (PNLPCA) \cite{salmon2014poisson}, BM3D combined with the Anscombe transform \cite{Makitalo2011optimal}, denoted ATBM3D, and finally the QAB \cite{dutta2021quantum} method. The denoising performance of the proposed De-QuIP method is presented in Fig.~\ref{fig:result_1GP} for visual inspection, where noisy and corresponding denoised images are shown. Observations show good performance of De-QuIP regardless of the noise model and intensity. In the denoised images, image features and details, for example, patterns (in \textit{Fingerprint} and \textit{Ridges}), sharp edges (in \textit{Lake}, \textit{Cameraman} and \textit{House}), and smooth areas (in \textit{Peppers} and \textit{Lena}), are well preserved. As the noise intensity increases, some artefacts can be observed in the denoised images (for example, in the \textit{House}, \textit{Hill} and \textit{Cameraman} images) due to the presence of strong noise, but they are few and negligible at low noise levels. For further inspection, in Fig.~\ref{fig:result_1G} we show zoomed segments of the denoised results of the \textit{Flintstones} image corrupted with AWGN (SNR $16$dB). Similarly, for the Poisson corrupted (SNR $16$dB) \textit{Lake} image, the zoomed segments of the denoised estimations are shown in Fig.~\ref{fig:result_1P}. Quantitative performance in terms of PSNR and SSIM for the different methods on Gaussian and Poisson contaminated images is presented in Fig.~\ref{fig:rescomp_boxplot} using box-plots as a function of SNR. More details related to these experiments can be found in the Supp. Mat. file, Tables~\ref{tab:tab_psnrssimG}-\ref{tab:tab_psnrssimP} and Fig.~\ref{fig:result_2P}.
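For reference, the reported metrics are assumed here to follow their standard definitions (recalled for the reader; the notation below is not taken from the original text),
\[
\mathrm{PSNR} = 10\log_{10}\!\left(\frac{d^2}{\mathrm{MSE}(\hat{x},x)}\right), \qquad \mathrm{SSIM}(\hat{x},x) = \frac{(2\mu_{\hat{x}}\mu_{x} + c_1)(2\sigma_{\hat{x}x} + c_2)}{(\mu_{\hat{x}}^2 + \mu_{x}^2 + c_1)(\sigma_{\hat{x}}^2 + \sigma_{x}^2 + c_2)},
\]
where $x$ and $\hat{x}$ denote the clean image and its estimate, $d$ the peak intensity value, $\mu$, $\sigma^2$ and $\sigma_{\hat{x}x}$ the (local) means, variances and covariance, and $c_1$, $c_2$ the usual stabilizing constants.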
Through visual and quantitative inspections, it is clear that the proposed De-QuIP uniformly outperforms all the NLM-based approaches, with a significant increase in terms of PSNR and SSIM. For Gaussian corrupted images, BM3D is still the best method in most cases, but De-QuIP remains competitive in all scenarios. Furthermore, for both noise models, the positive effects of the local similarity considerations are clearly visible when comparing the QAB and De-QuIP data in Fig.~\ref{fig:rescomp_boxplot}: De-QuIP gives much better PSNR and SSIM values with significantly fewer computations. Therefore, exploiting the structural details through interaction terms notably contributes to the preservation of image details, as conveyed by the quantitative and visual assessments. Additionally, regardless of the noise intensity, De-QuIP always provides good PSNR and SSIM for the recovery of Gaussian corrupted images, as shown in Fig.~\ref{subfig:rescomp_boxplotG}, which is not the case with most algorithms, as highlighted by the SSIM values. For Poisson corrupted images, De-QuIP provides better outcomes than the other methods. ATBM3D generates comparable PSNR and SSIM data in some scenarios, but the visual assessment clearly shows an extra smoothing effect on the denoised image, which causes lower SSIM values for low SNR images, as shown in Fig.~\ref{subfig:rescomp_boxplotP}. This is due to the process of data Gaussianization through the Anscombe transformation. In addition, for increasing noise intensity, this Anscombe transformation loses its accuracy \cite{dutta2021plug}, which is clearly observable in Fig.~\ref{subfig:rescomp_boxplotP} in the low SNR cases. On the contrary, De-QuIP is a straightforward method that requires no such transformation and shows good denoising performance in all situations. Similar to the Gaussian case, De-QuIP outperforms its nearest rival, PNLPCA, an NLM-based method, by a large margin.
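For reference, the Anscombe transformation mentioned above is assumed here to take its standard form
\[
\mathcal{A}(z) = 2\sqrt{z + \tfrac{3}{8}},
\]
which maps Poisson-distributed data $z$ to approximately Gaussian data with unit variance; since this approximation degrades at low counts, the loss of accuracy observed at low SNR is expected, whereas De-QuIP operates on the raw data and avoids this issue entirely.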
This demonstrates its adaptability to both high- and low-SNR images regardless of their noise statistics, which can be viewed as a strong point in several practical applications. \vspace*{-4mm} \subsection{Application to ultrasound (US) image despeckling} \label{sec:usdesp} \vspace*{-1mm} To further illustrate the potential of De-QuIP, in this subsection we investigate its ability to despeckle real medical US images. US imaging is an integral part of modern medical science, as it provides harmless, non-invasive, real-time images in an affordable way. The main issue affecting US images is a random granular pattern, the speckle, which is generated by random constructive and destructive interference between US waves. This coherent phenomenon of the acquisition system is used as a source of information about the tissues in several applications, but it can also affect the interpretability of the images by diminishing their readability. Therefore, the important task of estimating speckle-free US images, known as despeckling in the relevant literature, has been extensively explored using various schemes \cite{Yu2002speckle, Santos2017ultrasound, Achim2001novel, Lee1980digital} to enhance the readability of US images. The despeckling performance of De-QuIP is investigated on a phantom as well as on four cancerous and two non-cancerous thyroid US images acquired with a 7.5 MHz linear probe. As an extension of our preliminary results in \cite{dutta2021despeckling}, we propose a comprehensive study of this problem here. The estimated despeckled outcomes are compared with three existing algorithms: the anisotropic diffusion (AD) \cite{Yu2002speckle}, Lee \cite{Lee1980digital} and NLM \cite{tasdizen2009principal} filters. For the quantitative analysis, the contrast-to-noise ratio (CNR) and resolution loss (RL) are reported for a cancer image, with visual demonstrations in Fig.~\ref{fig:result_US}.
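The CNR used in this comparison is assumed to follow the common definition
\[
\mathrm{CNR} = \frac{\left|\mu_{t} - \mu_{b}\right|}{\sqrt{\sigma_{t}^{2} + \sigma_{b}^{2}}},
\]
where $(\mu_{t}, \sigma_{t}^{2})$ and $(\mu_{b}, \sigma_{b}^{2})$ denote the mean and variance of the pixel intensities inside the target region and a background region, respectively; the exact variant used in the experiments may differ.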
Observation shows that De-QuIP offers better image contrast (higher CNR than AD and Lee, and slightly lower than NLM, which over-smooths the images and yields poor resolution) while incurring little spatial resolution loss compared to the native US image. Note that these images were chosen arbitrarily; the quality of the results does not depend on the particular data tested. More data related to these experiments can be found in the Supp. Mat. file, Table~\ref{tab:tab_usdespeckling} and Fig.~\ref{fig:result_US_supple}. \begin{figure}[t!] \centering \includegraphics[width=.48\textwidth]{figures/toshow/us_spe_3.pdf} \vspace*{-3mm} \caption{US image despeckling results using different methods for a cancerous thyroid image. The normalized pixel intensities along the extracted red lines from the speckled and despeckled US images are shown.} \label{fig:result_US} \vspace*{-8mm} \end{figure} \vspace*{-3.2mm} \section{Conclusions and Perspectives} \label{sec:conclusion} \vspace*{-0.8mm} A novel image denoising algorithm inspired by quantum many-body theory has been developed in this paper. It offers a way to adapt the concept of interaction from many-body physics to an imaging problem. More precisely, the interactions between image patches are nothing more than a reflection of the similarity measures in a local image neighborhood and provide an efficient way to capture the local structures of real images. Through these interactions, structural details are transmitted on a patch-based adaptive basis created from the solutions of the Schr\"odinger equation of quantum mechanics, which can be exploited as filters for denoising the patches. The versatile nature of the adaptive basis, which conveys the structural similarities of the image neighborhood, extends its scope of applications beyond AWGN without modification.
A rigorous comparison with contemporary methods exemplifies the denoising ability of our De-QuIP algorithm regardless of the image nature, noise statistics and intensity. Simulation results demonstrate that the proposed method clearly outperforms other schemes and compares favorably with the best-performing ones for both image-independent and image-dependent noise models. Additionally, De-QuIP achieves much better results at a significantly lower computational cost than the earlier single-particle quantum scheme of \cite{dutta2021quantum}. To make De-QuIP more robust, automated rules are discussed in the paper to efficiently select hyperparameter values close to the optimal ones when little information is available. Furthermore, its good performance in real-life medical US image despeckling applications further shows its ability to handle multiplicative noise efficiently. Adaptation of this new quantum many-body idea opens up a new domain of future explorations. Since De-QuIP primarily has a non-local nature and significantly outperforms contemporary NLM-based methods, the first obvious perspective comes from the extension of this idea of interactions to collaborative patch denoising, as originally proposed in \cite{Dabov2007Image}. A second interesting direction would be to embed this interaction architecture into a convolutional neural network, as explored with various schemes, such as fast flexible learning methods \cite{Chen2017trainable, Zhang2018ffdnet}, residual learning \cite{Zhang2017beyond} and others, and to study imaging problems through this many-body network where each node interacts with the others. Another possibility is to use a more advanced theory of physics, such as the time-dependent Schr\"odinger equation, a fascinating perspective for further research. Further extension of the framework to three-dimensional data or RGB color images can easily be achieved by simply passing the data through separate processing channels.
Finally, other imaging applications, such as deconvolution or super-resolution, could also benefit from this interaction model. \vspace*{-4mm} \bibliographystyle{IEEEbib}
# Prove or disprove an inequality with $0 \le a_1 \le a_2 \le \ldots \le a_n$

Let $n \in \mathbb{Z}_+$ with $n \ge 3$ and $0 \le a_1 \le a_2 \le \ldots \le a_n$. Prove or disprove the inequality:

$$\large \sqrt{a_1a_2} + \sqrt{a_2a_3} + \ldots + \sqrt{a_na_1} \ge \sqrt[3]{a_1a_2a_3} + \sqrt[3]{a_2a_3a_4} + \ldots + \sqrt[3]{a_na_1a_2}$$

- Perhaps Lohwater's "Inequalities" <mediafire.com/?1mw1tkgozzu> is of help. – vonbrand, Feb 3 '13
- What are the terms on both sides? Is $\sum_{i=1}^n \sqrt{a_ia_{i+1}}$ on the LHS? – Tapu, Feb 5 '13
- For $n=3$ this follows from Muirhead's inequality. – ivan, Feb 7 '13

This is not a complete answer but hopefully shows why this may be true. Substituting $a_i \to a_i^6$, the inequality becomes $$g(n)=\sum _{i=1}^n \left(a_i^3 a_{i+1}^3-a_i^2 a_{i+1}^2 a_{i+2}^2\right)\geq0$$ where the indices are cyclic.

Now assume $n\leq6$ and make the substitution $a_i=x_1+\ldots+x_i$ for each $1\leq i\leq n$. Simplifying (using Mathematica) gives a polynomial with only one negative coefficient, the one in front of $x_1^4 x_2 x_n$, and this coefficient is always $-1$. If someone manages to prove this for all $n$, it may be enough to establish the inequality.

Here is the $n=4$ polynomial for reference: $$3 x_1^4 x_2^2+10 x_1^3 x_2^3+12 x_1^2 x_2^4+6 x_1 x_2^5+x_2^6+7 x_1^4 x_2 x_3+29 x_1^3 x_2^2 x_3+42 x_1^2 x_2^3 x_3+25 x_1 x_2^4 x_3+5 x_2^5 x_3+7 x_1^4 x_3^2+35 x_1^3 x_2 x_3^2+64 x_1^2 x_2^2 x_3^2+48 x_1 x_2^3 x_3^2+12 x_2^4 x_3^2+14 x_1^3 x_3^3+47 x_1^2 x_2 x_3^3+51 x_1 x_2^2 x_3^3+17 x_2^3 x_3^3+13 x_1^2 x_3^4+28 x_1 x_2 x_3^4+14 x_2^2 x_3^4+6 x_1 x_3^5+6 x_2 x_3^5+x_3^6-x_1^4 x_2 x_4+x_1^3 x_2^2 x_4+6 x_1^2 x_2^3 x_4+5 x_1 x_2^4 x_4+x_2^5 x_4+7 x_1^4 x_3 x_4+26 x_1^3 x_2 x_3 x_4+46 x_1^2 x_2^2 x_3 x_4+36 x_1 x_2^3 x_3 x_4+9 x_2^4 x_3 x_4+21 x_1^3 x_3^2 x_4+66 x_1^2 x_2 x_3^2 x_4+72 x_1 x_2^2 x_3^2 x_4+24 x_2^3 x_3^2 x_4+26 x_1^2 x_3^3 x_4+56 x_1 x_2 x_3^3 x_4+28 x_2^2 x_3^3 x_4+15 x_1 x_3^4 x_4+15 x_2 x_3^4 x_4+3 x_3^5 x_4+3 x_1^4 x_4^2+7 x_1^3 x_2 x_4^2+10 x_1^2 x_2^2 x_4^2+8 x_1 x_2^3 x_4^2+2 x_2^4 x_4^2+11 x_1^3 x_3 x_4^2+28 x_1^2 x_2 x_3 x_4^2+30 x_1 x_2^2 x_3 x_4^2+10 x_2^3 x_3 x_4^2+16 x_1^2 x_3^2 x_4^2+34 x_1 x_2 x_3^2 x_4^2+17 x_2^2 x_3^2 x_4^2+12 x_1 x_3^3 x_4^2+12 x_2 x_3^3 x_4^2+3 x_3^4 x_4^2+2 x_1^3 x_4^3+3 x_1^2 x_2 x_4^3+3 x_1 x_2^2 x_4^3+x_2^3 x_4^3+3 x_1^2 x_3 x_4^3+6 x_1 x_2 x_3 x_4^3+3 x_2^2 x_3 x_4^3+3 x_1 x_3^2 x_4^3+3 x_2 x_3^2 x_4^3+x_3^3 x_4^3$$
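One of the comments notes that the $n=3$ case follows from Muirhead; a minimal verification can in fact be done with plain AM-GM (this check is an editorial addition, not part of the original post). By AM-GM,

$$\sqrt{a_1a_2}+\sqrt{a_2a_3}+\sqrt{a_3a_1} \ge 3\sqrt[3]{\sqrt{a_1a_2}\cdot\sqrt{a_2a_3}\cdot\sqrt{a_3a_1}} = 3\left(a_1^2a_2^2a_3^2\right)^{1/6} = 3\sqrt[3]{a_1a_2a_3},$$

and for $n=3$ the right-hand side of the original inequality is exactly $3\sqrt[3]{a_1a_2a_3}$, since each of its three cyclic cube-root terms equals $\sqrt[3]{a_1a_2a_3}$.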
package com.xhao.androidpractice.draw;

import android.content.Context;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;
import android.support.v4.content.ContextCompat;
import android.util.AttributeSet;
import android.view.View;

import com.xhao.androidpractice.R;

/**
 * Created by WongxHao on 2016/5/25 22:35
 */
public class MyView extends View {

    private Paint mPaint;

    public MyView(Context context) {
        super(context);
        init(context);
    }

    public MyView(Context context, AttributeSet attrs) {
        super(context, attrs);
        init(context);
    }

    public MyView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        init(context);
    }

    private void init(Context context) {
        mPaint = new Paint();
        mPaint.setAntiAlias(true); // enable anti-aliasing
        mPaint.setColor(ContextCompat.getColor(context, R.color.purple)); // paint color
        mPaint.setStyle(Paint.Style.FILL); // paint style
        mPaint.setTextSize(36); // text size in px
        mPaint.setStrokeWidth(5); // stroke width
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawColor(ContextCompat.getColor(getContext(), R.color.PaleGoldenRod)); // fill the canvas background

        // Draw a filled circle
        canvas.drawCircle(150, 150, 100, mPaint);

        // Draw a rectangle
        canvas.drawRect(0, 300, 300, 450, mPaint);

        // Draw a Bitmap
        canvas.drawBitmap(BitmapFactory.decodeResource(getResources(), R.mipmap.ic_launcher), 400, 50, mPaint);

        // Draw arc regions
        // Third boolean parameter: if true, the wedge including the center point is drawn as well
        canvas.drawArc(new RectF(400, 400, 500, 500), 0, 90, true, mPaint);
        canvas.drawArc(new RectF(600, 50, 700, 150), 0, 90, false, mPaint);

        // Draw a rounded rectangle
        canvas.drawRoundRect(new RectF(600, 300, 800, 400), 15, 15, mPaint);

        // Draw an oval
        canvas.drawOval(new RectF(0, 600, 200, 900), mPaint);
    }
}
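Since MyView provides the (Context, AttributeSet) constructor, it can also be inflated from an XML layout. A minimal sketch follows; the layout file name and the enclosing FrameLayout are assumptions, not part of the original project:

```xml
<!-- res/layout/activity_main.xml (hypothetical layout file) -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Custom views must be referenced by their fully qualified class name -->
    <com.xhao.androidpractice.draw.MyView
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>
```

Alternatively, the view can be created programmatically with `new MyView(context)` and passed directly to `setContentView`.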
Goldman upgrades Netflix -- is it enough?

U.S. stocks are higher on Tuesday morning, with the benchmark S&P 500 and the narrower Dow Jones Industrial Average (DJINDICES:^DJI) up 0.44% and 0.6%, respectively, at 10:30 a.m. EDT. Stocks put in a very respectable performance during the first half of 2014, with the S&P 500 rising 6.1% over the six months ended Monday. However, in an unexpected development, long-dated Treasury bonds notched up roughly a 13% return and surpassed stocks. With the 10-year Treasury yield just above 2.5%, the only thing investors can now expect from bonds is for the principal to hold its value relative to inflation (and that is assuming you hold the bonds to maturity). Carefully chosen stocks are the best game in town if you wish to achieve an actual return.

Speaking of which, Goldman Sachs is giving one high-profile secular growth stock, Netflix (NASDAQ:NFLX), a boost this morning. Do the Masters of the Universe at the bank have this story right? Shares of Netflix are up 5.3% in morning trading, in the wake of the Goldman upgrade from neutral to buy and a price-target hike of a whopping 55% to $590. Goldman said Netflix will "drive sustained outperformance," as "subscriber growth will continue to exceed expectations." It also forecast that the streaming-video provider's addressable audience will double as it continues to add new markets. Netflix is expected to move into France and Germany in the second half of this year; the company has said it expects its international operations to become profitable this year and that international revenue will eventually exceed that from the U.S. Indeed, although Netflix already operates in 41 countries, it derives only a quarter of its revenue internationally. Growth and operating leverage in the international unit could ultimately provide a very substantial profit boost once Netflix begins to raise its prices.
I think the stock market tends to overvalue short-term growth and undervalue long-term earnings power and market position, and Netflix scores very highly on the latter two. However, with the company trading at 65 times the earnings-per-share estimate for 2015, it's tough for this value investor to get tremendously excited about Netflix shares. Outstanding company, yes -- I'm a happy, loyal customer myself -- but the stock looks a bit rich to earn the same adulation. My judgment could be tainted by a value investor's skepticism, of course, and I could ultimately be caught flatfooted in this situation if the company's long-term growth overwhelms the impact of any excess in the valuation. Either way, investors should only buy shares in Netflix with a long-term holding period in mind if they expect to properly capitalize on this secular growth story. Don't jump into the stock for a trade just because it's rising on the back of an analyst upgrade -- that's a losing proposition.
City Hall Plaza in Boston, Massachusetts, is a large, open, public space in the Government Center area of the city. The architectural firm Kallmann McKinnell & Knowles designed the plaza in 1962 to accompany Boston's new City Hall building. The multi-level, irregularly shaped plaza consists of red brick and concrete. The Government Center MBTA station is located beneath the plaza; its entrance is at the southwest corner of the plaza.

History

The siting of the plaza, the City Hall, and other structures in Government Center was the responsibility of I. M. Pei, commissioned by Edward J. Logue, then development administrator of the Boston Redevelopment Authority. The plaza and City Hall were constructed between 1963 and 1968, on the former site of Scollay Square, which despite its vibrancy and historical interest, was considered a seedy area by some. Other streets removed to make way for the plaza included Brattle Street and Cornhill. Two historic buildings formerly on Cornhill, known as the Sears' Crescent and Sears' Block, were not demolished and now face the southern edge of the plaza. The 1962 design was reportedly modeled after Piazza del Campo in Siena, Italy. Reaction to the plaza has been mixed. Some praise City Hall Plaza for being cleaner and more appealing than Scollay Square, and for the simple fact that it was built at all—with the cooperation and compromises necessary of any complex, multi-agency government construction project. Architecture critic Ada Louise Huxtable called the plaza "one of the best urban spaces of the 20th century. ... With the plaza, and specifically because of it, the Boston Government Center can now take its place among the world's great city spaces." The Cultural Landscape Foundation listed the plaza as one of its "Marvels of Modernism." Others dislike City Hall Plaza for its anti-social aesthetic and failure to address unpleasant weather effects (such as wintertime cold, wet, and wind, and summertime heat, dust, sun, and wind).
The Project for Public Spaces ranked it at the top of the organization's list of "Squares Most Dramatically in Need of Improvement in the United States" in 2005, and has placed the plaza on its "Hall of Shame." A fountain was built at the northwest corner of the plaza as part of the original design. But it was shut down in 1977 because water was leaking into the Blue Line subway tunnel below. The fountain was covered over with a concrete slab in 2006. A nonprofit group built a "Cancer Garden of Hope" at the northeast corner of the plaza in 2010.

Redesign

Since the plaza opened in 1968, ideas for improvements to the public space have been put forth by citizens, students, architects, politicians, and others. In the early 2000s, cellist Yo-Yo Ma proposed the construction of a music garden based on his Inspired by Bach series of recordings. The plan did not move forward in Boston, but was realized as the Toronto Music Garden. Boston Mayor Thomas Menino had several ideas for improvement. In 2007, Emerson College students used the virtual world Second Life to re-imagine a better design. The project was sponsored by the Boston Redevelopment Authority, among others. A 2011 study commissioned by the United States Environmental Protection Agency made recommendations for the "greening" of the plaza. A 2014 study by three landscape architecture students at West Virginia University proposed a complete redesign of the plaza. In 2015, the city launched a crowdsourcing project entitled "Re-invent City Hall Plaza," asking the public for suggestions to improve the plaza. Later in the year, the city made a number of changes to the plaza, including installing artificial grass, picnic tables, and lawn chairs, to make the space more inviting. However, the Project for Public Spaces argued that "these efforts to put what the New York Times has called a 'kelly green band-aid' on this gaping wound in the heart of the city, are insufficient," and that a more comprehensive redesign is needed.
A 2016 plan for a Ferris wheel and other improvements to the plaza did not move forward. Work had begun in 2011 on plans to redesign the plaza and the Government Center MBTA station. The new MBTA station opened in March 2016 after two years of construction. Extensive landscaping and accessibility improvements to the adjacent areas of the plaza were completed in 2017. A year-long study entitled "Rethink City Hall" was completed in 2017 by the firms Utile and Reed Hilderbrand. The final report called for major changes in City Hall and the plaza. "The Patios," a seasonal beer garden, opened in 2018 on a terrace overlooking Congress Street, and was expanded for 2019. In June 2019, the city announced the start of construction of a $70 million project to transform the plaza into a "People's Plaza" that will include "a civic space for all residents, with universal accessibility, new civic spaces for all to use, increased environmental sustainability, and critical infrastructure improvements that will ensure the Plaza is safe and accessible for all for generations to come." The plans, developed by Sasaki Associates, include schematic designs for the project. In January 2020, the Boston Landmarks Commission approved the first phase of the project. Construction began in 2020. Construction reached the half-way point in August 2021. The work includes adding "3,000 seating spaces, 12,000-square-feet of playscapes for children, and 11,000-square-feet of terraces for interactive public art." The renovated plaza opened in November 2022.

Public events

Nearby Boston Common has long been used for public events, including a 1969 peace rally that drew an estimated crowd of 100,000 and the 1979 mass celebrated by Pope John Paul II. But damage to the park from such large events led city officials to limit future events on the Common, relocating many to the paved City Hall Plaza.
Annual events held on City Hall Plaza include Boston Calling Music Festival (2013–2016), Big Apple Circus, The Jimmy Fund Scooper Bowl, the Boston Pride Festival, the African Festival of Boston, Boston GreenFest, Boston Techjam, the Puerto Rican Festival of Massachusetts, the finish line of Hub on Wheels, and the Boston Cycling Celebration. Occasional events on the plaza have included Boston's 350th birthday celebration, art exhibits such as Strandbeest; large rallies in honor of the New England Patriots, the Boston Red Sox, and the Boston Bruins; political demonstrations; an exhibit of "street pianos"; beer festivals; HUBweek, a pizza festival, the Boston Night Market, a "Roller Disco Tribute Party", and the display of a 25-foot-tall statue of King Tut. The Plaza has also been the site for many free concerts including being the original site (before moving to the Hatch Memorial Shell) for WODS (Oldies 103.3) summer concert series such as Chubby Checker and Paul Revere & the Raiders. From December 2016 until February 2017, the Plaza opened an outdoor Skating Path and Holiday Market, as part of an installation called "Boston Winter". The skating path provided skating lessons for different age groups and planned themed skating events. Boston Winter continued the following year, but did not return in December 2018 as the city prepared for a major renovation of the plaza.
Albuquerque Opens Center of "Hope" for Homeless Population
By Roz Brown, Producer

Unhoused residents who apply to live at Albuquerque's Hope Village must meet the federal definition of homelessness, have a documented mental, emotional or behavioral disorder, and have income at or below 30% of the Area Median Income. (courtesy Hope Village)

In the midst of the pandemic, hope was in the air as construction proceeded on a one-of-a-kind housing project to serve a portion of Albuquerque's homeless community. A state-of-the-art facility called Hope Village north of downtown recently opened its doors, offering 42 residents housing, along with mental, behavioral and medical services. Abby Long, program manager for Hope Village, believes there is a social benefit to having services centralized, rather than spread around the city, which is a typical situation for the unhoused. "They all signed leases, they all have their own apartments, their own agency and autonomy," Long outlined. "Because with scattered-site housing it's very isolating and can be very depressing, so we wanted to make sure that we are offering a different community."
Hope Village is a $12 million collaborative effort between the city, county and financing agencies, spearheaded by Hopeworks, a nonprofit focused on ending homelessness. Long would like to see New Mexico achieve the same success as Utah, which reduced its homeless population from 2,000 in 2005 to around 200 by 2015 by providing shelter and services. Because it is brand-new construction and not retrofitted, Long pointed out, the architecture allowed trauma-informed design, which is intended to promote greater well-being for occupants. "It's developed so that people that have experienced a lot of trauma feel safer," Long noted. "There are small little things around windows and the ways the hallways are organized and with lines of sight." Rachel Rodriguez, chief development and communications officer for HopeWorks and the Hope Center, said the project is a new approach for the city toward homelessness, because criminalization does not work. She added that before any construction began, a yearlong discussion was held with the area's neighbors to get their buy-in. "We were really fortunate that they weren't wholeheartedly 'not in my backyard,' " Rodriguez acknowledged. "They said, 'OK, if this is going to be here, we would like to have some input and to ask some questions and to offer some suggestions.' " According to HopeWorks, 63% of extremely low-income households in Albuquerque are headed by a person identifying as Black, Indigenous or a person of color, with a household income of about $24,000 per year for a family of four. Disclosure: The New Mexico Coalition to End Homelessness contributes to our fund for reporting on Domestic Violence/Sexual Assault, and Housing/Homelessness Issues.

Native Americans Moving Off the Rez Face Discrimination in MT

Native Americans in Montana face a slew of challenges to finding housing off reservations, including discrimination.
A tight housing market in the state and across the country presents its own problems for finding an affordable place to live. But Les Left Hand, program director for All Nation Youth Partner for Success in Billings, said his last name was a barrier for him and his wife when they were looking for a home, and added that eventually they used her maiden name on applications.

"When she applied for some of these places as just 'Leslie Martin' they were more open to that until they saw my name on there," he said. "Then that's when the red flags were waved and, of course, some of them were just outright not willing to talk to us."

Left Hand's organization works to prevent drug use among young people ages nine to 20, and he said people they work with, as well as his friends and family, have had similar experiences. Rental costs like security deposits and first and last months' rent can be challenges as well. Census data finds more than a quarter of Native Americans live in poverty.

Left Hand said young people especially find it hard to move off their reservations because they are not as financially established.

"It's frustrating for them and that's when they give up and go back home and have to live in a tight, cramped household again because we don't really turn away any of the family members that do come back," Left Hand said. "We just accommodate until they can find a better resource or a different avenue."

Analyses of housing issues for Native Americans are scarce, but a study from before the pandemic found 16% of Native Americans reported overcrowding, compared with 2% of the U.S. population as a whole. Left Hand said organizations like the Native American Development Corporation can help people who feel they have been discriminated against, or who are having trouble looking for housing. Most of all, he encourages people to be persistent.
"I'm always willing to help people out and try to steer them in the right direction and then just give them the hope that there is somebody out there that might have an opportunity to open a door and then they succeed in that area," he said. "But then, like I tell them, don't give up so easily."

Source: Systemic Inequality: Displacement, Exclusion, and Segregation, American Progress, 8/7/2019

Grant Funding Aims to Alleviate Virginia Eviction Crisis

About $3 million has been awarded to Virginia groups helping people facing evictions. The Virginia Eviction Reduction Pilot (VERP) Program is designed to find effective services for people facing housing instability.

According to the RVA Eviction Lab's third-quarter report, eviction filings increased 86% from the previous quarter, with Charlottesville seeing some of the largest increases. Much of this is due to pandemic-related renter protections being lifted.

Christie Marra, director of housing advocacy with the Virginia Poverty Law Center, said that while this third round of funding is a much-needed financial boost, it is not enough.

"The programs that are getting the funding now are not getting enough to meet the need in their area," Marra said. "And so, while the eviction rates for every locality that has a VERP-funded program serving it did go down, there is a lot of room for improvement."

She added that in the past, one of the groups that received funding went through it in two months.

As the Virginia General Assembly's legislative session gets under way, tenants' rights legislation is one issue at the forefront of legislators' minds. One such piece of legislation, the Virginia Residential Landlord and Tenant Act, seeks to increase the grace period for late rent, and would allow tenants to break leases when they move in and find a unit is uninhabitable.

While these grants are working to alleviate the eviction crisis, Marra hopes proposals for other tenant assistance programs will be taken up as well.
One proposal in particular is the Virginia Housing Stability Fund, which would be a state housing voucher program. Marra acknowledged it won't come cheap, but said the program could aid numerous families.

"What we're asking for is for 90 million, and this would be a one-time ask for this pilot program to serve 5,000 households over the period of four years," she said.

In addition to gathering data, this program would also provide longer-term financial support than most VERP-funded programs. The program hopes to bridge the gap between the shortage of affordable housing and the numerous Virginians who qualify for the federal housing voucher program but can't receive it due to limited federal funding.

Disclosure: Virginia Poverty Law Center contributes to our fund for reporting on Civil Rights, Housing/Homelessness, Poverty Issues and Social Justice.

Sources: Eviction Lab Report, VA Eviction Lab, 2022; Eviction Lab Third Quarter Report, RVA Eviction Lab Staff, 10/2022; 2023 Virginia Eviction Reduction Pilot Program Awarded Projects, VA Dept. of Housing and Urban Development, 2023; VA Residential Landlord and Tenant Act; State Housing Stability Fund Budget Amendment, VA Poverty Law Center, 2023

KY's Housing Shortage Worsened by Natural Disasters

Kentucky is facing a serious housing shortage, and the past few years of deadly floods and tornadoes have worsened the situation. Advocates want lawmakers to commit more than $300 million to the Affordable Housing Emergency Action Recovery Trust Fund, or "AHEART." More than 800 eastern Kentucky residents remain temporarily housed in state parks and travel trailers after last summer's flooding.

Maggie Riden, director of advocacy for the group FAHE, which provides lending services in the Appalachian region, said the state has reached a housing tipping point.

"Kentucky, like many states in the Appalachian region, has an aging housing stock," said Riden. "Many homes were built well before the 1970s.
So we're talking about homes that need substantial repair and upkeep."

She added that AHEART funding would be used to construct or rehab 1,500 new homes.

A report released last year by the Federal Home Loan Mortgage Corporation, commonly known as Freddie Mac, found that nationwide, communities are short more than 3 million housing units, up from 2.5 million in 2018.

Riden pointed out that the state is sitting on a substantial budget surplus and rainy day fund, and said using that money to build homes will lessen the pressure of future disasters, keeping more Kentucky families in safe, quality housing.

"So," said Riden, "how are we getting resources, state resources, on the ground that are flexible and are able to get folks out of temporary housing or shelter into at least intermediate housing and shelter while we rebuild?"

Adrienne Bush, executive director of the Homeless and Housing Coalition of Kentucky, said climate change continues to put more communities on the frontlines of weather disasters, and that states need to boost local resources in order to respond immediately, noting the federal government's grant funding for disaster recovery isn't permanently authorized. She added that emergency outside relief from FEMA only goes so far.

"FEMA is not designed to make people whole," said Bush. "It's not designed to completely replace everything that people lost in terms of their housing or their livelihood or any of their other needs. It is designed to produce the bare minimum in financial assistance."

Research shows a lack of affordable housing is bad for business. In the nation's top 100 metro areas, housing shortages are stalling economic growth.

Disclosure: Homeless and Housing Coalition of Kentucky contributes to our fund for reporting on Budget Policy & Priorities, Housing/Homelessness, Poverty Issues and Urban Planning/Transportation.
Sources: AHEART: A Comprehensive Housing Recovery Strategy, FAHE et al., 2023; Housing Supply: A Growing Deficit, Federal Home Loan Mortgage Corporation, 5/7/21; Housing Affordability and Economic Growth, Anthony/Housing Policy Debate, 5/16/22; The State of the Nation's Housing 2022, Harvard Joint Center for Housing Studies, 2022
#ifndef _ACTIVEMQ_UTIL_CMSPROVIDER_H_
#define _ACTIVEMQ_UTIL_CMSPROVIDER_H_

#include <string>
#include <memory>

#include <cms/ConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/MessageConsumer.h>
#include <cms/MessageProducer.h>
#include <cms/MessageListener.h>
#include <cms/Session.h>
#include <cms/Destination.h>

#include <decaf/io/Closeable.h>

namespace activemq {
namespace util {

    class CMSProvider : decaf::io::Closeable {
    private:

        std::string brokerURL;
        cms::Session::AcknowledgeMode ackMode;
        std::string username;
        std::string password;
        std::string clientId;
        std::string destinationName;
        bool topic;
        bool durable;
        std::string subscription;

        std::auto_ptr<cms::ConnectionFactory> connectionFactory;
        std::auto_ptr<cms::Connection> connection;
        std::auto_ptr<cms::Session> session;
        std::auto_ptr<cms::MessageConsumer> consumer;
        std::auto_ptr<cms::MessageProducer> producer;
        std::auto_ptr<cms::MessageProducer> noDestProducer;
        std::auto_ptr<cms::Destination> destination;
        std::auto_ptr<cms::Destination> tempDestination;

    public:

        CMSProvider(const std::string& brokerURL,
                    cms::Session::AcknowledgeMode ackMode = cms::Session::AUTO_ACKNOWLEDGE);

        CMSProvider(const std::string& brokerURL,
                    const std::string& destinationName,
                    const std::string& subscription,
                    cms::Session::AcknowledgeMode ackMode = cms::Session::AUTO_ACKNOWLEDGE);

        virtual ~CMSProvider();

        virtual void close();

        std::string getBrokerURL() const { return this->brokerURL; }
        void setBrokerURL(const std::string& brokerURL) { this->brokerURL = brokerURL; }

        void setDestinationName(const std::string name) { this->destinationName = name; }
        std::string getDestinationName() const { return this->destinationName; }

        void setSubscription(const std::string name) { this->subscription = name; }
        std::string getSubscription() const { return this->subscription; }

        void setTopic(bool value) { this->topic = value; }
        bool isTopic() const { return this->topic; }

        void setDurable(bool value) { this->durable = value; }
        bool isDurable() const { return this->durable; }

        void setAckMode(cms::Session::AcknowledgeMode ackMode) { this->ackMode = ackMode; }
        cms::Session::AcknowledgeMode getAckMode() const { return this->ackMode; }

    public:

        /**
         * Initializes a CMSProvider with the Login data for the session that
         * this provider is managing.  Once called a new Connection to the broker
         * is made and will remain open until a reconnect is requested or until
         * the CMSProvider instance is closed.
         */
        virtual void initialize(const std::string& username = "",
                                const std::string& password = "",
                                const std::string& clientId = "");

        /**
         * Forces a reconnect of the Connection and then of the Session and its
         * associated resources.
         */
        virtual void reconnect();

        /**
         * Forces a Recreation of a Session and any of its Resources.
         */
        virtual void reconnectSession();

        /**
         * Unsubscribes a durable consumer if one has been created and the chosen
         * wireformat supports it.  The consumer is closed as a result; any calls to
         * it after calling this method will result in an error.
         */
        virtual void unsubscribe();

        /**
         * Returns the ConnectionFactory object that this Provider has allocated.
         */
        virtual cms::ConnectionFactory* getConnectionFactory();

        /**
         * Returns the Connection object that this Provider has allocated.
         */
        virtual cms::Connection* getConnection();

        /**
         * Returns the Session object that this Provider has allocated.
         */
        virtual cms::Session* getSession();

        /**
         * Returns the MessageConsumer object that this Provider has allocated.
         */
        virtual cms::MessageConsumer* getConsumer();

        /**
         * Returns the MessageProducer object that this Provider has allocated.
         */
        virtual cms::MessageProducer* getProducer();

        /**
         * Returns the MessageProducer object that this Provider has allocated that has
         * no assigned Destination; messages sent must be assigned one at send time.
         */
        virtual cms::MessageProducer* getNoDestProducer();

        /**
         * Returns the Destination object that this Provider has allocated.
         */
        virtual cms::Destination* getDestination();

        /**
         * Returns the Temporary Destination object that this Provider has allocated.
         */
        virtual cms::Destination* getTempDestination();

        /**
         * Destroys a Destination at the Broker side, freeing the resources associated with it.
         */
        virtual void destroyDestination(const cms::Destination* destination);

    };

}}

#endif /*_ACTIVEMQ_UTIL_CMSPROVIDER_H_*/
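The header above owns every CMS resource through a smart pointer so that close() or destruction releases each one exactly once. The following is a minimal, self-contained sketch of that ownership pattern. The Connection and Session types here are hypothetical stand-ins for the cms:: interfaces (so the sketch compiles without ActiveMQ-CPP), it uses std::unique_ptr where the original header uses the pre-C++11 std::auto_ptr (deprecated and later removed), and resources are created lazily on first access for brevity, whereas the real class wires them up in initialize().

```cpp
#include <memory>
#include <string>

// Hypothetical stand-ins for cms::Connection / cms::Session, only so this
// sketch is compilable on its own.
struct Connection {
    bool started = false;
    void start() { started = true; }
};
struct Session {};

// Sketch of CMSProvider's ownership pattern: each resource lives in a
// smart pointer; accessors hand out non-owning raw pointers.
class MiniProvider {
public:
    explicit MiniProvider(const std::string& url) : brokerURL(url) {}

    // Create and start the connection on first use (the real class does
    // this in initialize()), then return a non-owning pointer.
    Connection* getConnection() {
        if (!connection) {
            connection.reset(new Connection());
            connection->start();
        }
        return connection.get();
    }

    // A session needs a live connection, so getConnection() runs first.
    Session* getSession() {
        if (!session) {
            getConnection();
            session.reset(new Session());
        }
        return session.get();
    }

private:
    std::string brokerURL;
    std::unique_ptr<Connection> connection;
    std::unique_ptr<Session> session;
};
```

The payoff of this design is that MiniProvider (like CMSProvider) needs no hand-written cleanup path: when the provider goes out of scope, the unique_ptr members destroy the session and connection in reverse declaration order.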
On solutions of semilinear upper diagonal infinite systems of differential equations

Source: https://www.aimsciences.org/article/doi/10.3934/dcdss.2019013

* Corresponding author: Józef Banaś

Dedicated to Professor Vicentiu Radulescu on the occasion of his 60th anniversary

The goal of the paper is to investigate the existence of solutions for semilinear upper diagonal infinite systems of differential equations. We will look for solutions of the mentioned infinite systems in a Banach tempered sequence space. In our considerations we utilize the technique associated with the Hausdorff measure of noncompactness and some existence results from the theory of ordinary differential equations in abstract Banach spaces.

Mathematics Subject Classification: Primary: 34G20; Secondary: 47H08.
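As background for the abstract above (this is the standard definition, not a formula quoted from the paper itself): for a bounded subset $A$ of a Banach space $X$, the Hausdorff measure of noncompactness is

```latex
\chi\left( A\right) =\inf\left\{ \varepsilon>0 : A \text{ admits a finite } \varepsilon\text{-net in } X\right\},
```

so that $\chi(A)=0$ if and only if $A$ is relatively compact. Fixed-point arguments built on $\chi$ replace compactness hypotheses in existence proofs of the kind the abstract describes.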
# Create new tickets based on existing tickets

Source: https://bitbucket.org/boldprogressives/trac-newticketlikethis-plugin/overview

## Description

The NewTicketLikeThisPlugin adds a "Clone" button to existing tickets, which lets you create a new ticket whose fields derive from the original ticket if you have the appropriate permission.

It is based on the tracopt.ticket.clone.ticketclonebutton extension that ships with Trac core. Unlike that extension, the NewTicketLikeThisPlugin defines and consumes a pluggable interface for implementing custom policies to determine the way in which a new ticket is derived from the original. This allows flexible, customized business logic to be provided based on the needs and workflows of your team. Also, the NewTicketLikeThisPlugin allows you to configure the permission required to clone a ticket, whereas the core ticketclonebutton hard-codes the TICKET_ADMIN permission.

Two policies are provided by default, in the newticketlikethis.policies module:

* SimpleTicketCloner mimics the behavior of the core tracopt.ticket.clone.ticketclonebutton extension: all fields from the original ticket are cloned, and the "summary" and "description" fields are modified to denote the ticket that they were cloned from.
* DerivedFieldsTicketCloner can ignore certain fields entirely based on a configuration setting; can derive new field values from the old ticket using Genshi templates, also through configuration; and clones all remaining fields from the original ticket verbatim.

More complex policies might implement custom logic for deriving new ticket values based on the values of the existing ticket's fields, or use alternate cloning policies based on the ticket's type.

## Configuration

To use the plugin, install it in your Trac environment and enable its components in trac.ini:

    [components]
    newticketlikethis.* = enabled

By default this will add the "Clone" button to the ticket view, and will use the SimpleTicketCloner component to clone your tickets. The TICKET_ADMIN permission will be required for cloning tickets.

### Choosing a policy

To use a different ticket-cloning policy, make sure to enable any necessary components and then set the newticketlikethis.ticket_cloner option in trac.ini to reference the component's name like so:

    [newticketlikethis]
    ticket_cloner = ExcludedFieldsTicketCloner

### Using an alternate form handler

By default the "Clone" button will submit a POST request to the current Trac environment's /newticket handler. You can specify an alternate form submission (such as a different Trac instance's /newticket handler) with:

    [newticketlikethis]
    ticket_clone_form_action = http://trac.example.com/main/newticket
    ticket_clone_form_method = GET

Either or both of these options may be omitted.

### Configuring permissions

By default the "Clone" button only appears if the user has the TICKET_ADMIN permission. You can change the required permission using the newticketlikethis.ticket_clone_permission option:

    [newticketlikethis]
    ticket_clone_permission = TICKET_CREATE

### DerivedFieldsTicketCloner

If enabled, the DerivedFieldsTicketCloner will look for an additional configuration option newticketlikethis.excluded_fields to determine which fields to exclude. This should be a comma-separated list of ticket fields. By default, no fields are excluded.

It will also look for an option newticketlikethis.derived_fields to determine how to derive new field values from the existing ticket. This should be a comma-separated list of Genshi templates mapped to new field values.

For example, you might use a trac.ini configuration like:

    [newticketlikethis]
    ticket_cloner = DerivedFieldsTicketCloner
    excluded_fields = description, summary, reporter
    derived_fields = $ticket.reporter->cc, milestone:$ticket.milestone component:$ticket.component->keywords

This would allow you to create cloned tickets with the old ticket's reporter CCed; the old ticket's milestone and component namespaced and set as keywords on the new ticket; the new ticket's description, summary and reporter left blank; and all other fields from the old ticket transferred verbatim to the new ticket.

## Customization

It is easy to implement your own custom policies as well. Look at the code in newticketlikethis.policies for inspiration.

If you implement a custom policy that you would like to share, feel free to submit it as a patch, so that the NewTicketLikeThisPlugin can ship with a strong library of reusable cloning policies.
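As a sketch of the kind of custom policy this invites, here is a minimal, self-contained field-derivation function in the spirit of SimpleTicketCloner. It is not the plugin's actual code: the real pluggable interface, its method names, Trac's internal ticket representation, and the exact wording of the clone note all differ; this only illustrates the derivation logic a policy would implement.

```python
def clone_fields(ticket_id, fields, excluded=()):
    """Derive the field dict for a cloned ticket.

    Copies every field except those in `excluded`, then rewrites
    `summary` and `description` to note which ticket they came from,
    roughly the behaviour described above for SimpleTicketCloner.
    The clone-note wording here is illustrative, not the plugin's.
    """
    new = {k: v for k, v in fields.items() if k not in excluded}
    new["summary"] = "%s (cloned)" % fields.get("summary", "")
    new["description"] = "Copied from ticket #%s:\n----\n%s" % (
        ticket_id, fields.get("description", ""))
    return new
```

A real policy would wrap logic like this in a component implementing the plugin's cloning interface, reading `excluded` from the newticketlikethis.excluded_fields option rather than taking it as an argument.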
\section{Introduction}

Confinement-deconfinement phase transition is an important and challenging problem in QCD. Near the phase transition region, the interaction becomes very strong so that the conventional perturbation method of QFT does not work. For a long time, lattice QCD has been the only method to study strongly interacting QCD. Although lattice QCD works well at zero density, it encounters the sign problem when considering finite quark density. See \cite{1009.4089,1203.5320} for a review of the current status of lattice QCD. Recently, using the idea of AdS/CFT duality from string theory, one is able to study QCD in the strongly coupled region by studying its weakly coupled dual gravitational theory, i.e. holographic QCD \cite{0306018,0311270,0304032,0611099,0412141,0507073,0501128,0602229,0801.4383,0806.3830,0804.0434,1005.4690,1006.5461,1012.1864,1103.5389,1108.2029,1209.4512}.

In \cite{1301.0385}, we considered an Einstein-Maxwell-scalar system and studied its holographic dual QCD model. We obtained a family of analytic black hole solutions by the potential reconstruction method. By studying the thermodynamics of the black hole backgrounds, we found a phase transition between two black holes with different sizes. We interpreted this black hole to black hole phase transition as the confinement-deconfinement phase transition of heavy quarks in the dual holographic QCD model.

On the other hand, the heavy quark potential is an important observable relevant to confinement. It has been measured in great detail in lattice simulations \cite{2001} and the results remarkably agree with the Cornell potential \cite{Cornel}
\begin{equation}
V\left( r\right) =-\dfrac{\kappa}{r}+\sigma_{s}r+C,
\end{equation}
which is dominated by the Coulomb potential at short distances and by the linear potential at large distances, with the coefficient $\sigma_{s}$ defined as the string tension.
In QCD, the heavy quark potential can be read off from the expectation value of the Wilson loop along a time-like closed path $C$,
\begin{equation}
\left\langle W\left( C\right) \right\rangle \sim e^{-tV\left( r\right) }.
\end{equation}
In string/gauge duality, the expectation value of the Wilson loop is given by \cite{9803002}
\begin{equation}
\left\langle W\left( C\right) \right\rangle =\int DXe^{-S_{NG}},
\end{equation}
where $S_{NG}$ is the string world-sheet action bounded by the loop $C$ at the boundary of an AdS space. In \cite{9803135,9803137,0604204,0610135,0611304,0701157,0807.4747,1004.1880,1008.3116,1201.0820,1206.2824,1401.3635}, a probe open string in an AdS background was considered. The two ends of the open string are attached to the boundary of the AdS background and behave as a quark-antiquark pair. Thus the open string can be interpreted as a bound state, i.e. a meson state, in QCD. By studying the dynamics of the open string, the expectation value of the Wilson loop can be obtained, and so can the heavy quark potential. From the behavior of the heavy quark potential, one is able to study the process in which an open string breaks into two open strings with their two ends attached to the AdS boundary and the black hole horizon, respectively. This string breaking phenomenon describes how a meson melts into a pair of free quark and antiquark in the dual QCD.

In this work, we put probe open strings in the background obtained in \cite{1301.0385}. We study the dynamics of the open strings to obtain the expectation value of the Wilson loop as well as the heavy quark potential. In \cite{1301.0385}, various black hole phases at different temperatures were obtained. In this work, we find three open string configurations for the various black hole phases, as shown in figure \ref{string-black hole}. According to AdS/QCD duality, these different open string configurations correspond to the confinement and deconfinement phases in QCD, respectively.
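For later reference we record how the potential is extracted from the Wilson loop relations quoted earlier in this section; this limiting formula is standard in the Wilson loop literature rather than a new result of this paper:
\begin{equation}
V\left( r\right) =-\lim_{t\rightarrow\infty}\dfrac{1}{t}\ln\left\langle W\left( C\right) \right\rangle .
\end{equation}
Evaluating $\left\langle W\right\rangle$ by the saddle point $e^{-S_{NG}}$ of the minimal world-sheet, $V(r)$ is then read off from the $t$-linear part of the regularized on-shell Nambu-Goto action, where, as is standard, the divergent self-energy of two straight strings stretching to the boundary is subtracted.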
This supports our preferred interpretation that the black hole to black hole phase transition in the bulk corresponds to the confinement-deconfinement phase transition of heavy quarks in the dual holographic QCD in \cite{1301.0385}. Nevertheless, we found that the phase transition temperatures obtained from the black hole phases and from the open string configurations are not exactly the same. In fact, we will argue that neither the black hole phases nor the string configurations alone can explain the full phase structure of the confinement-deconfinement phase transition in QCD. The string configurations tell us whether the system is in the confinement or deconfinement phase, while the black hole phase transition determines the location of the phase boundary. By combining the two effects together in this paper, we find a more natural picture to describe the phase diagram of the confinement-deconfinement transition for heavy quarks in QCD. Furthermore, in the deconfinement phase, we also study the meson melting process by studying the process of an open string breaking into two open strings.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.727in,width=5.74in]{string-black-hole.eps}
\end{center}
\caption{Three configurations for open strings in a black hole background. There is no black hole in case (a); open strings are always connected, with their two ends on the AdS boundary. In the small black hole case (b), open strings cannot exceed a certain distance from the boundary and are still connected, with their two ends on the AdS boundary. In the large black hole case (c), open strings whose two ends are far enough apart will break into two open strings, with their ends attached to the AdS boundary and the black hole horizon, respectively.
\label{string-black hole}}
\end{figure}
The paper is organized as follows. In section II, we consider an Einstein-Maxwell-scalar system.
We review how to get the analytic solutions in \cite{1301.0385} by the potential reconstruction method and study the phase structure of these backgrounds. In section III, we add probe open strings to our black hole background to study their various configurations. We calculate the expectation value of the Wilson loop and study the heavy quark potential. In section IV, by combining the background phase structure and the open string breaking effect, we obtain the phase diagram for the confinement-deconfinement transition. We further study the meson melting process in the deconfinement phase. We conclude our results in section V.

\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}

\section{Einstein-Maxwell-Scalar System}

In this section, we review the black hole solution and its phase structure obtained in \cite{1301.0385}.

\subsection{Background}

We consider a 5-dimensional Einstein-Maxwell-scalar system with probe matters. The action of the system has two parts, the background part and the matter part,
\begin{equation}
S=S_{b}+S_{m}.
\end{equation}
In the Einstein frame, the background action includes the metric $g_{\mu\nu}$, a Maxwell field $A_{\mu}$ and a neutral scalar field $\phi$, while the matter action includes a massless gauge field ${A}_{\mu}^{V}$, which we will treat as a probe, describing the degrees of freedom of vector mesons on the 4d boundary:
\begin{align}
S_{b} & =\dfrac{1}{16\pi G_{5}}\int d^{5}x\sqrt{-g}\left[ R-\frac{f\left( \phi\right) }{4}F^{2}-\dfrac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V\left( \phi\right) \right] ,\label{action-b}\\
S_{m} & =-\dfrac{1}{16\pi G_{5}}\int d^{5}x\sqrt{-g}{\frac{f\left( \phi\right) }{4}}F_{V}^{2}, \label{action-m}
\end{align}
where $G_{5}$ is the 5-dimensional Newton constant, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the field strength of the Maxwell field, $f\left( \phi\right) $ is the gauge kinetic function associated to the Maxwell field $A_{\mu}$ and $V\left( \phi\right) $ is the potential of the scalar field $\phi$. The equations of motion derived from the above action are
\begin{align}
\nabla^{2}\phi & =\frac{\partial V}{\partial\phi}+\frac{1}{4}\frac{\partial f}{\partial\phi}\left( F^{2}+{F_{V}^{2}}\right) ,\text{ \ }\nabla_{\mu}\left[ f(\phi)F^{\mu\nu}\right] =0,\text{ \ }\nabla_{\mu}\left[ f(\phi)F_{V}^{\mu\nu}\right] =0,\\
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R & =\frac{f(\phi)}{2}\left( F_{\mu\rho}F_{\nu}^{\rho}-\frac{1}{4}g_{\mu\nu}F^{2}\right) +\frac{1}{2}\left[ \partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\left( \partial\phi\right) ^{2}-g_{\mu\nu}V\right] .
\end{align}
To solve the background of the Einstein-Maxwell-scalar system, we first turn off the probe gauge field ${A}_{\mu}^{V}$ and consider the ansatz for the metric, scalar field and Maxwell field
\begin{align}
ds^{2} & =\dfrac{e^{2A\left( z\right) }}{z^{2}}\left[ -g(z)dt^{2}+\frac{dz^{2}}{g(z)}+d\vec{x}^{2}\right] ,\label{metric}\\
\phi & =\phi\left( z\right) \text{, \ \ }A_{\mu}=A_{t}\left( z\right) , \label{ansatz}
\end{align}
which leads to the following equations of motion for the background fields:
\begin{align}
\phi^{\prime\prime}+\left( \frac{g^{\prime}}{g}+3A^{\prime}-\dfrac{3}{z}\right) \phi^{\prime}+\left( \frac{z^{2}e^{-2A}A_{t}^{\prime2}f_{\phi}}{2g}-\frac{e^{2A}V_{\phi}}{z^{2}g}\right) & =0,\label{eom-phi}\\
A_{t}^{\prime\prime}+\left( \frac{f^{\prime}}{f}+A^{\prime}-\dfrac{1}{z}\right) A_{t}^{\prime} & =0,\label{eom-At}\\
A^{\prime\prime}-A^{\prime2}+\dfrac{2}{z}A^{\prime}+\dfrac{\phi^{\prime2}}{6} & =0,\label{eom-A}\\
g^{\prime\prime}+\left( 3A^{\prime}-\dfrac{3}{z}\right) g^{\prime}-e^{-2A}z^{2}fA_{t}^{\prime2} & =0,\label{eom-g}\\
A^{\prime\prime}+3A^{\prime2}+\left( \dfrac{3g^{\prime}}{2g}-\dfrac{6}{z}\right) A^{\prime}-\dfrac{1}{z}\left( \dfrac{3g^{\prime}}{2g}-\dfrac{4}{z}\right) +\dfrac{g^{\prime\prime}}{6g}+\frac{e^{2A}V}{3z^{2}g} & =0. \label{eom-V}
\end{align}
To solve the above equations of motion, we need to specify the following boundary and physical conditions:
\begin{enumerate}
\item Near the boundary $z\rightarrow0$, we require the metric in the string frame to be asymptotically $AdS_{5}$;
\item At the horizon $z=z_{H}$, we impose the regularity condition $A_{t}\left( z_{H}\right) =g\left( z_{H}\right) =0$;
\item The vector meson spectrum should satisfy linear Regge trajectories at zero temperature and zero density \cite{0507246}.
\end{enumerate}
With the above conditions, the equations of motion (\ref{eom-phi}-\ref{eom-V}) can be analytically solved as
\begin{align}
\phi^{\prime}\left( z\right) & =\sqrt{-6\left( A^{\prime\prime}-A^{\prime2}+\dfrac{2}{z}A^{\prime}\right) },\label{phip-A}\\
A_{t}\left( z\right) & =\mu\dfrac{e^{cz^{2}}-e^{cz_{H}^{2}}}{1-e^{cz_{H}^{2}}},\label{At-A}\\
g\left( z\right) & =1+\dfrac{1}{\int_{0}^{z_{H}}y^{3}e^{-3A}dy}\left[ \dfrac{2c\mu^{2}}{\left( 1-e^{cz_{H}^{2}}\right) ^{2}}\left\vert
\begin{array}[c]{cc}
\int_{0}^{z_{H}}y^{3}e^{-3A}dy & \int_{0}^{z_{H}}y^{3}e^{-3A}e^{cy^{2}}dy\\
\int_{z_{H}}^{z}y^{3}e^{-3A}dy & \int_{z_{H}}^{z}y^{3}e^{-3A}e^{cy^{2}}dy
\end{array}
\right\vert -\int_{0}^{z}y^{3}e^{-3A}dy\right] ,\\
V\left( z\right) & =-3z^{2}ge^{-2A}\left[ A^{\prime\prime}+3A^{\prime2}+\left( \dfrac{3g^{\prime}}{2g}-\dfrac{6}{z}\right) A^{\prime}-\dfrac{1}{z}\left( \dfrac{3g^{\prime}}{2g}-\dfrac{4}{z}\right) +\dfrac{g^{\prime\prime}}{6g}\right] , \label{V-A}
\end{align}
where $\mu\equiv A_{t}\left( 0\right) $ is defined as the chemical potential. The solution (\ref{phip-A}-\ref{V-A}) depends on the warp factor $A\left( z\right) $. The choice of $A\left( z\right) $ is arbitrary provided it satisfies the boundary conditions. To be concrete, we fix the warp factor $A\left( z\right) $ in our solution in a simple form,
\begin{equation}
A\left( z\right) =-\dfrac{c}{3}z^{2}-bz^{4}, \label{A}
\end{equation}
where the parameters $b$ and $c$ will be determined later.
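As a quick numerical cross-check of the closed-form solution above (our own illustration, not part of the original derivation), the following standalone Python sketch evaluates the blackening function $g(z)$ with simple trapezoidal quadrature and verifies the boundary conditions $g(0)=1$ and $g(z_H)=0$. The parameter values $b=0.273$, $c=1.16$ are the fits quoted below in the text; the values of $\mu$ and $z_H$ are arbitrary samples.

```python
import math

b, c = 0.273, 1.16  # GeV^4, GeV^2 (fits quoted in the text)

def A(z):
    # warp factor A(z) = -(c/3) z^2 - b z^4
    return -(c / 3.0) * z**2 - b * z**4

def _quad(f, lo, hi, n=2000):
    # simple trapezoidal rule (works for lo > hi as well)
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

def g(z, zH, mu):
    # blackening function from the potential-reconstruction solution
    w  = lambda y: y**3 * math.exp(-3.0 * A(y))
    wc = lambda y: w(y) * math.exp(c * y**2)
    D = _quad(w, 0.0, zH)
    det = (D * _quad(wc, zH, z) - _quad(wc, 0.0, zH) * _quad(w, zH, z))
    pref = 2.0 * c * mu**2 / (1.0 - math.exp(c * zH**2))**2
    return 1.0 + (pref * det - _quad(w, 0.0, z)) / D

zH, mu = 1.2, 0.5
print(g(0.0, zH, mu))  # boundary: the 2x2 determinant vanishes, g -> 1
print(g(zH, zH, mu))   # horizon: g -> 0 by construction
```

At $z=z_H$ both entries of the second row of the determinant vanish, and at $z=0$ the two rows become proportional, which is how the analytic solution enforces the regularity conditions.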
\subsection{Phase Structure of the Background}
With the background (\ref{metric}), one can calculate the Bekenstein-Hawking entropy
\begin{equation}
s=\dfrac{e^{3A\left( z_{H}\right) }}{4z_{H}^{3}}, \label{entropy}
\end{equation}
and the Hawking temperature
\begin{equation}
T=\dfrac{z_{H}^{3}e^{-3A\left( z_{H}\right) }}{4\pi\int_{0}^{z_{H}}y^{3}e^{-3A}dy}\left[ 1-\dfrac{2c\mu^{2}\left( e^{cz_{H}^{2}}\int_{0}^{z_{H}}y^{3}e^{-3A}dy-\int_{0}^{z_{H}}y^{3}e^{-3A}e^{cy^{2}}dy\right) }{\left( 1-e^{cz_{H}^{2}}\right) ^{2}}\right] .
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in,width=3in]{T1.eps}\hspace*{0.5cm}
\includegraphics[height=2in,width=2.9in]{T2.eps}\vskip -0.05cm
\hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )}
\end{center}
\caption{The temperature vs. horizon at the chemical potentials $\mu=0,0.5,0.714,1GeV$. A rectangular region in (a) is enlarged in (b) to show the detailed structure. For $\mu>\mu_{c}$, the temperature decreases monotonically to zero, while for $\mu<\mu_{c}$ the temperature has a local minimum. At $\mu_{c}\simeq0.714GeV$, the local minimum reduces to an inflection point.
\label{temperature}}
\end{figure}
The temperature $T$ vs. horizon $z_{H}$ at different chemical potentials is plotted in figure \ref{temperature}. At $\mu=0$, the temperature has a global minimum $T_{\min}\left( 0\right) $ at $z_{H}=z_{\min}\left( 0\right) $. The black hole solution is thermodynamically stable only for $z_{H}<z_{\min}\left( 0\right) $ and is unstable for $z_{H}>z_{\min}\left( 0\right) $. Below the temperature $T_{\min}\left( 0\right) $ there is no black hole solution, and we expect a Hawking-Page phase transition to happen at a temperature $T_{HP}\left( 0\right) \gtrsim T_{\min}\left( 0\right) $, where the black hole dissolves into a thermal gas background.
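To make the non-monotonic behavior of $T(z_H)$ concrete, the short Python sketch below (our own illustration, using trapezoidal quadrature and the fitted values $b\simeq0.273$, $c\simeq1.16$ quoted later in the text) evaluates the temperature formula above at $\mu=0$ and exhibits the global minimum between the small- and large-horizon branches.

```python
import math

b, c = 0.273, 1.16  # GeV^4, GeV^2 (fits quoted in the text)

def A(z):
    return -(c / 3.0) * z**2 - b * z**4

def _quad(f, lo, hi, n=4000):
    # trapezoidal rule
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

def T(zH, mu=0.0):
    # Hawking temperature of the analytic background
    w  = lambda y: y**3 * math.exp(-3.0 * A(y))
    wc = lambda y: w(y) * math.exp(c * y**2)
    I, Ic = _quad(w, 0.0, zH), _quad(wc, 0.0, zH)
    bracket = 1.0 - 2.0 * c * mu**2 * (math.exp(c * zH**2) * I - Ic) \
              / (1.0 - math.exp(c * zH**2))**2
    return zH**3 * math.exp(-3.0 * A(zH)) / (4.0 * math.pi * I) * bracket

# at mu = 0, T is large for both small and large horizons,
# with a global minimum in between (cf. figure: T vs z_H)
print(T(0.3), T(1.0), T(2.0))
```

The small-$z_H$ branch reproduces the AdS behavior $T\sim1/(\pi z_H)$, while the warp factor drives the large-$z_H$ growth.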
For $0<\mu<\mu_{c}$, the temperature has a local minimum/maximum $T_{\min}\left( \mu\right) /T_{\max}\left( \mu\right) $ at $z_{H}=z_{\min}\left( \mu\right) /z_{\max}\left( \mu\right) $ and decreases to zero at a finite horizon size. The black holes between $z_{\min}\left( \mu\right) $ and $z_{\max}\left( \mu\right) $ are thermodynamically unstable. There are two stable branches, with $z_{H}<z_{\min}\left( \mu\right) $ and $z_{H}>z_{\max}\left( \mu\right) $. We expect a similar Hawking-Page phase transition to happen at a temperature $T_{HP}\left( \mu\right) \gtrsim T_{\min}\left( \mu\right) $. Nevertheless, since thermodynamically stable black hole solutions exist even below $T_{\min}\left( \mu\right) $ on the branch $z_{H}>z_{\max}\left( \mu\right) $, we also expect a black hole to black hole phase transition to happen at a temperature $T_{BB}\left( \mu\right) $ between $T_{\min}\left( \mu\right) $ and $T_{\max}\left( \mu\right) $, where a large black hole with horizon $z=z_{Hl}\left( \mu\right) $ collapses to a small black hole with horizon $z=z_{Hs}\left( \mu\right) $, as shown in figure \ref{horizon}. Finally, for $\mu>\mu_{c}$, the temperature decreases monotonically to zero and there is no black hole to black hole phase transition anymore\footnote{There could still be a Hawking-Page phase transition at some temperature for $\mu>\mu_{c}$, but we will show later that the black hole solution is always thermodynamically favored in this case.}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.7in,width=3.74in]{horizon.eps}
\end{center}
\caption{Phase transition in which a large black hole with horizon $z=z_{Hl}$ collapses to a small black hole with horizon $z=z_{Hs}$ at the transition temperature $T=T_{BB}$.
\label{horizon}}
\end{figure}
To determine the phase transition temperatures $T_{HP}\left( \mu\right) $ and $T_{BB}\left( \mu\right) $, we compute the free energy from the first law of thermodynamics in the grand canonical ensemble,
\begin{equation}
F=-\int sdT. \label{int F}
\end{equation}
We plot the free energy vs. temperature in (a) of figure \ref{phase diagram}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in,width=3in]{FT.eps}\hspace*{0.5cm}
\includegraphics[height=2in,width=2.9in]{phase.eps}\vskip -0.05cm
\hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )}
\end{center}
\caption{(a) The free energy vs. temperature at the chemical potentials $\mu=0,0.5,0.714,1GeV$. At $\mu=0$, the free energy intersects the $x$-axis at $T=T_{HP}$, where the black hole dissolves into a thermal gas via the Hawking-Page phase transition. For $0<\mu<\mu_{c}\simeq0.714GeV$, the temperature reaches its maximum value where the free energy turns back and intersects itself at $T=T_{BB}$, where the black hole to black hole transition happens. For $\mu>\mu_{c}$, the swallow-tailed shape disappears and there is no phase transition in the background. (b) The phase diagram in the $T$-$\mu$ plane. At small $\mu$, the system undergoes a first order phase transition at finite $T$. The first order phase transition stops at the critical point $(\mu_{c},T_{c})=(0.714GeV,0.528GeV)$, where the phase transition becomes second order. For $\mu>\mu_{c}$, the transition weakens to a sharp but smooth crossover \cite{1301.0385}.
\label{phase diagram}}
\end{figure}
At $\mu=0$, the free energy intersects the $x$-axis at $T=T_{HP}\left( 0\right) $, where the Hawking-Page phase transition happens. The black hole dissolves into a thermal gas, which is thermodynamically stable for $T<T_{HP}\left( 0\right) $. We fix the parameter $b\simeq0.273GeV^{4}$ in Eq.
(\ref{A}) by fitting the Hawking-Page phase transition temperature $T_{HP}\left( 0\right) $ to the lattice QCD result $T_{HP}\simeq0.6GeV$ in \cite{1111.4953}. For $0<\mu<\mu_{c}$, the free energy exhibits the expected swallow-tailed shape. The temperature reaches its maximum where the free energy turns back and intersects itself at $T=T_{BB}\left( \mu\right) $, where the large black hole transitions to the small black hole. Since the free energies of the stable black holes are always less than that of the thermal gas ($F_{gas}\equiv0$), the thermodynamic system always favors the small black hole background rather than the thermal gas background. When we increase the chemical potential $\mu$ from zero to $\mu_{c}$, the loop of the swallow-tailed shape shrinks and disappears at $\mu=\mu_{c}$. For $\mu>\mu_{c}$, the curve of the free energy increases smoothly from higher temperature to lower temperature. The phase diagram of the background is plotted in (b) of figure \ref{phase diagram}. At $\mu=0$, the system undergoes a black hole to thermal gas phase transition at $T=T_{HP}\left( 0\right) $. For $0<\mu<\mu_{c}$, the system undergoes a large black hole to small black hole phase transition at $T_{BB}\left( \mu\right) $. The phase transition temperature $T_{BB}\left( \mu\right) $ approaches $T_{HP}$ as $\mu\rightarrow0$, which makes the phase diagram continuous at $\mu=0$. The phase transition stops at $\mu=\mu_{c}$ and reduces to a crossover for $\mu>\mu_{c}$. The phase diagram we obtain here in figure \ref{phase diagram} is different from the conventional QCD phase diagram, in which a crossover happens at small chemical potential and a phase transition happens at large chemical potential. In \cite{1301.0385}, by comparing with the phase structure in lattice QCD simulations, the authors argued that this `reversed' phase diagram should be interpreted as the confinement-deconfinement phase transition of heavy quarks in QCD.
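The free energy curve can be reproduced numerically from Eq. (\ref{int F}) by integrating $-s\,dT$ along the horizon parameter. The Python sketch below (our own illustration, with the same quadrature and fitted parameters as before) computes $F(z_H)-F(z_H^{\rm ref})$ at $\mu=0$ and checks that $F$ rises as $T$ falls along the small-horizon branch, where $dT/dz_H<0$.

```python
import math

b, c = 0.273, 1.16  # GeV^4, GeV^2 (fits quoted in the text)

def A(z):
    return -(c / 3.0) * z**2 - b * z**4

def _quad(f, lo, hi, n=1200):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

def entropy(zH):
    # s = e^{3A(z_H)} / (4 z_H^3)
    return math.exp(3.0 * A(zH)) / (4.0 * zH**3)

def T(zH):
    # Hawking temperature at mu = 0
    I = _quad(lambda y: y**3 * math.exp(-3.0 * A(y)), 0.0, zH)
    return zH**3 * math.exp(-3.0 * A(zH)) / (4.0 * math.pi * I)

def free_energy(zH, zref=0.3, n=300):
    # F(z_H) - F(z_ref) = -int s dT, integrated along the horizon parameter
    h = (zH - zref) / n
    F = 0.0
    for i in range(n):
        z1, z2 = zref + i * h, zref + (i + 1) * h
        F -= 0.5 * (entropy(z1) + entropy(z2)) * (T(z2) - T(z1))
    return F

# on the small-horizon branch T decreases with z_H, so F = -int s dT increases
print(free_energy(0.6), free_energy(0.9))
```

Tracing the same integral through the unstable branch at $0<\mu<\mu_c$ produces the swallow-tailed curves of figure \ref{phase diagram}.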
In this paper, we consider the same background as in \cite{1301.0385} to study pure gluon QCD with one additional heavy flavour, without light quarks. Since our model describes a heavy quark system in QCD, the flavour field ${A}_{\mu}^{V}$ in the matter action (\ref{action-m}) should be associated to the mesons made up of heavy quarks, i.e. quarkonium states. By fitting the lowest two quarkonium states, $m_{J/\psi}=3.096GeV$ and $m_{\psi^{\prime}}=3.685GeV$, we can fix $c\simeq1.16GeV^{2}$ in Eq. (\ref{A}). Nevertheless, there remains a problem: on the gravity side, it is commonly believed that the confinement-deconfinement phase transition of the field theory is dual to the Hawking-Page phase transition, i.e. the transition between the black hole and thermal gas backgrounds. However, in our gravity background, the phase transition at non-zero chemical potential is between two black holes. It is therefore not consistent to take a black hole to black hole phase transition on the gravity side as dual to the confinement-deconfinement phase transition in QCD. In the remainder of this paper, by adding open strings to the background, we will study this issue more carefully to obtain a more reasonable physical picture.
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\section{Open Strings in the Background}
In this section, we consider an open string in the above background with its two ends on the boundary of the space-time at $z=0$. There are two configurations for an open string in the black hole background. One is the U-shape configuration, in which the open string reaches its maximum depth at $z=z_{0}$; the other is the straight configuration, in which a straight open string has its two ends attached to the boundary and to the horizon at $z=z_{H}$, respectively. The two configurations are shown in figure \ref{string}.
Since the dual holographic QCD lives on the boundary, it is natural to interpret the two ends of the open string as a quark-antiquark pair. The U-shape configuration corresponds to the quark-antiquark pair being connected by a string and can be identified as a meson state, while the straight configuration corresponds to a free quark or antiquark.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2.3in,width=2.5in]{U.eps}\hspace*{0.7cm}
\includegraphics[height=2.3in,width=2.5in]{straight.eps}\vskip -0.05cm
\hskip 0.12 cm \textbf{( a ) } \hskip 6 cm \textbf{( b )}
\end{center}
\caption{Two open string configurations. In (a), a U-shape open string connects its two ends on the boundary at $z=0$ and reaches its maximum depth at $z=z_{0}$. In (b), two straight open strings connect the boundary at $z=0$ and the horizon at $z=z_{H}$, respectively.
\label{string}}
\end{figure}
The Nambu-Goto action of an open string is
\begin{equation}
S_{NG}=\int d^{2}\xi\sqrt{-G},
\end{equation}
where the induced metric
\begin{equation}
G_{ab}=g_{\mu\nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu},
\end{equation}
on the 2-dimensional world-sheet that the string sweeps out as it moves, with coordinates $(\xi^{0},\xi^{1})$, is the pullback of the 5d target space-time metric $g_{\mu\nu}$,
\begin{equation}
ds^{2}=\frac{e^{2A(z)}}{z^{2}}\left( g(z)dt^{2}+d\vec{x}^{2}+\frac{1}{g(z)}dz^{2}\right) ,
\end{equation}
where, to study the thermal properties of the system, we consider the Euclidean metric and identify the period of the time direction with the inverse temperature, $\beta=1/T$.
\subsection{Wilson Loop}
We consider an $r\times t_{0}$ rectangular Wilson loop $C$ along the directions $\left( t,x\right) $ on the boundary of the AdS space, spanned by a quark-antiquark pair separated by $r$.
The quark and antiquark, located at $\left( z=0,x=\pm r/2\right) $, are connected by an open string, which reaches its maximum depth at $\left( z=z_{0},x=0\right) $, as in figure \ref{Wilson loop}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.4905in,width=4.4823in]{Wilson.eps}
\end{center}
\caption{Wilson loop as the boundary of the string world-sheet.
\label{Wilson loop}}
\end{figure}
It is known that taking the limit $t_{0}\rightarrow\beta=1/T$ allows one to read off the energy of such a pair from the expectation value of the Wilson loop,
\begin{equation}
\left\langle W\left( C\right) \right\rangle \sim e^{-V\left( r,T\right) /T},
\end{equation}
where $V\left( r,T\right) $ is the heavy quark-antiquark potential \cite{0604204,1201.0820}. In the string/gauge duality, the expectation value of the Wilson loop is given by
\begin{equation}
\left\langle W\left( C\right) \right\rangle =\int DXe^{-S_{NG}}\simeq e^{-S_{on-shell}}, \label{Wilson}
\end{equation}
where $S_{NG}$ is the string world-sheet action bounded by the curve $C$ on the boundary of AdS space and $S_{on-shell}$ is the on-shell string action, which is proportional to the area of the string world-sheet bounded by the Wilson loop $C$. Comparing with Eq. (\ref{Wilson}), the free energy of the meson is defined as
\begin{equation}
V\left( r,T\right) =TS_{on-shell}\left( r,T\right) .
\end{equation}
\subsection{Configurations of Open Strings}
The string world-sheet action is the Nambu-Goto action
\begin{equation}
S=\int d^{2}\xi\mathcal{L}=\int d^{2}\xi\sqrt{\det G},
\end{equation}
where $G_{ab}=\partial_{a}X^{\mu}\partial_{b}X_{\mu}$ is the induced metric on the string world-sheet.
For the meson configuration, choosing the static gauge $\xi^{0}=t$, $\xi^{1}=x$, the induced metric in the string frame becomes
\begin{equation}
ds^{2}=G_{ab}d\xi^{a}d\xi^{b}=\frac{e^{2A\left( z\right) }}{z^{2}}g\left( z\right) dt^{2}+\frac{e^{2A\left( z\right) }}{z^{2}}\left( 1+\dfrac{z^{\prime2}}{g\left( z\right) }\right) dx^{2},
\end{equation}
where the prime denotes a derivative with respect to $x$. The Lagrangian and Hamiltonian can be calculated as
\begin{align}
\mathcal{L} & =\sqrt{\det G}=\frac{e^{2A\left( z\right) }}{z^{2}}\sqrt{g\left( z\right) +z^{\prime2}},\\
\mathcal{H} & =\left( \frac{\partial\mathcal{L}}{\partial z^{\prime}}\right) z^{\prime}-\mathcal{L}=-\frac{e^{2A\left( z\right) }g\left( z\right) }{z^{2}\sqrt{g\left( z\right) +z^{\prime2}}}. \label{H}
\end{align}
With the boundary conditions
\begin{equation}
z\left( x=\pm\frac{r}{2}\right) =0\text{, }z(x=0)=z_{0}\text{, }z^{\prime}(x=0)=0,
\end{equation}
we obtain the conserved energy
\begin{equation}
\mathcal{H}(x=0)=-\frac{e^{2A\left( z_{0}\right) }}{z_{0}^{2}}\sqrt{g\left( z_{0}\right) }.
\end{equation}
We can solve for $z^{\prime}$ from Eq. (\ref{H}),
\begin{equation}
z^{\prime}=\sqrt{g\left( \dfrac{\sigma^{2}\left( z\right) }{\sigma^{2}\left( z_{0}\right) }-1\right) },
\end{equation}
where
\begin{equation}
\sigma\left( z\right) =\frac{e^{2A\left( z\right) }\sqrt{g\left( z\right) }}{z^{2}}.
\end{equation}
The distance $r$ between the quark-antiquark pair can be calculated as
\begin{equation}
r=\int_{-\frac{r}{2}}^{\frac{r}{2}}dx=2\int_{0}^{z_{0}}dz\frac{1}{z^{\prime}}=2\int_{0}^{z_{0}}dz\left[ g\left( z\right) \left( \dfrac{\sigma^{2}\left( z\right) }{\sigma^{2}\left( z_{0}\right) }-1\right) \right] ^{-\frac{1}{2}},
\end{equation}
where $z_{0}$ is the maximum depth that the string can reach. The dependence of the distance $r$ on $z_{0}$ at two different horizons is plotted in figure \ref{r-z0}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2.5in,width=4in]{r.eps}
\end{center}
\caption{Separation distance $r$ between quark and antiquark vs. $z_{0}$ at $\mu=0.5GeV$.
\label{r-z0}}
\end{figure}
We see that for a small black hole (large $z_{H}$), there exists a dynamical wall at $z_{m}<z_{H}$ where $r^{\prime}\left( z_{m}\right) \rightarrow\infty$. The open string cannot go beyond this dynamical wall, i.e. $z_{0}\leq z_{m}$, even when the distance $r$ between the quark-antiquark pair goes to infinity, as shown in (a) of figure \ref{conf-deconf}. For a large black hole (small $z_{H}$), the open string can reach arbitrarily close to the horizon, but there is a maximum value of the distance at $r=r_{M}$. If the distance between the quark and antiquark is larger than $r_{M}$, there is no stable U-shape solution for the open string, so the U-shape open string breaks into two straight open strings connecting the boundary at $z=0$ and the horizon at $z=z_{H}$, as shown in (b) of figure \ref{conf-deconf}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.7in,width=3in]{confinement.eps}\hspace*{0.5cm}
\includegraphics[height=1.7in,width=3in]{deconfinement.eps}\vskip -0.05cm
\hskip 0.13 cm \textbf{( a ) } \hskip 7 cm \textbf{( b )}
\end{center}
\caption{(a) For a small black hole, open strings cannot exceed the dynamical wall at $z=z_{m}$ and are always in the U-shape. (b) For a large black hole, an open string breaks into two straight strings if the distance between its two ends is larger than $r_{M}$.
\label{conf-deconf}}
\end{figure}
In summary, for a small black hole, open strings are always in the U-shape; for a large black hole, an open string is in the U-shape for short separation $r<r_{M}$ and in the straight shape for long separation $r>r_{M}$.
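The integral for $r(z_0)$ can be evaluated numerically. The Python sketch below is our own illustration: it uses the $\mu=0$ background, takes the Einstein-frame warp factor as a stand-in for the string-frame one, and applies the substitution $z=z_0-z_0w^2$ (used later in the text for the long-distance analysis) to tame the inverse-square-root singularity at $z=z_0$. It checks that $r(z_0)$ grows with $z_0$ at small $z_0$, that $r(z_0)/z_0$ approaches the pure-AdS value $r_1=\frac{1}{2}B(\frac{3}{4},\frac{1}{2})$ as $z_0\rightarrow0$, and that the Beta-function expression for the Coulomb coefficient gives $\kappa\simeq1.44$.

```python
import math

b, c = 0.273, 1.16  # GeV^4, GeV^2 (fits quoted in the text)
zH = 2.0            # sample small-black-hole horizon, mu = 0

def A(z):
    # NOTE: Einstein-frame warp factor, used here purely for illustration
    return -(c / 3.0) * z**2 - b * z**4

def _quad(f, lo, hi, n=1500):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

D = _quad(lambda y: y**3 * math.exp(-3.0 * A(y)), 0.0, zH)

def g(z):
    # mu = 0 blackening function
    return 1.0 - _quad(lambda y: y**3 * math.exp(-3.0 * A(y)), 0.0, z) / D

def sigma(z):
    return math.exp(2.0 * A(z)) * math.sqrt(g(z)) / z**2

def r(z0, n=400):
    # r = 2 int_0^{z0} dz [g (sigma^2/sigma0^2 - 1)]^{-1/2}, with z = z0(1 - w^2)
    s0 = sigma(z0)
    h, total = 1.0 / n, 0.0
    for i in range(n):
        w = (i + 0.5) * h  # midpoint rule avoids the endpoints w = 0, 1
        z = z0 * (1.0 - w**2)
        total += 2.0 * z0 * w / math.sqrt(g(z) * (sigma(z)**2 / s0**2 - 1.0))
    return 2.0 * total * h

def beta_fn(a, x):
    return math.gamma(a) * math.gamma(x) / math.gamma(a + x)

r1 = 0.5 * beta_fn(0.75, 0.5)
kappa = -0.25 * beta_fn(0.75, 0.5) * beta_fn(-0.25, 0.5)
print(r(0.05) / 0.05, r1)   # ratio approaches r1 as z0 -> 0
print(r(0.2), r(0.4))       # r grows with z0 at small z0
print(kappa)                # Coulomb coefficient, ~ 1.44
```

Near the dynamical wall the same integrand develops the $w=0$ divergence analyzed in the long-distance expansion below.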
Thus, when a large black hole shrinks to a small one, we expect a dynamical wall to appear when the black hole horizon equals a critical value $z_{H\mu}$ for each chemical potential $\mu$, as shown in figure \ref{phase-cross}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1.7409in,width=3.755in]{phase-cross.eps}
\end{center}
\caption{When a large black hole with horizon $z_{Hl}<z_{H\mu}$ shrinks to a small black hole with horizon $z_{Hs}>z_{H\mu}$, the dynamical wall at $z=z_{m}$ appears when $z_{H}=z_{H\mu}$.
\label{phase-cross}}
\end{figure}
Since each black hole horizon is associated with a temperature, we define the transformation temperature $T_{\mu}$ corresponding to the critical black hole horizon $z_{H\mu}$, at which the dynamical wall appears/disappears, for each chemical potential $\mu$. The dependence of $T_{\mu}$ on $\mu$ is plotted in figure \ref{Tmu}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2.5in,width=3.7in]{Tmu.eps}
\end{center}
\caption{The temperature $T_{\mu}$ corresponding to $z_{H\mu}$, at which the dynamical wall appears/disappears, for each chemical potential $\mu$.
\label{Tmu}}
\end{figure}
When there is a dynamical wall, open strings are always in the U-shape. This means that quarks and antiquarks are always connected by an open string and form a bound state, i.e. a meson state in QCD. It is natural to interpret this case as the confinement phase in the dual holographic QCD. On the other hand, when the dynamical wall disappears, a U-shape open string can break into two strings if the distance between its two ends is large enough. This means that a meson state can break into a pair of free quark and antiquark in QCD. We interpret this case as the deconfinement phase in the dual holographic QCD.
Therefore, the transformation temperature $T_{\mu}$ is associated with the transformation between the confinement and deconfinement phases in the dual holographic QCD\footnote{This transformation between the confinement and deconfinement phases is not necessarily a phase transition; it could be a smooth crossover, as we will show later.}.
\subsection{Heavy Quark Potential}
When the black hole horizon $z_{H}>z_{H\mu}$, there exists a dynamical wall at $z=z_{m}<z_{H}$. Open strings are always in the U-shape and the heavy quark potential can be calculated as
\begin{equation}
V=TS_{on-shell}=\int_{-\frac{r}{2}}^{\frac{r}{2}}dx\mathcal{L}=2\int_{0}^{z_{0}}dz\frac{\sigma\left( z\right) }{\sqrt{g\left( z\right) }}\left[ 1-\dfrac{\sigma^{2}\left( z_{0}\right) }{\sigma^{2}\left( z\right) }\right] ^{-\frac{1}{2}}. \label{quark potential}
\end{equation}
In the short-distance limit $r\rightarrow0$, i.e. $z_{0}\rightarrow0$, we expand the distance and the heavy quark potential at $z_{0}=0$,
\begin{align}
r & =2\int_{0}^{z_{0}}dz\left[ g\left( z\right) \left( \dfrac{\sigma^{2}\left( z\right) }{\sigma^{2}\left( z_{0}\right) }-1\right) \right] ^{-\frac{1}{2}}=r_{1}z_{0}+O(z_{0}^{2}),\\
V & =2\int_{0}^{z_{0}}dz\frac{\sigma\left( z\right) }{\sqrt{g\left( z\right) }}\left[ 1-\dfrac{\sigma^{2}\left( z_{0}\right) }{\sigma^{2}\left( z\right) }\right] ^{-\frac{1}{2}}=\frac{V_{-1}}{z_{0}}+O(1),
\end{align}
where
\begin{align}
r_{1} & =2\int_{0}^{1}dv\left( \frac{1}{v^{4}}-1\right) ^{-\frac{1}{2}}=\frac{1}{2}B\left( \frac{3}{4},\frac{1}{2}\right) ,\\
V_{-1} & =2\int_{0}^{1}\frac{dv}{v^{2}}\left( 1-v^{4}\right) ^{-\frac{1}{2}}=\frac{1}{2}B\left( -\frac{1}{4},\frac{1}{2}\right) .
\end{align}
This gives the expected Coulomb potential at short distances,
\begin{equation}
V=-\frac{\kappa}{r},
\end{equation}
where
\begin{equation}
\kappa=-\dfrac{1}{4}B\left( \frac{3}{4},\frac{1}{2}\right) B\left( -\frac{1}{4},\frac{1}{2}\right) \simeq1.44.
\end{equation}
In the long-distance limit $r\rightarrow\infty$, i.e. $z_{0}\rightarrow z_{m}$, we make the coordinate transformation $z=z_{0}-z_{0}w^{2}$. The distance $r$ and the heavy quark potential $V$ become
\begin{align}
r & =2\int_{0}^{1}f_{r}\left( w\right) dw,\label{r}\\
V & =2\int_{0}^{1}f_{V}\left( w\right) dw, \label{FR}
\end{align}
where
\begin{align}
f_{r}\left( w\right) & =2z_{0}w\left[ g\left( z_{0}-z_{0}w^{2}\right) \left( \dfrac{\sigma^{2}\left( z_{0}-z_{0}w^{2}\right) }{\sigma^{2}\left( z_{0}\right) }-1\right) \right] ^{-\frac{1}{2}},\\
f_{V}\left( w\right) & =2z_{0}w\frac{\sigma\left( z_{0}-z_{0}w^{2}\right) }{\sqrt{g\left( z_{0}-z_{0}w^{2}\right) }}\left[ 1-\dfrac{\sigma^{2}\left( z_{0}\right) }{\sigma^{2}\left( z_{0}-z_{0}w^{2}\right) }\right] ^{-\frac{1}{2}}.
\end{align}
We learn from figure \ref{r-z0} that the distance $r$ diverges at $z_{0}=z_{m}$, and the same happens for the heavy quark potential. By careful analysis, we find that this divergence arises because the integrands $f_{r}\left( w\right) $ and $f_{V}\left( w\right) $ diverge near the lower limit $w=0$, i.e. $z=z_{0}\rightarrow z_{m}$. To study the behaviors of the distance and the heavy quark potential at $z_{0}=z_{m}$, we expand $f_{r}\left( w\right) $ and $f_{V}\left( w\right) $ at $w=0$,
\begin{align}
f_{r}\left( w\right) & =2z_{0}\left[ -2z_{0}g\left( z_{0}\right) \frac{\sigma^{\prime}\left( z_{0}\right) }{\sigma\left( z_{0}\right) }\right] ^{-\frac{1}{2}}+O\left( w\right) ,\\
f_{V}\left( w\right) & =2z_{0}\sigma\left( z_{0}\right) \left[ -2z_{0}g\left( z_{0}\right) \frac{\sigma^{\prime}\left( z_{0}\right) }{\sigma\left( z_{0}\right) }\right] ^{-\frac{1}{2}}+O\left( w\right) .
\end{align}
The integrals (\ref{r}) and (\ref{FR}) can be approximated near $z_{0}=z_{m}$ by keeping only the leading terms of $f_{r}\left( w\right) $ and $f_{V}\left( w\right) $.
This leads to
\begin{align}
r\left( z_{0}\right) & \simeq4z_{0}\left[ -2z_{0}g\left( z_{0}\right) \frac{\sigma^{\prime}\left( z_{0}\right) }{\sigma\left( z_{0}\right) }\right] ^{-\frac{1}{2}},\\
V\left( z_{0}\right) & \simeq4z_{0}\sigma\left( z_{0}\right) \left[ -2z_{0}g\left( z_{0}\right) \frac{\sigma^{\prime}\left( z_{0}\right) }{\sigma\left( z_{0}\right) }\right] ^{-\frac{1}{2}}=\sigma\left( z_{0}\right) r\left( z_{0}\right) .
\end{align}
From the above expressions, we obtain the expected linear potential $V=\sigma_{s}r$ at long distances, with the string tension
\begin{equation}
\sigma_{s}=\left. \dfrac{dV}{dr}\right\vert _{z_{0}=z_{m}}=\left. \dfrac{dV/dz_{0}}{dr/dz_{0}}\right\vert _{z_{0}=z_{m}}=\left. \dfrac{\sigma^{\prime}\left( z_{0}\right) r\left( z_{0}\right) +\sigma\left( z_{0}\right) r^{\prime}\left( z_{0}\right) }{r^{\prime}\left( z_{0}\right) }\right\vert _{z_{0}=z_{m}}=\sigma\left( z_{m}\right) .
\end{equation}
The temperature dependence of the string tension for various chemical potentials is plotted in figure \ref{string tension}. We see that the string tension decreases as the temperature increases. At the confinement-deconfinement transformation temperature $T_{\mu}$, the system transforms to the deconfinement phase and the string tension suddenly drops to zero, as expected \cite{1006.0055}.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in,width=3in]{sigma1.eps}\hspace*{0.5cm}
\includegraphics[height=2in,width=3in]{sigma2.eps}\vskip -0.05cm
\hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )}
\end{center}
\caption{String tension vs. temperature at $\mu=0.5,0.678,0.714GeV$. (a) The string tension decreases with growing temperature in the confinement phase, and suddenly drops to zero at $T=T_{\mu}$.
The region close to the transition temperature is enlarged in (b).
\label{string tension}}
\end{figure}
The behaviors of the heavy quark potential at short and long distances agree with the form of the Cornell potential,
\begin{equation}
V\left( r\right) =-\dfrac{\kappa}{r}+\sigma_{s} r+C,
\end{equation}
which has been measured in great detail in lattice simulations. Next, we would like to look at the $r$ dependence of the heavy quark potential by evaluating the integral in Eq. (\ref{quark potential}), which is divergent because its integrand blows up at $z=0$. We regularize the integral by subtracting the divergent part of the integrand,
\begin{equation}
V_{R}=C\left( z_{0}\right) +2\int_{0}^{z_{0}}dz\left[ \frac{\sigma\left( z\right) }{\sqrt{g\left( z\right) }}\left[ 1-\dfrac{\sigma^{2}\left( z_{0}\right) }{\sigma^{2}\left( z\right) }\right] ^{-\frac{1}{2}}-\frac{1}{z^{2}}\left[ 1+2A^{\prime}\left( 0\right) z\right] \right] , \label{VR}
\end{equation}
where
\begin{equation}
C\left( z_{0}\right) =-\dfrac{2}{z_{0}}+4A^{\prime}\left( 0\right) \ln z_{0}.
\end{equation}
After the regularization, we are able to calculate the heavy quark potential. The result is plotted in (a) of figure \ref{meson potential}. For short separation distances, the potential is proportional to $1/r$, as expected. For long separation distances, there exists a critical horizon $z_{H\mu}$ for each chemical potential $\mu$. For a small black hole with $z_{H}>z_{H\mu}$, the potential is linear in $r$ for $r\rightarrow\infty$, while for a large black hole with $z_{H}<z_{H\mu}$, the potential ceases at a maximum distance $r_{M}$. Beyond $r_{M}$, the U-shape open string breaks into two straight open strings.
\begin{figure}[h]
\begin{center}
\includegraphics[height=2in,width=3in]{Fr2.eps}\hspace*{0.5cm}
\includegraphics[height=2in,width=3in]{free-energy.eps}\vskip -0.05cm
\hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )}
\end{center}
\caption{(a) $V$ vs.
$r$ at $\mu=0.5GeV$ for $z_{H}>z_{H\mu}$, $z_{H}=z_{H\mu}$ and $z_{H}<z_{H\mu}$. (b) A sketch of the heavy quark potentials at various temperatures at a fixed chemical potential $\mu>\mu_{c}$.
\label{meson potential}}
\end{figure}
It is helpful to use a sketch to describe the heavy quark potential and the phase transformation as the temperature changes. We plot the heavy quark potentials at various temperatures at a fixed chemical potential\footnote{For $\mu<\mu_{c}$, the picture is similar but more complicated due to the black hole phase transition in the background. Here we just illustrate the general properties of the heavy quark potential and leave the details of the phase transition to the next section.} $\mu>\mu_{c}$ in (b) of figure \ref{meson potential}. For $T\leq T_{\mu}$, i.e. $z_{H}>z_{H\mu}$, the heavy quark potential is linear at large separation $r$, with a slope that decreases as the temperature increases. The linear potential implies that the system is in the confinement phase. For $T>T_{\mu}$, i.e. $z_{H}<z_{H\mu}$, the heavy quark potential admits a maximum separation $r_{M}$, beyond which the open string breaks into two straight strings and the total energy of the strings becomes constant. The constant potential implies that the system is in the deconfinement phase. The confinement-deconfinement phase transformation happens at $T=T_{\mu}$.
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\section{Phase Diagram}
In the previous sections, we studied the thermodynamics of the black hole background. We obtained two black hole phases and studied the phase transition between them. We also added probe open strings to the background and studied their U-shape and straight configurations, which correspond to the confinement and deconfinement phases in the dual holographic QCD.
In \cite{1301.0385}, we interpreted the black hole to black hole phase transition in the background as the confinement-deconfinement phase transition in the dual holographic QCD, leaving a puzzle: in the original AdS/QCD correspondence, a black hole background does not correspond to the confinement phase in QCD. In this paper, we argued that the U-shape and straight shape of open strings should correspond to the confinement and deconfinement phases in QCD, but the transformation between the two phases seems always smooth, without a phase transition. In this section, by combining these two phenomena, we are ready to discuss the full phase structure for the system of the open strings in the black hole background, corresponding to the confinement-deconfinement phase diagram in the dual holographic QCD. \subsection{Confinement-deconfinement Phase Diagram} Let us consider the configurations of the probe open strings first. We have found that for a small black hole with $z_{H}>z_{H\mu}$, open strings cannot pass the dynamical wall at $z=z_{m}$ even when the distance $r$ between the quark-antiquark pair goes to infinity. This means that both ends of the open strings have to touch the boundary at $z=0$, and the quark-antiquark pair is always connected by an open string in the U-shape to form a bound state, which corresponds to a meson state in the dual holographic QCD, as in (a) of figure \ref{conf-deconf}. We interpret this phase as the confinement phase in QCD. On the other hand, for a large black hole with $z_{H}<z_{H\mu}$, the two ends of the open strings could also touch the horizon instead of the boundary. If the distance $r$ between the quark-antiquark pair is large enough, with $r>r_{M}$, a U-shape open string would break into two straight open strings, as shown in (b) of figure \ref{conf-deconf}. Thus the meson state would decay into a pair of free quark and antiquark. We interpret this phase as the deconfinement phase in QCD.
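The breaking criterion just described is quantitative: a U-shape string is preferred as long as its energy lies below that of two straight strings, and the maximal separation solves $V(r)=2F_{q}$. A toy numerical sketch (Cornell-type $V$ and a constant value for $2F_{q}$; all numbers are invented for illustration):

```python
from scipy.optimize import brentq

# Toy model of string breaking (illustrative numbers, not the model's output):
# U-shape energy V(r) versus the energy 2*F_q of two straight strings.
kappa, sigma_s, C = 0.48, 0.18, -0.25
V = lambda r: -kappa / r + sigma_s * r + C
two_Fq = 0.55                                  # assumed free-quark energy

# Maximal separation r_M: the U-shape string breaks once V(r) exceeds 2*F_q.
r_M = brentq(lambda r: V(r) - two_Fq, 0.5, 10.0)
print(r_M)
```

Beyond this root the straight-string configuration is energetically favored, which is the holographic picture of the meson decaying into a free pair.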
We should remark that for a small black hole with $z_{H}>z_{H\mu}$, it is impossible for an open string to break up, due to the dynamical wall at $z=z_{m}$. Thus, even in the black hole background, the holographic QCD can still be in the confinement phase. This clarifies the puzzle in \cite{1301.0385} that a black hole background does not correspond to the confinement phase in QCD in the original AdS/QCD correspondence. The black hole phases, open string configurations and QCD phases are summarized in Table \ref{table}. \begin{table}[h] \begin{center} \begin{tabular} [c]{|c|c|c|}\hline Black hole & String configurations for $r\rightarrow\infty$ & Phase in QCD\\\hline Small $\left( z_{H}>z_{H\mu}\right) $ & U-shape & Confinement\\\hline Large $\left( z_{H}<z_{H\mu}\right) $ & Straight & Deconfinement\\\hline \end{tabular} \end{center} \caption{Black hole phases, open string configurations and QCD phases.} \label{table} \end{table} For each chemical potential $\mu$, we have calculated the transformation temperature $T_{\mu}$ corresponding to the critical black hole horizon $z_{H\mu}$. The result of $T_{\mu}$ v.s. $\mu$ is plotted in figure \ref{Tmu}. On the other hand, the phase transition temperature $T_{BB}$ of the black hole to black hole phase transition in the background was plotted in (b) of figure \ref{phase diagram}. To investigate the relationship between $T_{\mu}$ and $T_{BB}$, we plot both of them together in (a) of figure \ref{final phase}. We see that the two lines are close to each other but not exactly the same. The two lines intersect at $(\mu_{c},T_{c})=(0.678GeV,0.536GeV)$, which we define as the critical point\footnote{We defined $(\mu_{c},T_{c})=(0.714GeV,0.528GeV)$ as the critical values of the background phase transition in section 2.2; here we redefine them as the true critical values of the confinement-deconfinement phase transition.}.
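Numerically, the critical point quoted above is simply the intersection of the curves $T_{BB}(\mu)$ and $T_{\mu}(\mu)$, which can be located by root-finding on their difference. A sketch with toy parametrizations (the functional forms below are invented; only the intersection logic is meaningful):

```python
from scipy.optimize import brentq

# Toy stand-ins for the two transition curves (invented forms for illustration):
T_BB = lambda mu: 0.60 - 0.10 * mu**2       # background phase-transition line
T_mu = lambda mu: 0.57 - 0.05 * mu**2       # string-configuration line

# Critical point: intersection T_BB(mu_c) = T_mu(mu_c).
mu_c = brentq(lambda mu: T_BB(mu) - T_mu(mu), 0.1, 2.0)
T_c = T_BB(mu_c)
print(mu_c, T_c)
```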
For $\mu<\mu_{c}$, when the temperature increases from zero, the black hole grows with the temperature and a phase transition eventually happens at $T=T_{BB}(\mu)<T_{\mu}$, where a small black hole with horizon $z_{Hs}>z_{H\mu}$ suddenly jumps to a large black hole with horizon $z_{Hl}<z_{H\mu}$, as shown in figure \ref{phase-cross}. In the dual QCD, this implies that the confinement phase transforms to the deconfinement phase via a phase transition. For $\mu>\mu_{c}$, when the temperature increases from zero, the black hole horizon increases gradually with the temperature and continuously passes $z_{H\mu}$ at $T=T_{\mu}<T_{BB}(\mu)$. This means that the confinement phase smoothly transforms to the deconfinement phase as a crossover. Putting everything together, we obtain the final phase diagram for the confinement-deconfinement phase transition in QCD, plotted in (b) of figure \ref{final phase}. For chemical potentials less than the critical value, $\mu<\mu_{c}$, there is a confinement-deconfinement phase transition, while for large chemical potentials, $\mu>\mu_{c}$, the confinement-deconfinement phase transition reduces to a smooth crossover. The critical point is located at $\mu_{c}=0.678GeV$ and $T_{c}=0.536GeV$. This result is consistent with the conclusion from lattice QCD for heavy quarks \cite{1111.4953}. \begin{figure}[h] \begin{center} \includegraphics[ height=2in, width=3in ]{both.eps}\hspace*{0.5cm} \includegraphics[ height=2in, width=3in ]{final.eps}\vskip -0.05cm \hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )} \end{center} \caption{(a) The phase diagrams from the pure black hole background (red line for $T_{BB}$) and from the configurations of open strings (blue line for $T_{\mu}$). The two lines intersect at the critical point located at $\mu_{c}=0.678GeV$ and $T_{c}=0.536GeV$. (b) The final confinement-deconfinement phase diagram.
For $\mu<\mu_{c}$, there is a phase transition between the confinement and deconfinement phases (red solid line); while for $\mu>\mu_{c}$, the phase transition becomes a crossover (blue dashed line). \label{final phase} \end{figure} \subsection{Meson Melting in Hot Plasma} In the confinement phase, open strings are always connected in the U-shape configuration, so that the quark-antiquark pair always forms a bound state, i.e. a meson state in QCD. In the deconfinement phase, an open string can be in either the U-shape or the straight-shape configuration. For short separation distances $r<r_{M}$, the open string is still in the U-shape. For long separation distances $r>r_{M}$, the energy of the two straight strings is less than the free energy of an open string in the U-shape, and the U-shape open string breaks up into two straight strings, corresponding to a meson state melting into a pair of free quark and antiquark. The phenomenon of meson melting has been previously studied in \cite{1006.0055,1108.0684,1102.2289,1203.3942}. In this work, we define the screening length as the maximum length $r_{M}$ achieved by a pair of quark and antiquark in the bound state at a temperature $T>T_{BB}(\mu)$ for $\mu\leq\mu_{c}$ or $T>T_{\mu}$ for $\mu>\mu_{c}$. The screening length at a fixed chemical potential and temperature can be determined by the equation $V-2F_{q}=0$, where $V$ is the heavy quark potential energy defined in (\ref{quark potential}) and $F_{q}$ is the free energy of a straight string defined as \begin{equation} F_{q}=\int_{0}^{z_{H}}dz\frac{e^{2A}}{z^{2}}. \end{equation} We plot the 'melting lines' of screening length versus chemical potential in figure \ref{melt}. The screening length is a possible signal from the Quark Gluon Plasma (QGP). Right after the collision, QGP is formed and the temperature is high enough for the system to be in the deconfinement phase.
As the temperature decreases, heavy quarks form bound states at melting temperatures higher than the deconfinement temperature. This means heavy quark bound states can coexist with the QGP. \begin{figure}[h] \begin{center} \includegraphics[ height=2in, width=3in ]{melt1.eps}\hspace*{0.5cm} \includegraphics[ height=2in, width=3in ]{melt2.eps}\vskip -0.05cm \hskip 0.15 cm \textbf{( a ) } \hskip 7.5 cm \textbf{( b )} \end{center} \caption{The 'melting lines' for various screening lengths are plotted in (a) with $r_{M}=0.13,0.15,0.18,0.20,0.22,0.24$ fm from above. In (b), the region close to the phase transition line is enlarged. \label{melt}} \end{figure} \setcounter{equation}{0} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \section{Conclusion} In this paper, we considered an Einstein-Maxwell-scalar system. We solved the equations of motion to obtain a family of black hole solutions by the potential reconstruction method. We studied the thermodynamical properties of the black hole background and found black hole to black hole phase transitions at the temperature $T_{BB}$ for the backgrounds. We then added open strings in these backgrounds and identified the two ends of an open string as a quark and antiquark pair in the dual holographic QCD. By solving the equations of motion of the open strings, we obtained two configurations for the open strings, i.e. U-shape and straight-shape. When the temperature is low enough, the black hole is small, and there exists a dynamical wall at $z=z_{m}$ which the open strings cannot pass even when the separation of the quark and antiquark goes to infinity. From the point of view of the dual QCD, the quark and antiquark pair is always connected by an open string to form a bound state, corresponding to the confinement phase in QCD.
On the other hand, when the temperature is high enough, the black hole becomes large, so that an open string can break up into two straight open strings connecting the boundary and the black hole horizon, corresponding to the deconfinement phase in QCD. We obtained the confinement-deconfinement phase transformation temperature $T_{\mu}$. Our main conclusion is that, to study the confinement-deconfinement phase structure in holographic QCD models, we need to combine two phase phenomena in the bulk gravity theory at the same time, namely the black hole to black hole phase transition in the background and the various configurations of the probe open strings. We found that, when the chemical potential is less than the critical value $\mu_{c}$, the background undergoes a small black hole to large black hole phase transition as the temperature increases from zero. The horizon suddenly blows up past the critical horizon $z_{H\mu}$, so that the confinement phase transforms to the deconfinement phase via a phase transition. When the chemical potential is greater than the critical value $\mu_{c}$, the black hole horizon grows gradually and continuously passes the critical horizon $z_{H\mu}$, so that the confinement phase transforms to the deconfinement phase via a smooth crossover. The final confinement-deconfinement phase diagram is shown in (b) of figure \ref{final phase}. We also studied meson melting in this paper. When the temperature is higher than the phase transition temperature, QCD is in the deconfinement phase. However, it is known that a meson can still be thermodynamically stable in the deconfinement phase if the separation distance between the quark and antiquark is short enough. Only when the separation distance is longer than the screening length $r_{M}$ does the meson become unstable and break up into a pair of free quark and antiquark. We showed the 'melting lines' for various separation distances in figure \ref{melt}.
We conclude that, with increasing temperature, mesons of larger size break up earlier, while mesons of smaller size are more stable and break up later. Conversely, when the QGP is cooling down, mesons of smaller size reunite earlier than those of larger size. This helps us to understand the process of the QGP cooling down. \subsection*{Acknowledgements} We would like to thank Song He, Mei Huang, and Xiao-Ning Wu for useful discussions. This work is supported by the National Science Council (NSC 101-2112-M-009-005) and the National Center for Theoretical Science, Taiwan.
\section{Introduction}\label{math-intro} Suppose one is interested in imaging a small region of interest (ROI) inside an object using tomography. In order to acquire a complete data set that enables stable reconstruction, one needs to send multiple x-rays through the object from many different directions. In particular, the x-rays that do not pass through the ROI are required as well. The interior problem of tomography arises when only the x-rays through the ROI are measured. In this case the tomographic data are incomplete, and image reconstruction becomes a challenging problem. In what follows, image reconstruction from x-ray data tailored to an ROI will be called the interior problem, and the corresponding data will be called interior data. The practical importance of the interior problem is clear, since tailoring the x-ray exposure to an ROI results in a reduced x-ray dose to the patient in medical applications of tomography. See \cite{wy-13} for a nice review of the state of the art in interior tomography. One of the most powerful tools for investigating the interior problem from the theoretical point of view is the Gelfand-Graev formula, which relates the tomographic data of an object with its one-dimensional Hilbert transform along lines \cite{gegr-91}. With the help of this formula, the interior problem of tomography can be reduced to the problem of inverting the Hilbert transform from incomplete data. Pick any line $L$ through the object. We regard $L$ as the $x$-axis. Fix some $2g+2$, $g\in{\mathbb N}$, distinct points $a_i$ on $L$: $a_i<a_{i+1}$, $i=1,2,\dots,2g+1$. Points $a_1$ and $a_{2g+2}$ mark the boundaries of the support of $f$ along $L$. Points $a_2$ and $a_{2g+1}$ mark the boundaries of the ROI along $L$. Consider the Finite Hilbert Transform (FHT) \begin{equation}\label{def-hilb} ({\mathcal H} f)(x):= \frac1\pi \int_{a_1}^{a_{2g+2}} \frac{f|_L(y)}{y-x}dy,\ f|_L\in L^2([a_1,a_{2g+2}]).
\end{equation} Here $f|_L$ is the restriction of $f$ to $L$, and ${\mathcal H} f$ is the one-dimensional Hilbert transform of $f|_L$. Throughout the paper the line $L$ is always the same, so with some abuse of notation we write $f$ instead of $f|_L$. In the case of interior tomographic data, the Gelfand-Graev formula allows computation of ${\mathcal H} f$ only on $[a_2,a_{2g+1}]$, but not on all of $[a_1,a_{2g+2}]$. Thus the interior problem of tomography is reduced to finding $f$ inside the ROI, i.e. on $[a_2,a_{2g+1}]$, by solving the equation \begin{equation}\label{hilb-dataintro} ({\mathcal H} f)(x)=\varphi(x),\ x\in [a_2,a_{2g+1}]. \end{equation} Consider the operator ${\mathcal H}:\, L_2([a_1,a_{2g+2}])\to L_2([a_2,a_{2g+1}])$. Unique recovery of $f$ on $[a_2,a_{2g+1}]$ is impossible since ${\mathcal H}$ has a non-trivial kernel (see \cite{kt12} for its complete description). Therefore, to achieve unique recovery the data $\varphi$ should be augmented by some additional information. One type of information that guarantees uniqueness is the knowledge of $f$ on some {interval or} intervals inside $[a_2,a_{2g+1}]$. This is the so-called interior problem with prior knowledge (\cite{yyw-07b, kcnd, cndk-08, wy-13}) {that will be considered below}. Let us assume that $f$ is known on the intervals \begin{equation}\label{int-int} I_i:=[a_3,a_4]\cup [a_5,a_6]\cup\dots\cup [a_{2g-1},a_{2g}], \end{equation} which we call ``interior'' (inside the ROI). Denote by $I_e:=[a_1,a_2]\cup [a_{2g+1},a_{2g+2}]$ the remaining ``exterior'' intervals (they are outside the ROI). Applying the FHT inversion formula (see e.g. \cite{oe91}), we get \begin{equation}\label{hilb-inv} \begin{split} f(y)&=-\frac{w(y)}{\pi}\left(\int_{a_1}^{a_2}+\int_{a_{2g+1}}^{a_{2g+2}}\right) \frac{\varphi(x)}{w(x)(x-y)}dx-\frac{w(y)}{\pi}\int_{a_2}^{a_{2g+1}}\frac{\varphi(x)}{w(x)(x-y)}dx,\\ {\rm where}~~~w(x):&=\sqrt{(a_{2g+2}-x)(x-a_1)}~~~{\rm and}~~~\varphi(x)=({\mathcal H} f)(x),~ x\in [a_1,a_{2g+2}].
\end{split} \end{equation} The left side of (\ref{hilb-inv}) is known on $I_i$. The last integral on the right is known everywhere. Combining these known quantities we get an integral equation: \begin{equation}\label{int-eq} ({\mathcal H}^{-1}_e\varphi)(y):=-\frac{w(y)}\pi \int_{I_e} \frac{\varphi(x)}{w(x)(x-y)}dx=\psi(y),\ y\in I_i, \end{equation} where \begin{equation}\label{psi} \psi(y)= f(y)+\frac{w(y)}{\pi}\int_{a_2}^{a_{2g+1}}\frac{\varphi(x)}{w(x)(x-y)}dx,\ y\in I_i \end{equation} is a known function. The main problem we study in this paper is the stability of finding $f$ from the data. Several approaches to finding $f$ on $[a_2,a_{2g+1}]$ are possible. The first one consists of two steps. In step 1 we solve equation (\ref{int-eq}) for $\varphi(x)$ on $I_e$. In step 2 we substitute the computed $\varphi(x)$ into (\ref{hilb-inv}) and recover $f(y)$ on $[a_2,a_{2g+1}]$. It is clear that solving (\ref{int-eq}), i.e. inverting ${\mathcal H}^{-1}_e$, is the most unstable step. Consider the operator ${\mathcal H}^{-1}_e$ in (\ref{int-eq}) as a map between two weighted $L^2$-spaces: \begin{equation}\label{map} {\mathcal H}^{-1}_e:\ L^2(I_e,1/w)\to L^2(I_i,1/w). \end{equation} Its adjoint is the Hilbert transform: \begin{equation}\label{hilb-adj} ({\mathcal H}_i\psi)(x):= \frac1\pi \int_{I_i} \frac{\psi(y)}{y-x}dy,\ x\in I_e. \end{equation} In \cite{BKT2} the authors studied the singular value decomposition (SVD) for the operator ${\mathcal H}^{-1}_e$. Namely, we were interested in the singular values $2\lambda=2\lambda_n>0$, $n\in{\mathbb N}$, and the corresponding left and right singular functions $f=f_n,~h=h_n$, satisfying \begin{equation}\label{svd-def} \begin{split} ({\mathcal H}^{-1}_e) h(y)=-\frac{w(y)}{\pi}&\int_{I_e} \frac{h(x)}{w(x)(x-y)}dx={2}\la f(y),\ y\in I_i,\\ ({\mathcal H}_i f)(x)=\frac1\pi &\int_{I_i} \frac{f(y)}{y-x}dy={2}\la h(x),\ x\in I_e. 
\end{split} \end{equation} See \eqref{svd-hat}--\eqref{straight} and Theorem~\ref{theo-whK}, which show that the SVD is well-defined. It is well known that the rate at which the $\lambda_n$'s approach zero is related to the ill-posedness of inverting ${\mathcal H}^{-1}_e$. Because of the symmetry $(\lambda,f,h)\Leftrightarrow (-\lambda,-f,h)$ of \eqref{svd-def}, we are interested only in positive $\lambda_n$. The main result of the paper \cite{BKT2} is the large $n$ asymptotics of $\lambda_n$, $f_n$ and $h_n$. Let us introduce a $g\times g$ matrix $\mathbb A$ by \begin{equation}\label{matrixA} (\mathbb A)_{kj}=2\int_{a_{2k}}^{a_{2k+1}}\frac{z^{j-1} dz}{R(z)}, ~~~k=1,\dots,g-1,~~~~{\rm and}~~~ (\mathbb A)_{gj}=2\int_{a_{1}}^{a_{2g+2}}\frac{z^{j-1} dz}{R_+(z)},~~~~j=1,\dots,g, \end{equation} where $R(z) = \prod_{j=1}^{2g+2}(z-a_j)^\frac{1}{2}$ is an analytic function on ${\mathbb C} \setminus (I_e \cup I_i)$ behaving as $z^{g+1}$ at infinity, and define \begin{equation}\label{tau11intro} \t_{11}=-2\sum_{j=1}^g (\mathbb A^{-1})_{j1}\int_{I_e}\frac{z^{j-1} dz}{R_+(z)}. \end{equation}
An alternative approach to the analysis of SVD for the Hilbert transform with incomplete data is developed in \cite{kat10c, kat_11, kt12, aak13}. The very rapid decay of singular values in \eqref{in1} indicates that finding $\varphi$ from $\psi$ is very unstable. This, however, does not imply that finding $f$ on $[a_2,a_{2g+1}]$ is unstable, since $f$ is computed by applying a smoothing operator to $\varphi$. The second approach to finding $f$ is based on the observation that the function $\psi$ defined by \eqref{psi} is analytic in $\mathbb C\setminus I_e$ (cf. \eqref{int-eq}). Hence, analytically continuing $\psi$ from $I_i$ to $(a_2,a_{2g+1})$, we can find $f$ using \eqref{psi} with $y\in (a_2,a_{2g+1})$. Note that any method that gives $f$ on $(a_2,a_{2g+1})$ is equivalend to analytic continuation of $\psi$ in view of \eqref{psi}. {\it Thus, analytic continuation of $\psi$ is at the heart of any method for solving the interior problem of tomography {with prior knowledge}.} In this paper we obtain two results regarding the analytic continuation of $\psi$. We show that this analytic continuation can be obtained with the help of a simple explicit formula, which involves summation of a series, {see Corollary \ref{analcont-series}}. We prove that the series is absolutely convergent if $\psi$ is in the range of ${\mathcal H}^{-1}_e$. We also analyze stability of this analytic continuation. Intuitively, it is clear that the farther away from $I_i$ we continue $\psi$ the less stable the procedure becomes. Our second result is that the operator of analytic continuation is not stable for any pair of Sobolev spaces: $H^{s_1}(I_i)\to H^{-s_2}(J)$, where $J$ is any open set containing $I_i$. In other words, the procedure is unstable no matter how close to $I_i$ we perform the continuation. This is an {interesting} result, because earlier related results indicated that finding $f$ might be stable \cite{dnck, kcnd}. The paper is organized as follows. 
Since the derivation of our main results strongly depends on the results in \cite{BKT2}, the latter are briefly reviewed in Section~\ref{sec-review}. The analytic continuation of $\psi$ and its instability in the Sobolev spaces are established in Section~\ref{sec-sobol}. {Loosely speaking, this result shows that no matter how many derivatives are required of $\psi$, the continuation is not stable. The availability of asymptotics of singular values and singular functions allows us to accurately estimate the degree of instability of the continuation. In Section~\ref{sec-sobol} we introduce a Hilbert space $\mathcal A$ of functions defined on $I_i$ with the help of an exponentially growing weight. We show how fast this weight must grow in order to ensure that the analytic continuation from $I_i$ to an open set ${\bf J}$ be a continuous map from $\mathcal A\to L^2({\bf J})$. Thus, this rate of growth measures the degree of ill-posedness of the analytic continuation as a function of the target interval ${\bf J}$.} In \cite{BKT2} it is shown that the asymptotic approximations to the exact singular functions $f_n$ are valid uniformly on compact subsets of the interior of $I_i$ as $n\to\infty$. In Section~\ref{sec-L2-conv} we show that these approximations are also valid in the $L^2(I_i)$ sense as well. This is the third result obtained in this paper. We do not consider the other set of singular functions that are defined on $I_e$, since they are not needed for the analytic continuation of $\psi$. The main idea of the approach in \cite{BKT2} is to reduce the SVD problem \eqref{svd-def} to a matrix Riemann-Hilbert problem (RHP), which, in turn, is asymptotically reduced to a simpler RHP. That simpler (model) RHP has an explicit solution, which can be expressed in terms of the Riemann Theta function. A brief review of the reduction to the model RHP and certain related results from \cite{BKT2} are contained in Appendix~\ref{sec-ideas}. 
Some technical lemmas related to the approximation of singular functions on $[a_1,a_{2g+2}]\setminus I$ and on $I_i$ that are needed in Sections~\ref{sec-sobol} and \ref{sec-L2-conv} are proven in Appendix~\ref{proofstechn}. \section{Brief review of main results of \cite{BKT2}}\label{sec-review} This section contains a brief review of major results of \cite{BKT2}. For convenience, most of the statements below are provided with direct references (in square brackets) to the corresponding results of \cite{BKT2}. The SVD system \eqref{svd-def} can be represented as \begin{eqnarray}\label{svd-hat} (H^{-1}_e\widehat h)(y) &\&:=\frac{\sqrt{w(y)}}{2\pi i }\int_{I_e} \frac{\widehat h(x)}{\sqrt{w(x)}(x-y)}dx = \la \widehat f(y),\ y\in I_i, \nonumber \\ (H_i\widehat f)(x)&\&:=\frac1{2\pi i} \frac 1{\sqrt{w(x)}} \int_{I_i} \frac{\widehat f(y)\sqrt{w(y)}}{(y-x)}dy= \la \widehat h(x),\ x\in I_e, \label{svd-def2} \end{eqnarray} where $\widehat h = \frac {h}{\sqrt{w}}\in L^2(I_e)$, $\widehat f = \frac { i f}{\sqrt{w}}\in L^2(I_i)$, and the operators $H^{-1}_e$, $H_i$ act on the corresponding unweighted $L^2$ spaces. It can be checked directly that the triple $(\lambda,\widehat f,\widehat h)$ satisfies the system \eqref{svd-def2} if and only if $\lambda,\psi$ is the eigenvalue/eigenvector of the integral operator $(\hat K \phi)(z) =\int_I K(z,x)\phi(x)dx$ from $L^2(I)$ to $L^2(I)$, where \begin{equation}\label{svd-K} K(z,x)= \frac{w^{\frac 1 2}(x) w^{-\frac 1 2}(z) \chi_e(z)\chi_i(x) + w^{\frac 1 2}(z) w^{-\frac 1 2}(x) \chi_i(z)\chi_e(x)}{2i\pi (x-z)},~~ \psi =\widehat f(z) \chi_i(z)+\widehat h(z)\chi_e(z). \end{equation} (Here and henceforth $ \chi_i(z), \chi_e(z)$ denote the characteristic (indicator) functions of the sets $I_i, I_e$, respectively.) Thus, the SVD problem for the system \eqref{svd-def2} is reduced to the spectral problem for the integral operator $\widehat K:L^2(I) \to L^2(I)$. 
It follows directly from \eqref{svd-K} that \begin{equation} \widehat K\big|_{L^2(I_i)} = H_i\ ,\ \ \ \ \widehat K\big|_{L^2(I_e)} = H_e^{-1}.\label{straight} \end{equation} \begin{theorem}\label{theo-whK}[Thm.3.1 and Cor.3.8] $\widehat K$ is a self-adjoint and a Hilbert--Schmidt operator. Moreover, all the eigenvalues of $\widehat K$ are simple. \end{theorem} According to Theorem \ref{theo-whK}, the eigenvalues of $\widehat K$ are real with the only possible point of accumulation $\lambda=0$. Since the singular values of \eqref{svd-def2} are positive (note the symmetry $(\lambda,\widehat f,\widehat h)\mapsto (-\lambda,-\widehat f, \widehat h)$ in \eqref{svd-def2}), we are interested only in the positive eigenvalues $\lambda_n$, $n\in{\mathbb N}$, of $\widehat K$, where we order $\lambda_0>\lambda_1>\dots >0$. Let $\widehat L$ denote the restrictions of $\widehat K^2$ to the interval $I_i$. Then, according to \eqref{svd-K}, $\widehat L = H_e^{-1} H_i:L^2(I_i) \to L^2(I_i)$ is an integral operator with eigenvalues $\lambda_n^2$ and eigenfunctions $\widehat f_n$, $n\in{\mathbb N}$. It is interesting to note (Lemma 3.6 in \cite{BKT2}) that $\widehat L$ is a strictly totally positive operator. Then the simplicity of the eigenvalues $\lambda^2_n$ of $\widehat L$ and, thus, of $\lambda_n$ of $\widehat K$ in Theorem \ref{theo-whK}, follows from properties of strictly totally positive integral operators (see \cite{Pinkus-Rev}). Another consequence of this property of $\widehat L$ is that the singular function $\widehat f_n$ has exactly $n$ sign changes on $I_i$, $n=0,1,2,\dots$. An important object of the spectral theory is the resolvent operator $\widehat R$ of $\widehat K$, defined by \begin{equation}\label{whR} (\Id + \widehat R)(\Id -\frac 1 \lambda \widehat K) = \Id. 
\end{equation} The resolvent operator $\widehat R$ is an integral operator with the kernel of the form \begin{equation} \label{resolvent} R(z,x;\lambda) = \frac{ \vec g^t(x) \Gamma^{-1}(x;\lambda) \Gamma(z;\lambda)\vec f(z)} {2i\pi \lambda (z-x)}, ~{\rm where}~ \vec f(z):= \left[ \begin{matrix} { \frac{i \chi_e(z)}{\sqrt{ w(z)}}} \\ \sqrt{ w(z)} \chi_i(z) \end{matrix}\right], \ \vec g(x):= \left[ \begin{matrix} -i\sqrt{ w(x)} \chi_i(x)\\ \frac{\chi_e(x)} {\sqrt{ w(x)}} \end{matrix} \right], \end{equation} where $\vec g^t$ denotes the transposition of $\vec g$ and the matrix $\G(z;\lambda)$ satisfies the following Riemann-Hilbert Problem (RHP) \ref{RHPGamma}. \begin{problem} \label{RHPGamma} Find a $2\times 2$ matrix-function $\G=\G(z;\la)$, $\lambda\in{\mathbb C}\setminus\{0\}$, which is analytic in $\overline{{\mathbb C}}\setminus I$, where $I=I_i\cup I_e$, admits non-tangential boundary values from the upper/lower half-planes that belong to $L^2_{loc}$ in the interior points of $I$, and satisfies \begin{eqnarray} \label{rhpGam} \G_+(z;\lambda)&=\G_-(z;\lambda) \left[\begin{matrix} 1 & 0 \\ \frac{iw}{\la} & 1 \end{matrix}\right], \ \ z\in I_i;\qquad \G_+(z;\lambda)=\G_-(z;\lambda) \left[\begin{matrix} 1 & -\frac{i}{\la w} \\ 0 & 1 \end{matrix}\right],\ \ z\in I_e, \\ \label{assGam} &\G(z;\lambda)={\bf 1}+O(z^{-1})~~~~{\rm as} ~~z\rightarrow\infty, \\ \label{endpcond-out} &\G(z;\lambda)=\left[\mathcal O(1), \mathcal O((z-a_j)^{-\frac{1}{2}})\right],~~z\rightarrow a_j,~~j=1,2g+2,\\ \label{endpcond-out-inn} &\G(z;\lambda)=\left[\mathcal O(1), \mathcal O(\ln(z-a_j))\right],~~~~z\rightarrow a_j,~~j=2,2g+1,\\ \label{endpcond-inn} &\G(z;\lambda)=\left[\mathcal O(\ln (z-a_j)),\mathcal O(1)\right], ~~~~~ z\rightarrow a_j,~~j=3, \dots, 2g. \end{eqnarray} Here the endpoint behavior of $\Gamma$ is described column-wise. We will frequently omit the dependence on $\lambda$ from notation and write simply $\G(z)$ for convenience. 
\end{problem} The latter fact links the resolvent operator $\widehat R$ for $\widehat K$ with the RHP for the matrix $\G$ from \eqref{resolvent}. \begin{theorem}\label{theo-Gam}[Thm.3.17 and Prop.3.12] The RHP \ref{RHPGamma} has a solution $\G(z;\lambda)$, where $\lambda\in {\mathbb C}\setminus\{0\}$, if and only if $\lambda$ is not an eigenvalue of $\widehat K$. Moreover, for any fixed $\lambda\in {\mathbb C}\setminus\{0\}$ the RHP \ref{RHPGamma} has at most one solution. \end{theorem} The connection of our spectral problem with the RHP \ref{RHPGamma} is remarkable, as the RHP \ref{RHPGamma} is a much more convenient object for rigorous asymptotic analysis (in small $\lambda$) than the spectral problem for $\widehat K$. The eigenfunctions of $\widehat K$ corresponding to a fixed eigenvalue $\lambda_n$ are given by two proportional expressions \begin{equation}\label{round-phi_n} \phi_{n,j}(z)=\frac{\chi_e(z)}{\sqrt{w(z)}} \mathop{\mathrm {res}}\limits_{\lambda=\lambda_n} \Gamma_{j1}(z;\lambda) \frac {1}{\lambda} + {i} \sqrt{w(z)}\chi_i(z)\mathop{\mathrm {res}}\limits_{\lambda=\lambda_n} \Gamma_{j2}(z;\lambda) \frac {1}{\lambda},~~~j=1,2, \end{equation} in terms of the entries of the matrix $\G(z,\lambda)$, where for every $n\in{\mathbb N}$ at least one of $\phi_{n,j}$ is not identically zero on $I$. Once the connection between the spectral problem for $\widehat K$ and the RHP \ref{RHPGamma} is established, we use the nonlinear steepest descent method of Deift and Zhou to construct an explicit leading order approximation of $\G(z,\lambda)$ as $\lambda\rightarrow 0^+$ in terms of the Riemann Theta functions. Of course, this approximation will not be valid at the eigenvalues $\lambda_n$ of $\widehat K$, as, according to Theorem \ref{theo-Gam}, $\G(z,\lambda_n)$ does not exist. However, using the explicit form of the approximate solution, we can find the values $ {\tilde \l}_n$ for which this approximate solution has singularities.
The obtained values $ {\tilde \l}_n$ will be referred to as ``approximate eigenvalues''. It turns out that, indeed, $ {\tilde \l}_n$ approximate the corresponding $\lambda_n$ with the accuracy \begin{equation}\label{acc_sing_val} | \k_n - {\tilde \k}_n|=O( {\tilde \k}_n^{-\frac 12}), \end{equation} where $\k_n=-\ln \lambda_n$ and $ {\tilde \k}_n=-\ln {\tilde \l}_n$ (it will be shown that $ {\tilde \k}_n=O(n)$ as $n\rightarrow\infty$). Let us now consider the asymptotics of singular functions. According to \eqref{svd-K}, the approximation of normalized singular functions can be obtained by replacing rows of the matrix $\Gamma_{jk}(z;\lambda)$, $j,k\in\{1,2\}$, in \eqref{round-phi_n} by the corresponding rows of the approximate solution to the RHP \ref{RHPGamma}. To present the approximation formula for singular functions, we need to introduce some notations and a few notions from the theory of compact Riemann surfaces. They will also be helpful for a geometrical interpretation of $ {\tilde \k}_n$. The {\bf Riemann Theta function} associated with a symmetric matrix $\tau$ with strictly positive definite imaginary part (which guarantees convergence) is the function of the vector argument $\vec z\in{\mathbb C}^g$ given by \begin{equation}\label{Theta} \Theta(\vec z,\tau):= \sum_{\vec n\in {\mathbb Z}^g} \exp\bigg(i\pi \vec n^t \cdot \tau \cdot \vec n +2i\pi \vec n^t \vec z\bigg). \end{equation} Often the dependence on $\tau$ is omitted from the notation. We will consider the matrix $\tau$ given by \begin{equation} \tau = [\tau_{ij}]= \left[ \oint_{B_i}\!\!\!
\omega_j d\zeta \right]_{i,j=1,g}, \label{taumatrix} \end{equation} where \begin{equation} \label{1stkind} \vec \omega^t(z)= \left[ \begin{matrix} \omega_1(z),\dots, \omega_g(z) \end{matrix} \right] = \frac { \left[\begin{array}{c} 1,\dots, z^{g-1} \end{array} \right]}{R(z)}\mathbb A^{-1}, \ \ \ \end{equation} matrix $\mathbb A$ is defined by \eqref{matrixA}, and the loops (cycles) $B_i$, $i=1,\dots,g$ are shown in Figure \ref{homology}. \begin{theorem}[Riemann \cite{FarkasKra}] \label{Riemann1} The matrix $\tau$ is {\bf symmetric} and its imaginary part is strictly positive definite. \end{theorem} Matrix $\tau$ is an important object in the theory of compact Riemann surfaces. Indeed, consider the hyperelliptic Riemann surface $\mathcal R$, defined by the segments $[a_{2k-1},a_{2k}]$, $k=1,2,\dots,g+1$, that form $I$, with canonical $A$ and $B$ cycles shown in Figure \ref{homology}. Then $\vec \omega(z)dz$ is known as the vector of normalized holomorphic differentials on $\mathcal R$ and $\tau$ is called the {normalized matrix of $B$-periods} of $\mathcal R$. Note that $[\mathbb A]_{ji}= \oint_{A_j} \frac {\zeta^{i-1} d \zeta}{R(\zeta)}$, and $\t_{11}$ in \eqref{tau11intro} is the $(1,1)$ entry of the matrix $\tau$. \begin{remark}\label{rem-imag_tau} It follows from \eqref{taumatrix}, \eqref{1stkind} and \eqref{matrixA} that the entries of the matrix $\t$ are purely imaginary. 
\end{remark} \begin{figure} \begin{center} \resizebox{0.9\textwidth}{!}{\input{Homology2.pdf_t}} \end{center} \vspace{-10pt} \caption{ Riemann surface $\mathcal R$ with the choice of $A$ and $B$ cycles.} \vspace{-12pt} \label{homology} \end{figure} \begin{proposition} \label{thetaproperties} For any $\lambda, \mu \in {\mathbb Z}^g$, the Theta function has the following properties: \begin{eqnarray} &\& \Theta(\vec z,\tau) = \Theta(-\vec z,\tau);\\ &\& \Theta(\vec z + \mu + \tau\lambda, \tau) = \exp \bigg( -2i\pi \lambda^t\vec z - i\pi \lambda^t \tau \lambda \bigg) \Theta(\vec z,\tau).\label{thetaperiods} \end{eqnarray} \end{proposition} According to \eqref{Theta} and Proposition \ref{thetaproperties}, the Theta function is an even function of $g$ complex variables, periodic on the lattice ${\mathbb Z}^g$ and quasi-periodic on the lattice $\t{\mathbb Z}^g$. A hypersurface $(\Th)\subset{\mathbb C}^g$, defined by $\Theta(\vec z,\tau)=0$, is called a theta divisor. This is a hypersurface of complex codimension one or real codimension two. According to Proposition \ref{thetaproperties}, the theta divisor $(\Th)$ is periodic in ${\mathbb Z}^g$ and $\t{\mathbb Z}^g$. Let \begin{equation} \label{wukappa} W= W(\k)=\frac {\k}{i\pi} \tau_1 + 2\mathfrak u(\infty) + \frac {{\bf e}_1}{2},\ \ W_0 = \frac {\tau_1}2 - \frac { {\bf e}_1 + {\bf e}_g}2, \end{equation} where $\t_1$ is the first column of matrix $\t$, \begin{equation} \label{Abelmap} \mathfrak u(z) = \int_{a_1}^z \vec \omega(\zeta)d\zeta ,\ \ \ \ z\in {\mathbb C} \setminus [a_1,\infty), \end{equation} is known as the Abel map on $\mathcal R$, and ${\bf e}_k$ denotes the $k$th vector of the standard basis in ${\mathbb C}^g$. Then $ {\tilde \k}_n=-\ln {\tilde \l}_n$ are defined by the condition \begin{equation}\label{kappa_n_cond} \Theta\left ( W(\k) - W_0\right )=0. \end{equation} Geometrically, this condition determines the points of intersection of the line $ W(\k)-W_0\subset {\mathbb C}^g$ with the theta divisor. 
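The two properties in Proposition \ref{thetaproperties}, together with the plain ${\mathbb Z}^g$-periodicity of $\Theta$, are easy to check numerically by truncating the lattice sum in \eqref{Theta}. The sketch below is only an illustration: the matrix `tau` is an arbitrary symmetric example with positive definite imaginary part (not the period matrix of a concrete hyperelliptic surface), and the truncation radius `N` is chosen ad hoc.

```python
import numpy as np

# Hypothetical numerical check of the Theta function via a truncated lattice sum.
# tau is an arbitrary symmetric matrix with positive definite imaginary part --
# an assumption for illustration, not a period matrix of a concrete surface.
def theta(z, tau, N=10):
    g = len(z)
    grids = np.meshgrid(*[np.arange(-N, N + 1)] * g, indexing="ij")
    n = np.stack([grid.ravel() for grid in grids])      # lattice points, shape (g, (2N+1)^g)
    quad = np.einsum("ik,ij,jk->k", n, tau, n)          # n^t tau n for each lattice point
    lin = n.T @ np.asarray(z)                           # n^t z
    return np.exp(1j * np.pi * quad + 2j * np.pi * lin).sum()

tau = np.array([[2.0j, 0.5j], [0.5j, 2.0j]])            # symmetric, Im(tau) > 0 (g = 2)
z = np.array([0.13 + 0.07j, -0.21 + 0.02j])

print(abs(theta(z, tau) - theta(-z, tau)))              # evenness: essentially 0

lam = np.array([1, 0])                                  # quasi-periodicity along tau * lambda
lhs = theta(z + lam @ tau, tau)
rhs = np.exp(-2j * np.pi * lam @ z - 1j * np.pi * lam @ tau @ lam) * theta(z, tau)
print(abs(lhs - rhs))                                   # small, up to truncation and roundoff
```

With `N=10` and the strongly positive imaginary part chosen here, the neglected lattice terms are far below machine precision, so both printed residuals are tiny.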
Let us consider this question in a little more detail. Direct calculations show that all the terms of $W(\k)$ in \eqref{wukappa} are real, provided that $\k\in{\mathbb R}$. Thus, the line $\{W(\k):\k\in{\mathbb R}\}$ lies in ${\mathbb R}^g\subset{\mathbb R}^{2g}$, if we identify ${\mathbb C}^g$ with ${\mathbb R}^{2g}$. So, the line $ W(\k)-W_0$, $\k\in{\mathbb R}$, is a subset of the shifted hyperplane $\Pi=W_0+{\mathbb R}^g$. Let $(\Th)_R:=(\Th)\cap\Pi$. \begin{lemma}\label{lem-Theta_R}[Lem.7.5] Each connected component of $(\Th)_R$ is a smooth $g-1$ (real) dimensional hypersurface in $\Pi$. \end{lemma} Moreover, since $(\Th)_R$ is ${\mathbb Z}^g$-periodic on $\Pi$, it is sufficient to study $(\Th)_R$ in a $g$ (real) dimensional torus $\mathbb T_g$. Numerically simulated surfaces $(\Th)_R\cap\mathbb T_g$ for $g=2,3$, and their intersections with the line $ W(\k)-W_0$, are shown in Figure \ref{ThetaDivisor}. In the case $g=2$ we proved that the line $ W(\k)-W_0$ has one and only one intersection with $(\Th)_R$ in $\mathbb T_2$. It is likely (but not proven yet) that this statement holds for a general $g\in{\mathbb N}$. However, the following lemma is sufficient to obtain the asymptotics \eqref{in1} for $\lambda_n$ with any $g\in\{2,3,\dots\}$. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{ThetaDivisorg2.pdf} \includegraphics[width=0.5\textwidth]{ThetaDivisorg3.pdf} \caption{Intersection of the line $ W(\k)-W_0$ (blue or lighter line) with the theta divisor $(\Th)_R$ in $\mathbb T_g$, where $g=2$ (left panel) and $g=3$ (right panel). On the left panel ($g=2$) $(\Th)_R$ is represented by a curve; on the right panel ($g=3$) $(\Th)_R$ is represented by a surface. In both cases the point of intersection of $ W(\k)-W_0$ with $(\Th)_R$ determines some $\k= {\tilde \k}_n$. } \label{ThetaDivisor} \end{figure} \begin{lemma}\label{prop-ass} [Prop.7.11] Let $\k_0\in{\mathbb R}^+$ and $g\in\{2,3,\dots\}$.
For any $N\in{\mathbb N}$ the number $m(N)$ of intersections of the segment of the line $W(\k)-W_0$, where $\k \in \left [\k_0 , \k_0 + \frac{N(g-1) i\pi}{\tau_{11}}\right)$, with $(\Theta)_{\mathbb R}$ is bounded by \begin{equation}\label{bound_n} (N-1)(g-1)\leq m(N) \leq (N+1)(g-1). \end{equation} \end{lemma} Let us now denote \begin{equation}\label{some_not} \begin{split} &\mathbf f_n:= W( {\tilde \k}_n)-W_0,~~~{\mathfrak g}(z) = \frac 12 - 2 \int_{a_1}^z\!\!\! \omega_1d z ~~~{\rm and} \\ d(z)=&\frac{R(z)}{2\pi i}\left(-\sum_{j=1}^{g+1}\int_{a_{2j-1}}^{a_{2j}}\frac{\ln w(\zeta)d\zeta}{(\zeta-z)R_+(\zeta)}+ \sum_{j=1}^{g}\int_{a_{2j}}^{a_{2j+1}} \frac {i\d_{\mu(j)} d\zeta}{(\zeta-z)R_+(\zeta)}\right), \end{split} \end{equation} where: $\mu(g)=0$ and $\mu(j)=j$ for all $j\neq g$; the vector $\vec\d=[\d_1,\dots,\d_{g-1},\d_0]^t$ is given by $\vec \d = 2\pi L^{-1} \left(2\mathfrak u(\infty) - \mathfrak u(a_{2g+2}) \right)$ and \begin{equation} L= \left[ \begin{array}{ccccc} 1 & 0 & \dots&0 & -1\\ 0 & 1& 0\dots &0& -1\\ & &\ddots &\\ &&\dots &1&-1\\ 0&0&\dots &0&1 \end{array} \right]. \end{equation} \begin{proposition}\label{propositiongg}[Prop.4.2] {\bf (1)} ${\mathfrak g}(z)$ satisfies the jump conditions \begin{equation}\label{geqm} {\mathfrak g}_+(z)+{\mathfrak g}_-(z)=-1~~~~{\rm on}~I_i,~~~~~{\mathfrak g}_+(z)+{\mathfrak g}_-(z)=1~~~~{\rm on}~I_e, \end{equation} \begin{equation}\label{geqc} ~{\rm and}~~~~~{\mathfrak g}_+(z)-{\mathfrak g}_-(z)=i\O_{\mu(j)}~~~~{\rm on}~[a_{2j},a_{2j+1}],~~j=1,\cdots,g, \end{equation} where $\O_0= \frac 4 i \sum_{k=1}^g\int_{a_{2k-1}}^{a_{2k}}\!\!\!\!\! \omega_1dz\in {\mathbb R}$ and $\Omega_j = \frac 4 i \sum_{k=1}^j\int_{a_{2k-1}}^{a_{2k}} \!\!\!\!\! \omega_1dz\in {\mathbb R}$. 
{\bf (2)} The function $d(z)$ given by \eqref{some_not} is analytic on $\bar {\mathbb C} \setminus [a_1,a_{2g+2}]$ (in particular, analytic at infinity) and satisfies the jump conditions \begin{equation} d_+ +d_- =- \ln w~~{\rm on~} I,~~~\qquad d_+ -d_- =i\d_{\mu(j)}~~{\rm on~} c_j:=[a_{2j},a_{2j+1}],~~j=1,\cdots,g. \label{propertyDelta} \end{equation} \end{proposition} Let \begin{equation} r(z):= \sqrt[4]{\frac {\prod_{j\in J} (z-a_j)}{\prod_{\ell\in J'} (z-a_\ell)}}, \ \ \ z\in {\mathbb C} \setminus [a_1,a_{2g+2}], \label{spinorh} \end{equation} where $J = \{ 1,5, 7,9,11, \dots, 2g-1\}$ and $J' = \{1,2,3,\dots,2g+2\}\setminus J$ (so that $|J| = g-1$ and $|J'| = g+3$). The function $r(z)$ is defined so that it is analytic in ${\mathbb C}\setminus [a_1,a_{2g+2}]$ and at infinity behaves like $\frac 1 z$. Let \begin{equation} \label{Ups} \begin{split} \Upsilon^{(j)}(z;\mathbf f_n ) = (-1)^j&\sqrt{ \frac {\Theta(W_0\!+\!(-1)^j2\mathfrak u(\infty) )}{\Theta(\mathbf f_n\!+\! (-1)^j2\mathfrak u(\infty))} \frac {[\mathbb A^{-1} \nabla \Theta(W_0)]_g }{i\vec \tau_1\!\!\cdot \!\! \nabla \Theta(\mathbf f_n) }} \\ &\times \frac{\Theta\left(\mathfrak u_+(z)\! +\!\!(-1)^j\!\mathfrak u(\infty) + \mathbf f_n \right) r_+(z)} { \Theta\left(\mathfrak u_+(z) \!+\!(-1)^j \mathfrak u(\infty)+W_0\right)},~j=1,2, \end{split} \end{equation} where $z\in I$. It follows from Corollary 7.20 of \cite{BKT2} that for every $n\in{\mathbb N}$ we have $\Upsilon^{(1)}(z;\mathbf f_n)\equiv \pm\Upsilon^{(2)}(z;\mathbf f_n)$, where the choice of the sign depends on the particular $n$. It turns out that this sign is not essential, since the normalized singular functions $\widehat f_n(z)$ and $\widehat h_n(z)$, approximated through $\Upsilon^{(j)}(z;\mathbf f_n )$ (see below), are determined only up to a sign. Thus, we introduce $\Upsilon(z;\mathbf f_n)$ that, for a given $n\in{\mathbb N}$, coincides with both $\Upsilon^{(j)}(z;\mathbf f_n)$, $~j=1,2,$ modulo a factor $(-1)$.
Now the asymptotics of the singular functions are described by the following theorem. \begin{theorem} \label{cor-first}[Thm.7.22] The singular functions $\widehat f_n(z)$ and $\widehat h_n(z)$ of the system in \eqref{svd-def2} {\em normalized} in $L^2(I_i)$ and $L^2(I_e)$, respectively, are asymptotically given by \begin{equation} \label{843} \begin{split} \widehat f_n(z) = {i} \Im \left[ 2\Upsilon(z;\mathbf f_n){\rm e}^{-i {\tilde \k}_n \Im ({\mathfrak g}_+(z)) -i\Im (d_+(z)) } \right] + \mathcal O({ {\tilde \k}}_n^{-1}), ~~~z\in I_i,\\ \widehat h_n(z) = \Re \left[ 2\Upsilon(z;\mathbf f_n) {\rm e}^{-i {\tilde \k}_n \Im ({\mathfrak g}_+(z)) -i\Im (d_+(z)) } \right] + \mathcal O({ {\tilde \k}}_n^{-1}),~~~z\in I_e, \end{split} \end{equation} where the approximation is uniform in any compact subset of the interior of $I_i, I_e$, respectively. \end{theorem} \begin{corollary} \label{cor-sing-func}[Cor.7.24] The {singular functions} $f_n(z)$ and $h_n(z)$ of the system \eqref{svd-def} {\em normalized} in $L^2(I_i,\frac 1{w(z)})$ and $L^2(I_e,\frac 1{w(z)})$, respectively, are asymptotically given by \begin{eqnarray} f_n(z) = \sqrt{w(z)} \Im \left[ 2\Upsilon(z;\mathbf f_n){\rm e}^{-i {\tilde \k}_n \Im ({\mathfrak g}_+(z)) -i\Im (d_+(z)) } \right] + \mathcal O({ {\tilde \k}}_n^{-1}), ~~~z\in I_i,\cr h_n(z) = \sqrt{w(z)}\Re \left[ 2\Upsilon(z;\mathbf f_n) {\rm e}^{-i {\tilde \k}_n \Im ({\mathfrak g}_+(z)) -i\Im (d_+(z)) } \right] + \mathcal O({ {\tilde \k}}_n^{-1}), ~~~z\in I_e,\label{844} \end{eqnarray} where the approximation is uniform in any compact subset of the interior of $I_i, I_e$, respectively. \end{corollary} \section{Instability of the interior problem in Sobolev spaces}\label{sec-sobol} \subsection{Continuation of $f$ from $I_i$}\label{sec-cont} The function $\psi(y)$ in \eqref{int-eq} is analytic in ${\mathbb C}\setminus I_e$ and is known on $I_i$.
If we can find the analytic continuation of $\psi(y)$ on $(a_2, a_{2g+1})$, then, according to \eqref{psi}, we can solve the problem of reconstructing $f$ on $(a_2, a_{2g+1})$. The idea of such a reconstruction is straightforward. The eigenfunctions $\phi_n=\frac 1 {\sqrt{2}}(\widehat f_n\chi_i + \widehat h_n\chi_e)$ of the self-adjoint Hilbert-Schmidt integral operator $\widehat K: L^2(I)\to L^2(I)$ form an orthonormal basis in $L^2(I)$. Thus, $\widehat f_n, \widehat h_n$ form orthonormal bases in $L^2(I_i)$, $L^2(I_e)$ respectively, so that $ f_n, h_n$ form orthonormal bases in the corresponding weighted spaces $L^2(I_i,1/w)$, $L^2(I_e,1/w)$. Note that the former coincides with $L^2(I_i)$. Given $\psi\in L^2(I_i,1/w)$ and $\varphi\in L^2(I_e,1/w)$ we have \begin{equation}\label{psi_phi_exp} \psi=\sum \psi_n f_n ~~{\rm on}~~ I_i~~{\rm and}~~~ \varphi=\sum \varphi_n h_n~~{\rm on}~~ I_e, \end{equation} where $\sum \psi_n^2<\infty,~\sum \varphi_n^2<\infty$. According to \eqref{svd-def2}, ${\mathcal H}_e^{-1} h_n =2\lambda_n f_n$, so that ${\mathcal H}_e^{-1} \varphi =\psi$ and \eqref{psi_phi_exp} imply $\psi_n=2\lambda_n\varphi_n$. In view of the asymptotics \eqref{in1} of $\lambda_n$, we conclude that the coefficients $\psi_n$ decay exponentially fast, so we have a very fast convergence of the series \eqref{psi_phi_exp} for $\psi$. Note that, according to \eqref{svd-def}, the singular functions $f_n$ are analytic in ${\mathbb C}\setminus I_e$. Thus the question of analytic continuation of $\psi$ to $(a_2, a_{2g+1})$ through the series \eqref{psi_phi_exp} is reduced to the question of convergence of $\psi=\sum \psi_n f_n$ in $(a_2, a_{2g+1})\setminus I_i$. Let $\mathcal I_\o$, $\o>0$, denote the set of all $z\in (a_2, a_{2g+1}) \setminus I_i$ that are at least $\o$ away from the nearest branchpoint $a_j$, $j=2,3,\dots,2g+1$. Below, we consider only those $\o$ for which $a_j+\o<a_{j+1}-\o$ for all $j=2,\dots,2g$.
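The exponential smallness of the coefficients $\psi_n=2\lambda_n\varphi_n$ can be observed in a simple numerical experiment. The sketch below is a toy of our own devising (the intervals, the mesh, and the Riemann-sum discretization are illustrative choices, not the paper's operator): it discretizes a Cauchy-type kernel $1/(y-x)$ between two disjoint intervals and checks that the singular values of the resulting matrix decay exponentially, mirroring $\lambda_n=e^{-\k_n}$ with $\k_n$ growing roughly linearly in $n$.

```python
import numpy as np

# Toy model (our own illustrative setup, not the operator of the paper):
# a Cauchy-type kernel 1/(y - x) mapping functions on [0, 1] to functions
# on [2, 3]; the intervals are disjoint, so the kernel is smooth and no
# principal value is needed.
m = 120
x = np.linspace(0.0, 1.0, m)        # nodes on the "interior" interval
y = np.linspace(2.0, 3.0, m)        # nodes on the "exterior" interval
h = x[1] - x[0]                     # mesh width for a simple Riemann sum

K = h / (y[:, None] - x[None, :])   # discretized integral operator
s = np.linalg.svd(K, compute_uv=False)

kappa = -np.log(s / s[0])           # analogue of kappa_n = -ln(lambda_n), normalized
print([round(k, 2) for k in kappa[:6]])
```

The printed values grow roughly linearly in $n$, i.e. the discrete singular values collapse at a geometric rate, which is the mechanism behind the very fast convergence of the series \eqref{psi_phi_exp}.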
\begin{lemma}\label{lem-f_n_in_gaps} There exists a constant $C_\o>0$ such that for all $n\in{\mathbb N}$ and for all $z\in \mathcal I_\o$ \begin{equation}\label{est-f_n-gaps} |f_n(z)|\leq C_\o e^{\k_n(\Re{\mathfrak g}(z)+\frac{1}{2})}. \end{equation} \end{lemma} Lemma \ref{lem-f_n_in_gaps} follows from Lemma \ref{lem-est-gaps}, \eqref{acc_sing_val} and \eqref{svd-def2}. \begin{lemma}\label{lem-Re_g} $|\Re {\mathfrak g}(z)|< \frac 12$ for any $z\in{\mathbb C}\setminus I$, with $\Re {\mathfrak g}(z)\equiv \frac{1}{2}$ on $I_e$ and $\Re {\mathfrak g}(z)\equiv-\frac{1}{2}$ on $I_i$. \end{lemma} \begin{proof} Consider ${\mathfrak g}(z)$ on the main sheet of the Riemann surface $\mathcal R$ with branchcuts on $I$. Note that ${\mathfrak g}(z)$ is Schwarz symmetric and satisfies the jump conditions ${\mathfrak g}_+ + {\mathfrak g}_- \equiv 1 $ on $I_e$ and ${\mathfrak g}_+ + {\mathfrak g}_- \equiv -1 $ on $I_i$, see Proposition \ref{propositiongg}. Thus, $\Re {\mathfrak g}(z)\equiv \frac{1}{2}$ on $I_e$ and $\Re {\mathfrak g}(z)\equiv-\frac{1}{2}$ on $I_i$. The remaining statement follows from the maximum principle for harmonic functions. \end{proof} \begin{theorem}\label{thm33} For a given $\o>0$, the series $\psi(z)=\sum \psi_n f_n(z)$ converges absolutely and uniformly on $\mathcal I_\o$. \end{theorem} \begin{proof} Recall that $\la_n=\exp(-\k_n)$. As a consequence of Lemma \ref{lem-f_n_in_gaps}, we have \begin{equation}\label{unif-conv} \sum \left | \psi_n f_n(z)\right| \leq 2C_\o\varphi_* \sum e^{\k_n(\Re{\mathfrak g}(z)-\frac{1}{2})}, \end{equation} where $\varphi_*=\max_n\{|\varphi_n|\}<\infty$. In light of \eqref{in1} and Lemma \ref{lem-Re_g}, the series on the right-hand side of \eqref{unif-conv} converges absolutely and uniformly on $\mathcal I_\o$. \end{proof} \begin{corollary}\label{analcont-series} The series $\psi(z)=\sum \psi_n f_n(z)$ provides analytic continuation of $\psi$ onto $(a_2,a_{2g+1})$.
\end{corollary} Indeed, by choosing a sufficiently small $\o$, one can analytically continue $\psi(z)$ to any point in $(a_2,a_{2g+1})\setminus I_i$ through this series. \subsection{Instability of analytic continuation in Sobolev norms} In the previous subsection we obtained a formula for analytic continuation of $\psi(y)$ from $I_i$ to all of $(a_2,a_{2g+1})$. Next we prove that analytic continuation of $\psi$ from $I_i$ is unstable for any pair of Sobolev spaces: $H^{s_1}(I_i)\to H^{-s_2}({\bf J})$, where ${\bf J}$ is any open set containing $I_i$. Clearly, it makes sense to consider $s_1,s_2>0$. For simplicity we will assume that $s_1$ and $s_2$ are integers, so (see Chapter 1 in \cite{egs}): \begin{equation}\label{norm1} \lVert f \rVert_{H^{s_1}(I_i)}^2:=\sum_{j=0}^{s_1} \int_{I_i} |f^{(j)}(y)|^2dy, \end{equation} and \begin{equation}\label{norm2} \lVert f\rVert_{H^{-s_2}({\bf J})}:=\inf_{\tilde f \in H^{-s_2}(\mathbb R), f=\tilde f|_{\bf J}}\sup_{\phi\in\coi({\mathbb R})} \frac{\left| \int_{\mathbb R} \tilde f(y)\overline{\phi(y)}dy\right|}{\lVert \phi\rVert_{H^{s_2}({\mathbb R})}}. \end{equation} Let $\gamma$ be a collection of simple loops in the complex plane so that $I_i$ is contained in the union of the interiors of the loops. We take $\g$ to be sufficiently close to $I_i$. Using the Cauchy integral formula and the analyticity of $f_n$, one can show that \begin{equation}\label{est1} \lVert f_n\rVert_{H^{s_1}(I_i)} \leq c(s_1,\ga) \max_{z\in\gamma}|f_n(z)| \end{equation} for some $c(s_1,\ga)>0$. Analogously to Lemma \ref{lem-f_n_in_gaps}, it follows from Lemma \ref{lem-est-gaps} that \begin{equation}\label{est2} \max_{z\in\gamma}|f_n(z)|\leq c_\ga \exp(\k_n(\max_{z\in\ga} \Re {\mathfrak g}(z)+\frac12)) \end{equation} for some $c_\ga>0$. By taking $\ga$ sufficiently close to $I_i$, we can make $\max_{z\in\ga} \Re {\mathfrak g}(z)+\frac12$ as close to zero as we want.
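The blow-up mechanism used below is elementary exponent bookkeeping, which a toy computation makes explicit (all the numbers here are invented for illustration): since $\Re{\mathfrak g}$ is closer to $-\frac12$ on a contour $\ga$ hugging $I_i$ than on a test interval placed away from $I_i$, the same $\k_n$ produces two bounds whose ratio grows without bound.

```python
import math

# Invented illustrative numbers: off I_i one has |Re g(z)| < 1/2, with Re g
# close to -1/2 on a contour gamma hugging I_i and larger on an interval
# J_n placed away from I_i.  These are stand-in values, not computed ones.
re_g_gamma = -0.45      # stand-in for max of Re g on gamma
re_g_far = 0.10         # stand-in for min of Re g on J_n
c = 1.3                 # stand-in slope in kappa_n ~ c * n

ratios = []
for n in (5, 10, 20, 40):
    kappa_n = c * n
    upper = math.exp(kappa_n * (re_g_gamma + 0.5))  # type of upper bound, cf. (est2)
    lower = math.exp(kappa_n * (re_g_far + 0.5))    # type of lower bound, cf. (est4)
    ratios.append(lower / upper)

print(ratios)           # grows like exp(kappa_n * (re_g_far - re_g_gamma))
```

The ratio grows geometrically in $n$, which is exactly why no continuous bound between the two Sobolev norms can exist.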
\begin{lemma}\label{lem-seq-intervals} One can find a sequence of intervals $J_n\subset {\bf J}$ with the following properties: \begin{enumerate} \item The length of each $J_n$ is greater than a fixed positive constant independent of $n$; \item The distance of each $J_n$ to $I_i$ is greater than a fixed positive constant independent of $n$; and \item There exists $N>0$ large enough such that \begin{equation}\label{est4} |f_n(y)|\ge c \exp(\k_n(\Re {\mathfrak g}(y)+\frac12)),\ n\ge N,\ y\in J_n, \end{equation} for some $c>0$ independent of $n$. \end{enumerate} \end{lemma} Lemma~\ref{lem-seq-intervals} is proven in Appendix~\ref{proofstechn}. By property 1 in Lemma~\ref{lem-seq-intervals} we can find $L>0$ such that the length of each interval $J_n$ is greater than or equal to $L$. Then we select a real-valued function $\phi\in C_0^\infty([-L/2,L/2])$, $\phi\ge0$, $\phi\not\equiv0$. By shifting $\phi$ appropriately, we get a collection of functions $\phi_n\in\coi(J_n)$, all of which have the same $H^{s_2}({\mathbb R})$-norm. Using the facts that: (i) $f_n$ and $\tilde f_n$ coincide on ${\bf J}$ (cf. \eqref{norm2}); (ii) $f_n$'s are real-valued on ${\bf J}$; and (iii) $f_n$'s do not change sign on $J_n$ for $n$ large (cf. \eqref{est4}), equation \eqref{norm2} immediately yields \begin{equation}\label{est3} \lVert f_n \rVert_{H^{-s_2}({\bf J})}\ge c_\phi \min_{y\in J_n} |f_n(y)|,\ n\ge N, \end{equation} for some $c_\phi>0$. From the second property in Lemma~\ref{lem-seq-intervals}, by choosing $\ga$ sufficiently close to $I_i$ so that all $J_n$ are in the exterior of $\gamma$ and $\text{dist}(\ga,\cup_n J_n)>0$, we get $\inf_{y\in \cup J_n} \Re {\mathfrak g}(y) > \max_{z\in\ga} \Re {\mathfrak g}(z)$. Hence, \begin{equation}\label{final} \frac{ \exp(\k_n(\min_{y\in J_n} \Re {\mathfrak g}(y)+\frac12))}{ \exp(\k_n(\max_{z\in\ga} \Re {\mathfrak g}(z)+\frac12))}\to\infty,\ n\to\infty.
\end{equation} Hence it follows from \eqref{est2} and \eqref{est4} that the quantity $\lVert f_n \rVert_{H^{-s_2}({\bf J})}$ cannot be bounded in terms of $\lVert f_n \rVert_{H^{s_1}(I_i)}$. Since the Sobolev norm $\lVert f \rVert_{H^s}$ is a monotonically increasing function of $s$ (provided that $f$ belongs to the appropriate spaces), our argument proves the following result. \begin{theorem}\label{instability} Fix an open set ${\bf J}\supset I_i$. The operation of analytic continuation from $I_i$ to ${\bf J}$ described in Corollary \ref{analcont-series} cannot be extended to a continuous operator $H^{s_1}(I_i)\to H^{-s_2}({\bf J})$ for any $s_1,s_2$. \end{theorem} Theorem~\ref{instability} shows that analytic continuation is more unstable than calculation of any number of derivatives. An interesting question is to estimate the degree of ill-posedness of analytic continuation. This can be done, for example, by finding a Hilbert space $\mathcal A$ on which the operator of analytic continuation is bounded. It is clear that the space $\mathcal A$ should contain at least all functions in the range of ${\mathcal H}^{-1}_e:\ L^2(I_e,1/w)\to L^2(I_i,1/w)$. If $\psi\in \mathcal A$, but $\psi$ is not in the range of ${\mathcal H}^{-1}_e$, then the analytic continuation of $\psi$ is understood via the summation of the series in Corollary~\ref{analcont-series}. Let $w_n$ be a sequence of positive numbers. Introduce the following space: \begin{equation} \mathcal A:=\{ \psi\in {L^2}(I_i):\, \sum_{n\ge0} w_n^2 |\psi_n|^2<\infty \}, \end{equation} where \begin{equation} \psi_n:=\langle \psi,f_n \rangle:=\int_{I_i} \psi(y) f_n(y) \frac1{w(y)}dy. \end{equation} It is obvious that $\mathcal A$ is a Hilbert space with the inner product defined by the formula \begin{equation} \langle \psi^{(1)},\psi^{(2)} \rangle:=\sum_{n\ge 0} w_n^2 \psi^{(1)}_n \overline{\psi^{(2)}_n}. 
\end{equation} \begin{theorem}\label{stability} Fix an open set ${\bf J}$, whose closure is a subset of $(a_2,a_{2g+1})$. Suppose that each connected component of ${\bf J}$ contains at least one of the intervals that make up $I_i$. Suppose the sequence of $w_n$'s is such that the limit below exists and satisfies \begin{equation} 0<\lim_{n \to \infty} \left\{\frac{w_n}n \exp(-\k_n(\sup_{z\in {\bf J}} \Re {\mathfrak g}(z)+\frac12))\right\} <\infty. \end{equation} Then: (1) ${\mathcal H}^{-1}_e(L^2(I_e,1/w))\subset \mathcal A$; and (2) the operator of analytic continuation acting between the spaces $\mathcal A \to L^2({\bf J})$ is continuous. \end{theorem} \begin{proof} Similarly to the proof of Theorem~\ref{thm33}, it is easily seen that assertion (1) holds. Now we prove assertion (2). First we show that \begin{equation}\label{est-comb} \max_{z\in {\bf J}}|f_n(z)|\leq c_ {\bf J} \exp(\k_n(\sup_{z\in {\bf J}} \Re {\mathfrak g}(z)+\frac12)) \end{equation} for some $c_ {\bf J}>0$. Denote $G:=\sup_{z\in {\bf J}} \Re {\mathfrak g}(z)$. Let $\ga$ be a collection of simple contours in ${\mathbb C}$ containing the components of ${\bf J}\cap I_i$ in their interior. By making $\ga$ as close to these components as we need and using Lemma~\ref{lem-Re_g}, we can find $\ga$ such that $\sup_{z\in \ga} \Re {\mathfrak g}(z)<G$. Now \eqref{est-comb} follows immediately by using inequalities \eqref{est-f_n-gaps} and \eqref{est2} combined with the maximum modulus principle. Finally, to prove (2) we fix any $N>0$. Then \begin{equation}\label{eq316} \begin{split} \int_{J} \left| \sum_{n=0}^N \psi_n f_n(z) \right|^2 dz \leq |J| \left (\sum_{n=0}^N |\psi_n| \sup_{z\in J} |f_n(z)| \right)^2\leq c \left (\sum_{n=0}^N |\psi_n| \frac{w_n}{n+1} \right)^2\leq c \sum_{n=0}^N (|\psi_n| w_n)^2 \sum_{n=0}^N \frac{1}{(n+1)^2}, \end{split} \end{equation} where $c>0$ is some constant. By taking the limit $N\to\infty$ the desired assertion follows immediately.
\end{proof} \begin{remark} Using the fact that the singular functions $f_n$ are analytic on ${\bf J}$ and the coefficients $\psi_n$ go to zero sufficiently fast, similarly to the proof of Theorem~\ref{thm33} and \eqref{eq316} it is easy to see that each $\psi\in\mathcal A$ defined on ${\bf J}$ via the series in Corollary~\ref{analcont-series} is a uniform limit of analytic functions. Hence the continuation of $\psi$ from $I_i$ to ${\bf J}$ via the series and via the conventional analytic continuation coincide. \end{remark} \section{Approximation in $L^2(I_i)$ }\label{sec-L2-conv} According to Theorem \ref{cor-first}, the normalized singular functions $\widehat f_n$ are approximated by \begin{equation} \tf_n:={i} \Im \left[ 2\Upsilon(z;\mathbf f_n){\rm e}^{-i {\tilde \k}_n \Im ({\mathfrak g}_+(z)) -i\Im (d_+(z)) } \right] \label{tfn} \end{equation} with accuracy $O(n^{-1})$ in the sup-norm (uniformly) on any compact subset of the interior of $I_i$. In this section we discuss this approximation in $L^2(I_i)$. We will use $\|f\|$ to denote the $L^2$ norm of $f\in L^2(I_i)$. \begin{lemma}\label{lem-prepar} Let $\o_0$ be so small that each interval $(a_k-\o_0,a_k+\o_0)$ contains no endpoints except $a_k$. Then there exists some $\eta>0$ such that \begin{equation}\label{est-unif} \forall k\in\{3,\dots,2g\},\,\forall n\in{\mathbb N},\, \forall \o\in(0,\o_0):~~~ |\tf_n(z)|\leq \frac{\eta}{|z-a_k|^\frac 14} ~~{\rm on}~~(a_k-\o,a_k+\o). \end{equation} \end{lemma} Lemma \ref{lem-prepar} follows from Lemma \ref{lemmab1} and Corollary \ref{cor-Ups12}. \begin{lemma}\label{lem-norm-tf} The norms of $\widetilde f_n(z)$ in \eqref{tfn} satisfy the asymptotic expansion \begin{equation}\label{norm-tf-st} \, \|\tf_n\|=1 + O(n^{-1})\ ,\ \ n\to \infty. \end{equation} \end{lemma} The proof of this lemma can be found in Appendix \ref{proofstechn}. Let $\o>0$ and define $I_i^\o=I_i\setminus \bigcup_{k=3}^{2g}(a_k-\o,a_k+\o)$.
If $f\in L^2(I_i)$, then $\|f\|^2= \|f\|_b^2+\|f\|^2_t$, where $\|f\|_b$ denotes the norm of $f$ in $L^2(I_i^\o)$ (in the bulk) and $\|f\|_t$ denotes the norm of $f$ in $L^2(I_i\setminus I_i^\o)$ (in the tails). According to Theorem \ref{cor-first}, for any $\o\in(0,\o_0)$ there exists some $P_\o>0$, such that \begin{equation}\label{dif-norm} \|\widehat f_n-\tf_n\|_b\leq \frac{P_\o}{n}. \end{equation} \begin{theorem} $\tf_n$ approximate $\widehat f_n$ in $L^2(I_i)$, that is, $\forall \epsilon>0 \, \exists n_0\in{\mathbb N}$ such that $ \forall n>n_0:\,\|\widehat f_n-\tf_n\|<\epsilon$. \end{theorem} \begin{proof} According to \eqref{est-unif}, $\|\tf_n\|_t\leq 2\sqrt{g-1}\eta\o^\frac 14$ for all $n\in{\mathbb N}$. As implied by Lemma \ref{lem-norm-tf}, there exists some $Q_\o>0$ such that $\|\tf_n\|\geq 1- \frac{Q_\o}{n}$. Since $1- \frac{Q_\o}{n}\leq\|\tf_n\|\leq \|\tf_n\|_b+\|\tf_n\|_t$, we obtain $\|\tf_n\|_b\geq 1-2\sqrt{g-1}\eta\o^\frac 14- \frac{Q_\o}{n}$. Then, according to \eqref{dif-norm}, \begin{equation} \begin{split} & \|\widehat f_n\|_b\geq \|\tf_n\|_b-\|\widehat f_n-\tf_n\|_b\geq 1-2\sqrt{g-1}\eta\o^\frac 14-\frac{P_\o}{n}- \frac{Q_\o}{n}, ~~~{\rm so ~that} \cr & \|\widehat f_n\|_t^2=1- \|\widehat f_n\|_b^2 \leq 2\left(2\sqrt{g-1}\eta\o^\frac 14+\frac{P_\o+Q_\o}{n}\right). \end{split} \end{equation} Thus, \begin{equation}\label{first-est} \|\widehat f_n-\tf_n\|\leq \|\widehat f_n-\tf_n\|_b +\|\widehat f_n\|_t+ \|\tf_n\|_t\leq 2\sqrt{g-1}\eta\o^\frac 14+\frac{P_\o+Q_\o}{n}+\sqrt{2\left(2\sqrt{g-1}\eta\o^\frac 14+ \frac{P_\o+Q_\o}{n}\right)}. \end{equation} It is clear that, for small $\epsilon$, the condition $2\sqrt{g-1}\eta\o^\frac 14+\frac{P_\o+Q_\o}{n}<\frac{\epsilon^2}{4}$ implies $\|\widehat f_n-\tf_n\|\leq \epsilon$. Choose $\o^\frac 14=\frac{\epsilon^2}{16\sqrt{g-1}\eta}$. Then the former inequality holds for all $n>\frac{8(P_\o+Q_\o)}{\epsilon^2}$. This completes the proof. \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
346
Aug. 14, 2018 12:05 p.m. Back when hair metal was all the rage, Witchazel drove the highways of Western Canada as one of the hardest working club bands around. For a half-dozen years in the late 1980s and early '90s, the quartet pumped out a head-banging mix of originals and covers at venues from Vancouver Island to Ontario, sometimes six nights a week. Sherman Friesen, a North Delta resident who sang and played guitar in the band (as "Sherman Von Riesen"), remembers the good times before the Grunge explosion got in the way of Witchazel making it out of the nightclub scene and onto a major record label. Those halcyon days of hard rock are echoed on "The Lost Hazel Tapes," a new CD of music that features songs recorded by Witchazel at bars in Calgary, Edmonton and Coquitlam back in the day. With Friesen as frontman, a 2018 version of the band has been formed to play once again, with a pair of concerts planned at Donegal's Rock & Irish House in Surrey (on Thursday, Aug. 30) and Edmonton. The gig in Surrey serves as a "Cancer Killer Rock Party" inspired, in part, by former band member Doug Draper's five-year battle with the disease. The new CD, which is dedicated to Draper, was assembled from DAT and reel-to-reel recordings discovered in the basement of a house in Edmonton, the city where Friesen grew up and formed Witchazel in 1985. The CD was produced with the help of Ray Harvey, the Kick Axe guitar player, who also worked with Witchazel the first go-around. Harvey, a member of the note-perfect Eagles Eyes tribute band these days, operates a studio in Lake Cowichan now, and Friesen and the others went there multiple times over the last several months to finish the project. • READ ALSO: Key harmonies make Eagle Eyes tribute band soar. This year, Friesen worked to assemble a modern edition of Witchazel with bass player David Desjarlais, who played with the original band for a time, and old pals Ian Mulcaster (guitar) and Rick Smook (drums). 
At Donegal's (12054 96th Ave.) on Aug. 30, they'll rock "Bad is Better" "Cinderalla Girl" and other old favourites — and some newer ones, too — at the $15-a-ticket cancer-fundraiser concert, along with Ghettovators and guest musicians Harvey, Layla Vaugeois and others. The event, presented by Freakrock Entertainment, serves as a local launch for the CD. • READ ALSO: 'Freakrock' jammers find groove among the faithful at Surrey pub, from 2017. More than anything, Witchazel was reformed this year to play Tushfest, an annual concert that features reunited rock bands, at Edmonton's Century Casino on Sept. 8. The event raises money for a charity program at a cancer institute in the Alberta capital. Last year's concert raised more than $10,000. As for Witchazel's new CD, 50 per cent of profits from sales of the initial 500 limited-edition discs will be donated to help in the fight against cancer. One song on the CD is "Now Look at This," originally written in 1991. "We took it and turned it inside out, and now it sounds like Tom Petty and Brad Paisley having tea over at Kid Rock's house — or maybe a beer," Friesen said with a laugh. Some former members of Witchazel will be in the crowd when the band plays Donegal's, but not Draper, who lives in Calgary and has promised to attend Tushfest in Edmonton. Singer/guitarist Sherman Friesen with Witchazel in the late 1980s. Tickets for the "Cancer Killer Rock Party" at Donegal's bar in Surrey on Aug. 30.
Q: Using tkinter and pygame together Is there any way to use both pygame and tkinter? I am creating a game and want to use tkinter to create buttons and menus, as I don't want to use a physical keyboard or joystick. How should I do it? How can I use both together?
\section{Introduction} A $t$-structure on a triangulated category generalizes the idea of truncating the homology of a chain complex above a specified degree. They were introduced by Beilinson, Bernstein and Deligne in \cite{BBD}. This paper constructs an invariant of $t$-structures on $D(R)$, the derived category of a Noetherian ring $R$. When restricting attention to the quasi-coherent complexes $D_{\operatorname{qc}}(R)$, we show that this invariant is complete. We do this by classifying slightly weaker structures called nullity classes. The Postnikov section functor $P_n$, which kills all homotopy groups $\pi_i(X)=[S^i,X]$ above dimension $n$, provides a notion of truncation in topological spaces. Originating in the work of Bousfield \cite{Bousfield1} and Dror-Farjoun \cite{DF}, for any space $E$ there is a more general truncation functor $P_E$ which kills all maps from $E$ and its suspensions, so that $[\Sigma^iE, P_E(X)]=0$. It can be defined to be the universal functor with this property. The class $\overline{C(E)}$ of objects $X$ such that $P_E(X)=0$ is a nullity class. More generally, a nullity class is a full subcategory closed under arbitrary coproducts, (positive) suspensions and extensions. When working in $D(R)$, in the language of Keller and Vossieck \cite{KV}, a nullity class is a cocomplete preaisle. They showed, more generally in any triangulated category $\bold T$, that $t$-structures correspond to the preaisles ${\mathcal A}\subset {\bold T}$ allowing a right adjoint to the inclusion functor. They call such preaisles aisles. Motivated by this and ideas of Bousfield, Alonso-Tarrio, Jeremias-Lopez and Souto-Salorio \cite{AJS} constructed the functor $P_E$ in the category $D(R)$. They also showed that, in our notation, $\overline{C(E)}$ is an aisle. This gives a very general way to construct $t$-structures. We will see that when restricted to quasi-coherent complexes this gives all the nullity classes and thus all the $t$-structures.
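To fix ideas, here is a standard example. Take $E=R$, viewed as a complex concentrated in degree $0$. Then $\overline{C(R)}$ is the class of complexes $X$ with $H_i(X)=0$ for $i<0$, the aisle of the standard $t$-structure on $D(R)$, and $P_R$ is the usual truncation: $$ H_i(P_RX)= \begin{cases} H_i(X) & i<0 \\ 0 & i\geq 0. \end{cases} $$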
Our approach has its roots in topology and thick subcategories. If our nullity class is also closed under desuspension then it is a localizing subcategory, an infinite version of a thick subcategory. The thick subcategories of $p$-local finite spectra were classified by Hopkins and Smith \cite{HS} in terms of an invariant called type. Bousfield \cite{Bousfield2} used their classification to classify nullity classes of $p$-torsion finite suspension spaces. Bousfield's classification is in terms of two things: type, which tells us which thick subcategory the class generates stably, and connectivity, which tells us where the class starts. Using ideas from the classification for spectra, Hopkins \cite{Hopkins} and Neeman \cite{Neeman} classified retraction-closed thick subcategories of the perfect complexes $D_{\operatorname{parf}}(R)$ for Noetherian rings $R$ by subsets of $\operatorname{Spec} R$ closed under specialization. The invariant is given by taking the support of the object. Neeman \cite{Neeman} proved the analogous result in $D(R)$, where localizing subcategories are classified by all subsets of $\operatorname{Spec} R$. Starting with a nullity class $\mathcal A$ we associate a function (see \ref{p def}): $$ \phi({\mathcal A})\colon \mathbb{Z}\rightarrow \{ {\rm Subsets\ of\ } \operatorname{Spec}(R) {\rm \ closed \ under \ specialization }\}, $$ whose value at $n$, $\phi({\mathcal A})(n)$, can be thought of in the following way. Truncate $\mathcal A$ above $n$ with the standard truncation to get $\tau^{\geq -n} {\mathcal A}$. Next take the thick subcategory generated by $\tau^{\geq -n} {\mathcal A}$ and apply the correspondence of Hopkins-Neeman; in other words, take supports. This subset of $\operatorname{Spec}(R)$ is the value $\phi({\mathcal A})(n)$. Since we cannot desuspend, as in Bousfield's result we have to prescribe at what level the $t$-structure starts, and there is some choice of when different primes can start.
If $p\in \phi({\mathcal A})(n)$, it means that the prime $p$ has already been included at level $n$. So we see that $\phi({\mathcal A})$ must be increasing. In this way, applying Bousfield's philosophy to the Hopkins-Neeman result of Theorem \ref{class}, we get: \begin{thmA} $\phi$ is an order preserving bijection between nullity classes in $D_{\operatorname{qc}}(R)$ and increasing functions from $\mathbb{Z}$ to subsets of $\operatorname{Spec} R$ closed under specialization. \end{thmA} Since all aisles are nullity classes (Theorem \ref{aislenull}), this theorem also implies that $\phi$ is a complete invariant of $t$-structures in $D_{\operatorname{qc}}(R)$. Although in $D(R)$ all the $\overline{C(E)}$ are aisles, their restriction to $D_{\operatorname{qc}}(R)$, $\overline{C(E)}\cap D_{\operatorname{qc}}(R)$, may not be an aisle. The problem is that for $M\in D_{\operatorname{qc}}(R)$, $P_E(M)$ may no longer be in $D_{\operatorname{qc}}(R)$. However, we get some control over the image; in fact, the primes must be added in a very particular way for the restriction to have a chance of being an aisle. We get Theorem \ref{comono}: \begin{thmB} Suppose $\mathcal A$ is an aisle. Then if $p'\in \phi({\mathcal A})(n)$ and $p$ is maximal under $p'$, then $p\in \phi({\mathcal A})(n+1)$. \end{thmB} We conjecture the converse of the theorem: all nullity classes satisfying the condition are aisles. Proving the conjecture would complete the classification of $t$-structures in $D_{\operatorname{qc}}(R)$. It is worth remarking at this point that the conjecture would be false if we were to work with perfect complexes (see Example \ref{ex-qcright}), so $D_{\operatorname{qc}}(R)$ seems to be the right place to work when taking this point of view. In the last section of the paper we use examples of Shelah \cite{Shelah} to show that there is not a set but rather a proper class of $t$-structures in $D(\mathbb{Z})$.
This not only shows a strong contrast to what happens with localizing subcategories in $D(R)$ but also shows that classifying the $t$-structures in $D(R)$ is probably not feasible. These examples can be transported to the topological setting to show there exists no set of nullity classes in spectra or topological spaces and no set of $t$-structures in the triangulated category of spectra. Next we give a short description of the contents of each section; more details and comments can be found at the start of some sections. Section \ref{back} gives some background from ring theory and derived categories and also introduces aisles, nullity classes and the nullification functor $P_E$. Section \ref{props} proves some properties of nullity classes and $P_E$ that are well known in the topological setting. Section \ref{inv} defines the invariant $\phi$ and another, $N$, which is the inverse of $\phi$. In Section \ref{short} we give a short proof that $\phi$ is a complete invariant when restricted to aisles in $D_{\operatorname{qc}}(R)$. The heart of the paper is Section \ref{sec-null}, where the classification of the nullity classes in $D_{\operatorname{qc}}(R)$ is completed. In Section \ref{image} we get restrictions on the image of $\phi$ when restricted to aisles. Section \ref{propclass} constructs the examples in $D(\mathbb{Z})$ that show that aisles and nullity classes do not form a set. \section{Background and notation}\label{back} Throughout this paper we let $R$ be a Noetherian ring. \subsection{Ring theory and associated primes} \ \vskip 0.3cm \par Recall that $\operatorname{Spec} R$ is the set of primes of $R$. If $U\subset \operatorname{Spec} R$, then $\overline{U}$ denotes the closure of $U$ under specialization. That is, $$ \overline{U}=\{ p\in \operatorname{Spec} R | \exists q \in U, q\subset p\} $$ We will repeatedly use the concept of associated prime. Most things we need to know about them can be found in Eisenbud's book \cite{Eisenbud}.
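For example, in $\operatorname{Spec} \mathbb{Z}$ one has $\overline{\{(0)\}}=\operatorname{Spec} \mathbb{Z}$, since every prime contains $(0)$, while $\overline{\{(p)\}}=\{(p)\}$ for a prime number $p$, since $(p)$ is maximal.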
\begin{defin} For an $R$-module $M$, $\operatorname{Ass} M$ denotes the set of associated primes of $M$. So $p\in \operatorname{Ass} M$, if $p$ is the annihilator of some element of $M$. $$ \operatorname{Ass} M=\{ p| \exists x\in M, \operatorname{ann} x=p\}, $$ where for $x\in M$, $\operatorname{ann} x$ denoted the annihilator of $x$. \end{defin} \begin{lemma}\label{WA} Let $A$, $B$ and $C$ be $R$-modules. If we have an exact sequence $$ 0 \rightarrow A \rightarrow B \rightarrow C \rightarrow 0 $$ then $\operatorname{Ass} A \subset \operatorname{Ass} B \subset \operatorname{Ass} A \cup \operatorname{Ass} C$. \end{lemma} {\sl Proof. }\par \cite[Lemma 3.6b]{Eisenbud}. \cqfd \begin{lemma}\label{WAW} Let $B$ and $C$ be $R$-modules. If we have an exact sequence $$ B \stackrel{f}{\rightarrow} C \rightarrow 0 $$ then $\operatorname{Ass} C\subset \overline{\operatorname{Ass} B}$. \end{lemma} {\sl Proof. }\par Let $p\in \operatorname{Ass} C$, and $x\in C$ such that $\operatorname{ann} x=p$. Let $y\in f^{-1}(x)\subset B$, then clearly $\operatorname{ann} y\subset \operatorname{ann} x=p$. Let $p'\subset p$ be any prime minimal among primes containing $\operatorname{ann} y$. Then since the submodule generated by $y$, $\langle y \rangle$ is finitely generated, $p\in \overline{p'}$ and $p'\in \operatorname{Ass} \langle y \rangle$ by \cite[Lemma 3.1 a]{Eisenbud} and so $p'\in \operatorname{Ass} B$ by Lemma \ref{WA}. Thus $\operatorname{Ass} C\subset \overline{\operatorname{Ass} B}$. \cqfd \begin{lemma}\label{A} Supposing that $M$ is a finitely generated $R$-module, $p\in \overline{\operatorname{Ass} M}$ if and only if $M\otimes R_p\not=0$. \end{lemma} {\sl Proof. }\par Assume $p\in \overline{\operatorname{Ass} M}$, then there is a $p'\in \operatorname{Ass} M$ such that $p'\subset p$, so there is an injective map $R/p'\rightarrow M$. So since $\_\otimes R_{p'}$ is exact $M\otimes R_{p'}\not=0$ which implies that $M\otimes R_{p}\not=0$. 
Next assume $M\otimes R_{p}\not=0$. Then there exists $p'\in \operatorname{Ass} (M\otimes R_{p})$ such that $p'\subset p$, and so $p\in \overline{\operatorname{Ass} M}$ since $\operatorname{Ass} (M\otimes R_{p})=\{ p'\in \operatorname{Ass} M| p'\subset p\}$ by \cite[Theorem 3.1c]{Eisenbud}. \cqfd \begin{lemma}\label{B} Suppose $(R,m)$ is a local ring. If $M$ is a non-trivial finitely generated $R$-module then there is a surjective map $M\rightarrow R/m$. \end{lemma} {\sl Proof. }\par We prove this by quotienting out the submodule generated by all but one element of a minimal generating set, and then quotienting out by $m$ times the image of the last generator; the resulting cyclic module is non-trivial by Nakayama's lemma, and hence is isomorphic to $R/m$. \cqfd \subsection{Derived categories} \ \vskip 0.3cm \par The category of chain complexes of $R$-modules with differential of degree $-1$ is denoted by $C(R)$ and $D(R)$ denotes the derived category of the ring $R$. This is just $C(R)$ modulo weak equivalences. We make $D(R)$ into a triangulated category in the standard way. For $M\in D(R)$ or $C(R)$, $H_i(M)$ denotes the $i$-th homology of $M$. Given $A\in C(R)$, $s^iA$ is the same complex shifted up by $i$: $s^iA_j=A_{j-i}$ and $ds^ia=(-1)^is^ida$. Since $s^i(\_)$ preserves weak equivalences, $s^i(\_)$ extends to $D(R)$. By convention we write $s^1=s$. Note that $H_i(sM)=H_{i-1}(M)$. If $f\colon A \rightarrow B$ is a map in $D(R)$, we get a distinguished triangle $$ A \rightarrow B \rightarrow C \rightarrow sA. $$ Applying $H_*$ to a triangle we get a long exact sequence $$ H_iA \rightarrow H_iB\rightarrow H_iC\rightarrow H_{i-1}A. $$ If $M$ is an $R$-module we consider it as the object $M$ in $C(R)$ or $D(R)$ with $$ M_i= \begin{cases} M & i=0 \\ 0 & else \end{cases} $$ and trivial differential. \par For a category $\mathcal C$ and $A,B\in {\mathcal C}$, $\operatorname{Hom}_{\mathcal C}(A,B)$ denotes the set of maps from $A$ to $B$. We will omit the subscript $\mathcal C$ if it is clear which category we are working in; usually it will be $D(R)$.
\par There are two important full triangulated subcategories of $D(R)$ that we will use: \par \noindent $\bullet \quad D_{\operatorname{parf}}(R)$ is the subcategory of $D(R)$ consisting of objects represented by chain complexes of finitely generated projective modules. \par \noindent $\bullet \quad D_{\operatorname{qc}}(R)$ is the subcategory of $D(R)$ whose homology groups are finitely generated and bounded above and below. \begin{lemma}\label{rag} If $M,N\in D(R)$, $H_i(M)$ is finitely generated and bounded below, then the natural map $$ \operatorname{Hom}_{D(R)}(M,N)\otimes R_p\rightarrow \operatorname{Hom}_{D(R_p)}(M\otimes R_p,N\otimes R_p) $$ is an isomorphism of $R_p$ modules. \end{lemma} {\sl Proof. }\par Represent $N$ by $B\in C(R)$ that is bounded above. Since $H_i(M)$ is finitely generated and bounded below we can represent $M$ by $A\in C(R)$ that is bounded below, projective in each degree and has finitely many generators in each degree. Let $\EuScript Hom^i_{grR-mod}(A,B)=\prod_n\operatorname{Hom}_{R-mod}(A_{n+i}, B_n)$, be the $R$-module of graded $R$-module maps from $A$ to $B$ lowering degree by $i$. We put a differential on $\EuScript Hom_{grR-mod}(A,B)=\oplus \EuScript Hom^i_{grR-mod}(A,B)$ by setting $df(x)=d(f(x))+(-1)^if(dx)$ Then the natural map $$ \theta\colon \EuScript Hom^i_{grR-mod}(A,B)\otimes R_p \rightarrow \EuScript Hom^i_{grR_p-mod}(A\otimes R_p,B\otimes R_p) $$ is an isomorphism for each $i$ since by \cite[Proposition 2.10]{Eisenbud} it is an isomorphism restricted to each factor of the product. The map $\theta$ also commutes with the differential. Seeing chain maps as $0$-cycles and chain homotopies as $0$-boundaries, we get that $\operatorname{Hom}_{D(R)}(M,N)=H^0(\EuScript Hom_{grR-mod}(A,B))$, and similarly for $\operatorname{Hom}_{D(R_p)}(M\otimes R_p,N\otimes R_p)$. Since $R_p$ is flat by \cite[Proposition 2.5]{Eisenbud} for any $L\in C(R)$, $H_i(L)\otimes R_p\rightarrow H_i(L\otimes R_p)$ is an isomorphism of $R_p$ modules. 
So $$ \operatorname{Hom}_{D(R)}(M,N)\otimes R_p=H^0(\EuScript Hom_{grR-mod}(A,B))\otimes R_p $$ $$ \stackrel{\cong}{\rightarrow} H^0(\EuScript Hom_{grR-mod}(A,B)\otimes R_p) $$ $$ \stackrel{\cong}{\rightarrow} H^0(\EuScript Hom_{grR_p-mod}(A\otimes R_p,B\otimes R_p)) $$ $$ =\operatorname{Hom}_{D(R_p)}(M\otimes R_p,N\otimes R_p), $$ and so the lemma follows. \cqfd \begin{lemma}\label{D} Suppose $M,N \in D_{\operatorname{qc}}(R)$. Suppose $p\in \overline{\operatorname{Ass} H_n(M)}$ and $p\not\in \overline{\operatorname{Ass} H_i(M)}$ for $i<n$. If also $p\in \operatorname{Ass} H_n(N)$, and $p\not\in \operatorname{Ass} H_i(N)$ for $i>n$, then $\operatorname{Hom}(M,N)\not=0$. \end{lemma} {\sl Proof. }\par Since $M,N \in D_{\operatorname{qc}}(R)$, by Lemma \ref{rag} it is enough to show that $\operatorname{Hom}_{D(R_p)}(M\otimes R_p, N\otimes R_p)\not=0$. Since $p\not\in \overline{\operatorname{Ass} H_i(M)}$ for $i<n$ and $p\in \overline{\operatorname{Ass} H_n(M)}$, Lemma \ref{A} implies that $H_i(M\otimes R_p)=0$ for $i<n$ and $H_n(M\otimes R_p)\not=0$. So Lemma \ref{B} implies that there is a map $f\colon M\otimes R_p \rightarrow s^nR/p$ that induces a surjection on $H_n$. Also, since $p\in \operatorname{Ass} H_n(N)$, \cite[Theorem 3.1 c]{Eisenbud} implies that $p\in \operatorname{Ass} H_n(N\otimes R_p)$ and thus there is an injection $g'\colon R/p \rightarrow H_n(N\otimes R_p)$. Since by Lemma \ref{A} $H_i(N)\otimes R_p=0$ for $i>n$, there is a map $g\colon s^nR/p \rightarrow N\otimes R_p$ which induces $g'$ in $H_n$. The composition $gf\colon M\otimes R_p \rightarrow N\otimes R_p$ is nontrivial on $H_n$ and hence nontrivial; thus $\operatorname{Hom}_{D(R_p)}(M\otimes R_p, N\otimes R_p)\not=0$ and we are done. \cqfd Observe that the lemma does need the finiteness condition, as we see since $\operatorname{Hom}(\mathbb{Q}, \mathbb{Z}/p)=0$ in $D(\mathbb{Z})$. In particular in this case the part of the proof that relies on Lemma \ref{B} does not hold.
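For a positive illustration of Lemma \ref{D}: in $D(\mathbb{Z})$ take $M=\mathbb{Z}$, $N=\mathbb{Z}/p$ and $n=0$. Then $(p)\in \overline{\operatorname{Ass} H_0(M)}=\overline{\{(0)\}}=\operatorname{Spec} \mathbb{Z}$ and $(p)\in \operatorname{Ass} H_0(N)$, while the remaining hypotheses hold trivially; and indeed $\operatorname{Hom}(M,N)=\mathbb{Z}/p\not=0$.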
A slight variant of the last lemma is: \begin{lemma}\label{map} If $M\in D_{\operatorname{qc}}(R)$ is such that $H^n(M)\otimes R_{p}\not=0$ and $H^i(M)\otimes R_{p}=0$ if $i<n$, then there is a map $\phi\colon M \rightarrow s^nR/p$ such that $H^n(\phi)\not=0$. \end{lemma} {\sl Proof. }\par There is a map $M\otimes R_p\rightarrow s^nH^n(M)\otimes R_p$ that is an isomorphism on $H^n$. Using this map the lemma follows directly from Lemmas \ref{rag} and \ref{B}. \cqfd \subsection{Aisles and classes generated by a module} \ \vskip 0.3cm \par Let $\bf T$ be a triangulated category. We give the following definitions due to Keller and Vossieck \cite{KV}. \begin{defin}\label{tdef} A full subcategory $\mathcal U$ of $\bf T$ is a {\em pre-aisle} if: \begin{enumerate}\renewcommand{\labelenumi}{\arabic{enumi})} \item for every $X\in {\mathcal U}$, $sX\in {\mathcal U}$ \item for every distinguished triangle $X \rightarrow Y\rightarrow Z \rightarrow sX$, if $X,Z\in {\mathcal U}$ then $Y\in {\mathcal U}$. \end{enumerate} A pre-aisle $\mathcal U$ is called {\em cocomplete} if $\mathcal U$ is closed under coproducts. A pre-aisle $\mathcal U$ such that the inclusion ${\mathcal U}\subset {\bf T}$ admits a right adjoint is called an {\em aisle}. \end{defin} Keller and Vossieck \cite{KV} proved that a $t$-structure corresponds to an aisle. For a definition of $t$-structure, see \cite{AJS} or \cite{BBD}. For a subcategory ${\mathcal U}\subset {\bf T}$ we let $$ {\mathcal U}^{\perp}=\{ x\in {\bf T}|\operatorname{Hom}(y,x)=0 \ \forall y\in {\mathcal U} \} $$ \begin{thm}\cite{KV} A pre-aisle $\mathcal U$ is an aisle, that is the inclusion ${\mathcal U}\subset {\bf T}$ admits a right adjoint, if and only if $({\mathcal U},s{\mathcal U}^{\perp})$ is a $t$-structure.
\end{thm} We will mainly consider aisles for the rest of the paper, since they are equivalent to $t$-structures. Cocomplete pre-aisles in the triangulated category of spectra, and similar subcategories of spaces, have also been referred to as nullity classes and Bousfield classes. We will also use the term nullity class to refer to the intersection of a cocomplete pre-aisle with a full subcategory. \begin{defin}\label{BC} Let ${\mathcal D}\subset D(R)$ be a full triangulated subcategory. A {\em nullity class} in $\mathcal D$ is a full subcategory of the form ${\mathcal A} \cap {\mathcal D}$ where $\mathcal A$ is a cocomplete preaisle in $D(R)$. We let $NC$ denote the set of nullity classes when $\mathcal D=D_{\operatorname{qc}}(R)$. We order $NC$ by inclusion. \end{defin} Notice that the objects in $D(R)$ with finitely generated homology form a pre-aisle but not a nullity class, so not all pre-aisles are nullity classes; however, we do have the following: \begin{thm}\label{aislenull} Suppose ${\mathcal D}\subset D(R)$ is a full triangulated subcategory. Any aisle in ${\mathcal D}$ is a nullity class and any nullity class is a pre-aisle. \end{thm} {\sl Proof. }\par Since ${\mathcal D}$ is a triangulated subcategory it is clear that any nullity class is a pre-aisle. \par Suppose ${\mathcal U}\subset {\mathcal D}$ is an aisle. By \cite[Proposition 1.1]{AJS}, $x\in {\mathcal U}$ if and only if for every $y\in {\mathcal U}^{\perp}$, $\operatorname{Hom}(x,y)=0$. This condition is closed under taking coproducts and extensions in the first variable. Since, again by \cite[Proposition 1.1]{AJS}, ${\mathcal U}^{\perp}\subset s{\mathcal U}^{\perp}$, the condition is also closed under suspension. Thus any object in $D(R)$, and hence in the full subcategory $\mathcal D$, that can be constructed using these operations also satisfies the condition. The fact that an aisle is a nullity class follows.
\cqfd In the proof of the last theorem, the operations can take us out of $\mathcal D$, but that doesn't matter since we intersect back with $\mathcal D$, and $\mathcal D$ is full. In \cite{AJS} Alonso Tarrio, Jeremias Lopez and Souto Salorio show that for any Grothendieck category $\mathcal A$ and any $E\in D({\mathcal A})$, there is an associated aisle. A special case of \cite[Proposition 3.2]{AJS} is the following: \begin{thm}\label{AJS}\cite{AJS} Let $R$ be a commutative ring and $E\in D(R)$. If $\mathcal U$ is the smallest nullity class of $D(R)$ that contains $E$, then $\mathcal U$ is an aisle in $D(R)$. \end{thm} \vskip 0.5cm \noindent {\bf Notation:} Following the topologists, we denote the nullity class $\mathcal U$ of the theorem associated to $E$ by $\overline{C(E)}$. We also denote the associated truncation functor $\tau_E^{\geq 1}$ by $P_E$ and $\tau_E^{\leq 0}(X)$ by $X\langle E \rangle$. More generally, for any aisle $\mathcal A$ we will denote the associated truncation functor $\tau_{\mathcal A}^{\geq 1}$ by $P_{\mathcal A}$ and $\tau_{\mathcal A}^{\leq 0}(X)$ by $X\langle {\mathcal A} \rangle$. \par So for any $X\in D(R)$ we have distinguished triangles \begin{myeq}\label{long} X\langle E \rangle \rightarrow X \rightarrow P_E X \rightarrow sX\langle E \rangle. \end{myeq} and $$ X\langle {\mathcal A} \rangle \rightarrow X \rightarrow P_{\mathcal A} X \rightarrow sX\langle {\mathcal A} \rangle. $$ The topological notation has the advantage of eliminating the superscripts. The superscripts are also more compatible with chain complexes with differential of degree $+1$, while we are using differentials of degree $-1$. Even though the main reason I chose this notation is so that I wouldn't get confused, for the purposes of this paper it seems to be right. \section{Properties of closed classes and the nullification functor $P_E$}\label{props} \begin{lemma}\label{retract} $\overline{C(E)}$ is closed under retracts. \end{lemma} {\sl Proof. 
}\par This follows from the well-known Eilenberg swindle. If $X=A\oplus B$, we can consider the countable coproduct of $X$ with itself in two different ways: $\oplus_{i\in \omega} X=A\oplus B\oplus A\oplus B \dots$ or $\oplus_{i\in \omega} X=B \oplus A\oplus B \oplus A \dots$. We can include the second into the first, missing the first $A$, to get a distinguished triangle $$ \oplus_{i\in \omega} X \rightarrow \oplus_{i\in \omega} X \rightarrow A \rightarrow s\oplus_{i\in \omega} X $$ So if $X$ is in $\overline{C(E)}$, since $\overline{C(E)}$ is cocomplete, so is $\oplus_{i\in \omega} X$ and then Definition \ref{tdef} 2) implies that $A\in \overline{C(E)}$. \cqfd We can put a partial order on $D(R)$ by letting $E<F \Longleftrightarrow P_E(F)=0$. The following shows that $<$ is indeed a partial order. \begin{prop}\label{trans} $$ E<F \Longleftrightarrow \overline{C(F)}\subset \overline{C(E)} $$ In particular for any full triangulated subcategory ${\mathcal D}\subset D(R)$, the map ${\mathcal D}\rightarrow NC$, $E\mapsto \overline{C(E)}$ is order reversing. \end{prop} {\sl Proof. }\par By definition $E<F$ if and only if $P_EF=0$. Next, looking at the triangle of Equation (\ref{long}), we see that $P_EF=0$ if and only if $F\langle E\rangle=F$, which happens if and only if $\overline{C(F)}\subset\overline{C(E)}$. The second statement follows easily. \cqfd As is standard in the topological setting, we call a map $f$ in $D(R)$ a {\em $P_E$ equivalence} if $P_E(f)$ is an isomorphism in $D(R)$, and an object $A\in D(R)$ {\em $P_E$ local} if $P_E(A)=A$. The following proposition is standard in the topological setting and also holds more generally for any $t$-structure on any triangulated category. \begin{prop}\label{char uni} Working in $D(R)$, \begin{enumerate}\renewcommand{\labelenumi}{\arabic{enumi})} \item $A \rightarrow P_EA$ is a $P_E$ equivalence.
\item If $f\colon A \rightarrow B$ is a $P_E$ equivalence then there exists a unique map $\phi\colon B \rightarrow P_EA$ making the following diagram commute, $$ \xymatrix{ A \ar[r]^f \ar[dr]_{\eta_A} & B \ar[d]^{\phi} \\ & P_EA. } $$ \item $P_EA$ is $P_E$ local. \item Given $f\colon A \rightarrow B$ with $B$ $P_E$ local, there exists a unique map $\phi\colon P_EA\rightarrow B$ making the following diagram commute, $$ \xymatrix{ A \ar[r]^{\eta_A} \ar[dr]_f & P_EA \ar[d]^{\phi} \\ & B. } $$ \item $A$ is $P_E$ local if and only if $\operatorname{Hom}(s^iE,A)=0$ for all $i\geq 0$ if and only if $\operatorname{Hom}(X,A)=0$ for all $X\in \overline{C(E)}$. \item Suppose $E<F$. Then $P_E$ local objects are $P_F$ local and $P_F$ equivalences are $P_E$ equivalences. \end{enumerate} \end{prop} {\sl Proof. }\par 1): By Theorem \ref{AJS}, $\overline{C(E)}$ is an aisle in $D(R)$, so it follows from \cite[Proposition 1.1]{AJS}. \par\noindent 2): Since $P_Ef$ is an isomorphism, we can take $\phi=(P_Ef)^{-1}\circ \eta_B\colon B \rightarrow P_EB\rightarrow P_EA$. Uniqueness follows from the functoriality of $P_E$. \par\noindent 3): Part 1) implies that $P_EA\rightarrow P_EP_EA$ is an equivalence and thus $P_EA$ is $P_E$ local. \par\noindent 4): There is a distinguished triangle $$ H \rightarrow A \rightarrow P_EA \rightarrow sH $$ where $H\in \overline{C(E)}$. Since $B$ is $P_E$ local, $\operatorname{Hom}(H,B)=0$ and thus there exists a dashed extension in the following diagram $$ \xymatrix{ A \ar[r] \ar[dr]_f & P_EA \ar@{-->}[d]^{\phi} \\ & B. } $$ The map $\phi$ is unique since any other extension differs from $\phi$ by an element of $\operatorname{Hom}(sH,B)$, which is $0$ since $sH\in \overline{C(E)}$. \par\noindent 5) is \cite[Lemma 3.1]{AJS}. \par\noindent 6): For any $A$, we know that $\operatorname{Hom}(s^iE, P_EA)=0$ for every $i\geq 0$, so by Part 5), since $F\in \overline{C(E)}$, $\operatorname{Hom}(s^iF,P_EA)=0$ for every $i\geq 0$, and $P_EA$ is $P_F$ local.
So we get a diagram $$ \xymatrix { A \ar[r]^a \ar[d]_b & P_EA \ar[d]^c \\ P_F A \ar[r]_{P_F(a)} & P_FP_EA } $$ in which $c$ is an equivalence. If $A$ is $P_E$ local then $a$ and thus $P_F(a)$ are equivalences. So by two out of three, $b$ is an equivalence which proves the first part. For the second part since $P_EA$ is $P_F$ local, by Part 4) there exists a dashed extension in the following solid arrow diagram $$ \xymatrix { A\ar[r]\ar[d] & P_EA \\ P_FA \ar@{-->}[ur] } $$ Starting with this diagram and taking $P_E$, in one case more than once, we get a diagram $$ \xymatrix { A\ar[d] \ar[r] \ar[d] & P_FA \ar[r] \ar[d] & P_EA \ar[d] \\ P_EA \ar[r]_a & P_EP_FA \ar[r]_b & P_EP_EA \ar[r]_c & P_EP_EP_FA } $$ By Part 1) $b\circ a$ and $c\circ b$ are equivalences, and this implies that $a$ is an equivalence. \par Now let $f\colon A\rightarrow B$ be a $P_F$ equivalence. We get a square $$ \xymatrix { P_EA \ar[r]\ar[d]_{P_Ef} & P_EP_FA \ar[d]^{P_EP_Ff} \\ P_EB \ar[r] & P_EP_FB } $$ We have just seen that the horizontal maps are equivalences; since $f$ is a $P_F$ equivalence $P_Ff$ is an equivalence so $P_EP_Ff$ is an equivalence. So by two out of three $P_Ef$ is an equivalence which is what we needed to prove. \cqfd In $D(R)$ generally direct limits do not exist. Countable homotopy direct limits in any triangulated category were constructed in \cite{BN}, and any homotopy direct limits of objects in $C(R)$ were constructed in \cite{AJS}. Even though direct limits in $C(R)$ are homotopy invariant (this follows since direct limits commute with homology), the direct limits cannot generally be extended to direct limits in $D(R)$; phantom maps provide a first obstruction. For these reasons when we do constructions involving direct limits we will work in $C(R)$. \begin{prop}\label{closure} \begin{enumerate}\renewcommand{\labelenumi}{\arabic{enumi})} \item A direct limit of $P_E$ equivalences is a $P_E$ equivalence. 
In particular, given a direct system $\{ A(\alpha)\}_{\alpha< \lambda}$ of objects in $C(R)$, if $P_E A(1)\rightarrow P_E A(\alpha)$ is a weak equivalence for each $\alpha<\lambda$ then $P_E A(1)\rightarrow P_E(colim_{\alpha< \lambda}A(\alpha))$ is a weak equivalence. \item If $A\in \overline{C(E)}$ and $A\rightarrow B \rightarrow C \rightarrow sA$ is a distinguished triangle then $B\rightarrow C$ is a $P_E$ equivalence. \end{enumerate} \end{prop} {\sl Proof. }\par 2): Since $A\in \overline{C(E)}$, $\operatorname{Hom}(A,P_EB)=0$. Thus there exists a dashed extension $h$ in the following solid arrow diagram $$ \xymatrix { A \ar[d] & \\ B \ar[r] \ar[d]_i & P_EB \ar[d] \\ C \ar[r] \ar@{-->}[ur]^h & P_EC. } $$ The map $h$ makes the upper left triangle of the square commute, and the lower right triangle commutes too since the two ways around differ by an element of $\operatorname{Hom}(sA,P_EC)$, which is $0$ since $A\in \overline{C(E)}$. So we get a commuting diagram $$ \xymatrix { P_E B \ar[r] \ar[d]_{P_Ei} & P_EP_EB \ar[d] \\ P_E C \ar[r] \ar[ur]^{P_Eh} & P_EP_EC } $$
Using Proposition \ref{char uni} 1), $colim_{\alpha<\lambda}P_EA(\alpha) \rightarrow P_EA_{\lambda}$ is an equivalence by the same argument as in the proof of Part 2) above. Hence, being a composition of two equivalences, $P_EA(1) \rightarrow P_E A(\lambda)$ is an equivalence as desired. \par \cqfd The functor $P_E$ can be characterized by its universal properties. \begin{cor}\label{use} If $f\colon A\rightarrow B$ is a $P_E$ equivalence and $B$ is $P_E$ local then there is an isomorphism $\phi\colon B\rightarrow P_EA$ such that $$ \xymatrix{ A \ar[r]^f \ar[dr]_{\eta_A} & B \ar[d]^{\phi} \\ & P_EA } $$ commutes. \end{cor} {\sl Proof. }\par The map $\phi$ comes from Proposition \ref{char uni} 2), and its inverse from \ref{char uni} 4). The are compositions are equal to the identity come from Proposition \ref{char uni} 2) and 4) by the uniqueness part of using universal properties as usual. \cqfd \section{An invariant}\label{inv} We will let $\mathcal S$ denote the set of increasing functions from $\mathbb Z$ to subsets of $\operatorname{Spec} R$ closed under specialization. \par Next we define maps $$ N\colon {\mathcal S} \rightarrow NC $$ and $$ \phi\colon NC \rightarrow {\mathcal S}. $$ \subsection{Definition of $N$} Let $$ {\mathcal S}=\{ f \colon \mathbb Z \rightarrow {\mathcal P}(\operatorname{Spec} R)| f(n)=\overline{f(n)} \ {\rm and} \ f(n)\subset f(n+1)\}, $$ where $\mathcal P$ is the power set. We put an order on ${\mathcal S}$ by inclusion, more precisely $f\leq g$ when for every $n$, $f(n)\subset g(n)$. For $f\in {\mathcal S}$ let $M(f)=\oplus_n \oplus_{p\in f(n)}s^nR/p$ and $N(f)=\overline{C(M(f))} \cap D_{\operatorname{qc}}(R)$ be the associated nullity class. \subsection{Definition of $\phi$}\label{p def} Let ${\mathcal A}\subset D_{\operatorname{qc}}(R)$ be a nullity class. Define $\phi({\mathcal A})\in {\mathcal S}$ by letting $p\in \phi({\mathcal A})(n)$ if there is $M\in {\mathcal A}$ such that $p\in \overline{\operatorname{Ass} H^n(M)}$. 
So $$ \phi({\mathcal A})(n)=\{ p\in \operatorname{Spec} R| \exists M\in {\mathcal A} \ {\rm with} \ p\in \overline{\operatorname{Ass} H^n(M)}\} $$ Note that $\phi({\mathcal A})\in {\mathcal S}$ since if $M\in {\mathcal A}$ and $p\in \overline{\operatorname{Ass} H^n(M)}$ then $sM\in {\mathcal A}$ and $p\in \overline{\operatorname{Ass} H^{n+1}(sM)}$ also each $\phi({\mathcal A})(n)$ is clearly closed under specialization from the way they are defined. Under the correspondence of Hopkins-Neeman \cite{Neeman} the $\phi({\mathcal A})(n)$ correspond to the thick subcategories of $D(R)$ generated by the usual truncations of $\mathcal A$ by dimensions. \par As advertised in the abstract $\phi$ can be considered an invariant of $t$-structures in $D(R)$ by simply intersecting the associated aisle with $D_{\operatorname{qc}}(R)$. \begin{lemma}\label{compatible} $N$ and $\phi$ are order preserving. \end{lemma} {\sl Proof. }\par That $\phi$ is order preserving follows immediately from the definition. For $f,g \in {\mathcal S}$, if $f\leq g$ then $M(f)$ is a retract of $M(g)$ so $M(f)\in \overline{C(M(g))}$ by Lemma \ref{retract}. Therefore $\overline{C(M(f))}\subset\overline{C(M(g))}$ and $N$ is seen to be order preserving. \cqfd \section{Complete invariant}\label{short} In this section we give a short proof that when restricted to aisles in $D_{\operatorname{qc}}(R)$, $\phi$ is injective. This implies that $\phi$ is a complete invariant of such $t$-structures. We begin with a technical lemma. \begin{lemma}\label{E} Let ${\mathcal D}\subset D(R)$ be any full triangulated subcategory. Let $\mathcal A$ be an aisle in $\mathcal D$ and $M\in {\mathcal D}$. If $p\in \operatorname{Ass} H_nP_{\mathcal A}(M)$ then $p\in \overline{\operatorname{Ass} H_n(M)}$ or there exists $N\in {\mathcal A}$, $p\in \operatorname{Ass} H_{n-1}(N)$. \end{lemma} {\sl Proof. 
}\par There is a distinguished triangle $$ M\langle {\mathcal A}\rangle \stackrel{f}{\rightarrow} M \rightarrow P_{\mathcal A}M \rightarrow sM\langle {\mathcal A}\rangle. $$ From this we get a short exact sequence $$ 0 \rightarrow H_n(M)/\mathrm{im} H(f) \rightarrow H_n(P_{\mathcal A}M) \rightarrow \ker H_{n-1}(f) \rightarrow 0. $$ Since $\ker H_{n-1}(f)\subset H_{n-1}M\langle {\mathcal A}\rangle$ and $M\langle {\mathcal A}\rangle\in {\mathcal A}$, the lemma follows directly from Lemmas \ref{WAW} and \ref{WA}. \cqfd \begin{prop}\label{F} Let ${\mathcal D}=D_{\operatorname{qc}}(R)$ or $D_{\operatorname{parf}}(R)$. Suppose $\mathcal A$ is an aisle in $\mathcal D$ and $M\in {\mathcal D}$. Suppose that for every $n$ and every $p\in \operatorname{Ass} H_n(M)$ there exist $l\leq n$ and $N\in {\mathcal A}$ with $p\in \overline{\operatorname{Ass} H_l(N)}$; then $M\in {\mathcal A}$. \end{prop} {\sl Proof. }\par Assume $P_{\mathcal A}M\not=0$. Let $n$ be the largest integer such that $H_n(P_{\mathcal A}M)\not=0$. This $n$ exists since $P_{\mathcal A}M\in {\mathcal D}$. Then there exists a prime $p$ such that $p\in \operatorname{Ass} H_n(P_{\mathcal A}M)$ and $p\not\in \operatorname{Ass} H_i(P_{\mathcal A}M)$ if $i>n$. Thus by Lemma \ref{E}, the hypotheses and the fact that aisles are closed under suspension, there exists $N\in {\mathcal A}$ such that $p\in \overline{\operatorname{Ass} H_n(N)}$ and $p\not\in \overline{\operatorname{Ass} H_l(N)}$ for $l<n$. So Lemma \ref{D} says that $\operatorname{Hom}(N,P_{\mathcal A} M)\not=0$, which contradicts the fact (see \cite[Proposition 1.1]{AJS}) that $P_{\mathcal A}M\in {\mathcal A}^{\perp}$. So $P_{\mathcal A}M=0$ and $M\in {\mathcal A}$. \cqfd It may look like the proof of the last proposition should extend to all nullity classes, or even to any full subcategory ${\mathcal D}\subset D(R)$. Observe, though, that the results the proof calls on require finiteness conditions.
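To illustrate the need for some such restriction, here is a small example in the spirit of the remark following Proposition \ref{LB} below. Work in ${\mathcal D}=D(\mathbb{Z})$ and let ${\mathcal A}=\overline{C(\mathbb{Q})}$, an aisle in $D(\mathbb{Z})$ by Theorem \ref{AJS}. Since $\operatorname{Ass} H_0(\mathbb{Z})=\{(0)\}=\operatorname{Ass} H_0(\mathbb{Q})$, the complex $M=\mathbb{Z}$ satisfies the hypotheses of Proposition \ref{F} (take $l=n=0$ and $N=\mathbb{Q}$). However $$ \operatorname{Hom}(s^i\mathbb{Q},\mathbb{Z})=0 \ {\rm for \ all} \ i\geq 0, $$ so $\mathbb{Z}$ is $P_{\mathbb{Q}}$ local and $P_{\mathcal A}\mathbb{Z}=\mathbb{Z}\not=0$; in particular $\mathbb{Z}\not\in {\mathcal A}$, and the conclusion of Proposition \ref{F} fails for ${\mathcal D}=D(\mathbb{Z})$.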
In Section \ref{propclass} we will show that there is a proper class of $t$-structures in $D(\mathbb{Z})$, so, considering the next theorem, some finiteness or other assumptions are needed. \begin{thm}\label{main} Let ${\mathcal D}=D_{\operatorname{qc}}(R)$ or $D_{\operatorname{parf}}(R)$. Suppose ${\mathcal A}, {\mathcal A}'$ are aisles in $\mathcal D$; then ${\mathcal A}\subset {\mathcal A}'$ if and only if for every $n\in \mathbb Z$, $\phi({\mathcal A})(n)\subset \phi({\mathcal A'})(n)$. Thus $\phi\colon \{{\rm aisles \ in\ } {\mathcal D}\} \rightarrow {\mathcal S}$ is injective. \end{thm} {\sl Proof. }\par If ${\mathcal A}\subset {\mathcal A}'$ then it is clear from the definition that $\phi({\mathcal A})(n)\subset\phi({\mathcal A}')(n)$ for all $n$. Conversely, if $\phi({\mathcal A})(n)\subset\phi({\mathcal A}')(n)$ for all $n$, then it follows directly from Proposition \ref{F} that ${\mathcal A}\subset {\mathcal A}'$. \cqfd \section{Nullity classes in $D_{\operatorname{qc}}(R)$}\label{sec-null} In this section we classify nullity classes in $D_{\operatorname{qc}}(R)$. This also gives us another proof that $\phi$ is a complete invariant of $t$-structures. We use $k(p)$ to denote $(R/p)_{(0)}$. \begin{lemma}\label{Axfin} For any finitely generated $R$-module $B$, $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p< B$. \end{lemma} {\sl Proof. }\par Since $B$ is finitely generated it has a filtration $$ 0=B_0\subset B_1\subset \dots \subset B_n=B $$ such that $B_i/B_{i-1}=\oplus_j R/p(i,j)$ for some primes $p(i,j)$. By Lemmas \ref{WAW} and \ref{WA} each $p(i,j)\in \overline{\operatorname{Ass} B}$. Definition \ref{tdef} 2) then implies that $\oplus_{p\in \overline{\operatorname{Ass} B}} R/p<B$. \cqfd \begin{lemma}\label{Ax} For any $R$-module $B$, $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p< B$. \end{lemma} {\sl Proof. }\par Note that since we will use Proposition \ref{closure} we work in $C(R)$. Let $\{ x_i \}_{i<\lambda}\subset B$ be a generating set.
For $\alpha\leq \lambda$, let $B(\alpha)\subset B$ be the submodule generated by $\{ x_i \}_{i<\alpha}$. Then $B=B(\lambda)$ and we will prove the lemma by transfinite induction. Assume $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p< B(\gamma)$ for all $\gamma<\alpha$. If $\alpha$ is a limit ordinal then $B(\alpha)=colim_{\gamma<\alpha}B(\gamma)$, so $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p< B(\alpha)$ by Proposition \ref{closure}. If $\alpha=\gamma+1$ then consider the exact sequence $$ 0 \rightarrow B(\gamma) \rightarrow B(\alpha) \rightarrow M=B(\alpha)/B(\gamma) \rightarrow 0. $$ The image of $x_{\gamma}$ generates $M$, and so from Lemmas \ref{WAW} and \ref{WA}, $\operatorname{Ass} M\subset \overline{\operatorname{Ass} B(\alpha)} \subset \overline{\operatorname{Ass} B}$. Thus from Lemma \ref{Axfin}, $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p<M$. So by Definition \ref{tdef} 2) and the induction hypothesis, $\oplus_{p\in \overline{\operatorname{Ass} B}}R/p< B(\alpha)$. The lemma now follows by induction. \cqfd \begin{lemma}\label{STA} For every $M\in D(R)$ with homology in only finitely many degrees, $$ \bigoplus_{i\in {\mathbb Z}}\bigoplus_{p\in\overline{\operatorname{Ass} H^i(M)}} s^iR/p< M. $$ \end{lemma} {\sl Proof. }\par By finiteness $M$ has a decomposition $$ 0=M_{r-1}\rightarrow M_{r}\rightarrow M_{r+1} \rightarrow \dots \rightarrow M_s=M $$ such that for every $i$, $M_i\rightarrow M_{i+1} \rightarrow s^{i+1}H^{i+1}(M)\rightarrow sM_i$ is a distinguished triangle. Thus it follows easily from Lemma \ref{Ax} and Definition \ref{tdef} 2) that $\bigoplus_{i\in {\mathbb Z}}\bigoplus_{p\in\overline{\operatorname{Ass} H^i(M)}} s^iR/p< M$.
\cqfd A similar proof of the last lemma works in the category of bounded above complexes, and presumably the lemma can be proved in the full derived category using an idea similar to that in Lemma \ref{Ax}. \begin{lemma}\label{torn} If $E\in D_{\operatorname{qc}}(R)$ then $$ (P_E M) \otimes R_p\cong P_{E\otimes R_p}(M \otimes R_p), $$ where $P_{E\otimes R_p}$ is taken in the derived category of $R_p$-modules. \end{lemma} {\sl Proof. }\par In the construction of $P_EM$ in \cite[Proposition 3.2]{AJS}, there is a cardinal $\gamma$ and a sequence of objects $\{ B_{\alpha}\}_{\alpha< \gamma}$ such that $B_0=M$ and, for every $\alpha< \gamma$, there exists a distinguished triangle $$ \oplus s^kE \rightarrow B_{\alpha} \rightarrow B_{\alpha+1} \rightarrow s\oplus s^kE. $$ If $\alpha$ is a limit ordinal then $B_{\alpha}=colim_{i<\alpha}B_i$, and $P_EM=colim_{\alpha< \gamma}B_{\alpha}$. Since $\_\otimes R_p$ preserves triangles we get a sequence of triangles in $D(R_p)$ $$ \oplus s^kE\otimes R_p \rightarrow B_{\alpha}\otimes R_p \rightarrow B_{\alpha+1}\otimes R_p \rightarrow s(\oplus s^kE\otimes R_p). $$ Since $\_\otimes R_p$ commutes with taking colimits, the natural map $colim (B_{\alpha}\otimes R_p)\rightarrow (colim B_{\alpha})\otimes R_p$ is an isomorphism. These two facts imply that $M\otimes R_p\rightarrow P_EM\otimes R_p$ is a $P_{E\otimes R_p}$ equivalence. Also Lemma \ref{rag} implies that $\operatorname{Hom}_{R_p}(E\otimes R_p, P_EM\otimes R_p)=0$. Thus the lemma follows from Corollary \ref{use}. \cqfd \begin{lemma}\label{1A} $P_AB=0$ implies that for every $M$, $P_{A\otimes M}(B\otimes M)=0$. \end{lemma} {\sl Proof. }\par Recalling the construction of $P_AB$ (see the proof of Lemma \ref{torn}), we have a cardinal $\gamma$ and objects $\{ B_{\alpha} \}_{\alpha< \gamma}$ such that $B_0=B$ and distinguished triangles $$ \oplus s^kA \rightarrow B_{\alpha} \rightarrow B_{\alpha+1} \rightarrow s\oplus s^kA.
$$ and if $\alpha$ is a limit ordinal then $B_{\alpha}=colim_{i<\alpha} B_i$ and $P_AB=B_{\gamma}=0$. Since $\_\otimes M$ preserves triangles and colimits the result follows. \cqfd \begin{lemma}\label{1B} For $M\in D(R)$, if $H_*(M)=0$ for $*<0$, then $P_RM=0$. \end{lemma} {\sl Proof. }\par Straightforward. \cqfd \begin{lemma}\label{good1} If $P_AB=0$ and $H_i(M)=0$ for $i<0$ then $P_A(B\otimes M)=0$. \end{lemma} {\sl Proof. }\par Lemma \ref{1B} says that $R<M$, so Lemma \ref{1A} implies that $B<B\otimes M$. Since $A<B$ by assumption, the transitivity of $<$ (Proposition \ref{trans}) implies that $A<B\otimes M$ and we are done. \cqfd \begin{lemma}\label{1C} If $M\in D_{\operatorname{qc}}(R)$ and $q\in \overline{\operatorname{Ass} H_0(M)}$, then $M<k(q)$. \end{lemma} {\sl Proof. }\par By Lemma \ref{1B}, $R<k(q)$. So by Lemma \ref{1A}, $M<M \otimes k(q)$. By \cite[Lemma 2.17]{BN}, $M \otimes k(q)$ is a direct sum of suspensions of $k(q)$. In degree $0$ this direct sum is non-trivial by Lemma \ref{A}, since $H_0(M)$ is finitely generated and $q\in \overline{\operatorname{Ass} H_0(M)}$. The result follows from Lemma \ref{retract}. \cqfd The last lemma does not always hold for $M\in D(R)$, as we see by taking $R=\mathbb{Z}$, $M=\mathbb{Q}$ and $q=(p)$ for any non-zero prime $p\in \mathbb{Z}$. \begin{lemma}\label{LL} $\operatorname{Ass} (k(q))=\{ q \}$. \end{lemma} {\sl Proof. }\par Let $\frac{x}{u}\in k(q)$ with $x\in R/q$ and $u\in R\setminus q$. Clearly $q\subset \operatorname{ann} \frac{x}{u}$, and if $l\in \operatorname{ann} \frac{x}{u}$ then $vlx=0$ for some $v\in R\setminus q$. Since $R/q$ is an integral domain this implies either $x=0$ or $l\in q$. So $\operatorname{ann} \frac{x}{u}=R$ or $\operatorname{ann} \frac{x}{u}= q$, and we are done. \cqfd \begin{prop}\label{1D} Suppose $\dim(R)$ is finite and $S\in D_{\operatorname{qc}}(R)$. For every $p\in \operatorname{Ass} H_0(S)$ and every $q$ such that $p\subset q$, $S<s^{\dim R/q}R/q$. In particular $S<s^{\dim R}R/q$. \end{prop} {\sl Proof. 
}\par Fix $p\in \operatorname{Ass} H_0(S)$. Looking at a particular $q$, assume the proposition holds for every $q'$ with $q\varsubsetneq q'$. Let $M$ be defined to make the following sequence short exact: $$ 0\rightarrow R/q \rightarrow k(q) \rightarrow M \rightarrow 0. $$ By Lemma \ref{LL}, $\operatorname{Ass} (k(q))=\{ q \}$ and so by Lemma \ref{WAW}, $\operatorname{Ass} M \subset \overline{\operatorname{Ass} (k(q))}=\overline{q}$. Since $R/q\otimes k(q) \rightarrow k(q)\otimes k(q)$ is an isomorphism, $M\otimes k(q)=0$. If $q\in \operatorname{Ass} M$ then there would exist an injection $R/q\rightarrow M$, and this would mean that $M\otimes k(q)\not=0$. So $q\not\in \operatorname{Ass} M$, and $\operatorname{Ass} M\subset \overline{q}-\{ q\}$. Therefore by the induction hypothesis and Lemma \ref{Ax}, $S<s^{\dim R/q-1}M$. By Definition \ref{tdef} 2) and Lemma \ref{1C}, $S<k(q)<s^{\dim R/q-1}k(q)$, so by the short exact sequence above $S<s^{\dim R/q}R/q$. Notice that if $\dim R/q=0$ then $q$ is maximal and $k(q)=R/q$, so $S<k(q)=R/q$. This completes the proof of the first statement of the proposition. The second statement follows since $\dim R/q \leq \dim R/p\leq \dim R$. \cqfd \begin{lemma}\label{small ring} Suppose that $\dim R$ is finite and $M\in D_{\operatorname{qc}}(R)$. If $p'\in \operatorname{Ass} H^n(M)$ and $p'\subset p$ then there exists $N\in D_{\operatorname{qc}}(R)$ such that $p\in \operatorname{Ass} H^n(N)$, $M<N$ and, for every $i$, if $p''\in \operatorname{Ass} H^i(N)$ then $p\subset p''$. \end{lemma} {\sl Proof. }\par Since $M<sM$ we can assume that $n$ is the smallest integer such that there is a $p'\in \operatorname{Ass} H^n(M)$ with $p'\subset p$. Thus $H^i(M\otimes R_p)=0$ for $i<n$. Suppose $p=(x_1, \dots, x_s)$ and let $K=K(x_1, \dots, x_s)=\otimes_i {\rm Cone}(R\stackrel{\cdot x_i}{\rightarrow} R)$ denote the Koszul complex. Let $N=M\otimes K$. By Lemma \ref{good1}, $M<N$.
By \cite[Proposition 17.14]{Eisenbud}, if $y\in p$ then $y$ annihilates $H^*(N)$; hence for every $i$, if $p''\in \operatorname{Ass} H^i(N)$ then $p\subset p''$. Using this and the fact that $H^i(M\otimes R_p)=0$ for $i<n$ we calculate that $p\in \operatorname{Ass} H^n(N)$. This shows that $N$ satisfies the desired conditions. \cqfd \begin{lemma}\label{LM1} Suppose $\dim(R)$ is finite and $M\in D_{\operatorname{qc}}(R)$. If $p\in \operatorname{Ass} H^i(M)$ and $p\subset p'$ then $M< s^tR/p'$ for all $t\geq i$. \end{lemma} {\sl Proof. }\par Fix a prime $p$ and assume that the lemma is true for each $M\in D_{\operatorname{qc}}(R)$ and each prime $p''$ such that $p\varsubsetneq p''$. We wish to show the lemma is true for $p$. So assume that $p\in \operatorname{Ass} H^i(M)$. By Lemma \ref{small ring} and the induction hypothesis, if $p\varsubsetneq p'$ then $M< s^iR/p'$. So we just have to prove that $M< s^iR/p$. \par From Proposition \ref{1D} we know that $M< s^kR/p$ for some $k$. Let $j\leq k$ be the smallest integer such that $M< s^jR/p$. If $j\leq i$ we are done; otherwise all that remains is to show that $M<s^{j-1}R/p$. Since $M<sM$ and $<$ is transitive, using Lemma \ref{small ring} there exists $N$ such that $M<N$, $H^{j-1}(N)\otimes R_p\not=0$, $H^l(N)\otimes R_p=0$ if $l<j-1$, and if $p''\in \operatorname{Ass} H^n(N)$ for some $n$ then $p\subset p''$. By Lemma \ref{map} there is a map $\phi\colon N \rightarrow s^{j-1}R/p$ such that $H^{j-1}(\phi)\not=0$. A simple calculation with the long exact sequence on homology then shows that $\operatorname{Ass} H^t(C(\phi))\subset \overline{\operatorname{Ass} H^t(N)\cup \operatorname{Ass} H^{t-1}(N)}$ and $\operatorname{Ass} H^{j-1}(C(\phi))\subset \overline{p} -\{p\}$. Thus the induction hypothesis, together with the fact that $M<s^tR/p$ for $t\geq j$, says that for each $t$ and each $p''\in \overline{\operatorname{Ass} H^t(C(\phi))}$, $M<s^tR/p''$. Thus by Lemma \ref{STA} we get that $M<C(\phi)$; since also $M<N$ we get that $M<s^{j-1}R/p$ and we are done.
\cqfd Next we remove the hypothesis that $\dim R$ is finite, by reducing to the local case and using the last lemma. \begin{lemma}\label{LM} Suppose $M\in D_{\operatorname{qc}}(R)$. If $p\in \operatorname{Ass} H^i(M)$ and $p\subset p'$ then $M< s^tR/p'$ for all $t\geq i$. \end{lemma} {\sl Proof. }\par Let $q$ be any prime of $R$; then by Lemma \ref{torn} $$ P_M(s^tR/p')\otimes R_q \cong P_{M\otimes R_q}s^t(R/p'\otimes R_q). $$ If $p'\not\subset q$ then $R/p'\otimes R_q=0$, so this localization vanishes trivially. If $p'\subset q$ then $p\subset q$ and $p\otimes R_q\in \operatorname{Ass} H^i(M\otimes R_q)$. Also $p\otimes R_q \subset p'\otimes R_q$ and $(R/p')\otimes R_q=R/(p'\otimes R_q)$. By the Krull Principal Ideal Theorem (\cite[Theorem 10.2]{Eisenbud}), $\dim R_q$ is finite, so by Lemma \ref{LM1}, $P_{M\otimes R_q}(s^tR/p'\otimes R_q)=0$. So we have that for all primes $q$ of $R$, $P_M(s^tR/p')\otimes R_q=0$ and therefore, by \cite[Lemma 2.8]{Eisenbud}, $P_M(s^tR/p')=0$. By definition this is the same as saying that $M<s^tR/p'$. \cqfd \begin{prop}\label{LA} For any nullity class ${\mathcal A}\subset D_{\operatorname{qc}}(R)$ whose objects have homology in only finitely many degrees, ${\mathcal A}\subset N\phi({\mathcal A})$. \end{prop} {\sl Proof. }\par Let $M\in {\mathcal A}$. Then by definition, for every $n\in \mathbb{Z}$ and $p\in \overline{\operatorname{Ass} H^n(M)}$, $p\in \phi({\mathcal A})(n)$. Thus $M(\phi({\mathcal A}))<\oplus_n\oplus_{p\in \overline{\operatorname{Ass} H^n(M)}}s^nR/p$, since the latter is a retract of $M(\phi({\mathcal A}))$. So Lemma \ref{STA} and Proposition \ref{trans} imply that $M(\phi({\mathcal A}))<M$ and therefore $M\in N\phi({\mathcal A})$, so ${\mathcal A}\subset N\phi({\mathcal A})$. \cqfd \begin{prop}\label{LB} For every nullity class ${\mathcal A}\subset D_{\operatorname{qc}}(R)$, $N\phi({\mathcal A})\subset {\mathcal A}$. \end{prop} {\sl Proof. }\par Suppose $p\in \phi({\mathcal A})(n)$; then there exists $M(p,n)\in {\mathcal A}$ with $p\in \overline{\operatorname{Ass} H^n(M(p,n))}$. By Lemma \ref{LM}, $M(p,n)<s^nR/p$.
Therefore $\oplus_n\oplus_{p\in \phi({\mathcal A})(n)} M(p,n) \in {\mathcal A}$ and $$ \oplus_n\oplus_{p\in \phi({\mathcal A})(n)} M(p,n)< \oplus_n\oplus_{p\in \phi({\mathcal A})(n)} s^nR/p=M(\phi({\mathcal A})). $$ So $M(\phi({\mathcal A}))\in \overline{C(\oplus_n\oplus_{p\in \phi({\mathcal A})(n)} M(p,n))}\subset {\mathcal A}$, and $N\phi({\mathcal A})\subset {\mathcal A}$ as desired. \cqfd Notice that the condition ${\mathcal A}\subset D_{\operatorname{qc}}(R)$ is needed, since $(0)\in \operatorname{Ass} \mathbb{Q}$, so $\mathbb{Z}\in N\phi(\overline{C(\mathbb{Q})})$. However $\mathbb{Q}\not<\mathbb{Z}$ and so $\mathbb{Z}\not\in \overline{C(\mathbb{Q})}$. \begin{prop}\label{LC} Working in $D_{\operatorname{qc}}(R)$, for any $f\in {\mathcal S}$, $\phi Nf=f$. \end{prop} {\sl Proof. }\par Suppose $p\in f(n)$; then $s^nR/p\in Nf$ and so, since $p\in \operatorname{Ass} H^n(s^nR/p)$, $p\in \phi Nf(n)$. \par Now suppose $p\in \phi Nf(n)$. Then there is an $M\in N(f)$ such that $M(f)<M$ and $p\in \overline{\operatorname{Ass} H^n(M)}$. Thus Lemma \ref{A} and \cite[Lemma 2.17]{BN}, which says that $M\otimes k(p)$ is a direct sum of suspensions of $k(p)$, imply that $s^nk(p)$ is a retract of $M\otimes k(p)$. So using Lemmas \ref{retract} and \ref{1A} and Proposition \ref{trans}, we see that $M(f)\otimes k(p)<M\otimes k(p)<s^nk(p)$. Since $M(f)\otimes k(p)$ is also a direct sum of suspensions of $k(p)$, it follows that for some $l\leq n$, $s^lk(p)$ is a retract of $M(f)\otimes k(p)$. Next, applying Lemma \ref{A}, $p\in \overline{\operatorname{Ass} H^l(M(f))}$ and so $p\in f(l)$, since $f(l)$ is closed under specialization. Since $f$ is increasing and $l\leq n$, $p\in f(n)$. \cqfd The next theorem provides a classification of nullity classes in $D_{\operatorname{qc}}(R)$. \begin{thm}\label{class} \par $\phi\colon NC\rightarrow {\mathcal S} $ and $N\colon {\mathcal S} \rightarrow NC $ are inverse bijections of partially ordered sets. \end{thm} {\sl Proof. 
}\par This follows from Propositions \ref{LA}, \ref{LB} and \ref{LC}. \cqfd As a corollary we give another proof of Theorem \ref{main}. \begin{cor}\label{cor-inj} Suppose ${\mathcal A}, {\mathcal A}'$ are aisles in $D_{\operatorname{qc}}(R)$; then ${\mathcal A}\subset {\mathcal A}'$ if and only if for every $n\in \mathbb Z$, $\phi({\mathcal A})(n)\subset \phi({\mathcal A'})(n)$. Thus $\phi\colon \{{\rm aisles \ in\ } D_{\operatorname{qc}}(R)\} \rightarrow {\mathcal S}$ is injective. \end{cor} {\sl Proof. }\par Follows from Theorem \ref{class}. \cqfd We can consider constant functions $f\in {\mathcal S}$, that is, those with $f(i)=f(j)$ for all $i,j\in \mathbb{Z}$. Taking $N$ of such a function we get a nullity class $N(f)$ that is closed under desuspension. As such it is a thick subcategory of $D_{\operatorname{qc}}(R)$ that is closed under retracts, yet we do not get all retract-closed thick subcategories in this way. For example, consider $R=\mathbb{Z}/4$. Then $\operatorname{Spec} \mathbb{Z}/4=\{ (2) \}$, so a constant function $f\colon \mathbb{Z} \rightarrow {\mathcal P}(\operatorname{Spec} \mathbb{Z}/4)$ is either $f(n)=\emptyset$ or $f(n)=\{ (2) \}$. If $f(n)=\emptyset$ then $N(f)=\{ 0 \}$, the class consisting of only the trivial complex, and if $f(n)=\{ (2) \}$ then $N(f)=D_{\operatorname{qc}}(R)$. However $D_{\operatorname{parf}}(R)\subset D_{\operatorname{qc}}(R)$ is another thick subcategory that is closed under retracts, and $D_{\operatorname{parf}}(R)\not= D_{\operatorname{qc}}(R)$ since $\mathbb{Z}/2\notin D_{\operatorname{parf}}(R)$, as it admits only infinite projective resolutions. So in $D_{\operatorname{qc}}(\mathbb{Z}/4)$ there are more thick subcategories closed under retracts than nullity classes closed under desuspension. Nevertheless, considering constant functions in $\mathcal S$ simply as subsets of $\operatorname{Spec} R$, we do get the following corollary of Theorem \ref{class}.
\begin{cor} $\phi$ and $N$ induce order preserving bijections between the set of nullity classes in $D_{\operatorname{qc}}(R)$ closed under desuspension and the set of subsets of $\operatorname{Spec} R$ closed under specialization. \end{cor} {\sl Proof. }\par Follows directly from Theorem \ref{class} and the definitions of $\phi$ and $N$. \cqfd It is tempting to think that by restricting to $D_{\operatorname{parf}}(R)$ we should be able to recover the result of Hopkins and Neeman, but I know of no way to do that. \section{Image of invariant}\label{image} In this section, using the classification of nullity classes, we get some control over what the image of $\phi$ is when restricted to $t$-structures. The main object of the section is to show that if ${\mathcal A}\subset D_{\operatorname{qc}}(R)$ is an aisle and $p\in \phi({\mathcal A})(n)$, then all primes maximal under $p$ must be in $\phi({\mathcal A})(n+1)$ (Theorem \ref{comono}). So $\phi({\mathcal A})$ must increase in a very particular way. \par As a motivating example, let us work in $D(\mathbb{Z}_{(p)})$ and consider $P_{\mathbb{Z}/p}s\mathbb{Z}_{(p)}$. Since we have a short exact sequence $$ 0\rightarrow \mathbb{Z}_{(p)} \stackrel{\times p}{\rightarrow} \mathbb{Z}_{(p)} \rightarrow \mathbb{Z}/p \rightarrow 0, $$ we have a triangle $$ \mathbb{Z}/p \rightarrow s\mathbb{Z}_{(p)} \stackrel{\times p}{\rightarrow} s\mathbb{Z}_{(p)} \rightarrow s\mathbb{Z}/p, $$ so $s\mathbb{Z}_{(p)} \stackrel{\times p}{\rightarrow} s\mathbb{Z}_{(p)}$ is a $P_{\mathbb{Z}/p}$ equivalence. Taking colimits we see that $$ s\mathbb{Z}_{(p)} \rightarrow colim(s\mathbb{Z}_{(p)} \stackrel{\times p}{\rightarrow} s\mathbb{Z}_{(p)} \stackrel{\times p}{\rightarrow} \dots)= s\mathbb{Q} $$ is a $P_{\mathbb{Z}/p}$ equivalence. Also $\operatorname{Hom}(s^i\mathbb{Z}/p,s\mathbb{Q})=0$ for all $i\geq 0$, so $P_{\mathbb{Z}/p}s\mathbb{Z}_{(p)}= s\mathbb{Q}$.
However $s\mathbb{Z}_{(p)}\in D_{\operatorname{qc}}(\mathbb{Z}_{(p)})$ but $s\mathbb{Q}\not\in D_{\operatorname{qc}}(\mathbb{Z}_{(p)})$. This implies that the nullity class ${\mathcal A}=\overline{C(\mathbb{Z}/p)}$ is not an aisle in $D_{\operatorname{qc}}(\mathbb{Z}_{(p)})$. In fact we would need $s\mathbb{Z}_{(p)}\in {\mathcal A}$ to make it an aisle. It is this basic phenomenon that stops many nullity classes from being aisles. Recall from \ref{p def} that for $f \in {\mathcal S}$, $M(f)=\oplus_i\oplus_{p\in f(i)}s^iR/p$. \begin{lemma}\label{Ming} Suppose $R$ is a local ring with maximal ideal $m$, and $p$ a prime maximal under $m$. Let $h\in m-p$ be any element. Suppose $f\in {\mathcal S}$ is such that $m\in f(n)$, $f(n-1)=\emptyset$ and $p\not\in f(n+1)$. If $$ N=s^n(R/p)/(h)\oplus\bigoplus_{m\not=q\in f(n)}s^nR/q\oplus\bigoplus_{i\not=n}\bigoplus_{q\in f(i)}s^iR/q $$ then $P_{M(f)}=P_N$. \end{lemma} Notice that to get $N$ from $M(f)$ we simply replaced $s^nR/m$ by $s^n(R/p)/(h)$ and left everything else the same. \par {\sl Proof. }\par The only prime that contains $\operatorname{ann}((R/p)/(h))=p+(h)$ is $m$, and therefore by \cite[Theorem 3.1 a)]{Eisenbud}, $\operatorname{Ass}((R/p)/(h))=\{ m \}$. Therefore by Theorem \ref{class}, $s^n(R/p)/(h)<s^nR/m$ and $s^nR/m<s^n(R/p)/(h)$. It follows that $M(f)<N$ and $N<M(f)$, therefore $P_{M(f)}=P_N$. \cqfd \begin{lemma}\label{Ping} Suppose $R$ is a local ring with maximal ideal $m$, and $p$ a prime maximal under $m$. Let $h\in m-p$ be any element. Suppose $f\in {\mathcal S}$ is such that $m\in f(n)$, $f(n-1)=\emptyset$ and $p\not\in f(n+1)$. Then $P_{M(f)}(s^{n+1}R/p)=s^{n+1}(R/p[\frac{1}{h}])$. \end{lemma} {\sl Proof. }\par $P_{M(f)}=P_N$ by Lemma \ref{Ming}. Suppose $q\in f(n)$ or $q\in f(n+1)$. If $q\subset p$ then, since $f(n)$ and $f(n+1)$ are closed under specialization and $f(n)\subset f(n+1)$, we would have $p\in f(n+1)$, contrary to assumption. \par So $q\not\subset p$ and we can choose $g\in q$ with $g\not\in p$. Note that $g$ becomes invertible in $R/p[\frac{1}{h}]$: the only prime of $R$ containing $p+(g)$ is $m$, so some power $h^k$ lies in $p+(g)$, and hence $g$ divides $h^k$ in $R/p$. So for any map $f\colon s^iR/q\rightarrow s^{n+1}R/p[\frac{1}{h}]$ we get a square, $$ \xymatrix { s^i R/q \ar[r]^f \ar[d]_{\times g} & s^{n+1}R/p[\frac{1}{h}] \ar[d]^{\times g} \\ s^i R/q \ar[r]^f & s^{n+1}R/p[\frac{1}{h}]. } $$ The left vertical map is $0$ but the right vertical map is an isomorphism; this implies that $f=0$. So $\operatorname{Hom}(s^iR/q, s^{n+1}R/p[\frac{1}{h}])=0$. It follows that $s^{n+1}(R/p[\frac{1}{h}])$ is $M(f)$ local. Considering the exact sequence $$ 0 \rightarrow R/p \stackrel{\times h}{\rightarrow} R/p \rightarrow (R/p)/(h) \rightarrow 0, $$ we get a triangle $$ s^n(R/p)/(h) \rightarrow s^{n+1}R/p \stackrel{\times h}{\rightarrow} s^{n+1}R/p \rightarrow s^{n+1}(R/p)/(h), $$ and so Proposition \ref{closure} 2) implies that $s^{n+1}R/p \stackrel{\times h}{\rightarrow} s^{n+1}R/p$ is a $P_{s^n(R/p)/(h)}$ equivalence. Since $M(f)<N<s^n(R/p)/(h)$ it is also a $P_{M(f)}$ equivalence by Proposition \ref{char uni} 6). Since $s^{n+1}R/p[\frac{1}{h}]$ is the colimit of such maps, it follows from Proposition \ref{closure} 1) that $s^{n+1}R/p\rightarrow s^{n+1}(R/p[\frac{1}{h}])$ is a $P_{M(f)}$ equivalence. Thus, since we saw above that $s^{n+1}(R/p[\frac{1}{h}])$ is $M(f)$ local, $P_{M(f)}(s^{n+1}R/p)=s^{n+1}(R/p[\frac{1}{h}])$ by Corollary \ref{use}. \cqfd \begin{lemma}\label{Ling} Using the notation of the last few lemmas, $s^{n+1}(R/p[\frac{1}{h}])\not\in D_{\operatorname{qc}}(R)$. \end{lemma} {\sl Proof. }\par We know that $R/p[\frac{1}{h}]=colim (R/p \stackrel{\times h}{\rightarrow} R/p \stackrel{\times h}{\rightarrow} \cdots )$. Since $R/p$ is an integral domain, each map $\times h$ is injective, and since $h\in m\setminus p$, $h$ is not a unit and so $\times h$ is not surjective. These two facts imply that $R/p[\frac{1}{h}]$ is not a finitely generated $R$-module. Thus, since $H^{n+1}(s^{n+1}R/p[\frac{1}{h}])=R/p[\frac{1}{h}]$, we conclude $s^{n+1}R/p[\frac{1}{h}]\not\in D_{\operatorname{qc}}(R)$.
\cqfd \begin{prop}\label{not finite} Suppose $p'\in \operatorname{Spec} R$ and $p$ is a prime maximal under $p'$. Suppose $f\in {\mathcal S}$ is such that $p'\in f(n)$ and $p\not\in f(n+1)$; then $P_{M(f)}(s^{n+1}R/p)\not\in D_{\operatorname{qc}}(R)$. \end{prop} {\sl Proof. }\par By Lemmas \ref{torn} and \ref{Ping}, $$ (P_{M(f)}s^{n+1}R/p)\otimes R_{p'}\cong P_{M(f)\otimes R_{p'}}(s^{n+1}R/p\otimes R_{p'})\cong s^{n+1}R/p\otimes R_{p'}[\frac{1}{h}] $$ for an appropriate $h\in p'R_{p'}\setminus pR_{p'}$. By Lemma \ref{Ling}, $s^{n+1}R/p\otimes R_{p'}[\frac{1}{h}]\not\in D_{\operatorname{qc}}(R_{p'})$. Hence $P_{M(f)}s^{n+1}R/p\otimes R_{p'}\not\in D_{\operatorname{qc}}(R_{p'})$, and so $P_{M(f)}s^{n+1}R/p\not\in D_{\operatorname{qc}}(R)$. \cqfd \begin{thm}\label{goodfinite} Suppose $E\in D(R)$ and ${\mathcal D}\subset D(R)$ is a full triangulated subcategory. If for every $M\in {\mathcal D}$, $P_EM\in {\mathcal D}$, then the nullity class ${\mathcal A}=\overline{C(E)}\cap {\mathcal D}$ is an aisle. The converse is also true if there exists a set $\{ E(\alpha)\}_{\alpha<\lambda}$ of objects in $\mathcal A$ such that $\oplus_{\alpha<\lambda}E(\alpha)<E$. \end{thm} {\sl Proof. }\par Suppose that for every $M\in {\mathcal D}$, $P_EM\in {\mathcal D}$. Looking at the distinguished triangle of Equation \ref{long}, we see that $M\langle E \rangle \in {\mathcal D}$ as well. The functor $M\mapsto M\langle E \rangle$ gives the required right adjoint to the inclusion ${\mathcal A}\subset {\mathcal D}$, and so ${\mathcal A}$ is an aisle. \par Now suppose $\mathcal A$ is an aisle and let $M\in {\mathcal D}$. We know (see \cite[Proposition 1.1]{AJS} for a proof) that we have a triangle in $\mathcal D$ $$ M\langle {\mathcal A} \rangle \rightarrow M \rightarrow P_{\mathcal A} M \rightarrow sM\langle {\mathcal A} \rangle $$ such that: \par a) $M\langle {\mathcal A} \rangle\in {\mathcal A}$. \par b) $P_{\mathcal A}M\in {\mathcal A}^{\perp}$.
\par By Proposition \ref{closure} 2), a) implies that $M \rightarrow P_{\mathcal A} M$ is a $P_{E}$ equivalence. \par Statement b) above says that for every $X\in {\mathcal A}$, $\operatorname{Hom}(X,P_{\mathcal A}M)=0$. In particular, for every $\alpha$, since $E(\alpha)\in {\mathcal A}$ (and likewise $s^iE(\alpha)\in {\mathcal A}$ for $i\geq 0$), $\operatorname{Hom}(E(\alpha),P_{{\mathcal A}}M)=0$. Thus $$ \operatorname{Hom}(\oplus_{\alpha<\lambda}E(\alpha),P_{\mathcal A}M)= \prod_{\alpha<\lambda}\operatorname{Hom}(E(\alpha),P_{\mathcal A}M)=0, $$ and so $P_{\mathcal A}M$ is $P_{\oplus_{\alpha<\lambda}E(\alpha)}$ local; since $\oplus_{\alpha<\lambda}E(\alpha)<E$, it is then $P_E$ local by Proposition \ref{char uni} 6). So by Corollary \ref{use}, $P_{E}M\cong P_{\mathcal A}M\in {\mathcal D}$. \cqfd The condition that $\oplus_{\alpha<\lambda}E(\alpha)<E$ arises since something could be in $\overline{C(E)}^{\perp}$ when restricted to maps in a smaller category, like $\mathcal D$, but no longer in $\overline{C(E)}^{\perp}$ in $D(R)$. This seems related to the problem of the construction of cohomological localizations in the category of spectra. \begin{cor}\label{finite} For $f\in {\mathcal S}$, $N(f)$ is an aisle if and only if for every $A\in D_{\operatorname{qc}}(R)$, $P_{M(f)}A\in D_{\operatorname{qc}}(R)$. \end{cor} {\sl Proof. }\par By definition $N(f)=\overline{C(M(f))}\cap D_{\operatorname{qc}}(R)$ and $M(f)=\oplus_n\oplus_{p\in f(n)} s^nR/p$; in particular $\oplus_n\oplus_{p\in f(n)} s^nR/p<M(f)$ and each $s^nR/p\in N(f)$. Thus the corollary follows from the theorem. \cqfd We call a function $f\colon \mathbb{Z}\rightarrow {\mathcal P}(\operatorname{Spec} R)$ {\em comonotone} if whenever $p'\in f(n)$ and $p$ is maximal under $p'$, then $p\in f(n+1)$. \begin{thm}\label{comono} If $\mathcal A$ is an aisle in $D_{\operatorname{qc}}(R)$, then $\phi({\mathcal A})$ is comonotone. \end{thm} {\sl Proof. 
}\par Follows directly from Proposition \ref{not finite} and Corollary \ref{finite}. \cqfd \begin{conj} For a noetherian ring $R$, if $f$ is comonotone then $N(f)$ is an aisle. \end{conj} The converse of this is Theorem \ref{comono}, and by Corollary \ref{cor-inj} or Theorem \ref{main}, all aisles in $D_{\operatorname{qc}}(R)$ are of this form. So proving this conjecture would complete the classification of $t$-structures in $D_{\operatorname{qc}}(R)$. \vskip 0.3cm \begin{example}\label{ex-qcright} Quasi-coherent complexes are really the right ones to work with in this case, since there are additional restrictions for having a $t$-structure in $D_{\operatorname{parf}}(R)$. For example, consider again $D_{\operatorname{parf}}(\mathbb{Z}/4)$, where $\operatorname{Spec} \mathbb{Z}/4=\{ (2)\}$, and $f\colon \mathbb{Z} \rightarrow {\mathcal P}(\operatorname{Spec} \mathbb{Z}/4)$ given by $$ f(i)= \begin{cases} \emptyset & i\leq 0 \\ \{ (2) \} & i>0 \end{cases} $$ Then $M(f)=\oplus_{i>0}s^i\mathbb{Z}/2$, $N(f)$ is just all complexes with homology concentrated in positive degrees, and $P_{M(f)}$ is just truncation. Letting $A$ be the complex $$ A_i= \begin{cases} \mathbb{Z}/4 & i=0,1 \\ 0 & {\rm else} \end{cases} $$ with $d\colon A_1\rightarrow A_0$ multiplication by $2$, we can see that $P_{M(f)}A=\mathbb{Z}/2$, but $\mathbb{Z}/2\not\in D_{\operatorname{parf}}(\mathbb{Z}/4)$ since any resolution of it has infinite length. Also $sA\in \overline{C(M(f))} \cap D_{\operatorname{parf}}(\mathbb{Z}/4)$ and $sA<s\mathbb{Z}/2$, so we get by Theorem \ref{goodfinite} that $\overline{C(M(f))}\cap D_{\operatorname{parf}}(\mathbb{Z}/4)$ is not an aisle. \end{example} \section{A class of t-structures in $D(\mathbb Z)$.}\label{propclass} In this section we show that the t-structures in $D(\mathbb Z)$ do not form a set but rather a proper class (Corollary \ref{properclass}). The same proof shows that the nullity classes in spectra and in topological spaces do not form a set. Similarly the $t$-structures in the triangulated category of spectra do not form a set.
These results follow easily from some nice, and more difficult, examples of Shelah \cite{Shelah}. \par There are two main reasons we chose to exhibit this result. The first reason is to show that it is unreasonable to expect a nice classification of $t$-structures or nullity classes in $D(R)$. The second, related, reason is to contrast with what happens in the case of localizing subcategories in $D(R)$. If we demand that our nullity classes are also closed under taking desuspensions, we get a localizing subcategory of $D(R)$. Neeman \cite{Neeman} showed that these are in 1-1 correspondence with subsets of $\operatorname{Spec} R$, so the situation is only slightly more complicated than for thick subcategories of $D_{\operatorname{parf}}(R)$. So going from something with some finiteness conditions, $D_{\operatorname{parf}}(R)$, to infinite things, $D(R)$, only increases complexity slightly. However the situation for nullity classes is much different. In the finite case, $D_{\operatorname{qc}}(R)$, we have a classification more or less in terms of increasing sequences of thick subcategories; when we move to $D(R)$, though, we completely lose control: there is a proper class of nullity classes, and even with the extra condition required for a $t$-structure there is still a proper class. \begin{defin} A system $\{ A_{\alpha}\}_{\alpha \in Y}$ of distinct abelian groups is called a rigid system if $\alpha\not=\beta$ implies that $\operatorname{Hom}(A_{\alpha},A_{\beta})=0$. \end{defin} In \cite{Shelah}, Shelah proved the following: \begin{thm}\label{She}\cite{Shelah} For every cardinal $\lambda$ there is a rigid system of abelian groups $\{ A_{\alpha} \}_{\alpha \in 2^\lambda}$ such that $|A_{\alpha}|=\lambda$. \end{thm} \begin{prop}\label{her} Consider a rigid system $\{ A_{\alpha} \}_{\alpha \in Y}$ of abelian groups. If $\alpha\not=\beta$ then in $D(\mathbb Z)$, $P_{A_{\alpha}}A_{\beta}=A_{\beta}\not=0$; hence $A_{\alpha} \not< A_{\beta}$. \end{prop} {\sl Proof. 
}\par Suppose $\alpha\not=\beta$. Since $\{ A_{\alpha} \}$ is a rigid system, $\operatorname{Hom}(A_{\alpha},A_{\beta})=0$, and since $H_i(A_{\alpha})=0$ for $i<0$ and $H_i(A_{\beta})=0$ for $i>0$, $\operatorname{Hom}(s^iA_{\alpha}, A_{\beta})=0$ for all $i>0$. Thus $P_{A_{\alpha}}A_{\beta}=A_{\beta}\not=0$. That $A_{\alpha} \not< A_{\beta}$ then follows directly from the definition of $<$. \cqfd \begin{cor}\label{properclass} The class of t-structures, and hence also the class of nullity classes, in $D(\mathbb Z)$ do not form a set. \end{cor} {\sl Proof. }\par For any cardinal $\lambda$ let $\{ A_{\alpha} \}_{\alpha \in 2^{\lambda}}$ be the rigid system of abelian groups of Theorem \ref{She}. To each $A_{\alpha}$, using Theorem \ref{AJS}, we associate the aisle $\overline{C(A_{\alpha})}$. By Proposition \ref{her}, if $\alpha\not=\beta$ then $A_{\alpha} \not< A_{\beta}$, and hence by Proposition \ref{trans} $\overline{C(A_{\alpha})}\not=\overline{C(A_{\beta})}$. Thus the aisles $\{ \overline{C(A_{\alpha})}\}_{\alpha\in 2^{\lambda}}$ are all distinct, which means that the t-structures $(\overline{C(A_{\alpha})},s\overline{C(A_{\alpha})}^{\perp})$ are also distinct. So we see that there are at least $2^{\lambda}$ distinct t-structures. Since $\lambda$ is arbitrary, the proof is complete. \cqfd In the category of spectra, by results of Bousfield, homological localizations are of the form $P_A$. It was shown by Ohkawa \cite{Ohkawa} that this subclass of localizations does form a set. It is unknown if all localizations of the form $P_A$ which are stable under desuspension form a set. The next theorem shows that if we take all localizations of the form $P_A$ without assuming they are closed under suspension then we do not get a set. \begin{thm}\label{properclassinspectra} The class of t-structures, and hence also the class of nullity classes, in spectra do not form a set. Similarly the class of nullity classes in spaces does not form a set. \end{thm} {\sl Proof. 
}\par We only give an outline of the proof; those familiar with the calculus of $P_A$ can easily fill in the details. Recall that $K(G,n)$ is the Eilenberg-Mac Lane spectrum (or space) with homotopy group $G$ concentrated in dimension $n$. The functors $P_E$ have been constructed in spaces and spectra; see for example \cite{DF} and \cite{Hirschhorn}. For spectra and any $E$, $\overline{C(E)}$ is an aisle. Then for a rigid system $\{ A_{\alpha} \}_{\alpha \in 2^{\lambda}}$ of abelian groups, all the nullity classes (and t-structures if we are working in spectra) $\overline{C(K(A_{\alpha},n))}$ are distinct for the same reasons as above. Since $\lambda$ is arbitrary, this means that there is not a set of them. \cqfd \vskip5mm
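To illustrate the notion of a rigid system with an elementary example (purely illustrative; it is far too small for the arguments above, which need $2^{\lambda}$ groups of cardinality $\lambda$), note the following. \begin{example} If $p\not=q$ are distinct primes, any homomorphism $\mathbb{Z}/p\rightarrow \mathbb{Z}/q$ sends a generator to an element whose order divides both $p$ and $q$, hence to $0$. Thus $\operatorname{Hom}(\mathbb{Z}/p,\mathbb{Z}/q)=0$, and $\{ \mathbb{Z}/p \}_{p\ \mathrm{prime}}$ is a countable rigid system of abelian groups. Shelah's Theorem \ref{She} is what provides rigid systems of arbitrarily large cardinality. \end{example}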
{ "redpajama_set_name": "RedPajamaArXiv" }
9,764
An MSNBC contributor wrote Tuesday on Twitter that he hoped the Islamic State would bomb President Donald Trump's property in Turkey. Counterterrorism analyst Malcolm Nance was responding to a tweet of an image of Trump Tower Istanbul that was alleging the property was the reason Trump called Turkish President Recep Tayyip Erdoğan to congratulate him on his controversial referendum win over the weekend. "This is my nominee for the first ISIS suicide bombing of a Trump property," Nance tweeted, and then later deleted. During the presidential election, Nance accused Trump of propping up ISIS through his actions and words. "I will go so far as to say Donald Trump is the ISIS candidate," Nance said. "He inflames the passions of people in the West to perform Islamophobia, to draw recruits to them, to make them say, 'This is what America is.'" Nance also nodded along when MSNBC host Thomas Roberts speculated in 2016 that the ISIS-affiliated Nice truck killer was just a mentally ill person who "took a moment to challenge society in a horrific way." "We've seen that," he agreed. "We call them EDPs. Extremely disturbed persons, or emotionally disturbed persons."
{ "redpajama_set_name": "RedPajamaC4" }
7,203
<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII"> <title>Header &lt;boost/test/data/monomorphic/singleton.hpp&gt;</title> <link rel="stylesheet" href="../../../../../boostbook.css" type="text/css"> <meta name="generator" content="DocBook XSL Stylesheets V1.79.1"> <link rel="home" href="../../../../../index.html" title="Boost.Test"> <link rel="up" href="../../../../../boost_test/reference.html" title="Reference"> <link rel="prev" href="../../../../../boost/unit_test/data/monomorphic/is_datas_idm45194482011776.html" title="Struct template is_dataset&lt;join&lt; DataSet1, DataSet2 &gt;&gt;"> <link rel="next" href="../../../../../boost/unit_test/data/monomorphic/singleton.html" title="Class template singleton"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <table cellpadding="2" width="100%"><tr> <td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../../../../../boost.png"></td> <td align="center"><a href="../../../../../../../../../index.html">Home</a></td> <td align="center"><a href="../../../../../../../../../libs/libraries.htm">Libraries</a></td> <td align="center"><a href="http://www.boost.org/users/people.html">People</a></td> <td align="center"><a href="http://www.boost.org/users/faq.html">FAQ</a></td> <td align="center"><a href="../../../../../../../../../more/index.htm">More</a></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="../../../../../boost/unit_test/data/monomorphic/is_datas_idm45194482011776.html"><img src="../../../../../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../../../boost_test/reference.html"><img src="../../../../../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../../../index.html"><img src="../../../../../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" 
href="../../../../../boost/unit_test/data/monomorphic/singleton.html"><img src="../../../../../../../../../doc/src/images/next.png" alt="Next"></a> </div> <div class="section"> <div class="titlepage"><div><div><h4 class="title"> <a name="header.boost.test.data.monomorphic.singleton_hpp"></a>Header &lt;<a href="../../../../../../../../../boost/test/data/monomorphic/singleton.hpp" target="_top">boost/test/data/monomorphic/singleton.hpp</a>&gt;</h4></div></div></div> <p>Defines single element monomorphic dataset. </p> <pre class="synopsis"><span class="keyword">namespace</span> <span class="identifier">boost</span> <span class="special">{</span> <span class="keyword">namespace</span> <span class="identifier">unit_test</span> <span class="special">{</span> <span class="keyword">namespace</span> <span class="identifier">data</span> <span class="special">{</span> <span class="keyword">namespace</span> <span class="identifier">monomorphic</span> <span class="special">{</span> <span class="keyword">template</span><span class="special">&lt;</span><span class="keyword">typename</span> T<span class="special">&gt;</span> <span class="keyword">class</span> <a class="link" href="../../../../../boost/unit_test/data/monomorphic/singleton.html" title="Class template singleton">singleton</a><span class="special">;</span> <span class="keyword">template</span><span class="special">&lt;</span><span class="keyword">typename</span> T<span class="special">&gt;</span> <span class="keyword">struct</span> <a class="link" href="../../../../../boost/unit_test/data/monomorphic/is_datas_idm45194481968448.html" title="Struct template is_dataset&lt;singleton&lt; T &gt;&gt;">is_dataset</a><span class="special">&lt;</span><span class="identifier">singleton</span><span class="special">&lt;</span> <span class="identifier">T</span> <span class="special">&gt;</span><span class="special">&gt;</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span> <span 
class="special">}</span> <span class="special">}</span></pre> </div> <table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr> <td align="left"></td> <td align="right"><div class="copyright-footer">Copyright &#169; 2001-2016 Boost.Test contributors<p> Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at <a href="http://www.boost.org/LICENSE_1_0.txt" target="_top">http://www.boost.org/LICENSE_1_0.txt</a>) </p> </div></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="../../../../../boost/unit_test/data/monomorphic/is_datas_idm45194482011776.html"><img src="../../../../../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../../../boost_test/reference.html"><img src="../../../../../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../../../index.html"><img src="../../../../../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="../../../../../boost/unit_test/data/monomorphic/singleton.html"><img src="../../../../../../../../../doc/src/images/next.png" alt="Next"></a> </div> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
4,110
{"url":"http:\/\/www.nag.com\/numeric\/CL\/nagdoc_cl24\/html\/X04\/x04dcc.html","text":"x04 Chapter Contents\nx04 Chapter Introduction\nNAG Library Manual\n\n# NAG Library Function Documentnag_pack_complx_mat_print\u00a0(x04dcc)\n\n## 1\u00a0\u00a0Purpose\n\nnag_pack_complx_mat_print\u00a0(x04dcc) is an easy-to-use function to print a Complex triangular matrix stored in a packed one-dimensional array.\n\n## 2\u00a0\u00a0Specification\n\n #include #include\n void nag_pack_complx_mat_print\u00a0(Nag_OrderType\u00a0order, Nag_UploType\u00a0uplo, Nag_DiagType\u00a0diag, Integer\u00a0n, const\u00a0Complex\u00a0a[], const\u00a0char\u00a0*title, const\u00a0char\u00a0*outfile, NagError\u00a0*fail)\n\n## 3\u00a0\u00a0Description\n\nnag_pack_complx_mat_print\u00a0(x04dcc) prints a Complex triangular matrix stored in packed form. It is an easy-to-use driver for nag_pack_complx_mat_print_comp\u00a0(x04ddc). The function uses default values for the format in which numbers are printed, for labelling the rows and columns, and for output record length.\nnag_pack_complx_mat_print\u00a0(x04dcc) will choose a format code such that numbers will be printed with a $%8.4\\mathrm{f}$, a $%11.4\\mathrm{f}$\u00a0or a $%13.4\\mathrm{e}$\u00a0format. The $%8.4\\mathrm{f}$\u00a0code is chosen if the sizes of all the matrix elements to be printed lie between $0.001$\u00a0and $1.0$. The $%11.4\\mathrm{f}$\u00a0code is chosen if the sizes of all the matrix elements to be printed lie between $0.001$\u00a0and $9999.9999$. Otherwise the $%13.4\\mathrm{e}$\u00a0code is chosen. 
The chosen code is used to print each complex element of the matrix with the real part above the imaginary part.\nThe matrix is printed with integer row and column labels, and with a maximum record length of $80$.\nThe matrix is output to the file specified by outfile or, by default, to standard output.\n\nNone.\n\n## 5\u00a0\u00a0Arguments\n\n1: \u00a0\u00a0\u2002 orderNag_OrderTypeInput\nOn entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\\mathbf{order}}=\\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.\nConstraint: ${\\mathbf{order}}=\\mathrm{Nag_RowMajor}$\u00a0or $\\mathrm{Nag_ColMajor}$.\n2: \u00a0\u00a0\u2002 uploNag_UploTypeInput\nOn entry: indicates the type of the matrix to be printed\n${\\mathbf{uplo}}=\\mathrm{Nag_Lower}$\nThe matrix is lower triangular\n${\\mathbf{uplo}}=\\mathrm{Nag_Upper}$\nThe matrix is upper triangular\nConstraint: ${\\mathbf{uplo}}=\\mathrm{Nag_Lower}$\u00a0or $\\mathrm{Nag_Upper}$.\n3: \u00a0\u00a0\u2002 diagNag_DiagTypeInput\nOn entry: indicates whether the diagonal elements of the matrix are to be printed.\n${\\mathbf{diag}}=\\mathrm{Nag_NonRefDiag}$\nThe diagonal elements of the matrix are not referenced and not printed.\n${\\mathbf{diag}}=\\mathrm{Nag_UnitDiag}$\nThe diagonal elements of the matrix are not referenced, but are assumed all to be unity, and are printed as such.\n${\\mathbf{diag}}=\\mathrm{Nag_NonUnitDiag}$\nThe diagonal elements of the matrix are referenced and printed.\nConstraint: ${\\mathbf{diag}}=\\mathrm{Nag_NonRefDiag}$, $\\mathrm{Nag_UnitDiag}$\u00a0or $\\mathrm{Nag_NonUnitDiag}$.\n4: \u00a0\u00a0\u2002 nIntegerInput\nOn entry: the order of the matrix to be printed.\nIf n is less than $1$, nag_pack_complx_mat_print\u00a0(x04dcc) will exit immediately after printing title; no row or column 
labels are printed.\n5: \u00a0\u00a0\u2002 a[$\\mathit{dim}$]const\u00a0ComplexInput\nNote: the dimension, dim, of the array a must be at least $\\mathrm{max}\\phantom{\\rule{0.125em}{0ex}}\\left(1,{\\mathbf{n}}\u00d7\\left({\\mathbf{n}}+1\\right)\/2\\right)$.\nOn entry: the matrix to be printed. Note that a must have space for the diagonal elements of the matrix, even if these are not stored.\nThe storage of elements ${A}_{ij}$\u00a0depends on the order and uplo arguments as follows:\n\u2022 if ${\\mathbf{order}}=\\mathrm{Nag_ColMajor}$\u00a0and ${\\mathbf{uplo}}=\\mathrm{Nag_Upper}$,\n${A}_{ij}$\u00a0is stored in ${\\mathbf{a}}\\left[\\left(j-1\\right)\u00d7j\/2+i-1\\right]$, for $i\\le j$;\n\u2022 if ${\\mathbf{order}}=\\mathrm{Nag_ColMajor}$\u00a0and ${\\mathbf{uplo}}=\\mathrm{Nag_Lower}$,\n${A}_{ij}$\u00a0is stored in ${\\mathbf{a}}\\left[\\left(2n-j\\right)\u00d7\\left(j-1\\right)\/2+i-1\\right]$, for $i\\ge j$;\n\u2022 if ${\\mathbf{order}}=\\mathrm{Nag_RowMajor}$\u00a0and ${\\mathbf{uplo}}=\\mathrm{Nag_Upper}$,\n${A}_{ij}$\u00a0is stored in ${\\mathbf{a}}\\left[\\left(2n-i\\right)\u00d7\\left(i-1\\right)\/2+j-1\\right]$, for $i\\le j$;\n\u2022 if ${\\mathbf{order}}=\\mathrm{Nag_RowMajor}$\u00a0and ${\\mathbf{uplo}}=\\mathrm{Nag_Lower}$,\n${A}_{ij}$\u00a0is stored in ${\\mathbf{a}}\\left[\\left(i-1\\right)\u00d7i\/2+j-1\\right]$, for $i\\ge j$.\nIf ${\\mathbf{diag}}=\\mathrm{Nag_UnitDiag}$, the diagonal elements of $A$\u00a0are assumed to be $1$, and are not referenced; the same storage scheme is used whether ${\\mathbf{diag}}=\\mathrm{Nag_NonUnitDiag}$\u00a0or ${\\mathbf{diag}}=\\mathrm{Nag_UnitDiag}$.\n6: \u00a0\u00a0\u2002 titleconst\u00a0char\u00a0*Input\nOn entry: a title to be printed above the matrix.\nIf ${\\mathbf{title}}=\\mathbf{NULL}$, no title (and no blank line) will be printed.\nIf title contains more than $80$\u00a0characters, the contents of title will be wrapped onto more than one line, with the break after $80$\u00a0characters.\nAny 
trailing blank characters in title are ignored.\n7: \u00a0\u00a0\u2002 outfileconst\u00a0char\u00a0*Input\nOn entry: the name of a file to which output will be directed. If outfile is NULL the output will be directed to standard output.\n8: \u00a0\u00a0\u2002 failNagError\u00a0*Input\/Output\nThe NAG error argument (see Section 3.6 in the Essential Introduction).\n\n## 6\u00a0\u00a0Error Indicators and Warnings\n\nNE_ALLOC_FAIL\nMemory allocation failed.\nOn entry, argument $\u27e8\\mathit{\\text{value}}\u27e9$\u00a0had an illegal value.\nNE_INTERNAL_ERROR\nAn internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.\nNE_NOT_APPEND_FILE\nCannot open file $\"\u27e8\\mathit{\\text{value}}\u27e9\"$\u00a0for appending.\nNE_NOT_CLOSE_FILE\nCannot close file $\"\u27e8\\mathit{\\text{value}}\u27e9\"$.\nNE_NOT_WRITE_FILE\nCannot open file $\"\u27e8\\mathit{\\text{value}}\u27e9\"$\u00a0for writing.\n\nNot applicable.\n\n## 8\u00a0\u00a0Parallelism and Performance\n\nNot applicable.\n\nA call to nag_pack_complx_mat_print\u00a0(x04dcc) is equivalent to a call to nag_pack_complx_mat_print_comp\u00a0(x04ddc) with the following argument values:\n```\nncols = 80\nindent = 0\nlabrow = Nag_IntegerLabels\nlabcol = Nag_IntegerLabels\nform = 0\ncmplxform = Nag_AboveForm\n\n```\n\nNone.","date":"2016-05-02 09:35:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 60, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9919542074203491, \"perplexity\": 
1963.4941056410294}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-18\/segments\/1461860128071.22\/warc\/CC-MAIN-20160428161528-00127-ip-10-239-7-51.ec2.internal.warc.gz\"}"}
null
null
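The packed triangular storage scheme described in the extracted NAG page above (the four formulas mapping element A(i,j) to a position in the one-dimensional array a) can be sketched as a small helper. This is an illustrative reimplementation, not NAG code; the function names are invented, indices i and j are 1-based, and the returned array position is 0-based, matching the C conventions quoted in the page.

```javascript
// Sketch of the packed triangular storage used by x04dcc (illustrative only).
// i, j are 1-based matrix indices; the return value is the 0-based position
// of A(i,j) in the packed array a[], following the four formulas quoted above.
function packedIndex(order, uplo, n, i, j) {
  if (order === 'ColMajor' && uplo === 'Upper') return (j - 1) * j / 2 + i - 1;           // needs i <= j
  if (order === 'ColMajor' && uplo === 'Lower') return (2 * n - j) * (j - 1) / 2 + i - 1; // needs i >= j
  if (order === 'RowMajor' && uplo === 'Upper') return (2 * n - i) * (i - 1) / 2 + j - 1; // needs i <= j
  if (order === 'RowMajor' && uplo === 'Lower') return (i - 1) * i / 2 + j - 1;           // needs i >= j
  throw new Error('unknown order/uplo combination');
}

// Sanity check: each scheme packs the n*(n+1)/2 stored triangle elements
// bijectively into positions 0 .. n*(n+1)/2 - 1.
function coversPackedRange(order, uplo, n) {
  const seen = new Set();
  for (let j = 1; j <= n; j++) {
    for (let i = 1; i <= n; i++) {
      const stored = uplo === 'Upper' ? i <= j : i >= j;
      if (stored) seen.add(packedIndex(order, uplo, n, i, j));
    }
  }
  return seen.size === n * (n + 1) / 2;
}
```

For example, with column-major order and `uplo === 'Upper'`, element A(2,3) of a 3-by-3 matrix sits at packed position 4, consistent with the formula (j-1)*j/2 + i - 1.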
Q: Access the history of a task using the Conduit API I want to be able to use the Phabricator Conduit API to get historical information about a task such as when it was assigned/unassigned and when it moved columns on a workboard. I've looked through the Conduit API Documentation and I'm able to give a Project PHID and get information on the current state of the tasks on the workboard, but not their history. This is an example of the information I'd like to get back through the API A: You can use maniphest.gettasktransactions. It's a frozen method, but I don't see any modern methods to get this info. Example result: { "2059": [ { "taskID": "2059", "transactionID": "36573", "transactionPHID": "PHID-XACT-TASK-4fyapons4cxspcv", "transactionType": "core:columns", "oldValue": null, "newValue": [ { "columnPHID": "PHID-PCOL-l47qpqaqky5cucv53jtj", "boardPHID": "PHID-PROJ-ogxwp55og5rqok56vmot", "fromColumnPHIDs": { "PHID-PCOL-azcgsgut44vew2sfqhh7": "PHID-PCOL-azcgsgut44vew2sfqhh7" } } ], "comments": null, "authorPHID": "PHID-USER-gimad45egg7tcccxd6co", "dateCreated": "1556545748" }, { "taskID": "2059", "transactionID": "36572", "transactionPHID": "PHID-XACT-TASK-y2fjn5yzby4mdz6", "transactionType": "core:edge", "oldValue": [], "newValue": [ "PHID-CMIT-idy5uamuuz3eespdsmvd" ], "comments": null, "authorPHID": "PHID-USER-gimad45egg7tcccxd6co", "dateCreated": "1556545665" }, { "taskID": "2059", "transactionID": "36382", "transactionPHID": "PHID-XACT-TASK-rwanaewpmqpzslc", "transactionType": "core:subscribers", "oldValue": [ "PHID-USER-2clcr42jsfygiyne64kq", "PHID-USER-whapfbsypxiafuoy3wi5", "PHID-USER-gimad45egg7tcccxd6co" ], "newValue": [ "PHID-USER-whapfbsypxiafuoy3wi5", "PHID-USER-gimad45egg7tcccxd6co" ], "comments": null, "authorPHID": "PHID-USER-2clcr42jsfygiyne64kq", "dateCreated": "1555615023" }, [...]
{ "redpajama_set_name": "RedPajamaStackExchange" }
518
'use strict'; var gulp = require('gulp'), less = require('gulp-less'), concat = require('gulp-concat'), exclude = require('gulp-ignore').exclude, ngHtml2Js = require('gulp-ng-html2js'), argv = require('yargs').argv, uglify = require('gulp-uglify'), gulpif = require('gulp-if'), scripts = require('./scripts'), styles = require('./styles'), cleanFolder = require('./clean-folder'); var sourceScripts = scripts.source; var vendorScripts = scripts.vendor; var vendorStyles = styles.vendor; // TODO: fonts next for font-awesome var lessFiles = [ './client/assets/less/index.less' ]; var fonts = [ './bower_components/components-font-awesome/fonts/*' ]; var audio = [ './client/assets/audio/*' ]; var publicDest = function(optSubdir) { var suffix = optSubdir ? '/' + optSubdir : ''; var destDir = './dist' + suffix; return gulp.dest(destDir); }; var publicRootDest = function() { return publicDest(); }; var publicJsDest = function() { return publicDest('assets/js'); }; var publicCssDest = function() { return publicDest('assets/css'); }; var publicImageDest = function() { return publicDest('assets/images'); }; var publicVideoDest = function() { return publicDest('assets/video'); }; var publicAudioDest = function() { return publicDest('assets/audio'); }; var publicFontDest = function() { return publicDest('assets/fonts'); }; var ifGulpProd = function(stream, optNonProdStream) { return gulpif(argv.production, stream, optNonProdStream); }; var uglifyProdJs = function() { return ifGulpProd(uglify({ output : { beautify : false, ascii_only : true } })); }; var compressProdLess = function() { return ifGulpProd(less({compress : true}), less()); }; var renameTemplate = function(fileUrl) { return fileUrl.replace('.tpl.html', '.html'); }; var transformTemplates = function(templateModuleName) { return ngHtml2Js({ moduleName : templateModuleName, rename : renameTemplate }); }; var templateConfigs = { app : { sources : './views/**/*.jade', moduleName : 'templates-app', destFileName : 
'templates-app.js' } }; var templateCompilationTask = function(templateConfig) { return function() { return gulp.src(templateConfig.sources) .pipe(transformTemplates(templateConfig.moduleName)) .pipe(concat(templateConfig.destFileName)) .pipe(publicRootDest()); }; }; var compilationTasks = [ 'scripts', 'vendor-scripts', 'vendor-styles', 'less', 'images', 'video', 'audio', 'fonts', //'views', 'html2js' ]; var watchForRecompilationTask = function() { gulp.watch('./client/assets/**/*.less', ['less']); gulp.watch('./client/assets/**/*.js', ['scripts']); gulp.watch('./views/**/*.jade', ['views']); gulp.watch('./client/assets/images/**/*', ['images']); gulp.watch(['./views/**/*.jade'], ['html2js']); }; var cleanCompilationFilesTask = cleanFolder.makeCleanFolderTask('./dist'); var buildScripts = function() { return gulp.src(sourceScripts) .pipe(exclude('/**/*Spec.js')) .pipe(concat('main.js')) .pipe(uglifyProdJs()) .pipe(publicJsDest()); }; var compileVendorScriptsToPublicDirectory = function() { return gulp.src(vendorScripts) .pipe(concat('vendor.js')) .pipe(publicJsDest()); }; var compileVendorStylesToPublicDirectory = function() { return gulp.src(vendorStyles) .pipe(concat('vendor.css')) .pipe(publicCssDest()); }; var compileLessToPublicCssDirectory = function() { return gulp.src(lessFiles) .pipe(compressProdLess()) .pipe(publicCssDest()); }; var copyAllImagesToPublicDirectory = function() { return gulp.src('./client/assets/images/**/*') .pipe(publicImageDest()); }; var copyAllVideosToPublicDirectory = function() { return gulp.src('./client/assets/video/**/*') .pipe(publicVideoDest()); }; var copyAllAudioToPublicDirectory = function() { return gulp.src('./client/assets/audio/**/*') .pipe(publicAudioDest()); }; var copyAllFontsToPublicDirectory = function() { return gulp.src(fonts) .pipe(publicFontDest()); }; gulp.task('scripts', buildScripts); gulp.task('vendor-scripts', compileVendorScriptsToPublicDirectory); gulp.task('vendor-styles', 
compileVendorStylesToPublicDirectory); gulp.task('less', compileLessToPublicCssDirectory); gulp.task('images', copyAllImagesToPublicDirectory); gulp.task('video', copyAllVideosToPublicDirectory); gulp.task('audio', copyAllAudioToPublicDirectory); gulp.task('fonts', copyAllFontsToPublicDirectory); //gulp.task('views', copyViewsToPublicDirectory); gulp.task('html2js', templateCompilationTask(templateConfigs.app)); module.exports = { compilationTasks : compilationTasks, watchForRecompilationTask : watchForRecompilationTask, cleanCompilationFilesTask : cleanCompilationFilesTask };
{ "redpajama_set_name": "RedPajamaGithub" }
2,910
Fortaleza dos Valos () is a municipality in Brazil in the state of Rio Grande do Sul. It is part of the Northwest of Rio Grande do Sul mesoregion and of the Cruz Alta economic-statistical microregion. The population was 5,290 as of 2006. The municipality covers an area of 650.324 km², for a population density of 8.1 inhabitants/km². History The town was founded on 5 March 1982. Statistics Gross domestic product for 2003: 116,294,062.00 reais (data: Brazilian Institute of Geography and Statistics). Gross domestic product per capita for 2003: 22,581.37 reais (data: Brazilian Institute of Geography and Statistics). Human Development Index for 2000: 0.824 (data: United Nations Development Programme). Geography Local climate: humid subtropical. Municipalities of Rio Grande do Sul
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,791
Q: Mobile internet modem still not being detected I tried to implement the answer in Micromax 3G mobile internet modem not being detected. Here is the output of lsusb. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 010: ID 1c9e:9913 OMEGA TECHNOLOGY Bus 001 Device 004: ID 064e:d20c Suyin Corp. Bus 002 Device 004: ID 0489:e03c Foxconn / Hon Hai With this information, I followed the answer step-by-step, replacing 9605 in the answer by 9913. usb_modeswitch -c /etc/usb_modeswitch.d/1c9e\:9913 outputs Looking for target devices ... found matching product ID adding device Found devices in target mode or class (1) Looking for default devices ... found matching product ID adding device Found device in default mode, class or configuration (1) Accessing device 004 on bus 001 ... Getting the current device configuration ... OK, got current device configuration (1) Using first interface: 0x00 Using endpoints 0x01 (out) and 0x81 (in) Inquiring device details; driver will be detached ... Looking for active driver ... OK, driver found ("usbserial_generic") OK, driver "usbserial_generic" detached SCSI inquiry data (for identification) ------------------------- Vendor String: Model String: Revision String: Pp� ------------------------- USB description data (for identification) ------------------------- Manufacturer: Icera Product: USB MODEM Serial No.: 0.0.1 ------------------------- Setting up communication with interface 0 Using endpoint 0x01 for message sending ... Trying to send message 1 to endpoint 0x01 ... OK, message successfully sent Resetting response endpoint 0x81 Resetting message endpoint 0x01 -> Run lsusb to note any changes. Bye. I still do not have my modem detected by Ubuntu 12.04. My laptop is an Aspire 4752ZG. 
A: I totally forgot about this question. It seems that in Ubuntu 12.04 and 12.10, the default Network Connections tool is enough to fix the connection. Wait for the Modem icon to pop up in the Unity bar after plugging in the mobile broadband stick. Right-click on the icon and click on Eject modem. It will re-mount itself, so right-click on the icon again and click Eject modem once more. Wait for a short while and the Network icon should start showing your modem (usually listed as New Mobile Broadband). Click on that and follow through with the installation as described, for instance, on this external site: http://daksh21ubuntu.blogspot.com/2012/02/how-to-establish-mobile-broadband.html
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,785
<?xml version="1.0" encoding="UTF-8"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.apache.flink</groupId> <artifactId>flink-connectors</artifactId> <version>1.14-SNAPSHOT</version> <relativePath>..</relativePath> </parent> <artifactId>flink-connector-elasticsearch6_${scala.binary.version}</artifactId> <name>Flink : Connectors : Elasticsearch 6</name> <packaging>jar</packaging> <!-- Allow users to pass custom connector versions --> <properties> <elasticsearch.version>6.3.1</elasticsearch.version> </properties> <dependencies> <!-- Core --> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-streaming-java_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>provided</scope> </dependency> <!-- Table ecosystem --> <!-- Projects depending on this project won't depend on flink-table-*. 
--> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>provided</scope> <optional>true</optional> </dependency> <!-- Flink Elasticsearch --> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-elasticsearch-base_${scala.binary.version}</artifactId> <version>${project.version}</version> <exclusions> <!-- Elasticsearch Java Client has been moved to a different module in 5.x --> <exclusion> <groupId>org.elasticsearch</groupId> <artifactId>elasticsearch</artifactId> </exclusion> </exclusions> </dependency> <!-- Elasticsearch --> <!-- Dependency for Elasticsearch 6.x REST Client --> <dependency> <groupId>org.elasticsearch.client</groupId> <artifactId>elasticsearch-rest-high-level-client</artifactId> <version>${elasticsearch.version}</version> </dependency> <!-- Tests --> <dependency> <groupId>org.testcontainers</groupId> <artifactId>elasticsearch</artifactId> <version>1.15.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-test-utils_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-streaming-java_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>test</scope> <type>test-jar</type> </dependency> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-statebackend-changelog_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-elasticsearch-base_${scala.binary.version}</artifactId> <version>${project.version}</version> <exclusions> <exclusion> <groupId>org.elasticsearch</groupId> <artifactId>elasticsearch</artifactId> </exclusion> </exclusions> <type>test-jar</type> 
<scope>test</scope> </dependency> <!-- Including elasticsearch transport dependency for tests. Netty3 is not here anymore in 6.x --> <dependency> <groupId>org.elasticsearch.client</groupId> <artifactId>transport</artifactId> <version>${elasticsearch.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.elasticsearch.plugin</groupId> <artifactId>transport-netty4-client</artifactId> <version>${elasticsearch.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <scope>provided</scope> </dependency> <!-- Table API integration tests --> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-table-planner_${scala.binary.version}</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> <!-- Elasticsearch table sink factory testing --> <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-json</artifactId> <version>${project.version}</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <!-- Enforce single fork execution because of spawning Elasticsearch cluster multiple times --> <forkCount>1</forkCount> </configuration> </plugin> </plugins> </build> </project>
Q: Proper way to call $filter on a promise object in the controller? I have this forked code plunker for pagination. It was working fine, but after I made some adjustments so that it grabs the data from a JSON file, it stopped working. To see the code working, comment lines [13, 14, 15], then uncomment lines [16 -> 37]. The error is about a problem with the $filter function: it returns undefined when the data are fetched from a JSON file, so I think the object now being a promise is what breaks the $filter function, but I'm not sure. The error happens in these two functions (with the JSON call):
$scope.search = function () {
    $scope.filteredItems = $filter('filter')($scope.items, function (item) {
        for(var attr in item) {
            if (searchMatch(item[attr], $scope.query)){
                return true;
            }
        }
        return false;
    });
    // take care of the sorting order
    if ($scope.sortingOrder !== '') {
        $scope.filteredItems = $filter('orderBy')($scope.filteredItems, $scope.sortingOrder, $scope.reverse);
    }
    $scope.currentPage = 0;
    // now group by pages
    $scope.groupToPages();
};

// calculate page in place
$scope.groupToPages = function () {
    $scope.pagedItems = [];
    for (var i = 0; i < $scope.filteredItems.length; i++) {
        if (i % $scope.itemsPerPage === 0) {
            $scope.pagedItems[Math.floor(i / $scope.itemsPerPage)] = [ $scope.filteredItems[i] ];
        } else {
            $scope.pagedItems[Math.floor(i / $scope.itemsPerPage)].push($scope.filteredItems[i]);
        }
    }
};
error message:
TypeError: Cannot read property 'length' of undefined
    at Object.$scope.groupToPages (controllers.js:66:45)
    at Object.$scope.search (controllers.js:61:12)
    at new controllers.MainCtrl (controllers.js:99:10)
Any help is appreciated, thanks in advance.
A: The problem was that the filter gets called before the promise returned by $http is fulfilled, so the items object is undefined. The solution is as mentioned in this Google Groups post ( proper way to call $filter in the controller? ).
I need to do this:
Items.then(function(data) {
    $scope.items = data;
    //call $scope.search here, instead of calling it on line 100
    $scope.search();
});
so the plunker now works. Thanks to Florian Orben in the Google AngularJS group.
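The same ordering bug can be reproduced with plain Promises, outside Angular entirely. The sketch below uses made-up data and a hypothetical `fetchItems` stand-in for the `$http` call; the point is only that filtering is safe once it happens inside `.then()`, which is exactly what moving the `$scope.search()` call achieves.

```javascript
// Plain-Promise sketch of the bug (hypothetical data, no Angular involved).
// Filtering before the promise resolves would operate on `undefined`;
// filtering inside .then() sees the real array.
const fetchItems = () => Promise.resolve([{ name: 'foo' }, { name: 'bar' }]);

let items; // same state as $scope.items before the $http promise resolves

fetchItems().then((data) => {
  items = data;
  // Safe here: `items` is a concrete array, not a pending promise.
  const filtered = items.filter((it) => it.name.startsWith('f'));
  console.log(filtered.length);
});
```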
Fimbristylis naikii is a species of sedge described by Wad.Khan and Pakshirajan Lakshminarasimhan. Fimbristylis naikii belongs to the genus Fimbristylis in the sedge family, Cyperaceae. No subspecies are listed in the Catalogue of Life.

Sources
# In figure, PQ and RS are two mirrors placed parallel to each other. An incident ray AB strikes the mirror PQ at B, the reflected ray moves along the path BC and strikes the mirror RS at C and again reflects back along CD. Prove that AB || CD.

Draw perpendiculars BE and CF on PQ and RS, respectively.

Therefore, we can say that BE || CF.
As angle of incidence = angle of reflection,
we get $$\angle{a} = \angle{b}$$ ....(i)
Similarly, we get $$\angle{x} = \angle{y}$$ ....(ii)

By the alternate angles theorem, $$\angle{b} = \angle{y}$$
Now, doubling the angles, we get
$$2\angle{b} = 2\angle{y}$$
$$\Rightarrow \angle{b} + \angle{b} = \angle{y} + \angle{y}$$
From (i) and (ii),
$$\angle{a} + \angle{b} = \angle{x} + \angle{y}$$

Hence, $$\angle{ABC} = \angle{DCB}$$

Thus, by the converse of the alternate angles theorem, we get AB || CD.
Hence, it is proved.
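The angle chase in this proof can be compressed into a single chain of equalities, using (i), (ii), and the alternate-angle equality between the angles at the two normals:

```latex
\angle ABC \;=\; \angle a + \angle b \;=\; 2\angle b \;=\; 2\angle y \;=\; \angle x + \angle y \;=\; \angle DCB
```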
Q: How to control validation data for Federated framework
I'm trying to specify the validation data that is passed through the federated framework to each client to train/validate on. I know that tensorflow-federated takes a random sample of each client's dataset and validates on it. But if my data (within a subset) is very correlated, how can I specify (in the TFF framework) the validation dataset for each client?
Do you think shuffling the data makes sense here? (e.g. using: DS.repeat(FL_rpt).shuffle(FL_shuf).batch(FL_batch)) If so, any recommendation on the shuffle buffer size?
In Keras training, we have the following to train the model on set A and validate the training on set B:
model.fit(InA, OutA, validation_data=(In_valid_B, Out_valid_B), batch_size=100, epochs=100)
How can we do the same with the federated framework?
A: This could probably be written in the outer Python loop during simulation. The current APIs don't have a notion of both evaluation and training inside a single round.
If using the simulation datasets included in TFF (e.g. those under tff.simulation.datasets), they include a train/test split that makes this easy. Each returns a 2-tuple of tff.simulation.ClientData objects, a test and a train ClientData. Both test and train have the same ClientData.client_ids lists, but the tf.data.Dataset returned by create_tf_dataset_for_client(client_id) will have a disjoint set of examples. In other words, the train and test split is over user examples, not over users.
A federated training and federated evaluation loop might look like:
train_data, test_data = tff.simulation.datasets.shakespeare.load_data()

federated_average = tff.learning.build_federated_averaging_process(model_fn, ...)
federated_eval = tff.learning.build_federated_evaluation(model_fn)

state = federated_average.initialize()
for _ in range(NUM_ROUNDS):
  participating_clients = numpy.random.choice(train_data.client_ids, size=5)

  # Run a training pass
  client_train_datasets = [
      train_data.create_tf_dataset_for_client(c) for c in participating_clients
  ]
  state, train_metrics = federated_average.next(state, client_train_datasets)

  # Run an evaluation pass
  client_eval_datasets = [
      test_data.create_tf_dataset_for_client(c) for c in participating_clients
  ]
  eval_metrics = federated_eval(state.model, client_eval_datasets)
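The key point in the answer is that the train/test split is over each user's examples rather than over users. That structure can be illustrated without TFF at all; the sketch below uses made-up toy data and is not TFF's actual implementation:

```python
# Toy stand-in for the paired train/test ClientData objects: identical
# client ids on both sides, but disjoint examples per client.
all_examples = {"alice": [1, 2, 3, 4], "bob": [5, 6, 7, 8]}

# Split each user's examples in half: first half trains, second half evaluates.
train = {cid: xs[: len(xs) // 2] for cid, xs in all_examples.items()}
test = {cid: xs[len(xs) // 2:] for cid, xs in all_examples.items()}

assert train.keys() == test.keys()               # same users on both sides
for cid in train:
    assert not set(train[cid]) & set(test[cid])  # disjoint examples per user
```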
Q: mimic C++ behavior in python 3
I just saw this behavior in C++:
cout << "\nEnter two numbers:";
cin >> num >> num2;
In this way it is not necessary to do:
cout << "\nEnter a number:";
cin >> num;
cout << "\nEnter another number:";
cin >> num2;
So with that, the following question came up: how can I imitate this behavior in Python 3? I've been trying with functions like range, but I still don't get the same behavior as in C++. Does anyone know how I can achieve it? Thank you.
A: It's not that neat, but here's one way, using fileinput:
#!/usr/bin/python
import fileinput

print("Enter two numbers:")
count = 0
nums = []
for line in fileinput.input():
    nums.append(float(line.strip()))
    count += 1
    if count == 2:
        break
print(nums)
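A closer one-line analogue of `cin >> num >> num2` is to read a single line and split it on whitespace. In the sketch below the input line is hard-coded in place of a call to `input("Enter two numbers: ")` so the example is self-contained; the prompt text is illustrative.

```python
# Stands in for: line = input("Enter two numbers: ")
line = "3 4"

# split() breaks on whitespace and map() converts each token, so tuple
# unpacking mimics chained `cin >>` extraction into two variables at once.
num, num2 = map(float, line.split())
print(num, num2)  # 3.0 4.0
```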
Q: Session timeout confusion - session.setMaxInactiveInterval(0)
I am new to JEE and this is what confuses me. According to the HttpSession.html#setMaxInactiveInterval(int interval) documentation:
An interval value of zero or less indicates that the session should never timeout.
but according to my textbook (which is already a few years old, so I don't expect it to always be right), using zero as the argument should cause the session to time out immediately. This code
public class Test extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.setCharacterEncoding("utf-8");
        PrintWriter out = response.getWriter();
        HttpSession session = request.getSession();
        session.setAttribute("foo", 42);
        session.setMaxInactiveInterval(0);
        out.println(session.getAttribute("foo"));//problem here
    }
}
used on GlassFish 4.0 seems to confirm the theory from the textbook rather than the newer official documentation, because it returns HTTP Status 500 - Internal Server Error with the error message
java.lang.IllegalStateException: getAttribute: Session already invalidated
What is going on here? Is this a GlassFish 4.0 bug, or is the documentation wrong? Or maybe there is a third option?
PS. This code works as it should with negative values (the session is not invalidated) and I am using -1 instead of 0 in my code. I am just interested in what is wrong with 0.
A: The Servlet Specification chapter on Session Timeouts states
By definition, if the time out period for a session is set to -1, the session will never expire.
So GlassFish seems to have that covered. I can't find any reference in the specification that says that the same should be true for a value of 0 with setMaxInactiveInterval(). However, it does say
The session-config defines the session parameters for this Web application. The sub-element session-timeout defines the default session time out interval for all sessions created in this Web application. The specified time out must be expressed in a whole number of minutes. If the time out is 0 or less, the container ensures the default behavior of sessions is never to time out. If this element is not specified, the container must set its default time out period.
A: With session.setMaxInactiveInterval(0) the session times out and is invalidated immediately, so this is the correct error message (please refer to the Head First book for further reference). You are trying to access an object value which does not exist; it has already been destroyed.
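For completeness, the deployment-descriptor counterpart of the spec text quoted in the first answer is the session-config element in web.xml; per the spec, a value of 0 or less there means sessions never time out, and the unit is whole minutes. A minimal sketch:

```xml
<!-- web.xml: session-timeout is in whole minutes; 0 or less = never time out -->
<session-config>
    <session-timeout>-1</session-timeout>
</session-config>
```

Note that this default applies per web application, which is separate from the per-session behavior of setMaxInactiveInterval (where the unit is seconds).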